A List Apart

Articles for people who make web sites.

Working with External User Researchers: Part II

Tue, 04/17/2018 - 07:10

In the first installment of the Working with External User Researchers series, we explored the reasons why you might hire a user researcher on contract and helpful things to consider in choosing one. This time, we talk about getting the actual work done.

You’ve hired a user researcher for your project. Congrats! On paper, this person (or team of people) has everything you need and more. You might think the hardest part of your project is complete and that you can be more hands off at this point. But the real work hasn’t started yet. Hiring the researcher is just the beginning of your journey.

Let’s recap what we mean by an external user researcher: a person or team brought on for the duration of a contract to conduct research.

This situation is most commonly found in:

  • organizations without researchers on staff;
  • organizations whose research staff is maxed out;
  • and organizations that need special expertise.

In other words, external user researchers exist to help you gain insight from your users when hiring one full-time is not an option. Check out Part I to learn more about how to find external user researchers, the types of projects that will get you the most value for your money, writing a request for proposal, and finally, negotiating payment.

Working together

Remember why you hired an external researcher

No project or work relationship is perfect. Before we delve into more specific guidelines on how to work well together, remember the reasons why you decided to hire an external researcher (and this specific one) for your project. Keeping them in mind as you work together will help you keep your priorities straight.

External researchers are great for bringing in a fresh, objective perspective

You could ask your full-time designer who also has research skills to wear the research hat. This isn’t uncommon. But a designer won’t have the same depth and breadth of expertise as a dedicated researcher. In addition, they will probably end up researching their own design work, which will make it very difficult for them to remain unbiased.

Product managers sometimes like to be proactive and conduct some form of guerrilla user research themselves, but this is an even riskier idea. They usually aren’t trained on how to ask non-leading questions, for example, so they tend to only hear feedback that validates their ideas.

It isn’t a secret—but it’s well worth remembering—that research participants tend to be more comfortable sharing critical feedback with someone who doesn’t work for the product that is being tested.

The real work begins

In our experience, the most important work starts once a researcher is hired. Here are some key considerations in setting them and your own project team up for success.

Be smart about the initial brain dump

Do share background materials that provide important context and prevent redundant work from being done. It’s likely that some insight is already known on a topic that will be researched, so it’s important to share this knowledge with your researcher so they can focus on new areas of inquiry. Provide things such as report templates to ensure that the researcher presents their learnings in a way that’s consistent with your organization’s unique culture. While you’re at it, consider showing them where to find documentation or tutorials about your product, or specific industry jargon.

Make sure people know who they are

Conduct a project kick-off meeting with the external researcher and your internal stakeholders. Influence is often partially a function of trust and relationships, and for this reason it’s sometimes easy for internal stakeholders to question or brush aside projects conducted by research consultants, especially if they disagree with research insights and recommendations. (Who is this person I don’t know trying to tell me what is best for my product?)

Conduct a kick-off meeting with the broader team

A great way to prevent this potential pushback is to conduct a project kick-off meeting with the external researcher and important internal stakeholders or consumers of the research. Such a meeting might include activities such as:

  • Team introductions.
  • A discussion about the research questions, including an exercise for prioritizing the questions. Especially with contracted-out projects, it’s common for project teams to be tempted to add more questions—question creep—which is why it’s important to have clear priorities from the start.
  • A summary of what’s out of scope for the research. This is another important task in setting firm boundaries around project priorities from the start so the project is completed on time and within budget.
  • A summary of any incoming hypotheses the project team might have—in other words, what they think the answers to the research questions are. This can be an especially impactful exercise: once the study is complete, it reminds stakeholders how their initial thinking changed in response to the findings.
  • A review of the project phases and timeline, and any threats that could get in the way of the project being completed on time.
  • A review of prior research and what’s already known, if available. This is important for both the external researcher and the most important internal consumers of the research, as it’s often the case that the broader project team might not be aware of prior research and why certain questions already answered aren’t being addressed in the project at hand.

Use a buddy system

Appoint an internal resource who can answer questions that will no doubt arise during the project. This might include questions on how to use an internal lab, questions about whom to invite to a critical meeting, or clarifying questions regarding project priorities. This is also another opportunity to build trust and rapport between your project team and external researcher.

Conducting the research

While an external researcher or agency can help plan and conduct a study for you, don’t expect them to be experts on your product and company culture. It’s like hiring an architect to build your house or a designer to furnish a room: you need to provide guidance early and often, or the end result may not be what you expected. Here are some things to consider to make the engagement more effective.

Be available

A good research contractor will ask lots of questions to make sure they’re understanding important details, such as your priorities and research questions, and to collect feedback on the study plan and research report. While it can sometimes feel more efficient to handle most of these types of questions over email, email can often result in misinterpretations. Sometimes it’s faster to speak to questions that require lots of detail and context rather than type a response. Consider establishing weekly remote or in-person status checks to discuss open questions and action items.

Be present

If moderated sessions are part of the research, plan on observing as many of these as possible. While you should expect the research agency to provide you with a final report, you should not expect them to know which insights are most impactful to your project. They don’t have the background from internal meetings, prior decisions, and discussions about future product directions that an internal stakeholder has. Many of the most insightful findings come from conversations that happen immediately after a session with a research participant. The research moderator and client contact can share their perspectives on what the participant just did and said during their session.

Be proactive

Before the researcher drafts their final report, set up a meeting between them and your internal stakeholders to brainstorm over the main research findings. This will help the researcher identify more insights and opportunities that reflect internal priorities and limitations. It also helps stakeholders build trust in the research findings.

In other words, it’s a waste of everyone’s time if a final report is delivered and basic questions arise from stakeholders that could have been addressed by involving them earlier. This is also a good opportunity to get feedback from stakeholders’ stakeholders, who may have a different (but just as important) influence on the project’s success.

Be reasonable

Don’t treat an external contractor like a PowerPoint jockey. Changing fonts and colors to your liking is fine, but only to a point. Your researcher should provide you with a polished report free from errors and in a professional format, but minute changes are not a constructive use of time and money. Focus more on decisions and recommendations than the aesthetics of the deliverables. You can prevent this kind of situation by providing any templates you want used as part of your initial brain dump, so the findings don’t have to be reworked into the “right” format for presenting.

When it’s all said and done

Just because the project has been completed and all the agreed deliverables have been received doesn’t mean you should close the door on any additional learning opportunities for both the client and researcher. At the end of the project, identify what worked, and find ways to increase buy-in for their recommendations.

Tell them what happened

Try to identify a check-in point in the future (say, two weeks or two months later) to let the researcher know what happened because of the research: what decisions were made, what problems were fixed, or other design changes. While you shouldn’t expect your researcher to be perpetually available, if you encounter problems with buy-in, they might be able to provide a quick recommendation.

Maintain a relationship

While it’s typical for vendors to treat their clients to dinner or drinks, don’t be afraid to invite your external researcher to your own happy hour or event with your staff. The success of your next project may rely on getting the right researcher, and you’ll want them to be excited to make themselves available to help you when you need them again.

Going Offline

Tue, 04/10/2018 - 07:10

A note from the editors: We’re excited to share Chapter 1 of Going Offline by Jeremy Keith, available this month from A Book Apart.

Businesses are built on the web. Without the web, Twitter couldn’t exist. Facebook couldn’t exist. And not just businesses—Wikipedia couldn’t exist. Your favorite blog couldn’t exist without the web. The web doesn’t favor any one kind of use. It’s been deliberately designed to accommodate many and varied activities.

Just as many wonderful things are built upon the web, the web itself is built upon the internet. Though we often use the terms web and internet interchangeably, the World Wide Web is just one application that uses the internet as its plumbing. Email, for instance, is another.

Like the web, the internet was designed to allow all kinds of services to be built on top of it. The internet is a network of networks, all of them agreeing to use the same protocols to shuttle packets of data around. Those packets are transmitted down fiber-optic cables across the ocean floor, bounced around with Wi-Fi or radio signals, or beamed from satellites in freakin’ space.

As long as these networks are working, the web is working. But sometimes networks go bad. Mobile networks have a tendency to get flaky once you’re on a train or in other situations where you’re, y’know, mobile. Wi-Fi networks work fine until you try to use one in a hotel room (their natural enemy).

When the network fails, the web fails. That’s just the way it is, and there’s nothing we can do about it. Until now.

Weaving the Web

For as long as I can remember, the World Wide Web has had an inferiority complex. Back in the ’90s, it was outshone by CD-ROMs (ask your parents). They had video, audio, and a richness that the web couldn’t match. But they lacked links—you couldn’t link from something in one CD-ROM to something in another CD-ROM. They faded away. The web grew.

Later, the web technologies of HTML, CSS, and JavaScript were found wanting when compared to the whiz-bang beauty of Flash. Again, Flash movies were much richer than regular web pages. But they were also black boxes. The Flash format seemed superior to the open standards of the web, and yet the very openness of those standards made the web an unstoppable force. Flash—under the control of just one company—faded away. The web grew.

These days it’s native apps that make the web look like an underachiever. Like Flash, they’re under the control of individual companies instead of being a shared resource like the web. Like Flash, they demonstrate all sorts of capabilities that the web lacks, such as access to device APIs and, crucially, the ability to work even when there’s no network connection.

The history of the web starts to sound like an endless retelling of the fable of the tortoise and the hare. CD-ROMs, Flash, and native apps outshine the web in the short term, but the web always seems to win the day somehow.

Each of those technologies proved very useful for the expansion of web standards. In a way, Flash was like the R&D department for HTML, CSS, and JavaScript. Smooth animations, embedded video, and other great features first saw the light of day in Flash. Having shown their usefulness, they later appeared in web standards. The same thing is happening with native apps. Access to device features like the camera and the accelerometer is beginning to show up in web browsers. Most exciting of all, we’re finally getting the ability for a website to continue working even when the network isn’t available.

Service Workers

The technology that makes this bewitching offline sorcery possible is a browser feature called service workers. You might have heard of them. You might have heard that they’re something to do with JavaScript, and technically they are…but conceptually they’re very different from other kinds of scripts.

Usually when you’re writing some JavaScript that’s going to run in a web browser, it’s all related to the document currently being displayed in the browser window. You might want to listen out for events triggered by the user interacting with the document (clicks, swipes, hovers, etc.). You might want to update the contents of the document: add some markup here, remove some text there, manipulate some values somewhere else. The sky’s the limit. And it’s all made possible thanks to the Document Object Model (DOM), a representation of what the browser is rendering. Through the combination of the DOM and JavaScript—DOM scripting, if you will—you can conjure up all sorts of wonderful magic.

Well, a service worker can’t do any of that. It’s still a script, and it’s still written in the same language—JavaScript—but it has no access to the DOM. Without any DOM scripting capabilities, this kind of script might seem useless at first glance. But there’s an advantage to having a script that never needs to interact with the current document. Adding, editing, and deleting parts of the DOM can be hard work for the browser. If you’re not careful, things can get very sluggish very quickly. But if there’s a whole class of script that isn’t allowed access to the DOM, then the browser can happily run that script in parallel to its regular rendering activities, safe in the knowledge that it’s an entirely separate process.

The first kind of script to come with this constraint was called a web worker. In a web worker, you could write some JavaScript to do number-crunching calculations without slowing down whatever else was being displayed in the browser window. Spin up a web worker to generate larger and larger prime numbers, for instance, and it will merrily do so in the background.

A service worker is like a web worker with extra powers. It still can’t access the DOM, but it does have access to the fundamental inner workings of the browser.

Browsers and servers

Let’s take a step back and think about how the World Wide Web works. It’s a beautiful ballet of client and server. The client is usually a web browser—or, to use the parlance of web standards, a user agent: a piece of software that acts on behalf of the user.

The user wants to accomplish a task or find some information. The URL is the key technology that will empower the user in their quest. They will either type a URL into their web browser or follow a link to get there. This is the point at which the web browser—or client—makes a request to a web server. Before the request can reach the server, it must traverse the internet of undersea cables, radio towers, and even the occasional satellite (Fig 1.1).

Fig 1.1: Browsers send URL requests to servers, and servers respond by sending files.

Imagine if you could leave instructions for the web browser that would be executed before the request is even sent. That’s exactly what service workers allow you to do (Fig 1.2).

Fig 1.2: Service workers let you give the web browser instructions that run before a request is ever sent to the server.

Usually when we write JavaScript, the code is executed after it’s been downloaded from a server. With service workers, we can write a script that’s executed by the browser before anything else happens. We can tell the browser, “If the user asks you to retrieve a URL for this particular website, run this corresponding bit of JavaScript first.” That explains why service workers don’t have access to the Document Object Model; when the service worker is run, there’s no document yet.
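Handing that script to the browser takes one small piece of regular page JavaScript. Here’s a minimal sketch, with serviceworker.js as an assumed filename:

if ("serviceWorker" in navigator) {
  // Ask the browser to install this site's service worker script
  navigator.serviceWorker.register("/serviceworker.js");
}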

Getting your head around service workers

A service worker is like a cookie. Cookies are downloaded from a web server and installed in a browser. You can go to your browser’s preferences and see all the cookies that have been installed by sites you’ve visited. Cookies are very small and very simple little text files. A website can set a cookie, read a cookie, and update a cookie. A service worker script is much more powerful. It contains a set of instructions that the browser will consult before making any requests to the site that originally installed the service worker.

A service worker is like a virus. When you visit a website, a service worker is surreptitiously installed in the background. Afterwards, whenever you make a request to that website, your request will be intercepted by the service worker first. Your computer or phone becomes the home for service workers lurking in wait, ready to perform man-in-the-middle attacks. Don’t panic. A service worker can only handle requests for the site that originally installed that service worker. When you write a service worker, you can only use it to perform man-in-the-middle attacks on your own website.

A service worker is like a toolbox. By itself, a service worker can’t do much. But it allows you to access some very powerful browser features, like the Fetch API, the Cache API, and even notifications. API stands for Application Programming Interface, which sounds very fancy but really just means a tool that you can program however you want. You can write a set of instructions in your service worker to take advantage of these tools. Most of your instructions will be written as “when this happens, reach for this tool.” If, for instance, the network connection fails, you can instruct the service worker to retrieve a backup file using the Cache API.

A service worker is like a duck-billed platypus. The platypus not only lactates, but also lays eggs. It’s the only mammal capable of making its own custard. A service worker can also…Actually, hang on, a service worker is nothing like a duck-billed platypus! Sorry about that. But a service worker is somewhat like a cookie, and somewhat like a virus, and somewhat like a toolbox.
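To make the toolbox analogy a little more concrete, here’s a minimal sketch of the “when this happens, reach for this tool” pattern, with /offline.html as an assumed backup file: if the network request fails, the service worker reaches for the Cache API instead.

self.addEventListener("fetch", (event) => {
  event.respondWith(
    // Try the network first; if the request fails, serve a cached backup
    fetch(event.request).catch(() => caches.match("/offline.html"))
  );
});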

Safety First

Service workers are powerful. Once a service worker has been installed on your machine, it lies in wait, like a patient spider waiting to feel the vibrations of a particular thread.

Imagine if a malicious ne’er-do-well wanted to wreak havoc by impersonating a website in order to install a service worker. They could write instructions in the service worker to prevent the website ever appearing in that browser again. Or they could write instructions to swap out the content displayed under that site’s domain. That’s why it’s so important to make sure that a service worker really belongs to the site it claims to come from. As the specification for service workers puts it, they “create the opportunity for a bad actor to turn a bad day into a bad eternity.”1

To prevent this calamity, service workers require you to adhere to two policies:

  • Same origin.
  • HTTPS only.

The same-origin policy means that a website at example.com can only install a service worker script that lives at example.com. That means you can’t put your service worker script on a different domain. You can use a separate domain for hosting your images and other assets, but not your service worker script: that domain wouldn’t match the domain of the site installing the service worker.

The HTTPS-only policy means that https://example.com can install a service worker, but http://example.com can’t. A site running under HTTPS (the S stands for Secure) instead of HTTP is much harder to spoof. Without HTTPS, the communication between a browser and a server could be intercepted and altered. If you’re sitting in a coffee shop with an open Wi-Fi network, there’s no guarantee that anything you’re reading in a browser from http://newswebsite.com hasn’t been tampered with. But if you’re reading something from https://newswebsite.com, you can be pretty sure you’re getting what you asked for.

Securing your site

Enabling HTTPS on your site opens up a whole series of secure-only browser features—like the JavaScript APIs for geolocation, payments, notifications, and service workers. Even if you never plan to add a service worker to your site, it’s still a good idea to switch to HTTPS. A secure connection makes it trickier for snoopers to see who’s visiting which websites. Your website might not contain particularly sensitive information, but when someone visits your site, that’s between you and your visitor. Enabling HTTPS won’t stop unethical surveillance by the NSA, but it makes the surveillance slightly more difficult.

There’s one exception. You can use a service worker on a site being served from localhost, a web server on your own computer, not part of the web. That means you can play around with service workers without having to deploy your code to a live site every time you want to test something.

If you’re using a Mac, you can spin up a local server from the command line. Let’s say your website is in a folder called mysite. Drag that folder to the Terminal app, or open up the Terminal app and navigate to that folder using the cd command to change directory. Then type:

python -m SimpleHTTPServer 8000

This starts a web server from the mysite folder, served over port 8000. Now you can visit localhost:8000 in a web browser on the same computer, which means you can add a service worker to the website you’ve got inside the mysite folder: http://localhost:8000.
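That command assumes Python 2. If your machine has Python 3 instead, the equivalent bundled module is named http.server:

python3 -m http.server 8000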

But if you then put the site live at, say, http://mysite.com, the service worker won’t run. You’ll need to serve the site from https://mysite.com instead. To do that, you need a secure certificate for your server.

There was a time when certificates cost money and were difficult to install. Now, thanks to Let’s Encrypt and its Certbot client, certificates are free. But I’m not going to lie: it still feels a bit intimidating to install the certificate. There’s something about logging on to a server and typing commands that makes me simultaneously feel like a l33t hacker, and also like I’m going to break everything. Fortunately, the process of using Certbot is relatively jargon-free (Fig 1.3).

Fig 1.3: The website of EFF’s Certbot.

On the Certbot website, you choose which kind of web server and operating system your site is running on. From there you’ll be guided step-by-step through the commands you need to type in the command line of your web server’s computer, which means you’ll need to have SSH access to that machine. If you’re on shared hosting, that might not be possible. In that case, check to see if your hosting provider offers secure certificates. If not, please pester them to do so, or switch to a hosting provider that can serve your site over HTTPS.

Another option is to stay with your current hosting provider, but use a service like Cloudflare to act as a “front” for your website. These services can serve your website’s files from data centers around the world, making sure that the physical distance between your site’s visitors and your site’s files is nice and short. And while they’re at it, these services can make sure all of those files are served over HTTPS.

Once you’re set up with HTTPS, you’re ready to write a service worker script. It’s time to open up your favorite text editor. You’re about to turbocharge your website!


Planning for Everything

Tue, 04/03/2018 - 07:22

A note from the editors: We’re pleased to share an excerpt from Chapter 7 (“Reflecting”) of Planning for Everything: The Design of Paths and Goals by Peter Morville, available now from Semantic Studios.

Once upon a time, there was a happy family. Every night at dinner, mom, dad, and two girls who still believed in Santa played a game. The rules are simple. Tell three stories about your day, two true, one false, and see who can detect the fib. Today I saw a lady walk a rabbit on a leash. Today I found a tooth in the kitchen. Today I forgot my underwear. The family ate, laughed, and learned together, and lied happily ever after.

There’s truth in the tale. It’s mostly not false. We did play this game, for years, and it was fun. We loved to stun and bewilder each other, yet the big surprise was insight. In reflecting on my day, I was often amazed by oddities already lost. If not for the intentional search for anomaly, I’d have erased these standard deviations from memory. The misfits we find, we rarely recall.

We observe a tiny bit of reality. We understand and remember even less. Unlike most machines, our memory is selective and purposeful. Goals and beliefs define what we notice and store.  To mental maps we add places we predict we’ll need to visit later. It’s not about the past. The intent of memory is to plan.

In reflecting we look back to go forward. We search the past for truths and insights to shift the future. I’m not speaking of nostalgia, though we are all borne back ceaselessly and want what we think we had. My aim is redirection. In reflecting on inconvenient truths, I hope to change not only paths but goals.

Figure 7-1. Reflection changes direction.

We all have times for reflection. Alone in the shower or on a walk, we retrace the steps of a day. Together at lunch for work or over family dinner, we share memories and missteps. Some of us reflect more rigorously than others. Given time, it shows.

People who as a matter of habit extract underlying principles or rules from new experiences are more successful learners than those who take their experiences at face value, failing to infer lessons that can be applied later in similar situations.1

In Agile, the sprint retrospective offers a collaborative context for reflection. Every two to four weeks, at the end of a sprint, the team meets for an hour or so to look back. Focal questions include 1) what went well? 2) what went wrong? 3) how might we improve? In reflecting on the plan, execution, and results, the team explores surprises, conflicts, roadblocks, and lessons.

In addition to conventional analysis, a retrospective creates an opportunity for double loop learning. To edit planned actions based on feedback is normal, but revising assumptions, goals, values, methods, or metrics may effect change more profound. A team able to expand the frame may hack their habits, beliefs, and environment to be better prepared to succeed and learn.

Figure 7-2. Double loop learning.

Retrospectives allow for constructive feedback to drive team learning and bonding, but that’s what makes them hard. We may lack courage to be honest, and often people can’t handle the truth. Our filters are as powerful as they are idiosyncratic, which means we’re all blind men touching a tortoise, or is it a tree or an elephant? It hurts to reconcile different perceptions of reality, so all too often we simply shut up and shut down.

Search for Truth

To seek truth together requires a culture of humility and respect. We are all deeply flawed and valuable. We must all speak and listen. Ideas we don’t implement may lead to those we do. Errors we find aren’t about fault, since our intent is a future fix. And counterfactuals merit no more confidence than predictions, as we never know what would have happened if.

Reflection is more fruitful if we know our own minds, but that is harder than we think. An imperfect ability to predict actions of sentient beings is a product of evolution. It’s quick and dirty yet better than nothing in the context of survival in a jungle or a tribe. Intriguingly, cognitive psychology and neuroscience have shown we use the same theory of mind to study ourselves.

Self-awareness is just this same mind reading ability, turned around and employed on our own mind, with all the fallibility, speculation, and lack of direct evidence that bedevils mind reading as a tool for guessing at the thought and behavior of others.2

Empirical science tells us introspection and consciousness are unreliable bases for self-knowledge. We know this is true but ignore it all the time. I’ll do an hour of homework a day, not leave it to the end of vacation. If we adopt a dog, I’ll walk it. If I buy a house, I’ll be happy. I’ll only have one drink. We are more than we think, as Walt Whitman wrote in Song of Myself.

Do I contradict myself?
Very well then I contradict myself
(I am large, I contain multitudes.)

Our best laid plans go awry because complexity exists within as well as without. Our chaotic, intertwingled bodyminds are ecosystems inside ecosystems. No wonder it’s hard to predict. Still, it’s wise to seek self truth, or at least that’s what I think.

Upon reflection, my mirror neurons tell me I’m a shy introvert who loves reading, hiking, and planning. I avoid conflict when possible but do not lack courage. Once I set a goal, I may focus and filter relentlessly. I embrace habit and eschew novelty. If I fail, I tend to pivot rather than persist. Who I am is changing. I believe it’s speeding up. None of these traits is bad or good, as all things are double-edged. But mindful self-awareness holds value. The more I notice the truth, the better my plans become.

Years ago, I planned a family vacation on St. Thomas. I kept it simple: a place near a beach where we could snorkel. It was a wonderful, relaxing escape. But over time a different message made it past my filters. Our girls had been bored. I dismissed it at first. I’d planned a shared experience I recalled fondly. It hurt to hear otherwise. But at last I did listen and learn. They longed not for escape but adventure. Thus our trip to Belize. I found planning and executing stressful due to risk, but I have no regrets. We shared a joyful adventure we’ll never forget.

Way back when we were juggling toddlers, we accidentally threw out the mail. Bills went unpaid, notices came, we swore we’d do better, then lost mail again. One day I got home from work to find an indoor mailbox system made with paint cans. My wife Susan built it in a day. We’ve used it to sort and save mail for 15 years. It’s an epic life hack I’d never have done. My ability to focus means I filter things out. I ignore problems and miss fixes. I’m not sure I’ll change. Perhaps it merits a prayer.

God grant me the serenity
to accept the things I cannot change,
courage to change the things I can,
and wisdom to know the difference.

We also seek wisdom in others. This explains our fascination with the statistics of regret. End of life wishes often include:

I wish I’d taken more risks, touched more lives, stood up to bullies, been a better spouse or parent or child. I should have followed my dreams, worked and worried less, listened more. If only I’d taken better care of myself, chosen meaningful work, had the courage to express my feelings, stayed in touch. I wish I’d let myself be happy.

While they do yield wisdom, last wishes are hard to hear. We are skeptics for good reason. Memory prepares for the future, and that too is the aim of regret. It’s unwise to trust the clarity of rose-colored glasses. The memory of pain and anxiety fades in time, but our desire for integrity grows. When time is short, regret is a way to rectify. I’ve learned my lesson. I’m passing it on to you. I’m a better person now. Don’t make my mistakes. It’s easy to say “I wish I’d stood up to bullies,” but hard to do at the time. There’s wisdom in last wishes but bias and self justification too. Confabulation means we edit memories with no intention to deceive. The truth is elusive. Reflection is hard.

Footnotes
  • 1. Make It Stick by Peter Brown et al. (2014), p. 133.
  • 2. Why You Don’t Know Your Own Mind by Alex Rosenberg (2016).

Meeting Design

Thu, 03/29/2018 - 07:13

A note from the editors: We’re pleased to share an excerpt from Chapter 2 (“The Design Constraint of All Meetings”) of Meeting Design: For Managers, Makers, and Everyone by Kevin Hoffman, available now from Two Waves.

Jane is a “do it right, or I’ll do it myself” kind of person. She leads marketing, customer service, and information technology teams for a small airline that operates between islands of the Caribbean. Her work relies heavily on “reservation management system” (RMS) software, which is due for an upgrade. She convenes a Monday morning meeting to discuss an upgrade with the leadership from each of her three teams. The goal of this meeting is to identify key points for a proposal to upgrade the outdated software.

Jane begins by reviewing the new software’s advantages. She then goes around the room, engaging each team’s representatives in an open discussion. They capture how this software should alleviate current pain points; someone from marketing takes notes on a laptop, as is their tradition. The meeting lasts nearly three hours, which is a lot longer than expected, because they frequently loop back to earlier topics as people forget what was said. It concludes with a single follow-up action item: the director of each department will provide her with two lists for the upgrade proposal. First, a list of cost savings, and second, a list of timesaving outcomes. Each list is due back to Jane by the end of the week.

The first team’s list is done early but not organized clearly. The second list provides far too much detail to absorb quickly, so Jane puts their work aside to summarize later. By the end of the following Monday, there’s no list from the third team—it turns out they thought she meant the following Friday. Out of frustration, Jane calls another meeting to address the problems with the work she received, which range from “not quite right” to “not done at all.” Based on this pace, her upgrade proposal is going to be finished two weeks later than planned.

What went wrong? The plan seemed perfectly clear to Jane, but each team remembered their marching orders differently, if they remembered them at all. Jane could have a meeting experience that helps her team form more accurate memories. But for that meeting to happen, she needs to understand where those memories are formed in her team and how to form them more clearly.

Better Meetings Make Better Memories

If people are the one ingredient that all meetings have in common, there is one design constraint they all bring: their capacity to remember the discussion. That capacity lives in the human brain.

The brain shapes everything believed to be true about the world. On the one hand, it is a powerful computer that can be trained to memorize thousands of numbers in random sequences.1 But brains are also easily deceived, swayed by illusions and pre-existing biases. Those things show up in meetings as your instincts. Instincts vary greatly based on differences in the amount and type of previous experience. The paradox of ability and deceive-ability creates a weird mix of unpredictable behavior in meetings. It’s no wonder that they feel awkward.

What is known about how memory works in the brain is constantly evolving. To cover that in even a little detail is beyond the scope of this book, so this chapter is not meant to be an exhaustive look at human memory. However, there are a few interesting theories that will help you be more strategic about how you use meetings to support forming actionable memories.

Your Memory in Meetings

The brain’s job in meetings is to accept inputs (things we see, hear, and touch) and store them as memories, and then to apply those absorbed ideas in discussion (things we say and make). See Figure 2.1.

FIGURE 2.1 The human brain has a diverse set of inputs that contribute to your memories.

Neuroscience has identified four theoretical stages of memory, which include sensory, working, intermediate, and long-term. Understanding working memory and intermediate memory is relevant to meetings, because these stages represent the most potential to turn thought into action.

Working Memory

You may be familiar with the term short-term memory. Depending on the research you read, the term working memory has replaced short-term memory in the vocabulary of neuro- and cognitive science. I’ll use the term working memory here. Designing meeting experiences to support the working memory of attendees will improve meetings.

Working memory collects around 30 seconds of the things you’ve recently heard and seen. Its storage capacity is limited, and that capacity varies among individuals. This means that not everyone in a meeting has the same capacity to store things in their working memory. You might assume that because you remember an idea mentioned within the last few minutes of a meeting, everyone else probably will as well. That is not necessarily the case.

You can accommodate variations in people’s ability to use working memory by establishing a reasonable pace of information. The pace of information is directly connected to how well aligned attendees’ working memories become. To make sure that everyone is on the same page, you should set a pace that is deliberate, consistent, and slower than your normal pace of thought.

Sometimes, concepts are presented more quickly than people can remember them, simply because the presenter is already familiar with the details. Breaking information into evenly sized, consumable chunks is what separates a great presenter from an average (or bad) one. In a meeting, slower, more broken-up pacing allows a group of people to engage in constructive and critical thinking more effectively. It gets the same ideas in everyone’s head. (For a more detailed dive into the pace of content in meetings, see Chapter 3, “Build Agendas Out of Ideas, People, and Time.”)

Theoretical models that explain working memory are complex, as seen in Figure 2.2.2 This model presumes two distinct processes taking place in your brain to make meaning out of what you see, what you hear, and how much you can keep in your mind. Assuming that your brain creates working memories from what you see and what you hear in different ways, combining listening and seeing in meetings becomes more essential to getting value out of that time.

FIGURE 2.2 Alan Baddeley and Graham Hitch’s Model of Working Memory provides context for the interplay between what we see and hear in meetings.

In a meeting, absorbing something seen and absorbing something heard require different parts of the brain. Those two parts can work together to improve retention (the quantity and accuracy of information in our brain) or compete to reduce retention. Nowhere is this better illustrated than in the research of Richard E. Mayer, who has found that “people learn better from words and pictures than from words alone, but not all graphics are created equal(ly).”3 When what you hear and what you see compete, it creates cognitive dissonance. Listening to someone speaking while reading the same words on a screen actually decreases the ability to commit something to memory. People who are subjected to presentation slides filled with speaking points face this challenge. But listening to someone while looking at a complementary photograph or drawing increases the likelihood of committing something to working memory.

Intermediate-Term Memory

Your memory should transform ideas absorbed in meetings into taking an action of some kind afterward. Triggering intermediate-term memories is the secret to making that happen. Intermediate-term memories last between two and three hours, and are characterized by processes taking place in the brain called biochemical translation and transcription. Translation can be considered as a process by which the brain makes new meaning. Transcription is where that meaning is replicated (see Figures 2.3a and 2.3b). In both processes, the cells in your brain are creating new proteins using existing ones: making some “new stuff” from “existing stuff.”4

FIGURE 2.3 Biochemical translation (a) and transcription (b), loosely in the form of understanding a hat.

Here’s an example: instead of having someone take notes on a laptop, imagine if Jane sketched a diagram that helped her make sense out of the discussion, using what was stored in her working memory. The creation of that diagram is an act of translation, and theoretically Jane should be able to recall the primary details of that diagram easily for two to three hours, because it’s moving into her intermediate memory.

If Jane made copies of that diagram, and the diagram was so compelling that those copies ended up on everyone’s walls around the office, that would be transcription. Transcription is the (theoretical) process that leads us into longer-term stages of memory. Transcription connects understanding something within a meeting to acting on it later, well after the meeting has ended.

Most of the time, simple meetings last from 10 minutes to an hour, while workshops and working sessions can last anywhere from 90 minutes to a few days. Consider the duration of various stages of memory against different meeting lengths (see Figure 2.4). A well-designed meeting experience moves the right information from working to intermediate memory. Ideas generated and decisions made should materialize into actions that take place outside the meeting. Any session without breaks that lasts longer than 90 minutes makes it harder for your memories to move thought into action.

FIGURE 2.4 The time duration of common meetings against the varying durations for different stages of memory. Sessions longer than 90 minutes can impede memories from doing their job.

Jane’s meeting with her three teams lasted nearly three hours. That length of time spent on a single task or topic taxes people’s ability to form intermediate (actionable) memories. Action items become muddled, which leads to liberal interpretations of what each team is supposed to accomplish.

But just getting agreement about a shared task in the first place is a difficult design challenge. All stages of memory are happening simultaneously, with multiple translation and transcription processes being applied to different sounds and sights. A fertile meeting environment that accommodates multiple modes of input allows memories to form amidst the cognitive chaos.

Brain Input Modes

During a meeting, each attendee’s brain is either in a state of input or output. By choosing to assemble in a group, the assumption is implicit that information needs to be moved out of one place, or one brain, into another (or several others).

Some meetings, like presentations, move information in one direction. The goal is for a presenting party to move information from their brain to the brains in the audience. When you are presenting an idea, your brain is in output mode. You use words and visuals to give form to ideas in the hopes that they will become memories in your audience. Your audience’s brains are receiving information; if the presentation is well designed and well executed, their ears and their eyes will do a decent job of absorbing that information accurately.

In a live presentation, the output/input processes are happening synchronously. This is not like reading a written report or an email message, where the author (presenting party) has output information in absence of an audience, and the audience is absorbing information in absence of the author’s presence; that is moving information asynchronously.

Footnotes
  • 1. Joshua Foer, Moonwalking with Einstein (New York: Penguin Books, 2011).
  • 2. A. D. Baddeley and G. Hitch, “Working Memory,” in The Psychology of Learning and Motivation: Advances in Research and Theory, ed. G. H. Bower (New York: Academic Press, 1974), 8:47–89.
  • 3. Richard E. Mayer, “Principles for Multimedia Learning with Richard E. Mayer,” Harvard Initiative for Learning & Teaching (blog), July 8, 2014, http://hilt.harvard.edu/blog/principles-multimedia-learning-richard-e-mayer
  • 4. M. A. Sutton and T. J. Carew, “Behavioral, Cellular, and Molecular Analysis of Memory in Aplysia I: Intermediate-Term Memory,” Integrative and Comparative Biology 42, no. 4 (2002): 725–735.

Designing for Research

Tue, 03/20/2018 - 07:09

If you’ve spent enough time developing for the web, this piece of feedback has landed in your inbox since time immemorial:

“This photo looks blurry. Can we replace it with a better version?”

Every time this feedback reaches me, I’m inclined to question it: “What about the photo looks bad to you, and can you tell me why?”

That’s a somewhat unfair question to counter with. The complaint is rooted in a subjective perception of image quality, which in turn is influenced by many factors. Some are technical, such as the export quality of the image or the compression method (often lossy, as is the case with JPEG-encoded photos). Others are more intuitive or perceptual, such as content of the image and how compression artifacts mingle within. Perhaps even performance plays a role we’re not entirely aware of.

Fielding this kind of feedback for many years eventually led me to design and develop an image quality survey, which was my first go at building a research project on the web. I started with twenty-five photos shot by a professional photographer. With them, I generated a large pool of images at various quality levels and sizes. Images were served randomly from this pool to users who were asked to rate what they thought about their quality.

Results from the first round were interesting, but not entirely clear: users seemed to have a tendency to overestimate the actual quality of images, and poor performance appeared to have a negative impact on perceptions of image quality, but neither observation could be stated conclusively. A number of UX and technical issues made it necessary to implement important improvements and conduct a second round of research. Rather than spin my wheels trying to extract conclusions from the first round’s results, I decided it would be best to improve the survey as much as possible and gather better data in that second round. This article chronicles how I first built the survey, and then how I subsequently listened to user feedback to improve it.

Defining the research

Of the subjects within web performance, image optimization is especially vast. There’s a wide array of formats, encodings, and optimization tools, all of which are designed to make images small enough for web use while maintaining reasonable visual quality. Striking the balance between speed and quality is really what image optimization is all about.

This balance between performance and visual quality prompted me to consider how people perceive image quality, lossy image quality in particular. Eventually, this train of thought led to a series of questions spurring the design and development of an image quality perception survey. The idea of the survey is that users provide subjective assessments of quality. This is done by asking participants to rate images without an objective reference for what’s “perfect.” This is, after all, how people view images in situ.

A word on surveys

Any time we want to quantify user behavior, it’s inevitable that a survey is at least considered, if not ultimately chosen to gather data from a group of people. After all, surveys are perfect when your goal is to get something measurable. However, the survey is a seductively dangerous tool, as Erika Hall cautions. They’re easy to make and conduct, and are routinely abused in their dissemination. They’re not great tools for assessing past behavior. They’re just as bad (if not worse) at predicting future behavior. For example, the 1–10 scale often employed by customer satisfaction surveys doesn’t really say much of anything about how satisfied customers actually are or how likely they’ll be to buy a product in the future.

The unfortunate reality, however, is that short of lording over hundreds of participants in person, the survey is the only truly practical tool I have to measure how people perceive image quality, as well as whether (and potentially how) performance metrics correlate to those perceptions. When I designed the survey, I kept to the following guidelines:

  • Don’t ask participants about anything other than their perceptions in the moment. Once a participant has moved on, their recollection of what they just did diminishes rapidly.
  • Don’t assume participants know everything you do. Guide them with relevant copy that succinctly describes what you expect of them.
  • Don’t ask participants to provide assessments with coarse inputs. Use an input type that permits them to finely assess image quality on a scale congruent with the lossy image quality encoding range.

All we can do going forward is acknowledge we’re interpreting the data we gather under the assumption that participants are being truthful and understand the task given to them. Even if the perception metrics are discarded from the data, there are still some objective performance metrics gathered that could tell a compelling story. From here, it’s a matter of defining the questions that will drive the research.

Asking the right questions

In research, you’re seeking answers to questions. In the case of this particular effort, I wanted answers to these questions:

  • How accurate are people’s perceptions of lossy image quality in relation to actual quality?
  • Do people perceive the quality of JPEG images differently than WebP images?
  • Does performance play a role in all of this?

These are important questions. To me, however, answering the last question was the primary goal. But the road to answers was (and continues to be) a complex journey of design and development choices. Let’s start out by covering some of the tech used to gather information from survey participants.

Sniffing out device and browser characteristics

When measuring how people perceive image quality, devices must be considered. After all, any given device’s screen will be more or less capable than others. Thankfully, HTML features such as srcset and picture are highly appropriate for delivering the best image for any given screen. This is vital because one’s perception of image quality can be adversely affected if an image is ill-fit for a device’s screen. Conversely, performance can be negatively impacted if an exceedingly high-quality (and therefore behemoth) image is sent to a device with a small screen. When sniffing out potential relationships between performance and perceived quality, these are factors that deserve consideration.

With regard to browser characteristics and conditions, JavaScript gives us plenty of tools for identifying important aspects of a user’s device. For instance, the currentSrc property reveals which image is being shown from an array of responsive images. In the absence of currentSrc, I can somewhat safely assume support for srcset or picture is lacking, and fall back to the img tag’s src value:

const surveyImage = document.querySelector(".survey-image");
let loadedImage = surveyImage.currentSrc || surveyImage.src;

Where screen capability is concerned, devicePixelRatio tells us the pixel density of a given device’s screen. In the absence of devicePixelRatio, you may safely assume a fallback value of 1:

let dpr = window.devicePixelRatio || 1;

devicePixelRatio enjoys excellent browser support. Those few browsers that don’t support it (i.e., IE 10 and under) are highly unlikely to be used on high density displays.

The stalwart getBoundingClientRect method retrieves the rendered width of an img element, while the HTMLImageElement interface’s complete property determines whether an image has finished loading. The latter of these two is important, because it may be preferable to discard individual results in situations where images haven’t loaded.
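Here’s a minimal sketch of both checks, assuming the same .survey-image element used throughout:

const surveyImage = document.querySelector(".survey-image");
// Rendered width of the image, in CSS pixels
const renderedWidth = surveyImage.getBoundingClientRect().width;
// true once the image has finished loading
const finishedLoading = surveyImage.complete;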

In cases where JavaScript isn’t available, we can’t collect any of this data. When we collect ratings from users who have JavaScript turned off (or are otherwise unable to run JavaScript), I have to accept there will be gaps in the data. The basic information we’re still able to collect does provide some value.

Sniffing for WebP support

As you’ll recall, one of the initial questions asked was how users perceived the quality of WebP images. The HTTP Accept request header advertises WebP support in browsers like Chrome. In such cases, the Accept header might look something like this:

Accept: image/webp,image/apng,image/*,*/*;q=0.8

As you can see, the WebP content type of image/webp is one of the advertised content types in the header content. In server-side code, you can check Accept for the image/webp substring. Here’s how that might look in Express back-end code:

const accept = req.get("Accept") || ""; // guard against a missing Accept header
const webp = accept.indexOf("image/webp") !== -1;

In this example, I’m recording the browser’s WebP support status to a JavaScript constant I can use later to modify image delivery. I could use the picture element with multiple sources and let the browser figure out which one to use based on the source element’s type attribute value, but sniffing the Accept header has clear advantages. First, it’s less markup. Second, the survey shouldn’t always choose a WebP source simply because the browser is capable of using it. For any given survey specimen, the app should randomly decide between a WebP or JPEG image. Not all participants using Chrome should rate only WebP images, but rather a random smattering of both formats, as in the sketch below.
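That server-side coin flip could be as simple as this sketch (the format constant is hypothetical, building on the webp constant above):

// Capable browsers get a random mix of formats; others always get JPEG
const format = webp && Math.random() < 0.5 ? "webp" : "jpeg";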

Recording performance API data

You’ll recall that one of the earlier questions I set out to answer was if performance impacts the perception of image quality. At this stage of the web platform’s development, there are several APIs that aid in the search for an answer:

  • Navigation Timing API (Level 2): This API tracks performance metrics for page loads. More than that, it gives insight into specific page loading phases, such as redirect, request and response time, DOM processing, and more.
  • Navigation Timing API (Level 1): Similar to Level 2 but with key differences. The timings exposed by Level 1 of the API lack the accuracy of those in Level 2. Furthermore, Level 1 metrics are expressed in Unix time. In the survey, data is only collected from Level 1 of the API if Level 2 is unsupported. It’s far from ideal (and also technically obsolete), but it does help fill in small gaps.
  • Resource Timing API: Similar to Navigation Timing, but Resource Timing gathers metrics on various loading phases of page resources rather than the page itself. Of all the APIs used in the survey, Resource Timing is used most, as it helps gather metrics on the loading of the image specimen the user rates.
  • Server Timing: In select browsers, this API is brought into the Navigation Timing Level 2 interface when a page request replies with a Server-Timing response header. This header is open-ended and can be populated with timings related to back-end processing phases. This was added to round two of the survey to quantify back-end processing time in general (see the example after this list).
  • Paint Timing API: Currently only in Chrome, this API reports two paint metrics: first paint and first contentful paint. Because a significant slice of users on the web use Chrome, we may be able to observe relationships between perceived image quality and paint metrics.
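To illustrate the fallback mentioned in the Level 1 entry above, here’s a sketch of how the collection logic could prefer Level 2 and degrade gracefully (my own illustration, not the survey’s exact code):

// Prefer Navigation Timing Level 2; fall back to the obsolete Level 1
// interface only when Level 2 entries are unavailable.
let navigationTiming = null;

if ("performance" in window) {
  const navEntries = "getEntriesByType" in performance
    ? performance.getEntriesByType("navigation")
    : [];

  if (navEntries.length > 0) {
    navigationTiming = navEntries[0]; // Level 2: high-resolution timestamps
  } else if ("timing" in performance) {
    navigationTiming = performance.timing; // Level 1: Unix-time milliseconds
  }
}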

Using these APIs, we can record performance metrics for most participants. Here’s a simplified example of how the survey uses the Resource Timing API to gather performance metrics for the loaded image specimen:

// Get information about the loaded image
const surveyImageElement = document.querySelector(".survey-image");
const fullImageUrl = surveyImageElement.currentSrc || surveyImageElement.src;
const imageUrlParts = fullImageUrl.split("/");
const imageFilename = imageUrlParts[imageUrlParts.length - 1];

// Check for performance API methods
if ("performance" in window && "getEntriesByType" in performance) {
  // Get entries from the Resource Timing API
  let resources = performance.getEntriesByType("resource");

  // Ensure resources were returned
  if (typeof resources === "object" && resources.length > 0) {
    resources.forEach((resource) => {
      // Check if the resource is for the loaded image
      if (resource.name.indexOf(imageFilename) !== -1) {
        // Access resource timings for the image here
      }
    });
  }
}

If the Resource Timing API is available, and the getEntriesByType method returns results, an object with timings is returned, looking something like this:

{
  connectEnd: 1156.5999999947962,
  connectStart: 1156.5999999947962,
  decodedBodySize: 11110,
  domainLookupEnd: 1156.5999999947962,
  domainLookupStart: 1156.5999999947962,
  duration: 638.1000000037602,
  encodedBodySize: 11110,
  entryType: "resource",
  fetchStart: 1156.5999999947962,
  initiatorType: "img",
  name: "https://imagesurvey.site/img-round-2/1-1024w-c2700e1f2c4f5e48f2f57d665b1323ae20806f62f39c1448490a76b1a662ce4a.webp",
  nextHopProtocol: "h2",
  redirectEnd: 0,
  redirectStart: 0,
  requestStart: 1171.6000000014901,
  responseEnd: 1794.6999999985565,
  responseStart: 1737.0999999984633,
  secureConnectionStart: 0,
  startTime: 1156.5999999947962,
  transferSize: 11227,
  workerStart: 0
}

I grab these metrics as participants rate images, and store them in a database. Down the road when I want to write queries and analyze the data I have, I can refer to the Processing Model for the Resource and Navigation Timing APIs. With SQL and data at my fingertips, I can measure the distinct phases outlined by the model and see if correlations exist.
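For example, a few of those phases can be derived from a resource entry with simple subtraction (a sketch; the property names come straight from the Resource Timing interface, and all values are in milliseconds):

// Derive a few distinct loading phases from a resource timing entry,
// following the Resource Timing processing model.
function extractPhases(resource) {
  return {
    dns: resource.domainLookupEnd - resource.domainLookupStart,
    tcp: resource.connectEnd - resource.connectStart,
    ttfb: resource.responseStart - resource.requestStart, // time to first byte
    download: resource.responseEnd - resource.responseStart,
    total: resource.duration,
  };
}

Applied to the sample entry above, this yields a time to first byte of roughly 565 ms and a download time of roughly 58 ms.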

Having discussed the technical underpinnings of how data can be collected from survey participants, let’s shift the focus to the survey’s design and user flows.

Designing the survey

Though surveys tend to have straightforward designs and user flows relative to other sites, we must remain cognizant of the user’s path and the impediments a user could face.

The entry point

When participants arrive at the home page, we want to be direct in our communication with them. The home page intro copy greets participants, gives them a succinct explanation of what to expect, and presents two navigation choices:

From here, participants either start the survey or read a privacy policy. If the user decides to take the survey, they’ll reach a page politely asking them what their professional occupation is and requesting them to disclose any eyesight conditions. The fields for these questions can be left blank, as some may not be comfortable disclosing this kind of information. Beyond this point, the survey begins in earnest.

The survey primer

Before the user begins rating images, they’re redirected to a primer page. This page describes what’s expected of participants, and explains how to rate images. While the survey is promoted on design and development outlets where readers regularly work with imagery on the web, a primer is still useful in getting everyone on the same page. The first paragraph of the page stresses that users are rating image quality, not image content. This is important. Absent any context, participants may indeed rate images for their content, which is not what we’re asking for. After this clarification, the concept of lossy image quality is demonstrated with the following diagram:

Lastly, the function of the rating input is explained. This could likely be inferred by most, but the explanatory copy helps remove any remaining ambiguity. Assuming your user knows everything you do is not necessarily wise. What seems obvious to one is not always so to another.

The image specimen page

This page is the main event and is where participants assess the quality of images shown to them. It contains two areas of focus: the image specimen and the input used to rate the image’s quality.

Let’s talk a bit out of order and discuss the input first. I mulled over a few options when it came to which input type to use: a select input with coarsely predefined choices, an input with a type of number, and so on. What seemed to make the most sense to me, however, was a slider input with a type of range.

A slider input is more intuitive than a text input, or a select element populated with various choices. Because we’re asking for a subjective assessment about something with such a large range of interpretation, a slider allows participants more granularity in their assessments and lends further accuracy to the data collected.

Now let’s talk about the image specimen and how it’s selected by the back-end code. I decided early on in the survey’s development that I wanted images that weren’t prominent in existing stock photo collections. I also wanted uncompressed sources so I wouldn’t be presenting participants with recompressed image specimens. To achieve this, I procured images from a local photographer. The twenty-five images I settled on were minimally processed raw images from the photographer’s camera. The result was a cohesive set of images that felt visually related to each other.

To properly gauge perception across the entire spectrum of quality settings, I needed to generate each image from the aforementioned sources at ninety-six different quality settings ranging from 5 to 100. To account for the varying widths and pixel densities of screens in the wild, each image also needed to be generated at four different widths for each quality setting: 1536, 1280, 1024, and 768 pixels, to be exact. Just the job srcset was made for!

To top it all off, images also needed to be encoded in both JPEG and WebP formats. As a result, the survey draws randomly from 768 images per specimen across the entire quality range, while also delivering the best image for the participant’s screen. This means that across the twenty-five image specimens participants evaluate, the survey draws from a pool of 19,200 images total.
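Producing a pool that size begs for automation. As a rough illustration of the batch work involved, here’s how such a build step might look using the sharp image library for Node.js (my assumption; the text doesn’t specify the tooling, and the file-naming scheme here is hypothetical):

const sharp = require("sharp");

const widths = [1536, 1280, 1024, 768];

// For one source image: 96 qualities × 4 widths × 2 formats = 768 variants
async function generateVariants(sourceFile, specimenId) {
  for (let quality = 5; quality <= 100; quality++) {
    for (const width of widths) {
      const resized = sharp(sourceFile).resize({ width });
      await resized.clone().jpeg({ quality })
        .toFile(`${specimenId}-${width}w-q${quality}.jpg`);
      await resized.clone().webp({ quality })
        .toFile(`${specimenId}-${width}w-q${quality}.webp`);
    }
  }
}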

With the conception and design of the survey covered, let’s segue into how the survey was improved by implementing user feedback into the second round.

Listening to feedback

When I launched round one of the survey, feedback came flooding in from designers, developers, accessibility advocates, and even researchers. While my intentions were good, I inevitably missed some important aspects, which made it necessary to conduct a second round. Iteration and refinement are critical to improving the usefulness of a design, and this survey was no exception. When we improve designs with user feedback, we take a project from average to something more memorable. Getting to that point means taking feedback in stride and addressing distinct, actionable items. In the case of the survey, incorporating feedback not only yielded a better user experience, it improved the integrity of the data collected.

Building a better slider input

Though the first round of the survey was serviceable, I ran into issues with the slider input. In round one of the survey, that input looked like this:

There were two recurring complaints regarding this specific implementation. The first was that participants felt they had to align their rating to one of the labels beneath the slider track. This was undesirable for the simple fact that the slider was chosen specifically to encourage participants to provide nuanced assessments.

The second complaint was that the submit button was disabled until the user interacted with the slider. This design choice was intended to prevent participants from simply clicking the submit button on every page without rating images. Unfortunately, this implementation was unintentionally hostile to the user and needed improvement, because it blocked users from rating images without a clear and obvious explanation as to why.

Fixing the problem with the labels meant redesigning the slider as it appeared in Figure 3. I removed the labels altogether to eliminate the temptation of users to align their answers to them. Additionally, I changed the slider background property to a gradient pattern, which further implied the granularity of the input.

The submit button issue was a matter of how users were prompted. In round one the submit button was visible, yet the disabled state wasn’t obvious enough to some. After consulting with a colleague, I found a solution for round two: in lieu of the submit button being initially visible, it’s hidden by some guide copy:

Once the user interacts with the slider and rates the image, a change event attached to the input fires, which hides the guide copy and replaces it with the submit button:

This solution is less ambiguous, and it funnels participants down the desired path. If someone with JavaScript disabled visits, the guide copy is never shown, and the submit button is immediately usable. This isn’t ideal, but it doesn’t shut out participants without JavaScript.
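A simplified sketch of that swap might look like this (the class names are stand-ins, not the survey’s actual markup):

// With JavaScript running, show the guide copy and hide the button;
// without JavaScript, the markup's defaults leave the button usable.
const slider = document.querySelector(".rating-slider");
const guideCopy = document.querySelector(".rating-guide");
const submitButton = document.querySelector(".rating-submit");

guideCopy.hidden = false;
submitButton.hidden = true;

slider.addEventListener("change", () => {
  // The participant has rated the image: swap in the submit button
  guideCopy.hidden = true;
  submitButton.hidden = false;
}, { once: true });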

Addressing scrolling woes

The survey page works especially well in portrait orientation. Participants can see all (or most) of the image without needing to scroll. In browser windows or mobile devices in landscape orientation, however, the survey image can be larger than the viewport:

Working with such limited vertical real estate is tricky, especially in this case where the slider needs to be fixed to the bottom of the screen (which addressed an earlier bit of user feedback from round one testing). After discussing the issue with colleagues, I decided that animated indicators in the corners of the page could signal to users that there’s more of the image to see.

When the user hits the bottom of the page, the scroll indicators disappear. Because animations may be jarring for certain users, a prefers-reduced-motion media query is used to turn off this (and all other) animations if the user has a stated preference for reduced motion. In the event JavaScript is disabled, the scrolling indicators are always hidden in portrait orientation where they’re less likely to be useful and always visible in landscape where they’re potentially needed the most.
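Here’s a minimal sketch of how the JavaScript side of that behavior could work (the .scroll-indicator class is assumed, and the reduced-motion check could equally live entirely in CSS):

// Honor the user's stated preference for reduced motion
const reducedMotion = window.matchMedia("(prefers-reduced-motion: reduce)").matches;
const indicators = document.querySelectorAll(".scroll-indicator");

indicators.forEach((indicator) => {
  if (reducedMotion) {
    indicator.style.animation = "none";
  }
});

window.addEventListener("scroll", () => {
  // Hide the indicators once the participant reaches the bottom
  const atBottom = window.innerHeight + window.scrollY >= document.body.offsetHeight;
  indicators.forEach((indicator) => {
    indicator.style.visibility = atBottom ? "hidden" : "visible";
  });
});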

Avoiding overscaling of image specimens

One issue that was brought to my attention by a coworker was how the survey image seemed to expand boundlessly with the viewport. On mobile devices this isn’t such a problem, but on large screens and even modestly sized high-density displays, images can be scaled excessively. Because the responsive img tag’s srcset attribute specifies a maximum image candidate of 1536w, an image can begin to overscale at layout sizes as “small” as 768 pixels wide on devices with a device pixel ratio of 2.

Some overscaling is inevitable and acceptable. However, when it’s excessive, compression artifacts in an image can become more pronounced. To address this, as of round two the survey image’s max-width is set to 1536px for standard displays. For devices with a device pixel ratio of 2 or higher, the survey image’s max-width is halved to 768px.

This minor (yet important) fix ensures that images aren’t scaled beyond a reasonable maximum. With a reasonably sized image asset in the viewport, participants will assess images close to or at a given image asset’s natural dimensions, particularly on large screens.

User feedback is valuable. These and other UX feedback items I incorporated improved both the function of the survey and the integrity of the collected data. All it took was sitting down with users and listening to them.

Wrapping up

As round two of the survey gets under way, I’m hoping the data gathered reveals something exciting about the relationship between performance and how people perceive image quality. If you want to be a part of the effort, please take the survey. When round two concludes, keep an eye out here for a summary of the results!

Thank you to those who gave their valuable time and feedback to make this article as good as it could possibly be: Aaron Gustafson, Jeffrey Zeldman, Brandon Gregory, Rachel Andrew, Bruce Hyslop, Adrian Roselli, Meg Dickey-Kurdziolek, and Nick Tucker.

Additional thanks to those who helped improve the image quality survey: Mandy Tensen, Darleen Denno, Charlotte Dann, Tim Dunklee, and Thad Roe.

Conversational Design

Thu, 03/15/2018 - 08:30

A note from the editors: We’re pleased to share an excerpt from Chapter 1 of Erika Hall’s new book, Conversational Design, available now from A Book Apart.

Texting is how we talk now. We talk by tapping tiny messages on touchscreens—we message using SMS via mobile data networks, or through apps like Facebook Messenger or WhatsApp.


In 2015, the Pew Research Center found that 64% of American adults owned a smartphone of some kind, up from 35% in 2011. We still refer to these personal, pocket-sized computers as phones, but “Phone” is now just one of many communication apps we neglect in favor of texting. Texting is the most widely used mobile data service in America. And in the wider world, four billion people have mobile phones, so four billion people have access to SMS or other messaging apps. For some, dictating messages into a wristwatch offers an appealing alternative to placing a call.

The popularity of texting can be partially explained by the medium’s ability to offer the easy give-and-take of conversation without requiring continuous attention. Texting feels like direct human connection, made even more captivating by unpredictable lag and irregular breaks. Any typing is incidental because the experience of texting barely resembles “writing,” a term that carries associations of considered composition. In his TED talk, Columbia University linguist John McWhorter called texting “fingered conversation”—terminology I find awkward, but accurate. The physical act—typing—isn’t what defines the form or its conventions. Technology is breaking down our traditional categories of communication.

By the numbers, texting is the most compelling computer-human interaction going. When we text, we become immersed and forget our exchanges are computer-mediated at all. We can learn a lot about digital design from the inescapable draw of these bite-sized interactions, specifically the use of language.

What Texting Teaches Us

Texting is a prime example of what makes computer-mediated interaction compelling. The reasons people attend to their text messages—even at risk to their own health and safety—aren’t high production values, so-called rich media, or the complexity of the feature set.

Texting, like other forms of social media, taps into something very primitive in the human brain. These systems offer always-available social connection. The brevity and unpredictability of the messages themselves trigger the release of dopamine that motivates seeking behavior and keeps people coming back for more. What makes interactions interesting may start on a screen, but the really interesting stuff happens in the mind. And language is a critical part of that. Our conscious minds are made of language, so it’s easy to perceive the messages you read not just as words but as the thoughts of another mingled with your own. Loneliness seems impossible with so many voices in your head.

With minimal visual embellishment, texts can deliver personality, pathos, humor, and narrative. This is apparent in “Texts from Dog,” which, as the title indicates, is a series of imagined text exchanges between a man and his dog. (Fig 1.1). With just a few words, and some considered capitalization, Joe Butcher (writing as October Jones) creates a vivid picture of the relationship between a neurotic canine and his weary owner.

Fig 1.1: “Texts from Dog” shows how lively a simple text exchange can be.

Using words is key to connecting with other humans online, just as it is in the so-called “real world.” Imbuing interfaces with the attributes of conversation can be powerful. I’m far from the first person to suggest this. However, as computers mediate more and more relationships, including customer relationships, anyone thinking about digital products and services is in a challenging place. We’re caught between tried-and-true past practices and the urge to adopt the “next big thing,” sometimes at the exclusion of all else.

Being intentionally conversational isn’t easy. This is especially true in business and at scale, such as in digital systems. Professional writers use different types of writing for different purposes, and each has rules that can be learned. The love of language is often fueled by a passion for rules — rules we received in the classroom and revisit in manuals of style, and rules that offer writers the comfort of being correct outside of any specific context. Also, there is the comfort of being finished with a piece of writing and moving on. Conversation, on the other hand, is a context-dependent social activity that implies a potentially terrifying immediacy.

Moving from the idea of publishing content to engaging in conversation can be uncomfortable for businesses and professional writers alike. There are no rules. There is no done. It all feels more personal. Using colloquial language, even in “simplifying” interactive experiences, can conflict with a desire to appear authoritative. Or the pendulum swings to the other extreme and a breezy style gets applied to a laborious process like a thin coat of paint.

As a material for design and an ingredient in interactions, words need to emerge from the content shed and be considered from the start.  The way humans use language—easily, joyfully, sometimes painfully—should anchor the foundation of all interactions with digital systems.

The way we use language and the way we socialize are what make us human; our past contains the key to what commands our attention in the present, and what will command it in the future. To understand how we came to be so perplexed by our most human quality, it’s worth taking a quick look at, oh!, the entire known history of communication technology.

The Mother Tongue

Accustomed to eyeballing type, we can forget language began in our mouths as a series of sounds, like the calls and growls of other animals. We’ll never know for sure how long we’ve been talking—speech itself leaves no trace—but we do know it’s been a mighty long time.

Archaeologist Natalie Thais Uomini and psychologist Georg Friedrich Meyer concluded that our ancestors began to develop language as early as 1.75 million years ago. Per the fossil record, modern humans emerged at least 190,000 years ago in the African savannah. Evidence of cave painting goes back 30,000 years (Fig 1.2).

Then, a mere 6,000 years ago, ancient Sumerian commodity traders grew tired of getting ripped off. Around 3200 BCE, one of them had the idea to track accounts by scratching wedges in wet clay tablets. Cuneiform was born.

So, don’t feel bad about procrastinating when you need to write—humanity put the whole thing off for a couple hundred thousand years! By a conservative estimate, we’ve had writing for about 4% of the time we’ve been human. Chatting is easy; writing is an arduous chore.

Prior to mechanical reproduction, literacy was limited to the elite by the time and cost of hand-copying manuscripts. It was the rise of printing that led to widespread literacy; mass distribution of text allowed information and revolutionary ideas to circulate across borders and class divisions. The sharp increase in literacy bolstered an emerging middle class. And the ability to record and share knowledge accelerated all other advances in technology: photography, radio, TV, computers, internet, and now the mobile web. And our talking speakers.

Fig 1.2: In hindsight, “literate culture” now seems like an annoying phase we had to go through so we could get to texting.

Every time our communication technology advances and changes, so does the surrounding culture—then it disrupts the power structure and upsets the people in charge. Catholic archbishops railed against mechanical movable type in the fifteenth century. Today, English teachers deplore texting emoji. Resistance is, as always, futile. OMG is now listed in the Oxford English Dictionary.

But while these developments have changed the world and how we relate to one another, they haven’t altered our deep oral core.

Orality, Say It with Me

Orality knits persons into community. Walter Ong

Today, when we record everything in all media without much thought, it’s almost impossible to conceive of a world in which the sum of our culture existed only as thoughts.

Before literacy, words were ephemeral and all knowledge was social and communal. There was no “save” option and no intellectual property. The only way to sustain an idea was to share it, speaking aloud to another person in a way that made it easy for them to remember. This was orality—the first interface.

We can never know for certain what purely oral cultures were like. People without writing are terrible at keeping records. But we can examine oral traditions that persist for clues.

The oral formula

Reading and writing remained elite activities for centuries after their invention. In cultures without a writing system, oral characteristics persisted to help transmit poetry, history, law and other knowledge across generations.

The epic poems of Homer rely on meter, formulas, and repetition to aid memory:

Far as a man with his eyes sees into the mist of the distance
Sitting aloft on a crag to gaze over the wine-dark seaway,
Just so far were the loud-neighing steeds of the gods overleaping.
Iliad, 5.770

Concrete images like rosy-fingered dawn, loud-neighing steeds, wine-dark seaway, and swift-footed Achilles served to aid the teller and to sear the story into the listener’s memory.

Biblical proverbs also encode wisdom in a memorable format:

As a dog returns to its vomit, so fools repeat their folly. Proverbs 26:11

That is vivid.

And a saying that originated in China hundreds of years ago can prove sufficiently durable to adorn a few hundred Etsy items:

A journey of a thousand miles begins with a single step. Tao Te Ching, Chapter 64, ascribed to Lao Tzu

The labor of literature

Literacy created distance in time and space and decoupled shared knowledge from social interaction. Human thought escaped the existential present. The reader doesn’t need to be alive at the same time as the writer, let alone hanging out around the same fire pit or agora. 

Freed from the constraints of orality, thinkers explored new forms to preserve their thoughts. And what verbose and convoluted forms these could take:

The Reader will I doubt too soon discover that so large an interval of time was not spent in writing this discourse; the very length of it will convince him, that the writer had not time enough to make a shorter. George Tullie, An Answer to a Discourse Concerning the Celibacy of the Clergy, 1688

There’s no such thing as an oral semicolon. And George Tullie has no way of knowing anything about his future audience. He addresses himself to a generic reader he will never see, nor receive feedback from. Writing in this manner is terrific for precision, but not good at all for interaction.

Writing allowed literate people to become hermits and hoarders, able to record and consume knowledge in total solitude, to invest authority in written works, and to defend ownership of them. Though much writing preserved the dullest of records, the small minority of language communities that made the leap to literacy also gained the ability to compose, revise, and perfect works of magnificent complexity, utility, and beauty.

The qualities of oral culture

In Orality and Literacy: The Technologizing of the Word, Walter Ong explored the “psychodynamics of orality,” which is, coincidentally, quite a mouthful.  Through his research, he found that the ability to preserve ideas in writing not only increased knowledge, it altered values and behavior. People who grow up and live in a community that has never known writing are different from literate people—they depend upon one another to preserve and share knowledge. This makes for a completely different, and much more intimate, relationship between ideas and communities.

Oral culture is immediate and social

In a society without writing, communication can happen only in the moment and face-to-face. It sounds like the introvert’s nightmare! Oral culture has several other hallmarks as well:

  • Spoken words are events that exist in time. It’s impossible to step back and examine a spoken word or phrase. While the speaker can try to repeat, there’s no way to capture or replay an utterance.
  • All knowledge is social, and lives in memory. Formulas and patterns are essential to transmitting and retaining knowledge. When the knowledge stops being interesting to the audience, it stops existing.
  • Individuals need to be present to exchange knowledge or communicate. All communication is participatory and immediate. The speaker can adjust the message to the context. Conversation, contention, and struggle help to retain this new knowledge.
  • The community owns knowledge, not individuals. Everyone draws on the same themes, so not only is originality not helpful, it’s nonsensical to claim an idea as your own.
  • There are no dictionaries or authoritative sources. The right use of a word is determined by how it’s being used right now.
Literate culture promotes authority and ownership

Printed books enabled mass distribution and dispensed with the handicraft of manuscripts, alienating readers from the source of the ideas, and from each other (Ong, pg. 100):

  • The printed text is an independent physical object. Ideas can be preserved as a thing, completely apart from the thinker.
  • Portable printed works enable individual consumption. The need and desire for private space accompanied the emergence of silent, solo reading.
  • Print creates a sense of private ownership of words. Plagiarism is possible.
  • Individual attribution is possible. The ability to identify a sole author increases the value of originality and creativity.
  • Print fosters a sense of closure. Once a work is printed, it is final and closed.

Print-based literacy ascended to a position of authority and cultural dominance, but it didn’t eliminate oral culture completely.

Technology brought us together again

All that studying allowed people to accumulate and share knowledge, speeding up the pace of technological change. And technology transformed communication in turn. It took less than 150 years to get from the telegraph to the World Wide Web. And with the web—a technology that requires literacy—Ong identified a return to the values of the earlier oral culture. He called this secondary orality. Then he died in 2003, before the rise of the mobile internet, when things really got interesting.

Secondary orality is:

  • Immediate. There is no necessary delay between the expression of an idea and its reception. Physical distance is meaningless.
  • Socially aware and group-minded. The number of people who can hear and see the same thing simultaneously is in the billions.
  • Conversational. This is in the sense of being both more interactive and less formal.
  • Collaborative. Communication invites and enables a response, which may then become part of the message.
  • Intertextual. The products of our culture reflect and influence one another.

Social, ephemeral, participatory, anti-authoritarian, and opposed to individual ownership of ideas—these qualities sound a lot like internet culture.

Wikipedia: Knowledge Talks

When someone mentions a genre of music you’re unfamiliar with—electroclash, say, or plainsong—what do you do to find out more? It’s quite possible you type the term into Google and end up on Wikipedia, the improbably successful collaborative encyclopedia that couldn’t exist without the internet.

According to Wikipedia, encyclopedias have existed for around two thousand years. Wikipedia has existed since 2001, and it’s the fifth most popular site on the web. Wikipedia is not a publication so much as a society that provides access to knowledge. A volunteer community of “Wikipedians” continuously adds to and improves millions of articles in over 200 languages. It’s a phenomenon manifesting all the values of secondary orality:

  • Anyone can contribute anonymously and anyone can modify the contributions of another.
  • The output is free.
  • The encyclopedia articles are not attributed to any sole creator. A single article might have 2 editors or 1,000.
  • Each article has an accompanying “talk” page where editors discuss potential improvements, and a “history” page that tracks all revisions. Heated arguments are not uncommon. They take place as revisions within documents.

Wikipedia is disruptive in the true Clayton Christensen sense. It’s created immense value and wrecked an existing business model. Traditional encyclopedias are publications governed by authority, and created by experts and fact checkers. A volunteer project collaboratively run by unpaid amateurs shows that conversation is more powerful than authority, and that human knowledge is immense and dynamic.

In an interview with The Guardian, a British librarian expressed some disdain about Wikipedia.

The main problem is the lack of authority. With printed publications, the publishers must ensure that their data are reliable, as their livelihood depends on it. But with something like this, all that goes out the window. Philip Bradley, “Who knows?”, The Guardian, October 26, 2004

Wikipedia is immediate, group-minded, conversational, collaborative, and intertextual—secondary orality in action—but it relies on traditionally published sources for its authority. After all, anything new that changes the world does so by fitting into the world. As we design for new methods of communication, we should remember that nothing is more valuable simply because it’s new; rather, technology is valuable when it brings us more of what’s already meaningful.

From Documents to Events

Pages and documents organize information in space. Space used to be more of a constraint back when we printed conversation out. Now that the internet has given us virtually infinite space, we need to mind how conversation moves through time. Thinking about serving the needs of people in an internet-based culture requires a shift from thinking about how information occupies space—documents—to how it occupies time—events.

Texting means that we’ve never been more lively (yet silent) in our communications. While we still have plenty of in-person interactions, it’s gotten easy to go without. We text grocery requests to our spouses. We click through a menu in a mobile app to summon dinner (the order may still arrive at the restaurant by fax, proving William Gibson’s maxim that the future is unevenly distributed). We exchange messages on Twitter and Facebook instead of visiting friends in person, or even while visiting friends in person. We work at home and Slack our colleagues.

We’re rapidly approaching a future where humans text other humans and only speak aloud to computers. A text-based interaction with a machine that’s standing in for a human should feel like a text-based interaction with a human. Words are a fundamental part of the experience, and they are part of the design. Words should be the basis for defining and creating the design.

We’re participating in a radical cultural transformation. The possibilities manifest in systems like Wikipedia that succeed in changing the world by using technology to connect people in a single collaborative effort. And even those of us creating the change suffer from some lag. The dominant educational and professional culture remains based in literary values. We’ve been rewarded for individual achievement rather than collaboration. We seek to “make our mark,” even when designing changeable systems too complex for any one person to claim authorship. We look for approval from an authority figure. Working in a social, interactive way should feel like the most natural thing in the world, but it will probably take some doing.

Literary writing—any writing that emerges from the culture and standards of literacy—is inherently not interactive. We need to approach the verbal design not as a literary work, but as a conversation. Designing human-centered interactive systems requires us to reflect on our deep-seated orientation around artifacts and ownership. We must alienate ourselves from a set of standards that no longer apply.

Most advice on “writing for the web” or “creating content” starts from the presumption that we are “writing,” just for a different medium. But when we approach communication as an assembly of pieces of content rather than an interaction, customers who might have been expecting a conversation end up feeling like they’ve been handed a manual instead.

Software is on a path to participating in our culture as a peer.  So, it should behave like a person—alive and present. It doesn’t matter how much so-called machine intelligence is under the hood—a perceptive set of programmatic responses, rather than a series of documents, can be enough if they have the qualities of conversation.

Interactive systems should evoke the best qualities of living human communities—active, social, simple, and present—not passive, isolated, complex, or closed off.

Life Beyond Literacy

Indeed, language changes lives. It builds society, expresses our highest aspirations, our basest thoughts, our emotions and our philosophies of life. But all language is ultimately at the service of human interaction. Other components of language—things like grammar and stories—are secondary to conversation. Daniel L. Everett, How Language Began

Literacy has gotten us far. It’s gotten you this far in this book. So, it’s not surprising we’re attached to the idea. Writing has allowed us to create technologies that give us the ability to interact with one another across time and space, and have instantaneous access to knowledge in a way our ancestors would equate with magic. However, creating and exchanging documents, while powerful, is not a good model for lively interaction. Misplaced literate values can lead to misery—working alone and worrying too much about posterity.

So, it’s time to let go and live a little! We’re at an exciting moment. The computer screen that once stood for a page can offer a window into a continuous present that still remembers everything. Or, the screen might disappear completely.

Now we can start imagining, in an open-ended way, what constellation of connected devices any given person will have around them, and how we can deliver a meaningful, memorable experience on any one of them. We can step away from the screen and consider what set of inputs, outputs, events, and information add up to the best experience.

This is daunting for designers, sure, yet phenomenal for people. Thinking about human-computer interactions from a screen-based perspective was never truly human-centered from the start. The ideal interface is an interface that’s not noticeable at all—a world in which the distance from thought to action has collapsed and merely uttering a phrase can make it so.

We’re fast moving past “computer literacy.” It’s on us to ensure all systems speak human fluently.

A DIY Web Accessibility Blueprint

Tue, 03/13/2018 - 07:18

The summer of 2017 marked a monumental victory for the millions of Americans living with a disability. On June 13th, a Southern District of Florida judge ruled that Winn-Dixie’s inaccessible website violated Title III of the Americans with Disabilities Act. This case marks the first website accessibility trial under the ADA, which was passed into law in 1990.

Despite spending more than $7 million to revamp its website in 2016, Winn-Dixie neglected to include design considerations for users with disabilities. Some of the features that were added include online prescription refills, digital coupons, rewards card integration, and a store locator function. However, it appears that inclusivity didn’t make the cut.

Because Winn-Dixie’s new website wasn’t developed to WCAG 2.0 standards, the new features it boasted were in effect only available to sighted, able-bodied users. When Florida resident Juan Carlos Gil, who is legally blind, visited the Winn-Dixie website to refill his prescriptions, he found it to be almost completely inaccessible using the same screen reader software he uses to access hundreds of other sites.

Juan stated in his original complaint that he “felt as if another door had been slammed in his face.” But Juan wasn’t alone. Intentionally or not, Winn-Dixie was denying an entire group of people access to their new website and, in turn, each of the time-saving features it had to offer.

What makes this case unique is that it marks the first time in history that a web accessibility public accommodations case went to trial: the judge ruled the website to be a “place of public accommodation” under the ADA and therefore subject to ADA regulations. Since there are no specific ADA regulations regarding the internet, Judge Scola deemed the adoption of the Web Content Accessibility Guidelines (WCAG) 2.0 Level AA appropriate. (Thanks to the hard work of the Web Accessibility Initiative (WAI) at the W3C, WCAG 2.0 has found widespread adoption throughout the globe, either as law or policy.)

Learning to have empathy

Anyone with a product subscription service (think diapers, razors, or pet food) knows the feeling of gratitude that accompanies the delivery of a much needed product that arrives just in the nick of time. Imagine how much more grateful you’d be for this service if you, for whatever reason, were unable to drive and lived hours from the nearest store. It’s a service that would greatly improve your life. But now imagine that the service gets overhauled and redesigned in such a way that it is only usable by people who own cars. You’d probably be pretty upset.

This subscription service example is hypothetical, yet in the United States, despite federal web accessibility requirements instituted to protect the rights of disabled Americans, this sort of discrimination happens frequently. In fact, anyone assuming the Winn-Dixie case was an isolated incident would be wrong. Web accessibility lawsuits are rising in number. The increase from 2015 to 2016 was 37%. While some of these may be what’s known as “drive-by lawsuits,” many of them represent plaintiffs like Juan Gil who simply want equal rights. Scott Dinin, Juan’s attorney, explained, “We’re not suing for damages. We’re only suing them to follow the laws that have been in this nation for twenty-seven years.”

For this reason and many others, now is the best time to take a proactive approach to web accessibility. In this article I’ll help you create a blueprint for getting your website up to snuff.

The accessibility blueprint

If you’ll be dealing with remediation, I won’t sugarcoat it: successfully meeting web accessibility standards is a big undertaking, one that is achieved only when every page of a site adheres to all the guidelines you are attempting to comply with. As I mentioned earlier, those guidelines are usually WCAG 2.0 Level AA, which means meeting every Level A and AA requirement. Tight deadlines, small budgets, and competing priorities may increase the stress that accompanies a web accessibility remediation project, but with a little planning and research, making a website accessible is both reasonable and achievable.

My intention is that you may use this article as a blueprint to guide you as you undertake a DIY accessibility remediation project. Before you begin, you’ll need to increase your accessibility know-how, familiarize yourself with the principles of universal design, and learn about the benefits of an accessible website. Then you may begin to evangelize the benefits of web accessibility to those you work with.

Have the conversation with leadership

Securing support from company leadership is imperative to the long-term success of your efforts. There are numerous ways to broach the subject of accessibility, but, sadly, in the world of business, substantiated claims top ethics and moral obligation. Therefore I’ve found one of the most effective ways to build a business case for web accessibility is to highlight the benefits.

Here are just a few to speak of:

  • Accessible websites are inherently more usable, and consequently they get more traffic. Additionally, better user experiences result in lower bounce rates, higher conversions, and less negative feedback, which in turn typically make accessible websites rank higher in search engines.
  • Like assistive technology, web crawlers (such as Googlebot) leverage HTML to get their information from websites, so a well marked-up, accessible website is easier to index, which makes it easier to find in search results.
  • There are a number of potential risks for not having an accessible website, one of which is accessibility lawsuits.
  • Small businesses in the US that improve the accessibility of their website may be eligible for a tax credit from the IRS.
Start the movement

If you can’t secure leadership backing right away, you can still form a grassroots accessibility movement within the company. Begin slowly and build momentum as you work to improve usability for all users. Though you may not have the authority to make company-wide changes, you can strategically and systematically lead the charge for web accessibility improvements.

My advice is to start small. For example, begin by pushing for site-wide improvements to color contrast ratios (which would help color-blind, low-vision, and aging users) or work on making the site keyboard accessible (which would help users with mobility impairments or broken touchpads, and people such as myself who prefer not using a mouse whenever possible). Incorporate user research and A/B testing into these updates, and document the results. Use the results to champion for more accessibility improvements.
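Contrast work is also easy to quantify. WCAG 2.0 defines contrast ratio as (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the relative luminances of the lighter and darker colors. A small helper like this hypothetical one lets you check candidate palettes as you go:

// Relative luminance of an sRGB color, per the WCAG 2.0 definition
function luminance([r, g, b]) {
  const [rl, gl, bl] = [r, g, b].map((channel) => {
    const c = channel / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * rl + 0.7152 * gl + 0.0722 * bl;
}

// WCAG contrast ratio: (L1 + 0.05) / (L2 + 0.05), lighter luminance first
function contrastRatio(foreground, background) {
  const l1 = luminance(foreground);
  const l2 = luminance(background);
  return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
}

contrastRatio([255, 255, 255], [0, 0, 0]);       // 21, the maximum possible
contrastRatio([119, 119, 119], [255, 255, 255]); // ≈ 4.48, fails AA for normal text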

Read and re-read the guidelines

Build your knowledge base as you go. Learning which laws, rules, or guidelines apply to you, and understanding them, is a prerequisite to writing an accessibility plan. Web accessibility guidelines vary throughout the world. There may be other guidelines that apply to you, and in some cases, additional rules, regulations, or mandates specific to your industry.

Not understanding which rules apply to you, not reading them in full, or not understanding what they mean can create huge problems down the road, including excessive rework once you learn you need to make changes.

Build a team

Before you can start remediating your website, you’ll need to assemble a team. The number of people will vary depending on the size of your organization and website. I previously worked for a very large company with a very large website, yet the accessibility team they assembled was small in comparison to the thousands of pages we were tasked to remediate. This team included a project manager, visual designers, user experience designers, front-end developers, content editors, a couple requirements folks, and a few QA testers. Most of these people had been pulled from their full-time roles and instructed to quickly become familiar with WCAG 2.0. To help you create your own accessibility team, I will explain in detail some of the top responsibilities of the key players:

  • Project manager is responsible for coordinating the entire remediation process. They will help run planning sessions, keep everyone on schedule, and report the progress being made. Working closely with the requirements people, their goal is to keep every part of this new machine running smoothly.
  • Visual designers will mainly address issues of color usage and text alternatives. In its present form, WCAG 2.0 applies contrast minimums only to text; however, the much-anticipated WCAG 2.1 update (due to be released in mid-2018) contains a new success criterion for Non-text Contrast, which covers contrast minimums of all interactive elements and “graphics required to understand the content.” Visual designers should also steer clear of design trends that ruin usability.
  • UX designers should be checking for consistent, logical navigation and reading order. They’ll need to test that pages are using heading tags appropriately (headings are for semantic structure, not for visual styling). They’ll be checking to see that page designs are structured to appear and operate in predictable ways.
  • Developers have the potential to make or break an accessible website because even the best designs will fail if implemented incorrectly. If your developers are unfamiliar with WAI-ARIA, accessible coding practices, or accessible JavaScript, then they have a few things to learn. Developers should think of themselves as designers because they play a very important role in designing an inclusive user experience. Luckily, Google offers a short, free Introduction to Web Accessibility course and, via Udacity, a free, advanced two-week accessibility course. Additionally, The A11Y Project is a one-stop shop loaded with free pattern libraries, checklists, and accessibility resources for front-end developers.
  • Editorial reviews the copy for verbosity. Avoid using phrases that will confuse people who aren’t native language speakers. Don’t “beat around the bush” (see what I did there?). Keep content simple, concise, and easy to understand. No writing degree? No worries. There are apps that can help you improve the clarity of your writing and that correct your grammar like a middle school English teacher. Score bonus points by making sure link text is understandable out of context. While this is a WCAG 2.0 Level AAA guideline, it’s also easily fixed and it greatly improves the user experience for individuals with varying learning and cognitive abilities.
  • Analysts work in tandem with editorial, design, UX, and QA. They coordinate the work being done by these groups and document the changes needed. As they work with these teams, they manage the action items and follow up on any outstanding tasks, questions, or requests. The analysts also deliver the requirements specifications to the developers. If the changes are numerous and complex, the developers may need the analysts to provide further clarification and to help them properly implement the changes as described in the specs.
  • QA will need to be trained to the same degree as the other accessibility specialists since they will be responsible for testing the changes that are being made and catching any issues that arise. They will need to learn how to navigate a website using only a keyboard and also by properly using a screen reader (ideally a variety of screen readers). I emphasized “properly” because while anyone can download NVDA or turn on VoiceOver, it takes another level of skill to understand the difference between “getting through a page” and “getting through a page with standard keyboard controls.” Having individuals with visual, auditory, or mobility impairments on the QA team can be a real advantage, as they are more familiar with assistive technology and can test in tandem with others. Additionally, there are a variety of automated accessibility testing tools you can use alongside manual testing. These tools typically catch only around 30% of common accessibility issues, so they do not replace ongoing human testing. But they can be extremely useful in helping QA learn when an update has negatively affected the accessibility of your website.
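To give one concrete example of such tooling, the open-source axe-core library can run an audit from the browser console or a test suite. A minimal sketch (the result handling shown is illustrative):

// Run axe-core against the current document and log any violations.
// Assumes axe has been loaded, e.g., via <script src="axe.min.js"></script>.
axe.run(document).then((results) => {
  results.violations.forEach((violation) => {
    console.warn(`${violation.id}: ${violation.help}`);
    violation.nodes.forEach((node) => {
      console.warn("  offending element:", node.target.join(" "));
    });
  });
});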
Start your engines!

Divide your task into pieces that make sense. You may wish to tackle all the global elements first, then work your way through the rest of the site, section by section. Keep in mind that every page must adhere to the accessibility standards you’re following for it to be deemed “accessible.” (This includes PDFs.)

Use what you’ve learned so far by way of accessibility videos, articles, and guidelines to perform an audit of your current site. While some manual testing may seem difficult at first, you’ll be happy to learn that some manual testing is very simple. Regardless of the testing being performed, keep in mind that it should always be done thoroughly and by considering a variety of users, including:

  • keyboard users;
  • blind users;
  • color-blind users;
  • low-vision users;
  • deaf and hard-of-hearing users;
  • users with learning disabilities and cognitive limitations;
  • mobility-impaired users;
  • users with speech disabilities;
  • and users with seizure disorders.
When you are in the weeds, document the patterns

As you get deep in the weeds of remediation, keep track of the patterns being used. Start a knowledge repository for elements and situations. Lock down the designs and colors, code each element to be accessible, and test these patterns across various platforms, browsers, screen readers, and devices. When you know the elements are bulletproof, save them in a pattern library that you can pull from later. Having a pattern library at your fingertips will improve consistency and compliance, and help you meet tight deadlines later on, especially when working in an agile environment. You’ll need to keep this online knowledge repository and pattern library up-to-date. It should be a living, breathing document.

Cross the finish line … and keep going!

Some people mistakenly believe accessibility is a set-it-and-forget-it solution. It isn’t. Accessibility is an ongoing challenge to continually improve the user experience the way any good UX practitioner does. This is why it’s crucial to get leadership on board. Once your site is fully accessible, you must begin working on the backlogs of continuous improvements. If you aren’t vigilant about accessibility, people making even small site updates can unknowingly strip the site of the accessibility features you worked so hard to put in place. You’d be surprised how quickly it can happen, so educate everyone you work with about the importance of accessibility. When everyone working on your site understands and evangelizes accessibility, your chances of protecting the accessibility of the site are much higher.

It’s about the experience, not the law

In December of 2017, Winn-Dixie appealed the case brought by blind patron Juan Carlos Gil. Their argument is that a website does not constitute a place of accommodation, and therefore, the case should have been dismissed. This case, and others, illustrate that the legality of web accessibility is still very much in flux. However, as web developers and designers, our motivation to build accessible websites should have nothing to do with the law and everything to do with the user experience.

Good accessibility is good UX. We should seek to create the best user experience for all. And we shouldn’t settle for simply meeting accessibility standards but rather strive to create an experience that delights users of all abilities.

Additional resources and articles

If you are ready to learn more about web accessibility standards and become the accessibility evangelist on your team, there are many additional resources and articles available to help.


We Write CSS Like We Did in the 90s, and Yes, It’s Silly

Tue, 03/06/2018 - 06:47

As web developers, we marvel at technology. We enjoy the many tools that help with our work: multipurpose editors, frameworks, libraries, polyfills and shims, content management systems, preprocessors, build and deployment tools, development consoles, production monitors—the list goes on.

Our delight in these tools is so strong that no one questions whether a small website actually requires any of them. Tool obesity is the new WYSIWYG—the web developers who can’t do without their frameworks and preprocessors are no better than our peers from the 1990s who couldn’t do without FrontPage or Dreamweaver. It is true that these tools have improved our lives as developers in many ways. At the same time, they have perhaps also prevented us from improving our basic skills.

I want to talk about one of those skills: the craft of writing CSS. Not of using CSS preprocessors or postprocessors, but of writing CSS itself. Why? Because CSS is second in importance only to HTML in web development, and because no one needs processors to build a site or app.

Most of all, I want to talk about this because when it comes to writing CSS, it often seems that we have learned nothing since the 1990s. We still write CSS the natural way, with no advances in sorting declarations or selectors and no improvements in writing DRY CSS.

Instead, many developers argue fiercely about each of these topics. Others simply dig in their heels and refuse to change. And a third cohort protests even the discussion of these topics.

I don’t care that developers do this. But I do care about our craft. And I care that we, as a profession, are ignoring simple ways to improve our work.

Let’s talk about this more after the code break.

Here’s unsorted, unoptimized CSS from Amazon in 2003.

.serif {
  font-family: times, serif;
  font-size: small;
}

.sans {
  font-family: verdana, arial, helvetica, sans-serif;
  font-size: small;
}

.small {
  font-family: verdana, arial, helvetica, sans-serif;
  font-size: x-small;
}

.h1 {
  font-family: verdana, arial, helvetica, sans-serif;
  color: #CC6600;
  font-size: small;
}

.h3color {
  font-family: verdana, arial, helvetica, sans-serif;
  color: #CC6600;
  font-size: x-small;
}

.tiny {
  font-family: verdana, arial, helvetica, sans-serif;
  font-size: xx-small;
}

.listprice {
  font-family: arial, verdana, sans-serif;
  text-decoration: line-through;
  font-size: x-small;
}

.price {
  font-family: verdana, arial, helvetica, sans-serif;
  color: #990000;
  font-size: x-small;
}

.attention {
  background-color: #FFFFD5;
}

And here’s CSS from contemporary Amazon:

.a-box {
  display: block;
  border-radius: 4px;
  border: 1px #ddd solid;
  background-color: #fff;
}

.a-box .a-box-inner {
  border-radius: 4px;
  position: relative;
  padding: 14px 18px;
}

.a-box-thumbnail {
  display: inline-block;
}

.a-box-thumbnail .a-box-inner {
  padding: 0 !important;
}

.a-box-thumbnail .a-box-inner img {
  border-radius: 4px;
}

.a-box-title {
  overflow: hidden;
}

.a-box-title .a-box-inner {
  overflow: hidden;
  padding: 12px 18px 11px;
  background: #f0f0f0;
}

Just as in 2003, the CSS is unsorted and unoptimized. Did we learn anything over the past 15 years? Is this really the best CSS we can write?

Let’s look at three areas where I believe we can easily improve the way we do our work: declaration sorting, selector sorting, and declaration repetition.

Declaration sorting

The 90s web developer, if they wrote CSS at all, wrote CSS as it occurred to them. Without sense or order—with no direction whatsoever. The same was true of last decade’s developer. The same is true of today’s developer, whether novice or expert.

.foo {
  font: arial, sans-serif;
  background: #abc;
  margin: 1em;
  text-align: center;
  letter-spacing: 1px;
  -x-yaddayadda: yes;
}

The only difference between now and then: today’s expert developer uses eight variables, because “that’s how you do it” (even with one-pagers) and because at some point in their life they may need them. In twenty-something years of web development we have somehow not managed to make our CSS consistent and easier to work on by establishing the (or even a) common sense standard to sort declarations.

(If this sounds harsh, it’s because it’s true. Developers condemn selectors, shorthands, !important, and other useful aspects of CSS rather than concede that they don’t even know how to sort their declarations.)

In reality, the issue is dead simple: Declarations should be sorted alphabetically. Period.

Why?

For one, sorting makes collaborating easier.

Untrained developers can do it. Non-English speakers (such as this author) can do it. I wouldn’t be surprised to learn that even houseplants can do it.

For another reason, alphabetical sorting can be automated. What’s that? Yes, one can use or write little scripts (such as CSS Declaration Sorter) to sort declarations.
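To make that tangible, here’s a toy version of such a script: it alphabetizes the declarations within a single rule body (real tools like the one just mentioned also handle comments, vendor prefixes, and other edge cases):

// Alphabetize the declarations inside one CSS rule body.
function sortDeclarations(ruleBody) {
  return ruleBody
    .split(";")                               // split into declarations
    .map((declaration) => declaration.trim())
    .filter(Boolean)                          // drop empty fragments
    .sort((a, b) => a.localeCompare(b))
    .join(";\n") + ";";
}

sortDeclarations("margin: 1em; background: #abc; text-align: center");
// "background: #abc;
//  margin: 1em;
//  text-align: center;"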

Given the ease of sorting, and its benefits, the current state of affairs borders on the ridiculous, making it tempting to ignore our peers who don’t sort declarations, and to ban from our lives those who argue that it’s easier—or even logical—not to sort alphabetically but instead to sort based on 1) box dimensions, 2) colors, 3) grid- or flexbox-iness, 4) mood, 5) what they ate for breakfast, or some equally random basis.

With this issue settled (if somewhat provocatively), on to our second problem from the 90s.

Selector sorting

The situation concerning selectors is quite similar. Almost since 1994, developers have written selectors and rules as they occurred to them. Perhaps they’ve moved them around (“Oh, that belongs with the nav”). Perhaps they’ve refactored their style sheets (“Oh, strange that site styles appear amidst notification styles”). But standardizing the order—no.

Let’s take a step back and assume that order does matter, not just for aesthetics as one might think, but for collaboration. As an example, think of the letters below as selectors. Which list would be easiest to work with?

c, b · a · a, b · c, d · d, c, a · e · a
c · b · a, b · a · c, d · a, c, d · a · e
a, b · a, c, d · a · b, c · c, d · e

The fact that one selector (a) was a duplicate that only got discovered and merged in the last row perhaps gives away my preference. But then, if you wanted to add d, e to the list, wouldn’t the order of the third row make placing the new selector easier than placing it in either of the first two rows?

This example gets at the two issues caused by not sorting selectors:

  • No one knows where to add new selectors, creating a black hole in the workflow.
  • There’s a higher chance of both selector repetition and duplication of rules with the same selectors.

Both problems get compounded in larger projects and larger teams. Both problems have haunted us since the 90s. Both problems get fixed by standardizing—through coding guidelines—how selectors should be ordered.
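Until such guidelines are in place, we can at least make the duplication visible. Here is a hedged sketch, runnable in a browser console, that counts how often each selector appears across a page’s same-origin style sheets (cross-origin sheets are skipped because their rules aren’t readable, and grouped selectors are split naively on commas):

// Flag selectors that appear in more than one rule.
const selectorCounts = new Map();
for (const sheet of document.styleSheets) {
  let rules;
  try {
    rules = sheet.cssRules; // throws for cross-origin style sheets
  } catch (e) {
    continue;
  }
  for (const rule of rules) {
    if (!rule.selectorText) continue; // skip @media, @font-face, etc.
    for (const selector of rule.selectorText.split(',')) {
      const key = selector.trim();
      selectorCounts.set(key, (selectorCounts.get(key) || 0) + 1);
    }
  }
}
for (const [selector, count] of selectorCounts) {
  if (count > 1) console.log(`${selector} appears in ${count} rules`);
}

Detection like this only surfaces the symptom, though; the real fix is an ordering standard.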

The answer in this case is not as trivial as sorting alphabetically (although we could play with the idea—the cognitive ease of alphabetical selector sorting may make it worth trying). But we can take a path similar to how the HTML spec roughly groups elements, so that we first define sections, and then grouping elements, text elements, etc. (That’s also the approach of at least one draft, the author’s.)

The point is that ideal selector sorting doesn’t just occur naturally and automatically. We can benefit from putting more thought into this problem.

Declaration repetition

Our third hangover from the 90s is that there is and has always been an insane amount of repetition in our style sheets. According to one analysis of more than 200 websites, a median of 66% of all declarations are redundant, and the repetition rate goes as high as 92%—meaning that, in this study at least, the typical website uses each declaration at least three times and some up to ten times.

As shown by a list of some sample sites I compiled, declaration repetition has indeed been bad from the start and has even increased slightly over the years.

Yes, there are reasons for repetition: notably for different target media (we may repeat ourselves for screen, print, or different viewport sizes) and, occasionally, for the cascade. That is why a repetition rate of 10–20% seems to be acceptable. But the degree of repetition we observe right now is not acceptable—it’s an unoptimized mess that goes mostly unnoticed.
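Measuring your own site is a good first step. The following console sketch is a rough approximation only, cruder than the methodology of the study cited above: it counts exact “property: value” repeats in same-origin style sheets and ignores rules nested inside @media blocks.

// Estimate how many declarations are repeats of one another.
const declarationCounts = new Map();
let totalDeclarations = 0;
for (const sheet of document.styleSheets) {
  let rules;
  try {
    rules = sheet.cssRules; // throws for cross-origin style sheets
  } catch (e) {
    continue;
  }
  for (const rule of rules) {
    if (!rule.style) continue; // skip at-rules without declarations
    for (let i = 0; i < rule.style.length; i++) {
      const property = rule.style[i];
      const pair = `${property}: ${rule.style.getPropertyValue(property)}`;
      declarationCounts.set(pair, (declarationCounts.get(pair) || 0) + 1);
      totalDeclarations++;
    }
  }
}
const repeats = [...declarationCounts.values()]
  .filter((count) => count > 1)
  .reduce((sum, count) => sum + count, 0);
console.log(`${Math.round((repeats / totalDeclarations) * 100)}% of declarations are repeats`);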

What’s the solution here? One possibility is to use declarations just once. We’ve seen with a sample optimization of Yandex’s large-scale site that this can lead to slightly more unwieldy style sheets, but we also know that in many other cases it does make them smaller and more compact.

This approach of using declarations just once has at least three benefits:

  • It reduces repetition to a more acceptable amount.
  • It reduces the pseudo need for variables.
  • Excluding outliers like Yandex, it reduces file size and payload (10–20% according to my own experience—we looked at the effects years ago at Google).

No matter what practice we as a field come up with—whether to use declarations just once or follow a different path—the current level of “natural repetition” we face on sample websites is too high. We shouldn’t need to remind ourselves not to repeat ourselves if we repeat code up to nine times, and it gets outright pathetic—again, excuse the strong language—when we are also the ones screaming for constants and variables and other features only because we’ve never stopped to question this 90s-style coding.

The unnatural, more modern way of writing CSS

Targeting these three areas would help us move to a more modern way of writing style sheets, one that has a straightforward but powerful way to sort declarations, includes a plan for ordering selectors, and minimizes declaration repetition.

In this article, we’ve outlined some options for us to adhere to this more modern way:

  • Sort declarations alphabetically.
  • Use an existing order system or standardize and follow a new selector order system.
  • Try to use declarations just once.
  • Get assistance through tools.

And yet there’s still great potential to improve in all of these areas. The potential, then, is what we should close with. While I’ve emphasized our “no changes since the 90s” way of writing CSS, and stressed the need for robust practices, we need more proposals, studies, and conversations around what practices are most beneficial. Beneficial in terms of writing better, more consistent CSS, but also in terms of balancing our sense of craft (our mastery of our profession) with a high degree of efficiency (automating when it’s appropriate). Striving to achieve this balance will help ensure that developers twenty years from now won’t have to write rants about hangovers from the 2010s.

Owning the Role of the Front-End Developer

Tue, 02/27/2018 - 06:30

When I started working as a web developer in 2009, I spent most of my time crafting HTML/CSS layouts from design comps. My work was the final step of a linear process in which designers, clients, and other stakeholders made virtually all of the decisions.

Whether I was working for an agency or as a freelancer, there was no room for a developer’s input on client work other than when we were called to answer specific technical questions. Most of the time I would be asked to confirm whether it was possible to achieve a simple feature, such as adding a content slider or adapting an image loaded from a CMS.

In the ensuing years, front-end development became increasingly challenging, and as developers’ skills evolved, so did our frustration. Many organizations, including the ones I worked for, followed a traditional waterfall approach that kept us in the dark until the project was ready to be coded. Everything would fall into our laps, often behind schedule, with no room for us to add our two cents. Even though we were often highly esteemed by our teammates, there still wasn’t a chance for us to contribute to projects at the beginning of the process. Every time we shared an idea or flagged a problem, it was already too late.

Almost a decade later, we’ve come a long way as front-end developers. After years of putting in the hard work required to become better professionals and have a bigger impact on projects, many developers are now able to occupy a more fulfilling version of the role.

But there’s still work to be done: Unfortunately, some front-end developers with amazing skills are still limited to basic PSD-to-HTML work. Others find themselves in a better position within their team, but are still pushing for a more prominent role where their ideas can be fostered.

Although I’m proud to believe I’m part of the group that evolved with the role, I continue to fight for our seat at the table. I hope sharing my experience will help others fighting with me.

My road to earning a seat at the table

My role began to shift the day I watched an inspiring talk by Seth Godin, which helped me realize I had the power to start making changes to make my work more fulfilling. With his recommendation to demand responsibility whether you work for a boss or a client, Godin gave me the push I needed.

I wasn’t expecting to make any big leaps—just enough to feel like I was headed in the right direction.

Taking small steps within a small team

My first chance to test the waters was ideal. I had recently partnered with a small design studio, and we were a team of five. Since I’d always been open about my soft spot for great design, it wasn’t hard to sell them on the idea of letting me get more involved with the design process and start giving technical feedback before comps were presented to clients.

The results were surprisingly good and had a positive impact on everybody’s work. I started getting design hand-offs that I both approved of from a technical point of view and had a more personal connection with. For their part, the designers happily noticed that the websites we launched were more accurate representations of the comps they had handed off.

My next step was to get involved with every single project from day one. I started to tag along to initial client meetings, even before any contracts had been signed. I started flagging things that could turn the development phase into a nightmare; at the same time I was able to throw around some ideas about new technologies I’d been experimenting with.

After a few months, I started feeling that my skills were finally having an impact on my team’s projects. I was satisfied with my role within the team, but I knew it wouldn’t last forever. Eventually it was time for me to embark on a journey that would take me back to the classic role of the front-end developer, closer to the base of the waterfall.

Moving to the big stage

As my career started to take off, I found myself far away from that five-desk office where it had all started. I was now working with a much bigger team, and the challenges were quite different. At first I was amazed at how they were approaching the process: the whole team had a strong technical background, unlike any team I had ever worked with, which made collaboration very efficient. I had no complaints about the quality of the designs I was assigned to work with. In fact, during my first few months, I was constantly pushed out of my comfort zone, and my skills were challenged to the fullest.

After I started to feel more comfortable with my responsibilities, though, I soon found my next challenge: to help build a stronger connection between the design and development teams. Though we regularly collaborated to produce high-quality work, these teams didn’t always speak the same language. Luckily, the company was already making an effort to improve the conversation between creatives and developers, so I had all the support I needed.

As a development team, we had been shifting to modern JavaScript libraries that led us to work on our applications using a strictly component-based approach. But though we had slowly changed our mindset, we hadn’t changed the ways we collaborated with our creative colleagues. We had not properly shared our new vision; making that connection would become my new personal goal.

I was fascinated by Brad Frost’s “death to the waterfall” concept: the idea that UX, visual design, and development teams should work in parallel, allowing for a higher level of iteration during the project.

By pushing to progressively move toward a collaborative workflow, everyone on my team began to share more responsibilities and exchange more feedback throughout every project. Developers started to get involved in projects during the design phase, flagging any technical issues we could anticipate. Designers made sure they provided input and guidance after the projects started coming to life during development. Once we got the ball rolling, we quickly began seeing positive results and producing rewarding (and award-winning) work.

Even though it might sound like it was a smooth transition, it required a great amount of hard work and commitment from everybody on the team. Not only did we all want to produce better work but we also needed to be willing to take a big leap away from our comfort zones and our old processes.

How you can push for a seat at the table

In my experience, making real progress required a combination of sharpening my skills as a front-end developer and pushing the team to improve our processes.

What follows are more details about what worked for me—and could also work for you.

Making changes as a developer

Even though the real change in your role may depend on your organization, sometimes your individual actions can help jump-start the shift:

  • Speak up. In multidisciplinary teams, developers are known as highly analytical, critical, and logical, but not always the most communicative of the pack. I’ve seen many who quietly complain and claim to have better ideas on how things should be handled, but bottle up those thoughts and move on to a different job. After I started voicing my concerns, proposing new ideas, and seeing small changes within my team, I experienced an unexpected boost in my motivation and noticed others begin to see my role differently.
  • Always be aware of what the rest of the team is up to. One of the most common mistakes we tend to make is to focus only on our craft. To connect with our team and improve in our role, we need to understand our organization’s goals, our teammates’ skill sets, our customers, and basically every other aspect of our industry that we used to think wasn’t worth a developer’s time. Once I started having a better understanding of the design process, communication with my team started to improve. The same applied to designers who started learning more about the processes we use as front-end developers.
  • Keep core skills sharp. Today our responsibilities are broader and we’re constantly tasked with leading our teams into undiscovered technologies. As a front-end developer, it’s not uncommon to be required to research technologies like WebGL or VR, and introduce them to the rest of the team. We must stay current with the latest practices in our technical areas of focus. Our credibility is at stake every time our input is needed, so we must always strive to be the best developers in the business.

Rethinking practices within the company

In order to make the most of your role as a developer, you’ll have to persuade your organization to make key changes. This might be hard to achieve, since it tends to require taking all members of your team out of their comfort zones.

For me, what worked was long talks with my colleagues, including designers, management, and fellow developers. It’s hard for a manager to turn you down when you propose an idea to improve the quality of your work and only ask for small changes. Once the rest of the team is on board, you have to work hard and start implementing these changes to keep the ball rolling:

  • Involve developers in projects from the beginning. Many companies have high standards when it comes to hiring developers but don’t take full advantage of their talent. We tend to be logical thinkers, so it’s usually a good idea to involve developers in many aspects of the projects we work on. I often had to take the first step to be invited to project kickoffs. But once I started making an effort to provide valuable input, my team started automatically involving me and other developers during the creative phase of new projects.
  • Schedule team reviews. Problems frequently arise when teams present to clients without having looped in everyone working on the project. Once the client signs off on something, it can be risky to introduce new ideas, even if they add value. Developers, designers, and other key players must come together for team reviews before handing off any work. As a developer, sometimes you might need to raise your hand and invest some of your time to help your teammates review their work before they present it.
  • Get people to work together. Whenever possible, get people in the same room. We tend to rely on technology and push to communicate only by chat and email, but there is real value in face time. It’s always a good idea to have different teammates sit together, or at least in close enough proximity for regular in-person conversation, so they can share feedback more easily during projects. If your team works remotely, you have to look for alternatives to achieve the same effect. Occasional video chats and screen sharing can help teams share feedback and interact in real time.
  • Make time for education. Of all the teams I’ve worked on, those that foster a knowledge-sharing culture tend to work most efficiently. Simple and casual presentations among colleagues from different disciplines can be vital to creating a seamless variety of skills across the team. So it’s important to encourage members of the team to teach and learn from each other.

    When we made the decision to use only a component-based architecture, we prepared a simple presentation for the design team that gave them an overview of how we all would benefit from the change to our process. Shortly after, the team began delivering design comps that were aligned with our new approach.

It’s fair to say that the modern developer can’t simply hide behind a keyboard and expect the rest of the team to handle all of the important decisions that define our workflow. Our role requires us to go beyond code, share our ideas, and fight hard to improve the processes we’re involved in.

Discovery on a Budget: Part II

Tue, 02/13/2018 - 08:00

Welcome to the second installment of the “Discovery on a Budget” series, in which we explore how to conduct effective discovery research when there is no existing data to comb through, no stakeholders to interview, and no slush fund to draw upon. In part 1 of this series, we discussed how it is helpful to articulate what you know (and what you assume) in the form of a problem hypothesis. We also covered strategies for conducting one of the most affordable and effective research methods: user interviews. In part 2 we will discuss when it’s beneficial to introduce a second, competing problem hypothesis to test against the first. We will also discuss the benefits of launching a “fake-door” and how to conduct an A/B test when you have little to no traffic.

A quick recap

In part 1 I conducted the first round of discovery research for my budget-conscious (and fictitious!) startup, Candor Network. The original goal for Candor Network was to provide a non-addictive social media platform that users would pay for directly. I articulated that goal in the form of a problem hypothesis:

Because their business model relies on advertising, social media tools like Facebook are deliberately designed to “hook” users and make them addicted to the service. Users are unhappy with this and would rather have a healthier relationship with social media tools. They would be willing to pay for a social media service that was designed with mental health in mind.

Also in part 1, I took extra care to document the assumptions that went into creating this hypothesis. They were:

  • Users feel that social media sites like Facebook are addictive.
  • Users don’t like to be addicted to social media.
  • Users would be willing to pay for a non-addictive Facebook replacement.

For the first round of research, I chose to conduct user interviews because it is a research method that is adaptable, effective, and—above all—affordable. I recruited participants from Facebook, taking care to document the bias of using a convenience sampling method. I carefully crafted my interview protocol, and used a number of strategies to keep my participants talking. Now it is time to review the data and analyze the results.

Analyze the data

When we conduct discovery research, we look for data that can help us either affirm or reject the assumptions we made in our problem hypothesis. Regardless of what research method you choose, it’s critical that you set aside the time to objectively review and analyze the results.

In practice, analyzing interview data involves creating transcriptions of the interviews and then reading them many, many times. Each time you read through the transcripts, you highlight and label sentences or sections that seem relevant or important to your research question. You can use products like NVivo, HyperRESEARCH, or any other qualitative analysis tool to help facilitate this process. Or, if you are on a pretty strict budget, you can simply use Google Sheets to keep track of relevant sections in one column and labels in another.

Screenshot of my interview analysis in Google Sheets

For my project, I specifically looked for data that would show whether my participants felt Facebook was addicting and whether that was a bad thing, and if they’d be willing to pay for an alternative. Here’s how that analysis played out:

Assumption 1: Users feel that social media sites like Facebook are addictive

“Facebook has a weird, hypnotizing effect on my brain. I keep scrolling and scrolling and then I like wake up and think, ‘where have I been? why am I spending my time on this?’” (interview participant)

Overwhelmingly, my data affirms this assumption. All of my participants (eleven out of eleven) mentioned Facebook being addictive in some way.

Assumption 2: Users don’t like to be addicted to social media

“I know a lot of people who spend a lot of time on Facebook, but I think I manage it pretty well.” (interview participant)

This assumption turned out to be a little more tricky to affirm or reject. While all of my participants described Facebook as addictive, many of them (eight out of eleven) expressed that “it wasn’t so bad” or that they felt like they were less addicted than the average Facebook user.

Assumption 3: Users would be willing to pay for a non-addictive Facebook replacement

“No, I wouldn’t pay for that. I mean, why would I pay for something I don’t think I should use so much anyway?” (interview participant)

Unfortunately for my project, I can’t readily affirm this assumption. Four participants told me they would flat-out never pay for a social media service, four participants said they would be interested in trying a paid-for “non-addictive Facebook,” and three participants said they would only try it if it became really popular and everyone else was using it.

One unexpected result: “It’s super creepy”

“I don’t like that they are really targeting me with those ads. It’s super creepy.” (interview participant)

In reviewing the interview transcripts, I came across one unexpected theme. More than 80% of the interviewees (nine out of eleven) said they found Facebook “creepy” because of the targeted advertising and the collection of personal data. Also, most of those participants (seven out of nine) went on to say that they would pay for a “non-creepy Facebook.” This is particularly remarkable because I never asked the participants how they felt about targeted advertising or the use of personal data. It always came up in the conversation organically.

Whenever we start a new project, our initial ideas revolve around our own personal experiences and discomforts. I started Candor Network because I personally feel that social media is designed to be addicting, and that this is a major flaw with many of the most popular services. However, while I can affirm my first assumption, I had unclear results on the second and have to consider rejecting the third. Also, I encountered a new user experience that I previously didn’t think of or account for: that the way social media tools collect and use personal data for advertising can be disconcerting and “creepy.” As is so often the case, the data analysis showed that there are a variety of other experiences, expectations, and needs that must be accounted for if the project is to be successful.

Refining the hypothesis

Discovery research cycle: Create Hypothesis, Test, Analyze, and repeat

Each time we go through the discovery research process, we start with a hypothesis, test it by gathering data, analyze the data, and arrive at a new understanding of the problem. In theory, it may be possible to take one trip through the cycle and either completely affirm or completely reject our hypothesis and assumptions. However, like with Candor Network, it is more often the case that we get a mixture of results: some assumptions can be affirmed while others are rejected, and some completely new insights come to light.

One option is to continue working with a single hypothesis, and simply refine it to account for the results of each round of research. This is especially helpful when the research mostly affirms your assumptions, but there is additional context and nuance you need to account for. However, if you find that your research results are pulling you in a new direction entirely, it can be useful to create a second, competing hypothesis.

In my example, the interview research brought to light a new concern about social media I previously hadn’t considered: the “creepy” collection of personal data. I am left wondering, Would potential customers be more attracted to the idea of a social media platform built to prevent addiction, or one built for data privacy? To answer this question, I articulated a new, competing hypothesis:

Because their business model relies on advertising, social media tools like Facebook are designed to gather lots of behavior data. They then utilize this behavior data to create super-targeted ads. Users are unhappy with this, and would rather use a social media tool that does not rely on the commodification of their data to make money. They would be willing to pay for a social media service that did not track their use and behavior.

I now have two hypotheses to test against one another: one focused on social media addiction, the other focused on behavior tracking and data collection.

At this point, it would be perfectly acceptable to conduct another round of interviews. We would need to change our interview protocol and find more participants, but it would still be an effective (and cheap) method to use. However, for this article I wanted to introduce a new method for you to consider, and to illustrate that a technique like A/B testing is not just for the “big guys” on the web. So I chose to conduct an A/B test utilizing two “fake-doors.”

A low-cost comparative test: fake-door A/B testing

A “fake-door” test is simply a marketing page, ad, button, or other asset that promotes a product that has yet to be made. Fake-door testing (or “ghetto testing”) is Zynga’s go-to method for testing ideas. They create a five-word summary of any new game they are considering, make a few ads, and put it up on various high-trafficked websites. Data is then collected to track how often users click on each of the fake-door “probes,” and only those games that attract a certain number of “conversions” on the fake-door are built.

One of the many benefits of conducting a fake-door test is that it allows you to measure interest in a product before you begin to develop it. This makes it a great method for low-budget projects, because it can help you decide whether a project is worth investing in before you spend anything.

However, for my project, I wasn’t just interested in measuring potential customer interest in a single product idea. I wanted to continue evaluating my original hypothesis on non-addictive social media as well as start investigating the second hypothesis on a social media platform that doesn’t record behavior data. Specifically, I wanted to see which theoretical social media platform is more attractive. So I created two fake-door landing pages—one for each hypothesis—and used Google Optimize to conduct an A/B test.

Versions A (right) and B (left) of the Candor Network landing page

Version A of the Candor Network landing page advertises the product I originally envisioned and described in my first problem hypothesis. It advertises a social network “built with mental health in mind.” Version B reflects the second problem hypothesis and my interview participants’ concerns around the “creepy” commodification of user data. It advertises a social network that “doesn’t track, use, solicit, or sell your data.” In all other respects, the landing pages are identical, and both will receive 50% of the traffic.

Running an A/B test with little to no site traffic

One of the major caveats when running an A/B test is that you need to have a certain number of people participate to achieve any kind of statistically significant result. This wouldn’t be a problem if we worked at a large company with an existing customer base, as it would be relatively straightforward to find ways to direct some of the existing traffic to the test. If you’re working on a new or low-trafficked site, however, conducting an A/B test can be tricky. Here are a few strategies I recommend:

Tip 1: Make your variations as different as possible

Figuring out how much traffic you need to achieve statistical significance in a quantitative study is an inexact science. If we were conducting a high-stakes experiment at a more established business, we would conduct multiple rounds of pre-tests to calculate the effect size of the experiment. Then we would use a calculation like Cohen’s d to estimate the number of people we need to participate in the actual test. This approach is rigorous and helps avoid sample pollution or sampling bias, but it requires a lot of resources upfront (like time, money, and lots of potential participants) that we may not have access to.

In general, however, you can use this rule of thumb: the bigger the difference between the variations, the fewer participants you need to see a significant result. In other words, if your A and B are very different from each other, you will need fewer participants.
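To put rough numbers on that rule of thumb, here is a hedged JavaScript sketch using a common shortcut for comparing two conversion rates (roughly 95% confidence and 80% power). The baseline and expected rates below are hypothetical, and a proper power analysis would be more rigorous:

// Rough participants needed per variation to detect a difference
// between two conversion rates (the "16 * variance / difference^2" shortcut).
function sampleSizePerVariation(baselineRate, expectedRate) {
  const difference = Math.abs(expectedRate - baselineRate);
  const pooledRate = (baselineRate + expectedRate) / 2;
  return Math.ceil((16 * pooledRate * (1 - pooledRate)) / (difference * difference));
}

// A small difference demands far more traffic than a big one:
console.log(sampleSizePerVariation(0.10, 0.12)); // => 3916 per variation
console.log(sampleSizePerVariation(0.10, 0.20)); // => 204 per variation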

Tip 2: Run the test for a longer amount of time

When I worked at Weather Underground, we would always start an A/B test on a Sunday and end it a full week later on the following Sunday. That way we could be sure we captured both weekday and weekend users. Because Weather Underground is a high-trafficked site, this always resulted in having more than enough participants to see a statistically significant result.

If you’re working on a new or low-trafficked site, however, you’ll need to run your test for longer than a week to achieve the number of test participants required. I recommend budgeting enough time so that your study can run a full six weeks. Six weeks will provide enough time to not only capture results from all your usual website traffic, but also any newcomers you can recruit through other means.

Tip 3: Beg and borrow traffic from someone else

I’ve got a pretty low number of followers on social media, so if I tweet or post about Candor Network, only a few people will see it. However, I know a few people and organizations that have a huge number of followers. For example, @alistapart has roughly 148k followers on Twitter, and A List Apart’s publisher, Jeffrey Zeldman (@zeldman), has 358k followers. I have asked them both to share the link for Candor Network with their followers.

A helpful tweet from @zeldman

Of course, this method of advertising doesn’t cost any money, but it does cost social capital. I’m sure A List Apart and Mr. Zeldman wouldn’t appreciate it if I asked them to tweet things on my behalf on a regular basis. I recommend you use this method sparingly.

Tip 4: Beware! There is always a risk of no results.

Before you create an A/B test for your new product idea, there is one major risk you need to assess: there is a chance that your experiment won’t produce any statistically significant results at all. Even if you use all of the tips I’ve outlined above and manage to get a large number of participants in your test, there is a chance that you won’t be able to “declare a winner.” This isn’t just a risk for companies that have low traffic, it is an inherent risk when running any kind of quantitative study. Sometimes there simply isn’t a clear effect on participant behavior.

Tune in next time for the last installment

In the third and final installment of the “Discovery on a Budget” series, I’ll describe how I designed the incredibly short survey on the Candor Network landing page and discuss the results of my fake-door A/B test. I will also make another revision to my problem hypothesis and will discuss how to know when you’re ready to leave the discovery process (at least for now) and embark on the next phase of design: ideating over possible solutions.

My Accessibility Journey: What I’ve Learned So Far

Tue, 02/06/2018 - 08:00

Last year I gave a talk about CSS and accessibility at the stahlstadt.js meetup in Linz, Austria. Afterward, an attendee asked why I was interested in accessibility: Did I or someone in my life have a disability?

I’m used to answering this question—to which the answer is no—because I get it all the time. A lot of people seem to assume that a personal connection is the only reason someone would care about accessibility.

This is a problem. For the web to be truly accessible, everyone who makes websites needs to care about accessibility. We tend to use our own abilities as a baseline when we’re designing and building websites. Instead, we need to keep in mind our diverse users and their diverse abilities to make sure we’re creating inclusive products that aren’t just designed for a specific range of people.

Another reason we all should think about accessibility is that it makes us better at our jobs. In 2016 I took part in 10k Apart, a competition held by Microsoft and An Event Apart. The objective was to build a compelling web experience that worked without JavaScript and could be delivered in 10 kB. On top of that, the site had to be accessible. At the time, I knew about some accessibility basics like using semantic HTML, providing descriptions for images, and hiding content visually. But there was still a lot to learn.

As I dug deeper, I realized that there was far more to accessibility than I had ever imagined, and that making accessible sites basically means doing a great job as a developer (or as a designer, project manager, or writer).

Accessibility is exciting

Web accessibility is not about a certain technology. It’s not about writing the most sophisticated code or finding the most clever solution to a problem; it’s about users and whether they’re able to use our products.

The focus on users is the main reason why I’m specializing in accessibility rather than solely in animation, performance, JavaScript frameworks, or WebVR. Focusing on users means I have to keep up with pretty much every web discipline, because users will load a page, deal with markup in some way, use a design, read text, control a JavaScript component, see animation, walk through a process, and navigate. What all those things have in common is that they’re performed by someone in front of a device. What makes them exciting is that we don’t know which device it will be, or which operating system or browser. We also don’t know how our app or site will be used, who will use it, how fast their internet connection will be, or how powerful their device will be.

Making accessible sites forces you to engage with all of these variables—and pushes you, in the process, to do a great job as a developer. For me, making accessible sites means making fast, resilient sites with great UX that are fun and easy to use even in conditions that aren’t ideal.

I know, that sounds daunting. The good news, though, is that I’ve spent the last year focusing on some of those things, and I’ve learned several important lessons that I’m happy to share.

1. Accessibility is a broad concept

Many people, like me pre-2016, think making your site accessible is synonymous with making it accessible to people who use screen readers. That’s certainly hugely important, but it’s only one part of the puzzle. Accessibility means access for everyone:

  • If your site takes ten seconds to load on a mobile connection, it’s not accessible.
  • If your site is only optimized for one browser, it’s not accessible.
  • If the content on your site is difficult to understand, your site isn’t accessible.

It doesn’t matter who’s using your website or when, where, and how they’re doing it. What matters is that they’re able to do it.

The belief that you have to learn new software or maybe even hardware to get started with accessibility is a barrier for many developers. At some point you will have to learn how to use a screen reader if you really want to get everything right, but there’s a lot more to do before that. We can make a lot of improvements that help everyone, including people with visual impairments, by simply following best practices.

2. There are permanent, temporary, and situational impairments

Who benefits from a keyboard-accessible site? Only a small percentage of users, some might argue. Aaron Gustafson pointed me to the Microsoft design toolkit, which helped me broaden my perspective. People with permanent impairments are not the only ones who benefit from accessibility. There are also people with temporary and situational impairments who’d be happy to have an alternative way of navigating. For example, someone with a broken arm, someone who recently got their forearm tattooed, or a parent who’s holding their kid in one arm while having to check something online. When you watch a developer operate their editor, it sometimes feels like they don’t even know they have a mouse. Why not give users the opportunity to use your website in a similar way?

As you think about the range of people who could benefit from accessibility improvements, the group of beneficiaries tends to grow much bigger. As Derek Featherstone has said, “When something works for everyone, it works better for everyone.”

3. The first step is to make accessibility a requirement

I’ve been asked many times whether it’s worth the effort to fix accessibility, how much it costs, and how to convince bosses and colleagues. My answer to those questions is that you can improve things significantly without even having to use new tools, spend extra money, or ask anyone’s permission.

The first step is to make accessibility a requirement—if not on paper, then at least in your head. For example, if you’re looking for a slider component, pick one that’s accessible. If you’re working on a design, make sure color contrasts are high enough. If you’re writing copy, use language that is easy to understand.

We ask ourselves many questions when we make design and development decisions: Is the code clean? Does the site look nice? Is the UX great? Is it fast enough? Is it well-documented?

As a first step, add one more question to your list: Is it accessible?

4. Making accessible sites is a team sport

Another reason why making websites accessible sounds scary to some developers is that there is a belief that we’re the only ones responsible for getting it right.

In fact, as Dennis Lembree reminds us, “Nearly everyone in the organization is responsible for accessibility at some level.”

It’s a developer’s job to create an accessible site from a coding perspective, but there are many things that have to be taken care of both before and after that. Designs must be intuitive, interactions clear and helpful, copy understandable and readable. Relevant personas and use cases have to be defined, and tests must be carried out accordingly. Most importantly, leadership and teams have to see accessibility as a core principle and requirement, which brings me to the next point: communication.

5. Communication is key

After talking to a variety of people at meetups and conferences, I think one of the reasons accessibility often doesn’t get the place it deserves is that not everyone knows what it means. Many times you don’t even have to convince your team, but rather just explain what accessibility is. If you want to get people on board, it matters how you approach them.

The first step here is to listen. Talk to your colleagues and ask why they make certain design, development, or management decisions. Try to find out if they don’t approach things in an accessible way because they don’t want to, they’re not allowed to, or they just never thought of it. You’ll have better results if they don’t feel bad, so don’t try to guilt anyone into anything. Just listen. As soon as you know why they do things the way they do, you’ll know how to address your concerns.

Highlight the benefits beyond accessibility

You can talk about accessibility without mentioning it. For example, talk about typography and ideal character counts per line and how beautiful text is with the perfect combination of font size and line height. Demonstrate how better performance impacts conversion rates and how focusing on accessibility can promote out-of-the-box thinking that improves usability in general.

Challenge your colleagues

Some people like challenges. At a meetup, a designer who specializes in accessibility once said that one of the main reasons she loves designing with constraints in mind is that it demands a lot more of her than going the easy way. Ask your colleagues, Can we hit a speed index below 1000? Do you think you can design that component in such a way that it’s keyboard-accessible? My Nokia 3310 has a browser—wouldn’t it be cool if we could make our next website work on that thing as well?

Help people empathize

In his talk “Every Day Website Accessibility,” Scott O’Hara points out that it can be hard for someone to empathize if they are unaware of what they should be empathizing with. Sometimes people just don’t know that certain implementations might be problematic for others. You can help them by explaining how people who are blind or who can’t use a mouse use the web. Even better, show videos of how people navigate the web without a mouse. Empathy prompts are also a great way of illustrating different circumstances under which people are surfing the web.

6. Talk about accessibility before a project kicks off

It’s of course a good thing if you’re fixing accessibility issues on a site that’s already in production, but that has its limitations. At some point, changes may be so complicated and costly that someone will argue that it’s not worth the effort. If your whole team cares about accessibility from the very beginning, before a box is drawn or a line of code is written, it’s much easier, effective, and cost-efficient to make an accessible product.

7. A solid knowledge of HTML solves a lot of problems

It’s impressive to see how JavaScript and the way we use it has changed in recent years. It has become incredibly powerful and more important than ever for web development. At the same time, it seems HTML has become less important. There is an ongoing discussion about CSS in JavaScript and whether it’s more efficient and cleaner than normal CSS from a development perspective. What we should talk about instead is the excessive use of <div> and <span> elements at the expense of other elements. It makes a huge difference whether we use a link or a <div> with an onclick handler. There’s also a difference between links and buttons when it comes to accessibility. Form items need <label> elements, and a sound document outline is essential. Those are just a few examples of absolute basics that some of us forgot or never learned. Semantic HTML is one of the cornerstones of accessible web development. Even if we write everything in JavaScript, HTML is what is finally rendered in the user’s browser.
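To make the difference concrete, here is a minimal sketch; the clickable “Save” control and its handler are hypothetical, chosen only to show what a <div> would need just to approximate a real <button>:

function save() {
  // Hypothetical handler: persist the form data somewhere.
}

// A <div> with a click handler misses everything a <button> provides.
const fakeButton = document.createElement('div');
fakeButton.textContent = 'Save';
fakeButton.addEventListener('click', save);
// Not focusable, not announced as a button, ignores Enter and Space.
// We would have to bolt all of that on ourselves:
fakeButton.tabIndex = 0;
fakeButton.setAttribute('role', 'button');
fakeButton.addEventListener('keydown', (event) => {
  if (event.key === 'Enter' || event.key === ' ') save();
});

// The real element gives us focus, semantics, and keyboard support for free.
const realButton = document.createElement('button');
realButton.textContent = 'Save';
realButton.addEventListener('click', save);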

(Re)learning HTML and using it consciously prevents and fixes many accessibility issues.

8. JavaScript is not the enemy, and sometimes JavaScript even improves accessibility

I’m one of those people who believes that most websites should be accessible even when JavaScript fails to execute. That doesn’t mean that I hate JavaScript; of course not—it pays part of my rent. JavaScript is not the enemy, but it’s important that we use it carefully because it’s very easy to change the user experience for the worse otherwise.

Not that long ago, I didn’t know that JavaScript could improve accessibility. We can leverage its power to make our websites more accessible for keyboard users. We can do things like trapping focus in a modal window, adding key controls to custom components, or showing and hiding content in an accessible manner.
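As an example, a minimal focus trap might look something like the following sketch. The .modal selector and its markup are assumptions for illustration; production code would also handle Escape and restore focus when the dialog closes.

// Keep Tab and Shift+Tab cycling inside an open modal dialog.
const modal = document.querySelector('.modal');
const focusable = modal.querySelectorAll(
  'a[href], button, input, select, textarea, [tabindex]:not([tabindex="-1"])'
);
const first = focusable[0];
const last = focusable[focusable.length - 1];

modal.addEventListener('keydown', (event) => {
  if (event.key !== 'Tab') return;
  if (event.shiftKey && document.activeElement === first) {
    event.preventDefault();
    last.focus(); // Shift+Tab on the first element wraps to the last
  } else if (!event.shiftKey && document.activeElement === last) {
    event.preventDefault();
    first.focus(); // Tab on the last element wraps back to the first
  }
});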

There are many impressive and creative CSS-only implementations of common widgets, but they’re often less accessible and provide worse UX than their JavaScript equivalents. In a post about building a fully accessible help tooltip, Sara Soueidan explains why JavaScript is important for accessibility. “Every single no-JS solution came with a very bad downside that negatively affected the user experience,” she writes.

9. It’s a good time to know vanilla CSS and JavaScript

For a long time, we’ve been reliant on libraries, frameworks, grid systems, and polyfills because we demanded more of browsers than they were able to give us. Naturally, we got used to many of those tools, but from time to time we should take a step back and question if we really still need them. There were many problems that Bootstrap and jQuery solved for us, but do those problems still exist, or is it just easier for us to write $() instead of document.querySelector()?

jQuery is still relevant, but browser inconsistencies aren’t as bad as they used to be. CSS Grid Layout is supported in all major desktop browsers, and thanks to progressive enhancement we can still provide experiences for legacy browsers. We can do feature detection natively with feature queries, testing has gotten much easier, and caniuse and MDN help us understand what browsers are capable of. Many people use frameworks and libraries without knowing what problems those tools are solving. To decide whether it makes sense to add the extra weight to your site, you need a solid understanding of HTML, CSS, and JavaScript. Instead of increasing the page weight for older browsers, it’s often better to progressively enhance an experience. Progressively enhancing our websites—and reducing the number of requests, kilobytes, and dependencies—makes them faster and more robust, and thus more accessible.
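As a small illustration (the .menu selector and the cssgrid class name are hypothetical), here is the jQuery habit next to its native equivalent, plus native feature detection with the feature-queries API:

// What $('.menu').addClass('open') does, without jQuery:
document.querySelector('.menu').classList.add('open');

// Feature detection with the native feature-queries API:
if (window.CSS && CSS.supports('display', 'grid')) {
  document.documentElement.classList.add('cssgrid');
}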

10. Keep learning about accessibility and share your knowledge

I’m really thankful that I’ve learned all this in the past few months. Previously, I was a very passive part of the web community for a very long time. Ever since I started to participate online, attend and organize events, and write about web-related topics, especially accessibility, things have changed significantly for me and I’ve grown both personally and professionally.

Understanding the importance of access and inclusion, viewing things from different perspectives, and challenging my decisions has helped me become a better developer.

Knowing how things should be done is great, but it’s just the first step. Truly caring, implementing, and most importantly sharing your knowledge is what makes an impact.

Share your knowledge

Don’t be afraid to share what you’ve learned. Write articles, talk at meetups, and give in-house workshops. The distinct culture of sharing knowledge is one of the most important and beautiful things about our industry.

Go to conferences and meetups

Attending conferences and meetups is very valuable because you get to meet many different people from whom you can learn. There are several dedicated accessibility events and many conferences that feature at least one accessibility talk.

Organize meetups

Dennis Deacon describes his decision to start and run an accessibility meetup as a life-changing experience. Meetups are very important and valuable for the community, but organizing a meetup doesn’t just bring value to attendees and speakers. As an organizer, you get to meet all these people and learn from them. By listening and by understanding how they see and approach things, and what’s important to them, you are able to broaden your horizons. You grow as a person, but you also get to meet other professionals, agencies, and companies from which you may also benefit professionally.

Invite experts to your meetup or conference

If you’re a meetup or conference organizer, you can have a massive impact on the role accessibility plays in our community. Invite accessibility experts to your event and give the topic a forum for discussion.

Follow accessibility experts on Twitter

Follow experts on Twitter to learn what they’re working on, what bothers them, and what they think about recent developments in inclusive web development and design in general. I’ve learned a lot from the following people: Aaron Gustafson, Adrian Roselli, Carie Fisher, Deborah Edwards-Onoro, Heydon Pickering, Hugo Giraudel, Jo Spelbrink, Karl Groves, Léonie Watson, Marco Zehe, Marcy Sutton, Rob Dodson, Scott O’Hara, Scott Vinkle, and Steve Faulkner.

11. Simply get started

You don’t have to go all-in from the very beginning. If you improve just one thing, you’re already doing a great job in bringing us closer to a better web. Just get started and keep working.

There are a lot of resources out there, and trying to find out how and where to start can get quite overwhelming. I’ve gathered a few sites and books that helped me; hopefully they will help you as well. The following lists are by no means exhaustive.

Video series
  • This free Udacity course is a great way to get started.
  • Rob Dodson covers many different accessibility topics in his video series A11ycasts (a11y is short for accessibility—the number eleven stands for the number of letters omitted).
Books · Blogs · Newsletters · Accessible JavaScript components · Resources and further reading

Design Like a Teacher

Thu, 02/01/2018 - 08:00

In 2014, the clinic where I served as head of communications and digital strategy switched to a new online patient portal, a change that was mandated by the electronic health record (EHR) system we used. The company that provides the EHR system held several meetings for the COO and me to learn the new tool and provided materials to give to patients to help them register for and use the new portal.

As the sole person at my clinic working on any aspect of user experience, I knew the importance of knowing the audience when implementing an initiative like the patient portal. So I was skeptical of the materials provided to the patients, which assumed a lot of knowledge on their part and focused on the cool features of the portal rather than on why patients would actually want to use it.

By the time the phone rang for the fifth time on the first day of the transition, I knew my suspicion that the patient portal had gone wrong in the user experience stage was warranted. Patients were getting stuck during every phase of the process—from wondering why they should use the portal to registering for and using it. My response was to ask patients what they had tried so far and where they were getting stuck. Then I would try to explain why they might want to use the portal.

Sometimes I lost patients completely; they just refused to sign up. They had a bad user experience trying to understand how a portal fit into their mental model of receiving healthcare, and I had a terrible user experience trying to learn what I needed to do to guide patients through the migration. To borrow a phrase from Dave Platt, the lead instructor of the UX Engineering course I currently help teach, the “hassle budget” was extremely high.

I realized three important things in leading this migration:

  • When people get stuck, their frustration prevents them from providing information up front. They start off with “I’m stuck” and don’t offer additional feedback until you pull it out of them. (If you felt a tremor just then, that was every IT support desk employee in the universe nodding emphatically.)
  • In trying to get them unstuck, I had to employ skills that drew on my work outside of UX. There was no choice; I had a mandate to reach an adoption rate of at least 60%.
  • The overarching goal was really to help these patients learn to do something different than what they were used to, whether that was deal with a new interface or deal with an interface for the first time at all.

Considering these three realizations led me to a single, urgent conclusion that has transformed my UX practice: user experience is really a way of defining and planning what we want a user to learn, so we also need to think about our own work as how to teach.

It follows, then, that user experience toolkits need to include developing a teaching mindset. But what does that mean? And what’s the benefit? Let’s use this patient portal story and the three realizations above as a framework for considering this.

Helping users get unstuck

Research on teaching and learning has produced two concepts that can help explain why people struggle to get unstuck and what to do about it: 1) cognitive load and 2) the zone of proximal development.

Much like you work your muscles through weight resistance to develop physical strength, you work your brain through cognitive load to develop mental strength—to learn. There are three kinds of cognitive load: intrinsic, germane, and extraneous.

 

  • Intrinsic cognitive load is responsible for the actual learning of the material.
  • Germane cognitive load is responsible for building that new information into a more permanent memory store.
  • Extraneous cognitive load is responsible for everything else about the experience of encountering the material (e.g., who’s teaching it, how they teach, your comfort level with the material, what the room is like, the temperature, the season, your physical health, energy levels, and so on).

In the case of the patient portal, intrinsic cognitive load was responsible for a user actually signing up for the portal and using it for the first time. Germane cognitive load was devoted to a user making sense of this experience and storing it so that it can be repeated in the future with increasing fluency. My job in salvaging the user experience was to figure out what was extraneous in the process of using the portal so that I could help users focus on what they needed to know to use it effectively.

Additionally, we all have a threshold for comfortably exploring and figuring something out with minimal guidance. This threshold moves around depending on the task and is called your zone of proximal development. It lies between the spaces where you can easily do a task on your own and where you cannot do a task at all without help. Effective learning happens in this zone by offering the right support, at the right time, in the right amount.

When you’re confronted with an extremely frustrated person because of a user experience you have helped create (or ideally, before that scenario happens), ask yourself a couple questions:

  • Did I put too much burden on their learning experience at the expense of the material?
  • Did I do my best to support their movement from something completely familiar to something new and/or unknown?

Think about your creation in terms of the immediate task and everything else. Consider (or reconsider) all the feelings, thoughts, contexts, and everything else that could make up the space around that task. What proportion of effort goes to the task versus to everything in the space around it? After that, think about how easy or difficult it is to achieve that task. Are you offering the right support, at the right time, in the right amount? What new questions might you need to ask to figure that out?

Making use of “unrelated” skill sets

When you were hired, you responded to a job description that included specific bullet points detailing the skills you should have and duties you would be responsible for fulfilling. You highlighted everything about your work that showed you fit that description. You prepared your portfolio, and demonstrated awareness of the recent writings from UX professionals in the field to show you can hold a conversation about how to “do” this work. You looked incredibly knowledgeable.

In research on teaching and learning, we also explore the idea of how we know in addition to what we know. Some people believe that knowledge is universally true and is out there to be uncovered and explored. Others believe that knowledge is subjective because it is uncovered and explored through the filter of the individual’s experiences, reflections, and general perception of reality. This is called constructivism. If we accept constructivism, it means that we open ourselves to learning from each other in how we conceptualize and practice UX based on what else we know besides what job descriptions ask. How do we methodically figure out the what else? By asking better questions.

Part of teaching and learning in a constructivist framework is understanding that the name of the game is facilitation, not lecturing (you might have heard the cute phrase, “Guide on the side, not sage on the stage”). Sharing knowledge is actually about asking questions to evoke reflection and then conversation to connect the dots. Making time to do this can help you recall and highlight the “unrelated” skills that you may have buried that would actually serve you well in your UX work. For example:

  • That was an incredibly difficult stakeholder meeting. What feels like the most surprising thing about how it turned out?
  • It seemed like we got nothing done in that wireframing session. Everyone wanted to see their own stuff included instead of keeping their eye on who we’re solving for. What is another time in my life when I had this kind of situation? How did it turn out?

All of this is in service to helping ourselves unlock more productive communication with our clients. In the patient portal case, I relied very heavily on my master’s degree in international relations, which taught me how to ask questions that methodically untangle a problem into more manageable chunks, and how to listen for what a speaker is really saying between the lines. When you don’t speak the same language, your emotional intelligence and empathy begin to heighten as you try to relate on a broader human level. This helped me manage patient expectations and navigate them to the outcome I needed, even though this degree was meant to prepare me to be a diplomat.

As you consider how you’re feeling in your current role, preparing for a performance review, or plotting your next step, think about your whole body of experience. What are the themes in your work that you can recall dealing with in other parts of your life (at any point)? What skills are you relying on that, until you’ve now observed them, you didn’t think very much about but that have a heavy influence on your style of practice or that help make you effective when you work with your intended audiences?

Unlearn first, then learn

When Apple wanted to win over consumers in their bid to make computers a household item, they had to help them embrace what a machine with a screen and some keys could accomplish. In other words, to convince consumers it was worth it to learn how to use a computer, they first had to help consumers unlearn their reliance on a desk, paper, file folders, and pencils.

Apple integrated this unlearning and learning into one seamless experience by creating a graphical user interface that used digital representations of objects people were already familiar with—desks, paper, file folders, and pencils. But the solution may not always be that literal. There are two concepts that can help you support your intended audiences as they transition from one system or experience to another.

The first concept, called a growth mindset, relates to the belief that people are capable of constructing and developing intelligence in any given area, as opposed to a fixed mindset, which holds that each of us is born with a finite capacity for some level of skill. It’s easy to tell if someone has a fixed mindset if they say things like, “I’m too old to understand new technology,” or “This is too complicated. I’ll never get it.”

The second is self-determination theory, which divides motivation into two types: intrinsic and extrinsic. Self-determination theory states that in learning, your desire to persevere is not just about having motivation at all, but about what kind of motivation you have. Intrinsic motivation comes from within yourself; extrinsic comes from the world around you. Thanks to this research and subsequent studies, we know that intrinsic motivation is vital to meaningful learning and skill development (think about the last time you did an HR training and liked it).

This appears in our professional practice as the ever-favored endeavor to generate “buy-in.” What we’re really saying is, “How do I get someone to feel motivated on their own to be part of this or do this thing, instead of me having to reward them or somehow provide an incentive?” Many resources on getting buy-in are about the end goal of getting someone to do what you want. While this is important, conducting this as a teaching process allows you to step back and make space for the other person’s active contribution to a dialogue where you also learn something, even if you don’t end up getting buy-in:

  • “I’m curious about your feeling that this is complicated. Walk me through what you’ve done so far and tell me more about that feeling.”
  • “What’s the most important part of this for you? What excites you or resonates with you?”

For the majority of patients I worked with, transitioning to a new portal was almost fully an extrinsically motivated endeavor—if they didn’t switch, they didn’t get to access their health information, such as lab results, which is vital for people with chronic diseases. They did it because they had to. And many patients ran into a fixed-mindset wall as they confronted bad design: “I can’t understand this. I’m not very good at the computer. I don’t understand technology. Why do I have to get my information this way?” I spent an especially long time exploring why some users felt the portal was complicated (i.e., the first bullet point above), because not only did I want them to switch to it, but I wanted them to switch and then keep using the portal with increasing fluency. First I had to help them unlearn some beliefs about their capabilities and what it means to access information online, and then I could help them successfully set up and use this tool.

While you’re researching the experience you’re going to create around a product, service, or program, ask questions not just about the thing itself but about the circumstances or context. What are the habits or actions someone might need to stop doing, or unlearn, before they adopt what you’re creating? What are the possible motivators in learning to do the something different? Among those, what is the ratio of extrinsic to intrinsic? Do you inadvertently cause an inflammation of fixed mindset? How do you instead encourage a growth mindset?

Where we go from here

Ultimately, I hit the target: about 70% of patients who had been using the old portal migrated to the new tool. It took some time for me to realize I needed to create a process rather than react to individual situations, but gradually things started to smooth out as I knew what bumps in the road to expect. I also went back even further and adjusted our communications and website content to speak to the fears and concerns I now knew patients experienced. Eventually, we finished migrating existing patients, and the majority of patients signing onto this portal for the first time were new to the clinic overall (so they would not have used the previous portal). To my knowledge the interface design never improved in any profound way, but we certainly lodged a lot of technical tickets to contribute to a push for feature changes and improvements.

Although this piece contains a lot of information, it essentially boils down to asking questions as you always do, but from a different angle to uncover more blind spots. The benefit is a more thorough understanding of who you intend to serve and a more empathetic process for acquiring that understanding. Each section is specifically written to give you a direct idea of a question or prompt you can use the next time you have an opportunity in your own work. I would love to hear how deepening your practice in this way works for you—please comment or feel free to find me on Twitter!

CSS: The Definitive Guide, 4th Edition

Tue, 01/30/2018 - 06:48

A note from the editors: We’re pleased to share an excerpt from Chapter 19 (“Filters, Blending, Clipping, and Masking”) of CSS: The Definitive Guide, 4th Edition by Eric Meyer and Estelle Weyl, available now from O’Reilly.

In addition to filtering, CSS offers the ability to determine how elements are composited together. Take, for example, two elements that partially overlap due to positioning. We’re used to the one in front obscuring the one behind. This is sometimes called simple alpha compositing, in that you can see whatever is behind the element as long as some (or all) of it has alpha channel values less than 1. Think of, for example, how you can see the background through an element with opacity: 0.5, or in the areas of a PNG or GIF89a that are set to be transparent.

But if you’re familiar with image-editing programs like Photoshop or GIMP, you know that image layers which overlap can be blended together in a variety of ways. CSS has gained the same ability. There are two blending strategies in CSS (at least as of late 2017): blending entire elements with whatever is behind them, and blending together the background layers of a single element.

Blending Elements

In situations where elements overlap, it’s possible to change how they blend together with the property mix-blend-mode.

mix-blend-mode

Values: normal | multiply | screen | overlay | darken | lighten | color-dodge | color-burn | hard-light | soft-light | difference | exclusion | hue | saturation | color | luminosity
Initial value: normal
Applies to: All elements
Computed value: As declared
Inherited: No
Animatable: No

The way the CSS specification puts this is: “defines the formula that must be used to mix the colors with the backdrop.” That is to say, the element is blended with whatever is behind it (the “backdrop”), whether that’s pieces of another element, or just the background of its parent element.

The default, normal, means that the element’s pixels are shown as is, without any mixing with the backdrop, except where the alpha channel is less than 1. This is the “simple alpha compositing” mentioned previously. It’s what we’re all used to, which is why it’s the default value. A few examples are shown in Figure 19-6.

Figure 19-6. Simple alpha channel blending

For the rest of the mix-blend-mode keywords, I’ve grouped them into a few categories. Let’s also nail down a few definitions:

  • The foreground is the element that has mix-blend-mode applied to it.
  • The backdrop is whatever is behind that element. This can be other elements, the background of the parent element, and so on.
  • A pixel component is one of the color components of a given pixel: R, G, or B.

If it helps, think of the foreground and backdrop as images that are layered atop one another in an image-editing program. With mix-blend-mode, you can change the blend mode applied to the top image (the foreground).
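To make that concrete, here’s a minimal sketch of two overlapping elements (the class names, sizes, and overlap technique are mine for illustration, not from the book; the colors reuse the example values below):

    /* The backdrop: an ordinary element sitting behind another. */
    .backdrop {
      width: 200px;
      height: 200px;
      background-color: rgb(102, 104, 255);
    }

    /* The foreground: pulled up over the backdrop, then blended with it. */
    .foreground {
      width: 200px;
      height: 200px;
      margin-top: -100px;   /* overlap the lower half of the backdrop */
      margin-left: 100px;
      background-color: rgb(91, 164, 22);
      mix-blend-mode: multiply;  /* try darken, screen, difference, etc. */
    }

Wherever the two boxes overlap, the foreground’s pixels are mixed with the backdrop’s according to the chosen keyword; everywhere else, the foreground blends with whatever else is behind it, such as the parent’s background.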

Darken, Lighten, Difference, and Exclusion

These blend modes might be called simple-math modes—they achieve their effect by directly comparing values in some way, or using simple addition and subtraction to modify pixels:

darken: Each pixel in the foreground is compared with the corresponding pixel in the backdrop, and for each of the R, G, and B values (the pixel components), the smaller of the two is kept. Thus, if the foreground pixel has a value corresponding to rgb(91,164,22) and the backdrop pixel is rgb(102,104,255), the resulting pixel will be rgb(91,104,22).

lighten: This blend is the inverse of darken: when comparing the R, G, and B components of a foreground pixel and its corresponding backdrop pixel, the larger of the two values is kept. Thus, if the foreground pixel has a value corresponding to rgb(91,164,22) and the backdrop pixel is rgb(102,104,255), the resulting pixel will be rgb(102,164,255).

difference: The R, G, and B components of each pixel in the foreground are compared to the corresponding pixel in the backdrop, and the absolute value of subtracting one from the other is the final result. Thus, if the foreground pixel has a value corresponding to rgb(91,164,22) and the backdrop pixel is rgb(102,104,255), the resulting pixel will be rgb(11,60,233). If one of the pixels is white, the resulting pixel will be the inverse of the non-white pixel. If one of the pixels is black, the result will be exactly the same as the non-black pixel.

exclusion: This blend is a milder version of difference. Rather than being | back - fore |, the formula is back + fore - (2 × back × fore), where back and fore are values in the range from 0 to 1. For example, an exclusion calculation of an orange (rgb(100%,50%,0%)) and medium gray (rgb(50%,50%,50%)) will yield rgb(50%,50%,50%). For the red component, the math is 1 + 0.5 - (2 × 1 × 0.5), which reduces to 0.5, corresponding to 50%. For the green component, the math is 0.5 + 0.5 - (2 × 0.5 × 0.5), which also reduces to 0.5; and for the blue component, 0 + 0.5 - (2 × 0 × 0.5) reduces to 0.5 once more. Compare this to difference, where the result would be rgb(50%,0%,50%), since each component is the absolute value of subtracting one from the other.

This last definition highlights the fact that for all blend modes, the actual values being operated on are in the range 0–1. The previous examples showing values like rgb(11,60,233) are scaled back up to the 0–255 range from results computed in 0–1 terms. In other words, given the example of applying the difference blend mode to rgb(91,164,22) and rgb(102,104,255), the actual operation is as follows:

  • rgb(91,164,22) is R = 91 ÷ 255 = 0.357; G = 164 ÷ 255 = 0.643; B = 22 ÷ 255 = 0.086. Similarly, rgb(102,104,255) corresponds to R = 0.4; G = 0.408; B = 1.
  • Each component is subtracted from the corresponding component, and the absolute value taken. Thus, R = | 0.357 - 0.4 | = 0.043; G = | 0.643 - 0.408 | = 0.235; B = | 1 - 0.086 | = 0.914. This could be expressed as rgb(4.3%,23.5%,91.4%), or (by multiplying each component by 255) as rgb(11,60,233).

From all this, you can perhaps understand why the full formulas are not written out for every blend mode we cover. If you’re interested in the fine details, each blend mode’s formula is provided in the “Compositing and Blending Level 1” specification.

Examples of the blend modes in this section are depicted in Figure 19-7.

Figure 19-7. Darken, lighten, difference, and exclusion blending

Multiply, Screen, and Overlay

These blend modes might be called the multiplication modes—they achieve their effect by multiplying values together:

multiply: Each pixel component in the foreground is multiplied by the corresponding pixel component in the backdrop. This yields a darker version of the foreground, modified by what is underneath. This blend mode is symmetric, in that the result will be exactly the same even if you were to swap the foreground with the backdrop.

screen: Each pixel component in the foreground is inverted (see invert in the earlier section “Color Filtering” on page 948), multiplied by the inverse of the corresponding pixel component in the backdrop, and the result inverted again. This yields a lighter version of the foreground, modified by what is underneath. Like multiply, screen is symmetric.

overlay: This blend is a combination of multiply and screen. For foreground pixel components darker than 0.5 (50%), the multiply operation is carried out; for foreground pixel components whose values are above 0.5, screen is used. This makes the dark areas darker, and the light areas lighter. This blend mode is not symmetric, because swapping the foreground for the backdrop would mean a different pattern of light and dark, and thus a different pattern of multiplying versus screening.

Examples of these blend modes are depicted in Figure 19-8.

Figure 19-8. Multiply, screen, and overlay blending

Hard and Soft Light

These blend modes are covered here because the first is closely related to a previous blend mode, and the second is just a muted version of the first:

hard-light: This blend is the inverse of overlay blending. Like overlay, it’s a combination of multiply and screen, but the determining layer is the backdrop. Thus, for backdrop pixel components darker than 0.5 (50%), the multiply operation is carried out; for backdrop pixel components lighter than 0.5, screen is used. This makes it appear somewhat as if the foreground is being projected onto the backdrop with a projector that employs a harsh light.

soft-light: This blend is a softer version of hard-light. That is to say, it uses the same operation, but is muted in its effects. The intended appearance is as if the foreground is being projected onto the backdrop with a projector that employs a diffuse light.

Examples of these blend modes are depicted in Figure 19-9.

Figure 19-9. Hard- and soft-light blending

Color Dodge and Burn

Color dodging and burning are interesting modes, in that they’re meant to lighten or darken a picture with a minimum of change to the colors themselves. The terms come from old darkroom techniques performed on chemical film stock:

color-dodge: Each pixel component in the foreground is inverted, and the corresponding backdrop pixel component is divided by that inverted foreground value. This yields a brightened backdrop unless the foreground value is 0, in which case the backdrop value is unchanged.

color-burn: This blend is a reverse of color-dodge: each pixel component in the backdrop is inverted, the inverted backdrop value is divided by the unchanged value of the corresponding foreground pixel component, and the result is then inverted. This yields a result where the darker the backdrop pixel, the more its color will burn through the foreground pixel.

Examples of these blend modes are depicted in Figure 19-10.

Figure 19-10. Color dodge and burn blending

Hue, Saturation, Luminosity, and Color

The final four blend modes are different than those we’ve seen before, because they do not perform operations on the R/G/B pixel components. Instead, they perform operations to combine the hue, saturation, luminosity, and color of the foreground and backdrop in different ways:

hue: For each pixel, combines the luminosity and saturation levels of the backdrop with the hue angle of the foreground.

saturation: For each pixel, combines the hue angle and luminosity level of the backdrop with the saturation level of the foreground.

color: For each pixel, combines the luminosity level of the backdrop with the hue angle and saturation level of the foreground.

luminosity: For each pixel, combines the hue angle and saturation level of the backdrop with the luminosity level of the foreground.

Examples of these blend modes are depicted in Figure 19-11.

Figure 19-11. Hue, saturation, luminosity, and color blending

These blend modes can be a lot harder to grasp without busting out raw formulas, and even those can be confusing if you aren’t familiar with how things like saturation and luminosity levels are determined. If you don’t feel like you quite have a handle on how they work, the best thing is to practice with a bunch of different images and simple color patterns.

Two things to note:

  • Remember that an element always blends with its backdrop. If there are other elements behind it, it will blend with them; if there’s a patterned background on the parent element, the blending will be done against that pattern.
  • Changing the opacity of a blended element will change the outcome, though not always in the way you might expect. For example, if an element with mix-blend-mode: difference is also given opacity: 0.8, then the difference calculations will be scaled by 80%. More precisely, a scaling factor of 0.8 will be applied to the color-value calculations. This can cause some operations to trend toward flat middle gray, and others to shift the color changes.
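As a rough sketch of that second note (the selector and the exact opacity are mine, for illustration):

    /* Difference blending, dialed back: per the note above, the 0.8
       opacity scales the difference calculations by 80%. */
    .blended {
      mix-blend-mode: difference;
      opacity: 0.8;
    }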

The King vs. Pawn Game of UI Design

Tue, 01/23/2018 - 06:33

If you want to improve your UI design skills, have you tried looking at chess? I know it sounds contrived, but hear me out. I’m going to take a concept from chess and use it to build a toolkit of UI design strategies. By the end, we’ll have covered color, typography, lighting and shadows, and more.

But it all starts with rooks and pawns.

I want you to think back to the first time you ever played chess (if you’ve never played chess, humor me for a second—and no biggie; you will still understand this article). If your experience was anything like mine, your friend set up the board like this:

And you got your explanation of all the pieces. This one’s a pawn and it moves like this, and this one is a rook and it moves like this, but the knight goes like this or this—still with me?—and the bishop moves diagonally, and the king can only do this, but the queen is your best piece, like a combo of the rook and the bishop. OK, want to play?

This is probably the most common way of explaining chess, and it’s enough to make me hate board games forever. I don’t want to sit through an arbitrary lecture. I want to play.

One particular chess player happens to agree with me. His name is Josh Waitzkin, and he’s actually pretty good. Not only at chess (where he’s a grandmaster), but also at Tai Chi Push Hands (he’s a world champion) and Brazilian Jiu Jitsu (he’s the first black belt under 5x world champion Marcelo Garcia). Now he trains financiers to go from the top 1% to the top .01% in their profession.

Point is: this dude knows a lot about getting good at stuff.

Now here’s the crazy part. When Josh teaches you chess, the board looks like this:

King vs. King and Pawn

Whoa.

Compared to what we saw above, this is stupidly simple.

And, if you know how to play chess, it’s even more mind-blowing that someone would start teaching with this board. In the actual game of chess, you never see a board like this. Someone would have won long ago. This is the chess equivalent of a street fight where both guys break every bone in their body, dislocate both their arms, can hardly see out of their swollen eyes, yet continue to fight for another half-hour.

What gives?

Here’s Josh’s thinking: when you strip the game down to its core, everything you learn is a universal principle.

That sounds pretty lofty, but I think it makes sense when you consider it. There are lots of things on a fully-loaded board to distract a beginning chess player, but everything you start learning in a king-pawn situation is fundamentally important to chess:

  • using two pieces to apply pressure together;
  • which spaces are “hot”;
  • and the difference between driving for a checkmate and a draw.

Are you wondering if I’m ever going to start talking about design? Glad you asked.

The simplest possible scenario

What if, instead of trying to design an entire page with dozens of elements (nav, text, input controls, a logo, etc.), we consciously started by designing the simplest thing possible? What if we deliberately limited the playing field to one tiny thing and saw what we learned? Let’s try.

What is the simplest possible element? I vote that it’s a button.

This is the most basic, default button I could muster. It’s Helvetica (default font) with a 16px font size (pretty default) on a plain, Sketch-default-blue rectangle. It’s 40px tall (nice, round number) and has 20px of horizontal padding on each side.

So yeah, I’ve already made a bunch of design decisions, but can we agree I basically just used default values instead of making decisions for principled, design-related reasons?

Now let’s start playing with this button. What properties are modifiable here?

  • the font (and text styling)
  • the color
  • the border radius
  • the border
  • the shadows

These are just the first things that come to my mind. There are even more, of course.

Typography

Playing with the font is a pretty easy place to start.

Blown up to show font detail.

Now I’ve changed the font to Moon (available for free on Behance for personal use). It’s rounded and soft, unlike Helvetica, which felt a little more squared-off—or at least not as overtly friendly.

The funny thing is: do you see how the perfectly square edges now look a tad awkward with the rounded font?

Let’s round the corners a bit.

Bam. Nice. That’s a 3px border radius.

But that’s kind of weird, isn’t it? We adjusted the border radius of a button because of the shape of the letterforms in our font. I wouldn’t want you thinking fonts are just loosey-goosey works of art that only work when you say the right incantations.

No, fonts are shapes. Shapes have connotations. It’s not rocket science.

Here’s another popular font, DIN.

With its squared edges, DIN is a clean and solid workhorse font.

Specifically, this is a version called DIN 2014 (available for cheap on Typekit). It’s the epitome of a squared-off-but-still-readable font. A bit harsh and no-nonsense, but in a bureaucratic way.

It’s the official font of the German government, and it looks the part.

So let’s test our working hypothesis with DIN.

How does DIN look with those rounded corners?

Well, we need to compare it to square corners now, don’t we?

Ahhh, the squared-off corners are better here. It’s a much more consistent feel.

Now look at our two buttons with their separate fonts. Which is more readable? I think Moon has a slight advantage here. DIN’s letters just look too cramped by comparison. Let’s add a bit of letter-spacing.

When we add some letter-spacing, it’s far more relaxed.

This is a key law of typography: always letter-space your uppercase text. Why? Because unless a font has no lowercase characters at all, it was designed for sentence-case reading, and characters in uppercase words will ALWAYS appear too cramped. (Moon is the special exception here—it only has uppercase characters, and notice how the letter-spacing is built into the font.)
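In CSS terms, that law is a couple of declarations (the 0.05em is a starting point to tune by eye, not a universal constant):

    /* Uppercase labels need room to breathe. */
    .button--din {
      text-transform: uppercase;
      letter-spacing: 0.05em;  /* sentence-case text would need none */
    }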

We’ll review later, but so far we’ve noticed two things that apply not just to buttons, but to all elements:

  • Rounded fonts go better with rounded shapes; squared-off fonts with squared-off shapes.
  • Fonts designed for sentence case should be letter-spaced when used in words that are all uppercase.

Let’s keep moving for now.

Color

Seeing the plain default Sketch blue is annoying me. It’s begging to be changed into something that matches the typefaces we’re using.

How can a color match a font? Well, I’ll hand it to you. This one is a bit more loosey-goosey.

For our Moon button, we want something a bit more friendly. To me, a staid blue says default, unstyled, trustworthy, takes-no-risks, design-by-committee. How do you inject some fun into it?

Well, like all problems of modifying color, it helps to think in the HSB color system (hue, saturation, and brightness). When we boil color down to three intuitive numbers, we give ourselves levers to pull.

For instance, let’s look at hue. We have two directions we can push hue: down to aqua or up to indigo. Which sounds more in line with Moon? To me, aqua does. A bit less staid, a bit more Caribbean sea. Let’s try it. We’ll move the hue to 180° or so.

Ah, Moon Button, now you’ve got a beach vibe going on. You’re a vibrant sea foam!

This is a critical lesson about color. “Blue” is not a monolith; it’s a starting point. I’ve taught hundreds of students UI design, and this comes up again and again: just because blue was one color in kindergarten doesn’t mean that we can’t find interesting variations around it as designers.

“Blue” is not a monolith. Variations are listed in HSB, with CSS color names given below each swatch.

Aqua is a great variation with a much cooler feel, but it also makes that white text much harder to read. So now we have another problem to fix.

“Hard to read” is actually a numerically-specific property. The World Wide Web Consortium has published guidelines for contrast between text and background, and if we use a tool to test those, we find we’re lacking in a couple departments.

White text on an aqua button doesn’t provide enough contrast, failing to pass either AA or AAA WCAG recommendations.

According to Stark (which is my preferred Sketch plugin for checking contrast—check out Lea Verou’s Contrast Ratio for a similar web-based tool), we’ve failed our contrast guidelines across the board!

How do you make the white text more legible against the aqua button? Let’s think of our HSB properties again.

  • Brightness. Let’s decrease it. That much should be obvious.
  • Saturation. We’re going to increase it. Why? Because we’re contrasting with white text, and white has a saturation of zero. So a higher saturation will naturally stand out more.
  • Hue. We’ll leave this alone since we like its vibe. But if the contrast continued to be too low, you could lower the aqua’s luminosity by shifting its hue up toward blue.

So now, we’ve got a teal button:

Much better?

Much better.
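CSS has no native HSB notation, but hsl() gives you similar levers (its lightness axis behaves a little differently than HSB brightness). A sketch of the aqua-to-teal move, with numbers that are my guesses rather than the article’s:

    /* Aqua: fun and vibrant, but weak behind white text. */
    .button--moon {
      background-color: hsl(180, 60%, 50%);
    }

    /* Teal: saturation up, lightness down, hue left alone. */
    .button--moon.is-legible {
      background-color: hsl(180, 75%, 32%);
    }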

For what it’s worth, I’m not particularly concerned about missing the AAA standard here. WCAG developed the levels as relative descriptors of how much contrast there is, not as an absolute benchmark of, say, some particular percentage of people to being able to read the text. The gold standard is—as always—to test with real people. AAA is best to hit, but at times, AA may be as good as you’re going to get with the colors you have to work with.

Some of the ideas we’ve used to make a button’s blue a bit more fun and legible against white are actually deeper lessons about color that apply to almost everything else you design:

  • Think in HSB, as it gives you intuitive levers to pull when modifying color.
  • If you like the general feel of a color, shifting the hue in either direction can be a baseline for getting interesting variations on it (e.g., we wanted to spice up the default blue, but not by, say, changing it to red).
  • Modify saturation and brightness at the same time (but always in opposite directions) to increase or decrease contrast.

OK, now let’s switch over to our DIN button. What color goes with its harsh edges and squared-off feel?

The first thing that comes to mind is black.

But let’s keep brainstorming. Maybe a stark red would also work.

Or even a construction-grade orange.

(But not the red and orange together. Yikes! In general, two adjacent hues with high saturations will not look great next to each other.)

Now, ignoring that the text of this is “Learn More” and a button like this probably doesn’t need to be blaze orange, I want you to pay attention to the colors I’m picking. We’re trying to maintain consistency with the official-y, squared-off DIN. So the colors we go to naturally have some of the same connotations: engineered, decisive, no funny business.

Sure, this match-a-color-and-a-font business is more subjective, but there’s something solid to it: note that the words I used to describe the colors (“stark” and “construction-grade”) apply equally well to DIN—a fact I am only noticing now, not something done intentionally.

Want to match a color with a font? This is another lesson applicable to all of branding. It’s best to start with adjectives/emotions, then match everything to those. Practically by accident, we’ve uncovered something fundamental in the branding design process.

Shadows

Let’s shift gears to work with shadows for a bit.

There are a couple directions we could go with shadows, but the two main categories are (for lack of better terms):

  • realistic shadows;
  • and cartoon-y shadows.

Here’s an example of each:

The top button’s shadow is more photorealistic. It behaves like a shadow in the real world.

The bottom button’s shadow is a bit lower-fidelity. It shows that the button is raised up, but it’s a cartoon version, with a slightly unrealistic, idealized bottom edge—and without a normal shadow, which would be present in the real world.

The bottom works better for the button we’re crafting. The smoothness, the friendliness, the cartoon fidelity—it all goes together.

As for our DIN button?

I’m more ambivalent here. Maybe the shadow is for a hover state, à la Material Design?

In any case, with a black background, a darkened bottom edge is impossible—you can’t get any darker than black.

By the way, you may not have noticed it above, but the black button has a much stronger shadow. Compare:

The teal button’s shadow is 30%-opacity black, shifted 1 pixel down on the y-axis, with a 2-pixel blur (0 1px 2px). The black button’s is 50%-opacity black, shifted 2 pixels down on the y-axis, with a 4-pixel blur (0 2px 4px). What’s more, the stronger shadow looks awful on the teal button.

Why is that? The answer, like so many questions that involve color, is in luminosity. When we put the button’s background in luminosity blend mode, converting it to a gray of equal natural lightness, we see something interesting.

The shadow, at its darkest, is basically as dark as the button itself. Or, at least, the rate of change of luminosity is steady between each row of pixels.

The top row is the button itself, not shadow.

Shadows that are too close to the luminosity of their element’s backgrounds will appear too strong. And while this may sound like an overly specific lesson, it’s actually broadly applicable across elements. You know where else you see it?

Borders

Let’s put a border on our teal button.

Now the way I’ve added this border is something that a bunch of people have thought of: make the border translucent black so that it works on any background color. In this case, I’ve used a single-pixel-wide border of 20%-opacity black.

However, if I switch the background color to a more standard blue, which is naturally a lot less luminous, that border all but disappears.

In fact, to see it on blue just as much as you can see it on teal, you’ve got to crank up black’s opacity to something like 50%.
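In code, the rule of thumb looks something like this (the selectors and exact opacities are illustrative):

    /* Translucent black borders adapt to the background behind them,
       but less luminous backgrounds need a more opaque black. */
    .button--teal {
      border: 1px solid rgba(0, 0, 0, 0.2);  /* visible against luminous teal */
    }

    .button--blue {
      border: 1px solid rgba(0, 0, 0, 0.5);  /* needed against a darker blue */
    }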

This is a generalizable rule: when you want to layer black on another color, it needs to be a more opaque black to show up the same amount on less luminous background colors. Where else would you apply this idea?

Spoiler alert: shadows!

Each of these buttons has the same shadow (0 2px 3px) except for different opacities. The top two buttons’ shadows have opacity 20%, and the bottom two have opacity 40%. Note how what’s fine on a white background (top left) is hardly noticeable on a dark background (top right). And what’s too dark on a white background (lower left) works fine on a dark background (lower right).

Icons

I want to change gears one more time and talk about icons.

Here’s the download icon from Font Awesome, my least favorite icon set of all time.

I dislike it, not only because it’s completely overused, but also because the icons are really bubbly and soft. Yet most of the time, they’re used in clean, crisp websites. They just don’t fit.

You can see it works better with a soft, rounded font. I’m less opposed to this sort of thing.

But there’s still a problem: the icon has these insanely small details! The dots are never going to show up at size, and even the space between the arrow and the disk is a fraction of a pixel in practice. Compared to the letterforms, it doesn’t look like quite the same style.

But what good is my complaining if I don’t offer a solution?

Let’s create a new take on the “download” icon, but with some different guiding principles:

  • We’ll use a stroke weight that’s equivalent (or basically equivalent) to the text weight.
  • We’ll use corner radii that are similar to the corner radii of our font: squared off for DIN, rounded for Moon.
  • We’ll use a simpler icon shape so the differences are easy to see.

Let’s see how it looks:

I call this “drawing with the same pen.” Each of these icons looks like it could basically be a character in the font used in the button. And that’s the point here. I’m not saying all icons will appear this way, but for an icon that appears inline with text like this, it’s a fantastic rule of thumb.

Wrapping it up

Now this is just the beginning. Buttons can take all kinds of styles.

But we’ve got a good start here considering we designed just two buttons. In doing so, we covered a bunch of the things that are focal points of my day-to-day work as a UI designer:

  • lighting and shadows;
  • color;
  • typography;
  • consistency;
  • and brand.

And the lessons we’ve learned in those areas are fundamental to the entirety of UI design, not just one element. Recall:

  • Letterforms are shapes. You can analyze fonts as sets of shapes, not simply as works of art.
  • You should letter-space uppercase text, since most fonts were designed for sentence case.
  • Think in HSB to modify colors.
  • You can find more interesting variations on a “basic” color (like a CSS default shade of blue or red) by tweaking the hue in either direction.
  • Saturation and brightness are levers that you can move in opposite directions to control luminosity.
  • Find colors that match the same descriptors that you would give your typeface and your overall brand.
  • Use darker shadows or black borders on darker backgrounds—and vice versa.
  • For inline icons, choose or design them to appear as though they were drawn with the same pen as the font you’re using.

You can thank Josh Waitzkin for making me a pedant. I know, you just read an entire essay on buttons. But next time you’re struggling with a redesign or even something you’re designing from scratch, try stripping out all the elements that you think you should be including already, and just mess around with the simplest players on the board. Get a feel for the fundamentals, and go from there.

Weird? Sure. But if it’s good enough for a grandmaster, I’ll take it.

Mental Illness in the Web Industry

Thu, 01/18/2018 - 06:57

The picture of the tortured artist has endured for centuries: creative geniuses who struggle with their metaphorical demons and don’t relate to life the same way as most people. Today, we know some of this can be attributed to mental illness: depression, anxiety, bipolar disorder, and many others. We have modern stories about this and plenty of anecdotal information that fuels the popular belief in a link between creativity and mental illness.

But science has also started asking questions about the link between mental illness and creativity. A recent study has suggested that creative professionals may be more genetically predisposed to mental illness. In the web industry, whether designer, dev, copywriter, or anything else, we’re often creative professionals. The numbers suggest that mental illness hits the web industry especially hard.

Our industry has made great strides in compassionate discussion of disability, with a focus on accessibility and events like Blue Beanie Day. But even though we’re having meaningful conversations and we’re seeing progress, issues related to diversity, inclusion, and sexual harassment are still a major problem for our industry. Understanding and acceptance of mental health issues is an area that needs growth and attention just like many others.

When it comes to mental health, we aren’t quite as understanding as we think we are. According to a study published by the Centers for Disease Control and Prevention, 57% of the general population believes that society at large is caring and sympathetic toward people with mental illness; but only 25% of people with mental health symptoms believed the same thing. Society is less understanding and sympathetic regarding mental illness than it thinks it is.

Where’s the disconnect?  What does it look like in our industry? It’s usually not negligence or ill will on anybody’s part. It has a lot more to do with people just not understanding the prevalence and reality of mental illness in the workplace. We need to begin discussing mental illness as we do any other personal challenge that people face.

This article is no substitute for a well-designed scientific study or a doctor’s advice, and it’s not trying to declare truths about mental illness in the industry. And it certainly does not intend to lump together or equalize any and all mental health issues, illnesses, or conditions. But it does suspect that plenty of people in the industry struggle with their mental health at some point or another, and we just don’t seem to talk about it. This doesn’t seem to make sense in light of the sense of community that web professionals have been proud of for decades.

We reached out to a few people in our industry who were willing to share their unique stories to bring light to what mental health looks like for them in the workplace. Whether you have your own struggles with mental health issues or just want to understand those who do, these stories are a great place to start the conversation.

Meet the contributors

Gerry: I’ve been designing websites since the late ‘90s, starting out in UI design, evolving into an IA, and now in a UX leadership role. Over my career, I’ve contributed to many high-profile projects, organized local UX events, and done so in spite of my personal roadblocks.

Brandon Gregory: I’ve been working in the web industry since 2006, first as a designer, then as a developer, then as a manager/technical leader. I’m also a staff member and regular contributor at A List Apart. I was diagnosed with bipolar disorder in 2002 and almost failed out of college because of it, although I now live a mostly normal life with a solid career and great family. I’ve been very open about my condition and have done some writing on it on Medium to help spread awareness and destigmatize mental illnesses.

Stephen Keable: I’ve been building and running websites since 1999, both professionally and for fun. Worked for newspapers, software companies, and design agencies, in both permanent and freelance roles, almost always creating front-end solutions, concentrating on a user-centered approach.

Bri Piccari: I’ve been messing around with the web since MySpace was a thing, figuring out how to customize themes and make random animations fall down from the top of my profile. Professionally, I’ve been in the field since 2010, freelancing while in college before transitioning to work at small agencies and in-house for a spell after graduation. I focus on creating solid digital experiences, employing my love for design with [a] knack for front-end development. Most recently, I started a small design studio, but decided to jump back into more steady contract and full-time work, after the stress of owning a small business took a toll on my mental health. It was a tough decision, but I had to do what was best for me. I also lead my local AIGA chapter and recently got my 200-hour-yoga-teacher certification.

X: I also started tinkering with the web on Myspace, and started working on websites to help pay my way through college. I just always assumed I would do something else to make a living. Then, I was diagnosed with bipolar disorder. My [original non-web] field was not a welcoming and supportive place for that, so I had to start over, in more ways than one. The web industry hadn’t gone anywhere, and it’s always been welcoming to people with random educational histories, so I felt good about being able to make a living and staying healthy here. But because of my experience when I first tried to be open about my illness, I now keep it a secret. I’m not ashamed of it; in fact, it’s made me live life more authentically. For example, in my heart, I knew I wanted to work on the web the entire time.

The struggle is real

Mental health issues are as numerous and unique as the people who struggle with them. We asked the contributors what their struggles look like, particularly at work in the web industry.

G: I have an interesting mix of ADD, dyslexia, and complex PTSD. As a result, I’m an incomplete person, in a perpetual state of self-doubt, toxic shame, and paralyzing anxiety. I’ve had a few episodes in my past where a requirement didn’t register or a criticism was taken the wrong way and I’ve acted less than appropriately (either through panic, avoidance, or anger). When things go wrong, I deal with emotional flashbacks for weeks.

Presenting or reading before an audience is a surreal experience as well. I go into a zone where I’m never sure if I’m speaking coherently or making any sense at all until I’ve spoken with friends in the audience afterward. This has had a negative effect on my career, making even the most simple tasks anxiety-driven.

BG: I actually manage to at least look like I have everything together, so most people don’t know I have bipolar until I tell them. On the inside, I struggle—a lot. There are bouts of depression where I’m exhausted all day and deal with physical pain, and bursts of mania where I take unnecessary risks and make inappropriate outbursts, and I can switch between these states with little or no notice. It’s a balancing act to be sure, and I work very hard to keep it together for the people in my life.

SK: After the sudden death of my mother, I started suffering from panic attacks. One of them came on about 30 minutes after I got to work; I couldn’t deal with the attack at work, so I suddenly went home without telling anyone, only phoning my boss from a lay-by after I’d been in tears at the side of the road for a while. The attacks also triggered depression, which has made motivation when I’m working from home so hard that I actually want to spend more time at the office. Luckily my employer is very understanding and has been really flexible.

BP: Depending upon the time of year, I struggle greatly, with the worst making it nearly impossible to leave my apartment. As most folks often say, I’ve gotten rather good at appearing as though I’ve got my shit together—typically, most people I interact with have no idea what I’m going through unless I let them in. It wasn’t until recently that my mental health began to make a public appearance, as the stress of starting my own business and attempting to “have it all” made it tough to continue hiding it. There are definitely spans of time where depression severely affects my ability to create and interface with others, and “fake it till ya make it” doesn’t even cut it. I’m currently struggling with severe anxiety brought on by stress. Learning to manage that has been a process.

X: I have been fortunate to be a high-functioning bipolar person for about 5 years now, so there really isn’t a struggle you can really see. The struggle is the stress and anxiety of losing that stability, and especially of people finding out. I take medication, have a routine, a support system, and a self-care regimen that is the reason why I am stable, but if work starts [to] erode my work-life balance, I can’t protect that time and energy anymore. In the past, this has started to happen when I’ve been asked to routinely pull all-nighters, work over the weekend, travel often, or be surrounded by a partying and drinking culture at work. Many people burn out under those conditions, but for me, it could be dangerous and send me into a manic episode, or even [make me] feel suicidal. I struggle with not knowing how far I can grow in my career, because a lot of the things you do to prove yourself and to demonstrate that you’re ready for more responsibility involves putting more on your plate. What’s the point of going after a big role if it’ll mean that I won’t be able to take care of myself? The FOMO [(fear of missing out)] gets bad.

Making it work

There are different ways that people can choose to—or choose not to—address the mental problems they struggle with. We’re ultimately responsible for making our own mental health decisions, and they are different for everyone. In the meantime, the rent has to get paid. Here’s how our contributors cope with their situations at work to make it happen.

G: I started seeing a therapist, which has been an amazing help. I’ve also worked to change my attitude about criticism—I ask more clarifying questions, looking to define the problem, rather than get mad, defensive, or sarcastic. I’ve learned to be more honest with my very close coworkers, making them aware of my irrational shortcomings and asking for help. Also, because I’ve experienced trauma in personal and professional life, I’m hypersensitive to the emotions of others. Just being around a heated argument or otherwise heightened situation could put my body into a panic. I have to take extra special care in managing personalities, making sure everyone in a particular situation feels confident that they’re set up for success.

BG: Medicine has worked very well for me, and I’m very lucky in that regard. That keeps most of my symptoms at a manageable level. Keeping my regular schedule and maintaining some degree of normalcy is a huge factor in remaining stable. Going to work, sleeping when I should, and keeping some social appointments, while not always easy, keep me from slipping too far in either direction. Also, writing has been a huge outlet for me and has helped others to better understand my condition as well. Finding some way to express what you’re going through is huge.

SK: I had several sessions of bereavement counseling to help with the grief. I also made efforts to try and be more physically active each day, even if just going for a short walk on my lunch break. Working had become a way of escaping everything else that was going on at the time. Before the depression I used to work from home two days a week, however found these days very hard being on my own. So I started working from the office every weekday. Thankfully, through all of this, my employer was incredibly supportive and simply told me to do what I need to do. And it’s made me want to stay where I work more than before, as I realize how lucky I am to have their support.

BP: Last winter I enrolled in a leadership/yoga teacher training [program] with a goal of cultivating a personal practice to better manage my depression and anxiety. Making the jump to be in an uncomfortable situation and learn the value of mindfulness has made a huge difference in my ability to cope with stress. Self-care is really big for me, and being aware of when I need to take a break. I’ve heard it called high-functioning depression and anxiety. I often take on too much and learning to say no has been huge. Therapy and a daily routine have been incredibly beneficial as well.

X: The biggest one is medicine, it’s something I will take for the rest of my life and it’s worth it to me. I did a form of therapy called Dialectical Behavioral Therapy for a couple of years. The rest is a consistent regimen of self-care, but there are a couple of things that are big for work. Not working nights or weekends, keeping it pretty 9–5. Walking to and from the office or riding my bike. I started a yoga practice immediately after getting diagnosed, and the mental discipline it’s given me dampens the intensity of how I react to stressful situations at work. This isn’t to say that I will refuse to work unless it’s easy. Essentially, if something catches on fire, these coping strategies help me keep my shit together for long enough to get out.

Spreading awareness

There are a lot of misconceptions about mental illness, in the web industry as much as anywhere else. Some are benign but annoying; others are pretty harmful. Here are some of the things we wish others knew about us and our struggles.

G: Nothing about my struggle is rational. It seems as if my body is wired to screw everything up and wallow in the shame of it. I have to keep moving, working against myself to get projects as close to perfect as possible. However, I am wired to really care about people, and that is probably why I’ve been successful in UX.

BG: Just because I look strong doesn’t mean I don’t need support. Just because I have problems doesn’t mean I need you to solve them. Sometimes, just checking in or being there is the best thing for me. I don’t want to be thought of as broken or fragile (although I admit, sometimes I am). I am more than my disorder, but I can’t completely ignore it either.

Also, there are still a lot of stigmas surrounding mental illness, to the point that I didn’t feel safe admitting to my disorder to a boss at a previous job. Mental illnesses are medical conditions that are often classified as legitimate disabilities, but employees may not be safe admitting that they have one—that’s the reality we live with.

SK: For others who are going through grief-related depression, I would say that talking about it with friends, family, and even strangers helps you process it a lot. And the old cliché that time is a healer really is true. Also, for any employers, be supportive [of those] with mental health conditions—as supportive as you would [be of those] with physical health situations. They will pay you back.

BP: I am a chronically ambitious human. Oftentimes, this comes from a place of working and doing versus dealing with what is bothering or plaguing me at the time. Much of my community involvement came from a place of needing a productive outlet. Fortunately or unfortunately, I have accomplished a lot through that—however, there are times where I simply need a break. I’m learning to absorb and understand that, as well as become OK with it.

X: I wish people knew how much it bothers me to hear the word bipolar being used as an adjective to casually describe things and people. It’s not given as a compliment, and it makes it less likely that I will ever disclose my illness publicly. I also wish people knew how many times I’ve come close to just being open about it, but held back because of the other major diversity and inclusion issues in the tech industry. Women have to deal with being called moody and erratic. People stereotype the ethnic group I belong to as being fiery and ill-tempered. Why would I give people another way to discriminate against me?

Working with External User Researchers: Part I

Tue, 01/16/2018 - 08:00

You’ve got an idea or perhaps some rough sketches, or you have a fully formed product nearing launch. Or maybe you’ve launched it already. Regardless of where you are in the product lifecycle, you know you need to get input from users.

You have a few sound options to get this input: use a full-time user researcher or contract out the work (or maybe a combination of both). Between the three of us, we’ve run a user research agency, hired external researchers, and worked as freelancers. Through our different perspectives, we hope to provide some helpful considerations.

Should you hire an external user researcher?

First things first: in this article, we focus on contracting external user researchers, meaning that a person or team is brought on for the duration of a contract to conduct the research. Here are the most common situations where we find this type of role:

Organizations without researchers on staff: It would be great if companies validated their work with users during every iteration. But unfortunately, in real-world projects, user research happens at less frequent intervals, meaning there might not be enough work to justify hiring a full-time researcher. For this reason, it sometimes makes sense to use external people as needed.

Organizations whose research staff is maxed out: In other cases, particularly with large companies, there may already be user researchers on the payroll. Sometimes these researchers are specific to a particular effort, and other times the researchers themselves function as internal consultants, helping out with research across multiple projects. Either way, there is a finite amount of research staff, and sometimes the staff gets overbooked. These companies may then pull in additional contract-based researchers to independently run particular projects or to provide support to full-time researchers.

Organizations that need special expertise: Even if a company does have user research on staff and those researchers have time, it’s possible that there are specialized kinds of user research for which an external contract-based researcher is brought on. For example, they may want to do research with representative users who regularly use screen readers, so they bring in an accessibility expert who also has user research skills. Or they might need a researcher with special quantitative skills for a certain project.

Why hire an external researcher vs. other options?

Designers as researchers: You could hire a full-time designer who also has research skills. But a designer usually won’t have the same level of research expertise as a dedicated researcher. Additionally, they may end up researching their own designs, making it extremely difficult to moderate test sessions without any form of bias.

Product managers as researchers: While it’s common for enthusiastic product managers to want to conduct their own guerilla user research, this is often a bad idea. Product managers tend to hear feedback that validates their ideas and most often aren’t trained on how to ask non-leading questions.

Temporary roles: You could also bring on a researcher in a staff augmentation role, meaning someone who works for you full-time for an extended period of time, but who is not considered a full-time employee. This can be a bit harder to justify. For example, there may be legal requirements you’d have to meet if you contract an individual directly. Or you could find someone through a staffing agency–fewer legal hurdles, but likely far pricier.

If these options don’t sound like a good fit for your needs, hiring an external user researcher on a project-specific basis could be the best solution for you. They give you exactly what you need without additional commitment or other risks. They may be a freelancer (or a slightly larger microbusiness), or even a team farmed out for a particular project by a consulting firm or agency.

What kinds of projects would you contract a user researcher for?

You can reasonably expect that any person or company advertising user research as a core skill can handle the full scope of qualitative efforts—from usability studies of all kinds, to card sorts, to ethnographic and exploratory work.

Contracting out quantitative work is a bit riskier. An analogy that comes to mind is using TurboTax to file your taxes. While TurboTax may be just fine for many situations, it’s easy to overlook what you don’t know in terms of more complicated tax regulations, which can quickly get you in trouble. Similarly, with quantitative work, there’s a long list of diverse, specialized quantitative skills (e.g., logs analysis, conjoint, Kano, and multiple regression). Don’t assume someone advertising as a general quantitative user researcher has the exact skills you need.

Also, for some companies, quantitative work comes with unique user privacy considerations that can require special internal permissions from legal and privacy teams.

But if the topic of your project is pretty easy to grasp and absorb without needing much specialized technical or organizational insight, hiring an external researcher is generally a great option.

What are the benefits to hiring an external researcher?

A new, objective perspective is one major benefit to hiring an external researcher. We all suffer from design fixation and are influenced by organizational politics and perceived or real technical constraints. Hiring an unbiased external researcher can uncover more unexpected issues and opportunities.

Contracting a researcher can also expand an internal researcher’s ability to influence. Having someone else moderate research studies frees up in-house researchers to be part of the conversations among stakeholders that happen while user interviews are being observed. If they are intuitively aware of an issue or opportunity, they can emphasize their perspective during those critical decision-making moments that they often miss out on when they moderate studies themselves. In these situations, the in-house team can even design the study plan, draft the discussion guide, and just have the contractor moderate the study. The external researcher may then collaborate with the in-house researcher on the final report.

More candid and honest feedback can come out of hiring an external researcher. Research participants tend to be more comfortable sharing critical feedback with someone who doesn’t work for the company whose product is being tested.

Lastly, if you need access to specialized research equipment or software (for example, proprietary online research tools), it can be easier to get it via an external researcher.

How do I hire an external user researcher?

So you’ve decided that you need to bring on an external user researcher to your team. How do you get started?

Where to find them

Network: Don’t wait until you need help to start networking and collecting a list of external researchers. Be proactive. Go to UX events in your area. You’ll meet consultants and freelancers there, as well as people who have contracted out research and can make recommendations. You won’t necessarily have the opportunity for deep conversations, but you can always continue a discussion later over coffee or drinks!

Referrals: Along those same lines, when you anticipate a need at some point in the future, seek out trusted UX colleagues at your company and elsewhere. Ask them to connect you with people that they may have worked with.

What about a request for proposal (RFP)?

Your company may require you to specify your need in the form of an RFP, which is a document that outlines your project needs and specifications, and asks for bids in response.

An RFP provides these benefits:

  • It keeps the playing field level, and anyone who wants to bid on a project can (in theory).
  • You can be very specific about what you’re looking for, and get bids that can be easy to compare on price.

On the other hand, an RFP comes with limitations:

  • You may think your requirements were very specific, but respondents may interpret them in different ways. This can result in large quote differences.
  • You may be eliminating smaller players—those freelancers and microbusinesses who may be able to give you the highest level of seniority for the dollar but don’t have the staff to respond to RFPs quickly.
  • You may be forced to be very concrete about your needs when you are not yet sure what you’ll actually need.

When it comes to RFPs, the most important thing to remember is to clearly and thoroughly specify your needs. Don’t forget to include small but important details that can matter in terms of pricing, such as answers to these questions:

  • Who is responsible for recruitment of research participants?
  • How many participants do you want included?
  • Who will be responsible for distributing participant incentives?
  • Who will be responsible for localizing prototypes?
  • How long will sessions be?
  • Over how many days and locations will they be?
  • What is the format of expected deliverables?
  • Do you want full, transcribed videos, or video clips?

It’s these details that will ultimately result in receiving informed proposals that are easy to compare.

Do a little digging on their backgrounds

Regardless of how you find a potential researcher, make sure you check out their credentials if you haven’t worked with them before.

At the corporate level, review the company: Google them and make sure that user research seems to be one of their core competencies. The same is true when dealing with a freelancer or microbusiness: Google them and see whether you get research-oriented results, and also check them out on social media.

Certainly feel free to ask for references if you don’t already have a direct connection, but take them with a grain of salt. Between the self-selecting nature of references and a reference’s natural desire to be nice to a friend, they can never be fully trusted.

One of the strongest indicators of experience and quality work is if a researcher has been hired by the same client for more than one project over time.

Larger agencies, individual researchers, or something in-between?

So you’ve got a solid sense of what research you need, and you’ve got several quality options to choose from. But external researchers come in all shapes and sizes, from single freelancers to very large agencies. How do you choose what’s best for your project while still evaluating the external researchers fairly?

Larger consulting firms and agencies do have some distinct advantages—namely that you’ve got a large company to back up the project. Even if one researcher isn’t available as expected (for example, if the project timeline slips), another can take their place. They also likely have a whole infrastructure for dealing with contracts like yours.

On the other hand, this larger infrastructure may add extra burden on your side. You may not know who exactly is going to be working on your project, or their level of seniority or experience. Changes in scope will likely be more involved. Larger infrastructure also likely means higher costs.

Individual (freelance) researchers also have some key advantages. You will likely have more control over contracting requirements. They are also likely to be more flexible—and less costly. In addition, if they were referred to you, you will be working with a specific resource that you can get to know over multiple projects.

Bringing on individual researchers can incur a little more risk. You will need to make sure that you can properly justify hiring an external researcher instead of an employee. (In the United States, the IRS has a variety of tests to make sure it is OK.) And if your project timeline slips, you run a greater risk of losing the researcher to some other commitment without someone to replace them.

A small business, a step between an individual researcher and a large firm, has some advantages over hiring an individual. Contracting an established business may involve less red tape, and you will still have the personal touch of knowing exactly who is conducting your research.

An established business also shows a certain level of commitment, even if it’s one person. For example, a microbusiness could represent a single freelancer, but it could also involve a very small number of employees or established relationships with trusted subcontractors (or both). Whatever the configuration, don’t expect a business of this size to be able to respond to RFPs quickly.

The money question

Whether you solicit RFPs or get a single bid, price quotes will often differ significantly. User research is not a product but rather a customized and sophisticated effort around your needs. Here are some important things to consider:

  • Price quotes are a reflection of how a project is interpreted. Different researchers are going to interpret your needs in different ways. A good price quote clearly details any assumptions that are going into pricing so you can quickly see if something is misaligned.
  • Research teams are made up of staff with different levels of experience. A quote is going to be a reflection of the overall seniority of the team, their salaries and benefits, the cost of any business resources they use, and a reasonable profit margin for the business.
  • Businesses all want to make a reasonable profit, but approaches to profitability differ. Some organizations may balance having a high volume of work with less profit per project. Other organizations may take more of a boutique approach: more selectivity over projects taken on, with added flexibility to focus on those projects, but also with a higher profit margin.
  • Overbooked businesses provide higher quotes. Some consultants and agencies are in the practice of rarely saying no to a request, even if they are at capacity in terms of their workload. In these instances, it can be a common practice to multiply a quote by as much as three—if you say no, no harm done given they’re at capacity. However, if you say yes, the substantial profit is worth the cost for them to hire additional resources and to work temporarily above capacity in the meantime.

To determine whether a researcher or research team is right for you, you’ll certainly need to look at the big picture, including pricing, associated assumptions, and the seniority and background of the individuals who are doing the work.

Remember, it’s always OK to negotiate

If you have a researcher or research team that you want to work with but their pricing isn’t in line with your budget, let them know. It could be that the quote is based on faulty assumptions. They may expect you to negotiate and be willing to come down in price, or they may offer cheaper alternative approaches.

Next steps

Hiring an external user researcher typically brings a long list of benefits. But like most relationships, you’ll need to invest time and effort to foster a healthy working dynamic between you, your external user researcher, and your team. Stay tuned for the next installment, where we’ll focus on how to collaborate together.

No More FAQs: Create Purposeful Information for a More Effective User Experience

Thu, 01/11/2018 - 08:00

It’s normal for your website users to have recurring questions and need quick access to specific information to complete … whatever it is they came looking for. Many companies still opt for the ubiquitous FAQ (frequently asked/anticipated questions) format to address some or even all information needs. But FAQs often miss the mark because people don’t realize that creating effective user information—even when using the apparently simple question/answer format—is complex and requires careful planning.

As a technical writer and now information architect, I’ve worked to upend this mediocre approach to web content for more than a decade, and here’s what I’ve learned: instead of defaulting to an unstructured FAQ, invest in information that’s built around a comprehensive content strategy specifically designed to meet user and company goals. We call it purposeful information.

The problem with FAQs

Because of the internet’s Usenet heritage—discussion boards where regular contributors would produce FAQs so they didn’t have to repeat information for newbies—a lot of early websites started out by providing all information via FAQs. Well, the ’80s called, and they want their style back!

Unfortunately, content in this simple format can often be attractive to organizations, as it’s “easy” to produce without the need to engage professional writers or comprehensively work on information architecture (IA) and content strategy. So, like zombies in a horror film, and with the same level of intellectual rigor, FAQs continue to pop up all over the web. The trouble is, this approach to documentation-by-FAQ has problems, and the information is about as far from being purposeful as it’s possible to get.

For example, when companies and organizations resort to documentation-by-FAQ, it’s often the only place certain information exists, yet users are unlikely to spend the time required to figure that out. Conversely, if information is duplicated, it’s easy for website content to get out of sync. The FAQ page can also be a dumping ground for any information a company needs to put on the website, regardless of the topic. Worse, the page’s format and structure can increase confusion and cognitive load, while including obviously invented questions and overt marketing language can result in losing users’ trust quickly. Looking at each issue in more detail:

  • Duplicate and contradictory information: Even on small websites, it can be hard to maintain information. On large sites with multiple authors and an unclear content strategy, information can get out of sync quickly, resulting in duplicate or even contradictory content. I once purchased food online from a company after reading in their FAQ—the content that came up most often when searching for allergy information—that the product didn’t contain nuts. However, on receiving the product and reading the label, I realized the FAQ information was incorrect, and I was able to obtain a refund. An information architecture (IA) strategy that includes clear pathways to key content not only better supports user information needs that drive purchases, but also reduces company risk. If you do have to put information in multiple locations, consider using an object-oriented content management system (CMS) so content is reused, not duplicated. (Our company open-sourced one called Fae.)
  • Lack of discernible content order: Humans want information to be ordered in ways they can understand, whether it’s alphabetical, time-based, or by order of operation, importance, or even frequency. The question format can disguise this organization by hiding the ordering mechanism. For example, I could publish a page that outlines a schedule of household maintenance tasks by frequency, with natural categories (in order) of daily, weekly, monthly, quarterly, and annually. But putting that information into an FAQ format, such as “How often should I dust my ceiling fan?,” breaks that logical organization of content—it’s potentially a stand-alone question. Even on a site that’s dedicated only to household maintenance, that information will be more accessible if placed within the larger context of maintenance frequency.
  • Repetitive grammatical structure: Users like to scan for information, so having repetitive phrases like “How do I …” that don’t relate to the specific task make it much more difficult for readers to quickly find the relevant content. In a lengthy help page with catch-all categories, like the Patagonia FAQ page, users have to swim past a sea of “How do I …,” “Why can’t I …,” and “What do I …” phrases to get to the actual information. While categories can help narrow the possibilities, the user still has to take the time to find the most likely category and then the relevant question within it. The Patagonia website also shows how an FAQ section can become a catch-all. Oh, how I’d love the opportunity to restructure all that Patagonia information into purposeful information designed to address user needs at the exact right moment. So much potential!
  • Increased cognitive load: As well as being repetitive, the question format can also be surprisingly specific, forcing users to mentally break apart the wording of the questions to find a match for their need. If a question appears to exclude the required information, the user may never click to see the answer, even if it is actually relevant. Answers can also raise additional, unnecessary questions in the minds of users. Consider the FAQ-formatted “Can I pay my bill with Venmo?” (which limits the answer to one payment type that only some users may recognize). Rewriting the question to “How can I pay my bill online?” and updating the content improves the odds that users will read the answer and be able to complete their task. However, an even better approach is to create purposeful content under the more direct and concise heading “Online payment options,” which is broad enough to cover all payment services (as a topic in the “Bill Payments” portion of a website), as well as instructions and other task-oriented information.
  • Longer content requirements: In most cases, questions have a longer line length than topic headings. The Airbnb help page illustrates when design and content strategy clash. The design truncates the question after 40 characters when the browser viewport is wider than 743 pixels. You have to click the question to find out if it holds the answer you need—far from ideal! Yet the heading “I’m a guest. How do I check the status of my reservation?” could easily have been rewritten as “Checking reservation status” or even “Guests: Checking reservation status.” Not only do these alternatives fit within the line length limitations set by the design, but the lower word count and simplified English also reduce translation costs (another issue some companies have to consider).

Purposeful information

Grounded in the Minimalist approach to technical documentation, the idea behind purposeful information is that users come to any type of content with a particular purpose in mind, ranging from highly specific (task completion) to general learning (increased knowledge). Different websites—and even different areas within a single website—may be aimed at different users and different purposes. Organizations also have goals when they construct websites, whether they’re around brand awareness, encouraging specific user behavior, or meeting legal requirements. Companies that meld user and organization goals in a way that feels authentic can be very successful in building brand loyalty.

Commerce sites, for example, have the goal of driving purchases, so the information on the site needs to provide content that enables effortless purchasing decisions. For other sites, the goal might be to drive user visits, encourage newsletter sign-ups, or increase brand awareness. In any scenario, burying the pathways users need to complete their goals in an FAQ is a guaranteed way to make it less likely that the organization will meet its own.

By digging into what users need to accomplish (not a general “they need to complete the form,” but the underlying, real-world task, such as getting a shipping quote, paying a bill, accessing health care, or enrolling in college), you can design content to provide the right information at the right time and better help users accomplish those goals. As well as making it less likely you’ll need an FAQ section at all, using this approach to generate a credible IA and content strategy—the tools needed to determine a meaningful home for all your critical content—will build authority and user trust.

Defining specific goals when planning a website is therefore essential if content is to be purposeful throughout the site. Common user-centered methodologies employed during both IA and content planning include user-task analysis, content audits, personas, user observations, and analysis of call center data and web analytics. A complex project might use multiple methodologies to define the content strategy and supporting IA to provide users with the necessary information.

The redesign of the Oliver Winery website is a good example of creating purposeful information instead of resorting to an FAQ. There was a user goal of being able to find practical information about visiting the winery (such as details regarding food, private parties, etc.), yet this information was scattered across various pages, including a partially complete FAQ. There was a company goal of reducing the volume of calls to customer support. In the redesign, a single page called “Plan Your Visit” was created with all the relevant topics. It is accessible from the “Visit” section and via the main navigation.

The system used is designed to be flexible. Topics are added, removed, and reordered using the CMS, and published on the “Plan Your Visit” page, which also shows basic logistical information like hours and contact details, in a non-FAQ format. Conveniently, contact details are maintained in only one location within the CMS yet published on various pages throughout the site. As a result, all information is readily available to users, increasing the likelihood that they’ll make the decision to visit the winery.

If you have to include FAQs

This happens. Even though there are almost always more effective ways to meet user needs than writing an FAQ, FAQs happen. Sometimes the client insists, and sometimes even the most ardent opponent (ahem) concludes that in a very particular circumstance, an FAQ can be purposeful. The most effective FAQ addresses a specific, timely, or transactional need, or contains information that users need repeated access to, such as when paying bills or organizing product returns.

Good topics for an FAQ include transactional activities, such as those involved in the buying process: think shipments, payments, refunds, and returns. By being specific and focusing on a particular task, you avoid the categorization problem described earlier. By limiting questions to those that are frequently asked AND that have a very narrow focus (to reduce users having to sort through lots of content), you create more effective FAQs.

Amazon’s support center has a great example of an effective FAQ within their overall support content because they have exactly one: “Where’s My Stuff?” Set under the “Browse Help Topics” heading, the question leads to a list of task-based topics that help users track down the location of their missing packages. Note that all of the other support content is purposeful, set in a topic-based help system that’s nicely categorized, with a search bar that allows users to dive straight in.

Conference websites, which by their nature are already focused on a specific company goal (conference sign-ups), often have an FAQ section that covers basic conference information, logistics, or the value of attending. This can be effective. However, for the reasons outlined earlier, the content can quickly become overwhelming if conference organizers try to include all information about the conference as a single list of questions, as demonstrated by Web Summit’s FAQ page. Overdoing it can cause confusion even when the design incorporates categories and an otherwise useful UX that includes links, buttons, or tabs, such as on the FAQ page of The Next Web Conference.

In examining these examples, it’s apparent how much more easily users could access the information if it wasn’t presented as questions. But if you do have to use FAQs, here are my tips for creating the best possible user experience.

Creating a purposeful FAQ:

  • Make it easy to find.
  • Have a clear purpose and highly specific content in mind.
  • Give it a clear title related to the user tasks (e.g., “Shipping FAQ” rather than just “FAQ”).
  • Use clear, concise wording for questions.
  • Focus questions on user goals and tasks, not on product or brand.
  • Keep it short.

What to avoid in any FAQ:

  • Don’t include “What does FAQ stand for?” (unfortunately, not a fictional example). Instead, simply define acronyms and initialisms on first use.
  • Don’t define terms using an FAQ format—it’s a ticket straight to documentation hell. If you have to define terms, what you need is a glossary, not FAQs.
  • Don’t tell your brand story or company history, or pontificate. People don’t want to know as much about your brand, product, and services as you are eager to tell them. Sorry.

In the end, always remember your users

Your website should be filled with purposeful content that meets users’ core needs and fulfills your company’s objectives. Do your users and your bottom line a favor and invest in effective user analysis, IA, content strategy, and documentation. Your users will be able to find the information they need, and your brand will be that much more awesome as a result.

Why Mutation Can Be Scary

Tue, 01/09/2018 - 08:00

A note from the editors: This article contains sample lessons from Learn JavaScript, a course that helps you learn JavaScript to build real-world components from scratch.

To mutate means to change in form or nature. Something that’s mutable can be changed, while something that’s immutable cannot be changed. To understand mutation, think of the X-Men. In X-Men, people can suddenly gain powers. The problem is, you don’t know when these powers will emerge. Imagine your friend turns blue and grows fur all of a sudden; that’d be scary, wouldn’t it?


In JavaScript, the same problem with mutation applies. If your code is mutable, you might change (and break) something without knowing.

Objects are mutable in JavaScript

In JavaScript, you can add properties to an object. When you do so after instantiating it, the object is changed permanently. It mutates, like how an X-Men member mutates when they gain powers.

In the example below, the variable egg mutates once you add the isBroken property to it. We say that objects (like egg) are mutable (have the ability to mutate).

const egg = { name: "Humpty Dumpty" };
egg.isBroken = false;

console.log(egg);
// {
//   name: "Humpty Dumpty",
//   isBroken: false
// }

Mutation is pretty normal in JavaScript. You use it all the time.

Here’s when mutation becomes scary.

Let’s say you create a constant variable called newEgg and assign egg to it. Then you want to change the name of newEgg to something else.

const egg = { name: "Humpty Dumpty" };
const newEgg = egg;

newEgg.name = "Errr ... Not Humpty Dumpty";

When you change (mutate) newEgg, did you know egg gets mutated automatically?

console.log(egg);
// {
//   name: "Errr ... Not Humpty Dumpty"
// }

The example above illustrates why mutation can be scary—when you change one piece of your code, another piece can change somewhere else without your knowing. As a result, you’ll get bugs that are hard to track and fix.

This weird behavior happens because objects are passed by reference in JavaScript.

Objects are passed by reference in JavaScript

To understand what “passed by reference” means, first you have to understand that each object has a unique identity in JavaScript. When you assign an object to a variable, you link the variable to the identity of the object (that is, you pass it by reference) rather than assigning the variable the object’s value directly. This is why when you compare two different objects, you get false even if the objects have the same value.

console.log({} === {}); // false

When you assign egg to newEgg, newEgg points to the same object as egg. Since egg and newEgg are the same thing, when you change newEgg, egg gets changed automatically.

console.log(egg === newEgg); // true
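The same reference-sharing behavior applies when you pass an object into a function. Here is a quick sketch (the hatchEgg function is a hypothetical example, not part of the original lesson):

// The parameter refers to the same object the caller holds,
// so mutating it inside the function mutates the caller's object
function hatchEgg(anEgg) {
  anEgg.isHatched = true;
}

const myEgg = { name: "Humpty Dumpty" };
hatchEgg(myEgg);

console.log(myEgg);
// {
//   name: "Humpty Dumpty",
//   isHatched: true
// }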

Most of the time, you don’t want egg to change along with newEgg, since unexpected mutations cause your code to break when you least expect it. So how do you prevent objects from mutating? Before we get to that, you need to know what’s immutable in JavaScript.

Primitives are immutable in JavaScript

In JavaScript, primitives (String, Number, Boolean, Null, Undefined, and Symbol) are immutable; you cannot change the structure (add properties or methods) of a primitive. Nothing will happen even if you try to add properties to a primitive.

const egg = "Humpty Dumpty"; egg.isBroken = false; console.log(egg); // Humpty Dumpty console.log(egg.isBroken); // undefined const doesn’t grant immutability

Many people think that variables declared with const are immutable. That’s an incorrect assumption.

Declaring a variable with const doesn’t make it immutable; it only prevents you from assigning another value to it.

const myName = "Zell"; myName = "Triceratops"; // ERROR

When you declare an object with const, you’re still allowed to mutate the object. In the egg example above, even though egg is created with const, const doesn’t prevent egg from mutating.

const egg = { name: "Humpty Dumpty" };
egg.isBroken = false;

console.log(egg);
// {
//   name: "Humpty Dumpty",
//   isBroken: false
// }

Preventing objects from mutating

You can use Object.assign and assignment to prevent objects from mutating.

Object.assign

Object.assign lets you combine two (or more) objects together into a single one. It has the following syntax:

const newObject = Object.assign(object1, object2, object3, object4);

newObject will contain properties from all of the objects you’ve passed into Object.assign.

const papayaBlender = { canBlendPapaya: true };
const mangoBlender = { canBlendMango: true };

const fruitBlender = Object.assign(papayaBlender, mangoBlender);

console.log(fruitBlender);
// {
//   canBlendPapaya: true,
//   canBlendMango: true
// }

If two conflicting properties are found, the property in a later object overwrites the property in an earlier object (in the Object.assign parameters).

const smallCupWithEar = { volume: 300, hasEar: true };
const largeCup = { volume: 500 };

// In this case, volume gets overwritten from 300 to 500
const myIdealCup = Object.assign(smallCupWithEar, largeCup);

console.log(myIdealCup);
// {
//   volume: 500,
//   hasEar: true
// }

But beware! When you combine two objects with Object.assign, the first object gets mutated in place; the other objects are left untouched.

console.log(smallCupWithEar);
// {
//   volume: 500,
//   hasEar: true
// }

console.log(largeCup);
// {
//   volume: 500
// }

Solving the Object.assign mutation problem

You can pass a new object as your first object to prevent existing objects from mutating. You’ll still mutate the first object though (the empty object), but that’s OK since this mutation doesn’t affect anything else.

const smallCupWithEar = { volume: 300, hasEar: true };
const largeCup = { volume: 500 };

// Using a new object as the first argument
const myIdealCup = Object.assign({}, smallCupWithEar, largeCup);

You can mutate your new object however you want from this point. It doesn’t affect any of your previous objects.

myIdealCup.picture = "Mickey Mouse";

console.log(myIdealCup);
// {
//   volume: 500,
//   hasEar: true,
//   picture: "Mickey Mouse"
// }

// smallCupWithEar doesn't get mutated
console.log(smallCupWithEar); // { volume: 300, hasEar: true }

// largeCup doesn't get mutated
console.log(largeCup); // { volume: 500 }

But Object.assign copies references to objects

The problem with Object.assign is that it performs a shallow merge—it copies properties directly from one object to another. When it does so, it also copies references to any objects.

Let’s explain this statement with an example.

Suppose you buy a new sound system. The system allows you to declare whether the power is turned on. It also lets you set the volume, the amount of bass, and other options.

const defaultSettings = {
  power: true,
  soundSettings: {
    volume: 50,
    bass: 20,
    // other options
  }
};

Some of your friends love loud music, so you decide to create a preset that’s guaranteed to wake your neighbors when they’re asleep.

const loudPreset = {
  soundSettings: {
    volume: 100
  }
};

Then you invite your friends over for a party. To preserve your existing presets, you attempt to combine your loud preset with the default one.

const partyPreset = Object.assign({}, defaultSettings, loudPreset);

But partyPreset sounds weird. The volume is loud enough, but the bass is non-existent. When you inspect partyPreset, you’re surprised to find that there’s no bass in it!

console.log(partyPreset);
// {
//   power: true,
//   soundSettings: {
//     volume: 100
//   }
// }

This happens because JavaScript copies over the reference to the soundSettings object. Since both defaultSettings and loudPreset have a soundSettings object, the one that comes later gets copied into the new object.

If you change partyPreset, loudPreset will mutate accordingly—evidence that the reference to soundSettings gets copied over.

partyPreset.soundSettings.bass = 50;

console.log(loudPreset);
// {
//   soundSettings: {
//     volume: 100,
//     bass: 50
//   }
// }

Since Object.assign performs a shallow merge, you need to use another method to merge objects that contain nested properties (that is, objects within objects).

Enter assignment.

assignment

assignment is a small library made by Nicolás Bevacqua from Pony Foo, which is a great source for JavaScript knowledge. It helps you perform a deep merge without having to worry about mutation. Aside from the method name, the syntax is the same as Object.assign.

// Perform a deep merge with assignment
const partyPreset = assignment({}, defaultSettings, loudPreset);

console.log(partyPreset);
// {
//   power: true,
//   soundSettings: {
//     volume: 100,
//     bass: 20
//   }
// }

assignment copies over values of all nested objects, which prevents your existing objects from getting mutated.

If you try to change any property in partyPreset.soundSettings now, you’ll see that loudPreset remains as it was.

partyPreset.soundSettings.bass = 50;

// loudPreset doesn't get mutated
console.log(loudPreset);
// {
//   soundSettings: {
//     volume: 100
//   }
// }

assignment is just one of many libraries that help you perform a deep merge. Other libraries, including lodash.merge and merge-options, can help you do it, too. Feel free to choose from any of these libraries.
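If you are curious what such libraries do under the hood, here is a minimal deep-merge sketch. It assumes every nested value is either a primitive or a plain object; real libraries also handle arrays, dates, and other edge cases.

// A minimal deep-merge sketch, not a drop-in replacement for a library
function deepMergeSketch(target, ...sources) {
  for (const source of sources) {
    for (const key of Object.keys(source)) {
      const value = source[key];
      if (typeof value === "object" && value !== null) {
        // Recurse instead of copying the reference, so nested objects
        // in the sources never get shared with the result
        const existing = target[key];
        target[key] = deepMergeSketch(
          typeof existing === "object" && existing !== null ? existing : {},
          value
        );
      } else {
        target[key] = value;
      }
    }
  }
  return target;
}

const mergedPreset = deepMergeSketch({}, defaultSettings, loudPreset);
// { power: true, soundSettings: { volume: 100, bass: 20 } }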

Should you always use assignment over Object.assign?

As long as you know how to prevent your objects from mutating, there’s no harm in using Object.assign.

However, if you need to assign objects with nested properties, always prefer a deep merge over Object.assign.
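One more caveat: the newer object spread syntax ({ ...object }) is sometimes suggested as an alternative to Object.assign, but it has the same limitation. The copy it creates is shallow, so nested objects are still shared. Reusing the sound-system presets from earlier:

// Spread creates a new top-level object, but nested objects are
// still copied by reference, exactly like Object.assign
const spreadPreset = { ...defaultSettings, ...loudPreset };

spreadPreset.soundSettings.bass = 50;
console.log(loudPreset.soundSettings.bass); // 50 (loudPreset got mutated)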

Ensuring objects don’t mutate

Although the methods I mentioned can help you prevent objects from mutating, they don’t guarantee that objects don’t mutate. If you made a mistake and used Object.assign for a nested object, you’ll be in for deep trouble later on.

To safeguard yourself, you might want to guarantee that objects don’t mutate at all. To do so, you can use libraries like ImmutableJS. This library throws an error whenever you attempt to mutate an object.

Alternatively, you can use Object.freeze and deep-freeze. With these two methods, attempted changes fail silently (no error is thrown, but the objects don’t get mutated either).

Object.freeze and deep-freeze

Object.freeze prevents direct properties of an object from changing.

const egg = { name: "Humpty Dumpty", isBroken: false };

// Freezes the egg
Object.freeze(egg);

// Attempting to change properties will silently fail
egg.isBroken = true;

console.log(egg); // { name: "Humpty Dumpty", isBroken: false }
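A small aside: the silent failure above is the default (non-strict) behavior. In strict mode, the same assignment throws a TypeError, which can make frozen-object bugs easier to spot:

"use strict";

const frozenEgg = Object.freeze({ name: "Humpty Dumpty", isBroken: false });

// In strict mode, this throws instead of failing silently:
// TypeError: Cannot assign to read only property 'isBroken'
frozenEgg.isBroken = true;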

But it doesn’t help when you mutate a deeper property like defaultSettings.soundSettings.bass.

const defaultSettings = {
  power: true,
  soundSettings: {
    volume: 50,
    bass: 20
  }
};

Object.freeze(defaultSettings);

defaultSettings.soundSettings.bass = 100;

// soundSettings gets mutated nevertheless
console.log(defaultSettings);
// {
//   power: true,
//   soundSettings: {
//     volume: 50,
//     bass: 100
//   }
// }

To prevent a deep mutation, you can use a library called deep-freeze, which recursively calls Object.freeze on all objects.
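That recursive behavior is simple enough to sketch yourself, assuming plain nested objects and no circular references (the deep-freeze package on npm handles the general case):

// A minimal sketch of a recursive freeze
function deepFreezeSketch(object) {
  for (const key of Object.keys(object)) {
    const value = object[key];
    if (typeof value === "object" && value !== null) {
      deepFreezeSketch(value); // freeze the nested objects first
    }
  }
  return Object.freeze(object);
}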

const defaultSettings = {
  power: true,
  soundSettings: {
    volume: 50,
    bass: 20
  }
};

// Performing a deep freeze (after including deep-freeze
// in your code per instructions on npm)
deepFreeze(defaultSettings);

// Attempting to change deep properties will fail silently
defaultSettings.soundSettings.bass = 100;

// soundSettings doesn't get mutated anymore
console.log(defaultSettings);
// {
//   power: true,
//   soundSettings: {
//     volume: 50,
//     bass: 20
//   }
// }

Don’t confuse reassignment with mutation

When you reassign a variable, you change what it points to. In the following example, a is changed from 11 to 100.

let a = 11;
a = 100;

When you mutate an object, it gets changed. The reference to the object stays the same.

const egg = { name: "Humpty Dumpty" };
egg.isBroken = false;

Wrapping up

Mutation is scary because it can cause your code to break without your knowing about it. Even if you suspect the cause of breakage is a mutation, it can be hard for you to pinpoint the code that created the mutation. So the best way to prevent code from breaking unknowingly is to make sure your objects don’t mutate from the get-go.

To prevent objects from mutating, you can use libraries like ImmutableJS and Mori.js, or use Object.assign and Object.freeze.

Take note that Object.assign only copies direct properties (a shallow merge), and Object.freeze only prevents direct properties from mutating. If you need to handle multiple layers of nested objects, you’ll need libraries like assignment and deep-freeze.

Discovery on a Budget: Part I

Thu, 01/04/2018 - 08:00

If you crack open any design textbook, you’ll see some depiction of the design cycle: discover, ideate, create, evaluate, and repeat. Whenever we bring on a new client or start working on a new feature, we start at the top of the wheel with discover (or discovery). It is the time in the project when we define what problem we are trying to solve and what our first approach at solving it should be.

Ye olde design cycle

We commonly talk about discovery at the start of a sprint cycle at an established business, where there are things like budgets, product teams, and existing customers. The discovery process may include interviewing stakeholders or poring over existing user data. And we always exit the discovery phase with some sort of idea to move forward with.

However, discovery is inherently different when you work at a nonprofit, startup, or fledgling small business. It may be a design team of one (you), with zero dollars to spend, and only a handful of people aware the business even exists. There are no clients to interview and no existing data to examine. This may also be the case at large businesses when they want to test the waters on a new direction without overcommitting (or overspending). Whenever you are constrained on budget, data, and stakeholders, you need to be flexible and crafty in how you conduct discovery research. But you can’t skimp on rigor and thoroughness. If the idea you exit the discovery phase with isn’t any good, your big launch could turn out to be a business-ending flop.

In this article I’ll take you through a discovery research cycle, but apply it towards a (fictitious) startup idea. I’ll introduce strategies for conducting discovery research with no budget, existing user data, or resources to speak of. And I’ll show how the research shapes the business going forward.

Write up the problem hypothesis

An awful lot of ink (virtual or otherwise) has been spent proclaiming we should all “fall in love with the problem, not the solution.” And it has been ink well spent. When it comes to product building, a problem-focused philosophy is the cornerstone of any user-centric business.

But how, exactly, do you know when you have a problem worth solving? If you work at a large, established business you may have user feedback and data pointing you like flashing arrows on a well-marked road towards a problem worth solving. However, if you are launching a startup, or work at a larger business venturing into new territory, it can be more like hiking through the woods and searching for the next blaze mark on the trail. Your ideas are likely based on personal experiences and gut instincts.

When your ideas are based on personal experiences, assumptions, and instincts, it’s important to realize they need a higher-than-average level of tire-kicking. You need to evaluate the question “Do I have a problem worth solving?” with a higher level of rigor than you would at a company with budget to spare and a wealth of existing data. You need to take all of your ideas and assumptions and examine them thoroughly. And the best way to examine your ideas and categorize your assumptions is with a hypothesis.

As the dictionary describes, a hypothesis is “a supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation.” That also serves as a good description of why we do discovery research in the first place. We may have an idea that there is a problem worth solving, but we don’t yet know the scope or critical details. Articulating our instincts, ideas, and assumptions as a problem hypothesis lays a foundation for the research moving forward.

Here is a general formula you can use to write a problem hypothesis:

Because [assumptions and gut instincts about the problem], users are [in some undesirable state]. They need [solution idea].

For this article, I decided to “launch” a fictitious (and overly ambitious) startup as an example. Here is the problem hypothesis I wrote for my startup:

Because their business model relies on advertising, social media tools like Facebook are deliberately designed to “hook” users and make them addicted to the service. Users are unhappy with this and would rather have a healthier relationship with social media tools. They would be willing to pay for a social media service that was designed with mental health in mind.

You can see in this example that my assumptions are:

  • Users feel that social media sites like Facebook are addictive.
  • Users don’t like to be addicted to social media.
  • Users would be willing to pay for a non-addictive Facebook replacement.

These are the assumptions I’ll be researching and testing throughout the discovery process. If I find through my research that I cannot readily affirm these assumptions, it means I might not be ready to take on Mr. Zuckerberg just yet.

The benefit of articulating our assumptions in the form of a hypothesis is that it provides something concrete to talk about, refer to, and test. The whole product team can be involved in forming the initial problem hypothesis, and you can refer back to it throughout the discovery process. Once we’ve completed the research and analyzed the results, we can edit the hypothesis to reflect our new understanding of our users and the problems we want to solve.

Now that we’ve articulated a problem hypothesis, it is time to figure out our research plan. In the following two sections, I’ll cover the research method I recommend the most for new ventures, as well as strategies for recruiting participants on a budget.

A method that is useful in all phases of design: interviews

In my career as a user researcher, I have used all sorts of methods. I’ve done A/B testing, eye tracking, Wizard of Oz testing, think-alouds, contextual inquiries, and guerrilla testing. But the one research method I utilize the most, and that I believe provides the most “bang for the buck,” is user interviews.

User interviews are relatively inexpensive to conduct. You don’t need to travel to a client site, and you don’t need a fortune’s worth of equipment. If you have access to a phone, you can conduct interviews with participants all around the world. Yet interviews provide a wealth of information and can be used in every phase of research and design. Interviews are especially useful in discovery because the method is adaptable: as you learn more about the problem you are trying to solve, you can adapt your interview protocol to match.

To be clear, your interviewees will not tell you:

  • what to build;
  • or how to build it.

But they absolutely can tell you:

  • what problem they have;
  • how they feel about it;
  • and what the value of a solution would mean to them.

And if you know the problem, how users feel about it, and the value of a solution, you are well on your way to designing the right product.

The challenge of conducting a good user interview is making sure you ask the questions that elicit that information. Here are a couple of tips:

Tip 1: always ask the following two questions:

  • “What do you like about [blank]?”
  • “What do you dislike about [blank]?”

… where you fill “[blank]” with whatever domain your future product will improve.

Your objective is to gain an understanding of all aspects of the problem your potential customers face—the bad and the good. One common mistake is to spend too much time investigating what’s wrong with the current state of affairs. Naturally, you want your product to fix all the problems your customers face. However, you also need to preserve what currently works well, what is satisfying, or what is otherwise good about how users accomplish their goals. So it is important to ask about both in user interviews.

For example, in my interviews I always asked, “What do you like about using Facebook?” And it wasn’t until my interview participant told me everything they enjoyed about Facebook that I would ask, “What do you dislike about using Facebook?”

Tip 2: after (nearly) every response, ask them to say more.

The goal of conducting interviews is to gain an exhaustive set of data to review and consider moving forward. That means you don’t want your participants to discuss just one thing they like and dislike; you want them to tell you all the things they like and dislike.

Here is an example of how this played out in one of the interviews I conducted:

Interviewer (Me): What do you like about using Facebook?

Interviewee: I like seeing people on there that I wouldn’t otherwise get a chance to see and catch up with in real life. I have moved a couple times so I have a lot of friends that I don’t see regularly. I also like seeing the people I know do well, even though I haven’t seen them since, maybe, high school. But I like seeing how their life has gone. I like seeing their kids. I like seeing their accomplishments. It’s also a little creepy because it’s a window into their life and we haven’t actually talked in forever. But I like staying connected.

Interviewer (Me): What else do you like about it?

Interviewee: Um, well it’s also sort of a convenient way of keeping contacts. There have been a few times when I was able to message people and get in touch with people even when I don’t have their address or email in my phone. I could message them through Facebook.

Interviewer (Me): Great. Is there anything else you like about it?

Interviewee: Let me think … well I also find cool stuff to do on the weekends there sometimes. They have an events feature. And businesses, or local places, will post events and there have been a couple times where I’ve gone to something cool. Like I found a cool movie festival once that way.

Interviewer (Me): That seems cool. What else do you like about using Facebook?

Interviewee: Uh … that’s all I think I really use it for. I can’t really think of anything else. Mainly I use it just to keep in touch with people that I’ve met over the years.

From this example you can see the first feature that popped into the interviewee’s mind was their ability to keep up with friends that they otherwise wouldn’t have much opportunity to connect with anymore. That is a feature that any Facebook replacement would have to replicate. However, if I hadn’t pushed the interviewee to think of even more features they like, I might have never uncovered an important secondary feature: convenient in-app messaging. In fact, six out of the eleven people I interviewed for this project said they liked Facebook Messenger. But not a single one of them mentioned that feature first. It only came up in conversation after I probed for more.

As I continued to repeat my question, the interviewee thought of one more feature they liked: local event listings. (Five out of the eleven people I interviewed mentioned this feature.) But after that, the interviewee couldn’t think of any more features to discuss. You know you can move on to the next question in the interview when your participant starts to repeat themselves or bluntly tells you they have nothing else to say.

Recruit all around you, then document the bias

There are all sorts of ways to recruit participants for research. You can hire an agency or use a tool like UserTesting.com. But many of those paid-for options can be quite costly, and since we are working with a shoestring budget we have roughly zero dollars to spend on recruitment. We will have to be creative.

My post on Facebook to recruit volunteers. One volunteer decided to respond with a Hunger Games “I volunteer as tribute!” gif.

For my project, I decided to rely on the kindness of friends and strangers I could reach through Facebook. I posted one request for participants on my personal Facebook page, and another on the local FreeCodeCamp page. A day after I posted my request, twenty-five friends and five strangers volunteered. This type of participant recruitment method is called convenience sampling, because I was recruiting participants that were conveniently accessible to me.

Since my project involved talking to people about social media sites like Facebook, it was appropriate for my first attempt at recruiting to start on Facebook. I could be sure that everyone who saw my request uses Facebook in some form or fashion. However, like all convenience sampling, my recruitment method was biased. (I’ll explain how in just a bit.)

Bias is something that we should try—whenever possible—to avoid. If we have access to more sophisticated recruitment methods, we should use them. However, when you have a tight budget, avoiding recruitment bias is virtually impossible. In this scenario, our goals should be to:

  • mitigate bias as best we can;
  • and document all the biases we see.

For my project, I could mitigate some of the biases by using a few more recruitment methods. I could go to various neighborhoods and try to recruit participants off the street (i.e., guerrilla testing). If I had a little bit of money to spend, I could hang out in various coffee shops and offer folks free coffee in exchange for ten-minute interviews. These recruitment methods also fall under the umbrella of convenience sampling, but by using a variety of methods I can mitigate some of the bias I would have from using just one of them.

Also, it is always important to reflect on and document how your sampling method is biased. For my project, I wrote the following in my notes:

All of the people I interviewed were connected to me in some way on Facebook. Many of them I know well enough to be “friends” with. All of them were around my age, many (but not all) worked in tech in some form or fashion, and all of them but one lived in the US.

Documenting bias ensures that we won’t forget about the bias when it comes time to analyze and discuss the results.

Let’s keep this going

As the title suggests, this is just the first installment of a series of articles on the discovery process. In part two, I will analyze the results of my interviews, revise my problem hypothesis, and continue to work on my experimental startup. I will launch into another round of discovery research, but this time utilizing some different research methods, like A/B testing and fake-door testing. You can help me out by checking out this mock landing page for Candor Network (what I’ve named my fictitious startup) and taking the survey you see there.