If you’re a visual designer, you probably spend a majority of your time making small adjustments to multiple visual elements. Maybe your client has decided they need a few more pixels of padding between each of your elements, or perhaps they’ve decided that all of their avatars needed to have rounded corners. Any which way, you might find yourself making the same adjustment in your design over and over… and over again.
In Adobe Experience Design CC (Beta), we’ve introduced the Repeat Grid feature to address this tedious aspect of a designer’s workflow. In this article, we’ll dig deep to uncover the true power of this time-saving feature. We’ll create and adjust a Repeat Grid, add content to it, and wire it up in Adobe XD’s simple and powerful Prototype Mode. If you’d like to follow along, you can download and test Adobe XD for free.
At its core, a Repeat Grid is a special type of group. Just like we group objects, we’ll create our Repeat Grid by selecting an object or a group of objects and convert them to a Repeat Grid. In this exercise, we’ll make a simple phone contact list with an image and a name.
From the welcome screen, select an artboard type to start a new file.
Draw a rectangle using the Rectangle tool (R).
To the right of your rectangle, use the Text tool (T) to type in some placeholder text.
Using the Selection tool (V), select both objects, either by marquee selecting (drawing a box around both objects), or by selecting one object and Shift-selecting the other.
Note that we do not need precision at this point, as we can adjust the elements later.
Convert the selection to a Repeat Grid by clicking on the button in the Property Inspector or by using the shortcut key Cmd + R.
Our group is now a Repeat Grid. You can see that it now has two handles, one on the right and one on the bottom, and the box around your group is a green, dotted line.
Click and drag the right handle to the right, expanding the Repeat Grid. To expand the Repeat Grid down, drag the bottom handle down.
We now have repeated elements in our Repeat Grid. All of the styles we apply to any object will be applied to all repeated versions of it.
Step 3: Adjust Any Elements Within Your Repeat Grid
Like any group, we can access the Repeat Grid’s component elements by double-clicking into the group. Once we’ve made our changes, we can exit the edit context by pressing the Escape key. There are other ways to access the component elements, too: we can drill down to the element in the Layers panel (Cmd + Y) or direct-select it (Cmd + Click).
Using the Selection tool (V), double click on any rectangle in the Repeat Grid. You should now see a light blue box around the cell you’re editing. Select and drag your text so that it’s aligned to your rectangle.
Click on the Text object and change the typeface and size in the Property Inspector on the right. All of your text objects share the same style.
Press Escape to exit the edit context and move the Repeat Grid so that it’s aligned to the artboard.
Step 4: Adjust The Row And Column Padding In Your Repeat Grid
Now that we have our Repeat Grid, we can begin to adjust the space between each row and column. By hovering over the gap between elements, we can activate the column and row indicators and change them to our liking.
Place your cursor between the right side of a text element and the left side of a rectangle, directly in the column gutter. Once the pink column indicator is displayed, drag the right side of the gutter left and right until it’s set to 30.
Place your cursor between rectangles, directly in the row gutter. Once the pink row indicator displays, drag the bottom of the gutter up and down until it’s set to 30.
Continue to adjust the spacing between cells and the size of the Repeat Grid until you have the right number of elements to fit your artboard.
You can convert any set of objects into a Repeat Grid. Those objects become a cell in the Repeat Grid. You can then edit the cell and adjust the gap between rows and columns.
Now that we have the overall shape of our contacts list, we can populate it with content. The simplest way to populate is to change each element separately.
Cmd + Click a text object in your Repeat Grid to select it. You’re now in the Repeat Grid’s edit context mode.
Double click the text element to edit it and change the text to a name. Note that the content isn’t applied to all of the other text objects in the Repeat Grid. However, any style applied to the text object applies to all text objects.
Drag an image into one of the rectangles to import it. Your image will be applied as the fill for the rectangle, and automatically resizes to fill the shape. We call this feature auto-masking.
Drag a second image into the second rectangle. We define order in the Repeat Grid in left-to-right reading order (left to right, then top to bottom). Note that the Repeat Grid now alternates between the first photo and the second photo. We’ve now created a 2-photo pattern.
Drag a third image into the fourth rectangle. Now that you’ve dragged an item into the fourth rectangle, we have a 4-photo pattern, with the first and third being identical images.
Drag a fourth image into the first rectangle. This replaces the first element in your 4-photo pattern, so you should now have four unique photos in your pattern.
Text works on the concept of overrides; we can override the content of a text object itself, but the styles remain applied to all repetitions of the object. However, we can build out the concept of repeated patterns with auto-masked objects, where the image fill of an object is repeated in a pattern that you define. For instance, if you dragged your third image into the third rectangle, you would have created a 3-photo pattern. Similarly, if you had dragged an image into the fifth rectangle, you would have created a 5-photo pattern.
However, that can get really tedious. Instead, what we’ll do is use content that we’ve prepared ahead of time.
Step 3: Drag A Return-Separated Text File To Your Text Object
Create a text file with the extension .txt. You can create this using Mac’s TextEdit (select Format > Make Plain Text) or any text editor you prefer. Separate each piece of data with a return.
Once you’ve saved the file, drag it from Finder and onto your Repeat Grid’s text object in Adobe XD to import the data.
Now our object repeats based on the number of lines in our text file. If our text file has four lines, it’ll place a line per text object and repeat after placing the first four.
Step 4: Drag A Selection Of Image Files Into Your Rectangle
In Finder, select a number of images.
Drag this selection from Finder and onto your Repeat Grid’s rectangle to import the images as fills for the repeated rectangle.
Select the rectangle and change the corner radius by dragging one of the radius controls. All of your style changes are reflected on each repetition.
Similar to dragging images in one at a time, you’re creating a repeating pattern for your object’s fill. And, just like text, any change to the container is propagated to all of the repetitions of the object in the Repeat Grid.
You can easily change the content of a Repeat Grid, either by changing an individual object or by dragging in data sources. Note that the data is imported, not linked, so any changes you make to the source file won’t affect the data you’ve already placed in your XD file. All of your styles, as well as the size and shape of any container, are reflected in all repetitions of an element.
Now that we have a fairly fleshed out contacts list, we can continue our design process, iterating as we receive feedback from our colleagues and stakeholders. In this case, we might need to add elements after the fact. Repeat Grid makes this easy by allowing us to add elements to a cell.
In our example, we’ll add a horizontal line to separate the cells vertically.
Double-click into a cell to enter the Repeat Grid’s edit context, then draw a horizontal line along the bottom of the cell by selecting the Line tool (L) and holding down the Shift key while dragging across.
Using the Selection tool, adjust the line’s location until it’s aligned to the left of the rectangle.
Press Escape to exit edit context.
We can draw any element or add text within the Repeat Grid’s edit context, even after the grid has been created. Since Repeat Grid automatically repeats every element, this gives us the flexibility to play with a design in a new way.
We’ve just added a line, but now the cells are overlapping one another, leaving us with a visual mess. We’ll need to add vertical space between cells. When something like this happens, Repeat Grid recalculates the gutter between the row or column (from the bottom of one to the top of the next, or from the right of one to the left of the next) and sets it to a negative number if they overlap.
Hover in the overlap space, grab the top or bottom of the overlapping area, and pull it down until the overlap no longer exists, then a little further.
We’ve solved this problem, but what about adding artwork that we’ve already created? We can cut from one context and paste into another.
Step 3: Cut And Paste Into The Repeat Grid’s Edit Context
Download the star.svg file and drag it onto the pasteboard, outside of your current artboard. This imports the star.svg file into your project.
Convert your imported path into a Repeat Grid and drag the right handle to the right until you have a total of four stars. Adjust the padding to bring the stars closer together.
Cut the Repeat Grid with the stars (Cmd + X), then double click on any cell of your contact list in order to enter its edit context.
Paste (Cmd + V). Your Repeat Grid of stars will paste into the center of the cell. Move the stars so that they sit underneath the text.
Sometimes, though, we’ll want to break apart the Repeat Grid; we might just want independent objects after we’ve lined them up. In order to do this, we’ll ungroup the Repeat Grid and make our changes.
Step 4: Ungroup The Inner Repeat Grid And Edit As Necessary
Since you’re already in the contact list’s edit context, click on the Repeat Grid of stars to select it.
Ungroup the Repeat Grid by selecting the Ungroup button in the Property Inspector, selecting Ungroup Grid from the context menu (Ctrl-click or right mouse button), or using the keyboard shortcut Cmd + Shift + G.
Select two of the stars and uncheck the fill.
You can even add objects to the Repeat Grid after you create it, either by drawing or pasting into the edit context. If you have negative padding, you can adjust it easily by hovering over the overlap area. You can use Repeat Grid as an easy alignment tool between elements and decouple the repeated elements by ungrouping.
Now that we have a Repeat Grid, we’re going to wire it to another artboard in Prototype Mode. Using Adobe XD, we can switch back and forth between Design and Prototype Modes quickly, which allows us to edit both the UI and interactions at the same time.
In this case, we’re just going to create a second artboard and wire from our Repeat Grid in three different scenarios.
Option 1: Wire The Entire Repeat Grid For A Single Interaction
Create a second artboard in your file by using the Artboard tool (A). Click to the right of your existing artboard to create another artboard next to your first.
Switch to Prototype mode by clicking on the tab at the top of the application frame or by using the keyboard shortcut Cmd + Tab.
Select the Repeat Grid in your first artboard. A connector with an arrow will appear on the right side of the object at its midpoint.
Drag this connector to the next artboard. Select your transition options in the pop-up, then press Escape or click outside to dismiss it.
Preview by either pressing the Play button in the upper right-hand corner of the application frame or by using the keyboard shortcut Cmd + Enter. Click anywhere over the Repeat Grid to play the interaction.
What we’ve done at this point is wire the entire object, including its padding, as a hit point for the interaction.
Option 2: Wire A Single Element Of A Repeat Grid For An Interaction
Undo your last wire by using the keyboard shortcut Cmd + Z.
Cmd-click a rectangle in your Repeat Grid to direct select it.
Grab the connector on the right of the rectangle and drag it to the second artboard. Select your transition options in the pop-up as before, then dismiss it.
If your Preview window isn’t still open, launch it again and click the target.
At this point, we have a single element, but what happens if we want to select the entire cell? We can create a group within the Repeat Grid in order to make this a valid hit point.
Option 3: Create A Group Of Elements Within The Repeat Grid And Create An Interaction From The Group
Undo your last wire by using the keyboard shortcut Cmd + Z.
Switch back to Design mode by clicking on the tab or using the keyboard shortcut Cmd + Tab.
Cmd + Click a rectangle in your Repeat Grid to direct select it. Shift-click the text object next to it to add it to your selection.
Group the two objects by using the context menu selection or the keyboard shortcut Cmd + G.
Switch back to Prototyping mode. Note that your selection remains the same as while in Design mode.
Drag the connector from the group to the second artboard. You’ve now wired the entire group area as a hit point for the interaction.
You can even create an interaction by setting the hit point to the entire Repeat Grid, an individual element from inside it, or a group created inside of it.
I hope that this brief tutorial has helped you explore the power of Repeat Grid. This simple and powerful feature has been quite popular in the beta version, and it’s evolving as we get more feedback from users. If you have an idea for improvements, please do share them in the comments section below.
This article is part of the UX design series sponsored by Adobe. The newly introduced Experience Design app is made for a fast and fluid UX design process, creating interactive navigation prototypes, as well as testing and sharing them — all in one place.
You can check out more inspiring projects created with Adobe XD on Behance, and also visit the Adobe XD blog to stay updated and informed. Adobe XD is being updated with new features frequently, and since it’s in public Beta, you can download and test it for free.
Three user interfaces (UIs) go to a pub. The first one orders a drink, then several more. A couple of hours later, it asks for the bill and leaves the pub drunk. The second UI orders a drink, pays for it up front, orders another drink, pays for it and so on, and in a couple of hours leaves the pub drunk. The third UI exits the pub already drunk immediately after going in — it knows how the pubs work and is efficient enough not to lose time. Have you heard of this third one? It is called an “optimistic UI.”
Recently, having discussed psychological performance optimization at a number of conferences dedicated to both front-end development and UX, I was surprised to see how little the topic of optimistic UI design is addressed in the community. Frankly, the term itself is not even well defined. In this article, we will find out what concepts it is based on, and we will look at some examples as well as review its psychological background. After that, we will review the concerns and main points regarding how to maintain control over this UX technique.
But before we begin, truth be told, no single thing could be called an “optimistic UI.” Rather, it is the mental model behind the implementation of an interface. Optimistic UI design has its own history and rationale.
A long while ago — when the word “tweet” applied mostly to birds, Apple was on the verge of bankruptcy and people still put fax numbers on their business cards — web interfaces were quite ascetic. And the vast majority of them had not even a hint of optimism. An interaction with a button, for example, could follow a scenario similar to the following:
The user clicks a button.
The button is triggered into a disabled state.
A call is sent to a server.
A response from the server is sent back to the page.
The page is reloaded to reflect the status of the response.
This might look quite inefficient in 2016; however, surprisingly enough, the same scenario is still used in a lot of web pages and applications and is still a part of the interaction process for many products. The reason is that it is predictable and relatively error-proof: The user knows that the action has been requested from the server (the disabled state of the button hints at this), and once the server responds, the updated page clearly indicates the end of this client-server-client interaction. The problems with this kind of interaction are quite obvious:
The user has to wait. By now, we know that even the shortest delay in the server’s response time has a negative effect on the user’s perception of the entire brand, not only on this particular page.
Every time the user gets a response to their action, it is presented in quite a destructive way (a new page loads, instead of the existing one being updated), which breaks the context of the user’s task and might affect their train of thought. Even though we are not necessarily talking about multitasking in this case, any switch of mental context is unpleasant. So, if an action is not inherently meant to switch contexts (online payment is a good example of when a switch is natural), switching would set up an unfriendly tone of dialogue between user and system.
Then, the so-called Web 2.0 arrived and provided new modes of interaction with web pages. At the core of these were XMLHttpRequest and AJAX. These new modes of interaction were complemented by “spinners”: the simplest form of progress indicator, the sole purpose of which was to communicate to the user that the system is busy performing some operation. Now, we did not need to reload the page after getting a response from the server; we could just update a part of the already-rendered page instead. This made the web much more dynamic, while allowing for smoother and more engaging experiences for users. The typical interaction with a button could now look like this:
The user clicks a button.
The button is triggered into a disabled state, and a spinner of some kind is shown on the button to indicate the system is working.
A call is sent to the server.
A response from the server is sent back to the page.
The visual state of the button and the page are updated according to the response status.
This new interaction model addressed one of the aforementioned problems of the old method of interaction: The update of the page happens without a destructive action, keeping the context for the user and engaging them in the interaction much better than before.
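To make the sequence concrete, here is a minimal sketch of the spinner pattern in TypeScript. The /api/like endpoint and the CSS class names are invented placeholders for illustration, not part of any particular product.

async function handleClickWithSpinner(button: HTMLButtonElement): Promise<void> {
  // Step 2: disable the button and show a spinner via a CSS class.
  button.disabled = true;
  button.classList.add('is-loading');

  try {
    // Step 3: send the call to the server.
    const response = await fetch('/api/like', { method: 'POST' });
    // Step 4: read the response sent back to the page.
    const result = await response.json();
    // Step 5: update the visual state according to the response status.
    button.classList.toggle('is-liked', response.ok && result.liked === true);
  } finally {
    button.classList.remove('is-loading');
    button.disabled = false;
  }
}

The user stares at a disabled, spinning button for the entire round trip, which is exactly the waiting problem discussed next.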
This kind of interaction pattern has been widely used everywhere in digital media. But one issue remains: Users still have to wait for a response from the server. Yes, we can make our servers respond faster, but no matter how hard we try to speed up the infrastructure, users still have to wait. Again, users do not like to wait, to put it mildly. For example, research shows that 78% of consumers feel negative emotions as a result of slow or unreliable websites. Moreover, according to a survey conducted by Harris Interactive for Tealeaf, 23% of users confess to cursing at their phones, 11% have screamed at them, and a whole 4% have actually thrown their phone when experiencing a problem with an online transaction. Delays are among those problems.
Even if you show some kind of progress indicator while the user waits, unless you are very creative with the indicator, nowadays that is simply not enough. For the most part, people have gotten accustomed to spinners indicating a system’s slowness. Spinners are now more associated with purely passive waiting, when the user has no option other than either to wait for the server’s response or to close the tab or application altogether. So, let’s come up with a step to improve this kind of interaction; let’s look at this concept of an optimistic UI.
As mentioned, an optimistic UI is nothing more than a way of handling human-computer interaction. To understand the main ideas behind it, we will stick with our “user clicks a button” scenario. But the principle will be the same for pretty much any kind of interaction that you might want to make optimistic. According to the Oxford English Dictionary:
op-ti-mis-tic, adj. hopeful and confident about the future.
Let’s begin with the “confident about the future” part.
What do you think: How often does your server return an error on some user action? For example, does your API fail often when users click a button? Or maybe it fails a lot when users click a link? Frankly, I don’t think so. Of course, this might vary based on the API, server load, level of error-handling and other factors that you, as the front-end developer or UX specialist, might not be willing to get involved in. But as long as the API is stable and predictable and the front end properly communicates legitimate actions in the UI, then the number of errors in response to actions initiated by the user will be quite low. I would go so far as to state that they should never go above 1 to 3%. This means that in 97 to 99% of cases when the user clicks a button on a website, the server’s response should be success, with no error. This deserves to be put in a better perspective:
Think about it for a moment: If we were 97 to 99% certain about a success response, we could be confident about the future of those responses — well, at least much more confident about the future than Schrödinger’s cat was. We could write a whole new story about button interaction:
The user clicks a button.
The visual state of the button is triggered into success mode instantly.
That’s it! At least from the user’s point of view, there is nothing more to it — no waiting, no staring at a disabled button, and not yet another annoying spinner. The interaction is seamless, without the system crudely stepping in to remind the user about itself.
From the developer’s point of view, the complete cycle looks like this:
The user clicks a button.
The visual state of the button is triggered into success mode instantly.
The call is sent to the server.
The response from the server is sent back to the page.
In 97 to 99% of cases, we know that the response will be success, and so we don’t need to bother the user.
Only in the case of a failed request will the system speak up. Don’t worry about this for now — we will get to this point later in the article.
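As a rough sketch of that developer-side cycle in TypeScript (the /api/like endpoint is again an invented placeholder), the handler reflects success first and only then talks to the server:

function handleClickOptimistically(button: HTMLButtonElement): void {
  // Step 2: switch the button to its success state instantly.
  button.classList.add('is-liked');

  // Steps 3-4: the request and response happen in the background. In 97 to
  // 99% of cases the response is a success, so there is nothing left to do.
  // The rare failure case is deliberately ignored here; handling it is
  // covered in the failure section later in the article.
  void fetch('/api/like', { method: 'POST' });
}

From the user’s perspective, the interaction ends at the first line: the success state appears immediately, and the network round trip is invisible.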
Let’s look at some examples of optimistic interactions. You are probably familiar with “like” buttons, as found on Facebook and Twitter. Let’s take a look at the latter.
It starts, obviously enough, with the click of the button. But note the visual state of the button when the user is no longer pressing or hovering over the button. It switches to the success state instantly!
Let’s see what’s happening in the “Network” tab of our browser’s developer tools at this very moment.
The “Network” tab shows that the server request has been sent but is still in progress. The “likes” counter number has not been incremented yet, but with the change in color, the interface is clearly communicating success to the user, even before having gotten a response from the server.
After a successful response is received from the server, the counter is updated, but the transition is much subtler than the instant color change. This provides the user with a smooth, uninterrupted experience, without any perceived waiting.
Another example of optimistic interaction is seen on Facebook, with its own like button. The scenario is quite similar, except that Facebook updates the counter instantly, together with the success color of the button, without waiting for the server’s response.
One thing to note here, though. If we look at the server’s response time, we’ll see that it is a little over 1 second. Considering that the RAIL model recommends 100 milliseconds as the optimal response time for a simple interaction, this would normally be way too long. However, the user does not perceive any wait time in this case because of the optimistic nature of this interaction. Nice! This is another instance of psychological performance optimization.
But let’s face it: There is still that 1 to 3% chance that the server will return an error. Or perhaps the user is simply offline. Or, more likely, perhaps the server returns what is technically a success response but the response contains information that has to be further processed by the client. As a result, the user will not get a failure indicator, but we cannot consider the response a success either. To understand how to deal with such cases, we should understand why and how optimistic UIs work psychologically in the first place.
So far, I have not heard anyone complain about the aforementioned optimistic interactions on the major social networks. So, let’s say that these examples have convinced us that optimistic UIs work. But why do they work for users? They work simply because people hate waiting. That’s it, folks! You can skip to the next part of the article.
But if you’re still reading, then you are probably interested in knowing why it is so. So, let’s dig a bit deeper into the psychological ground of this approach.
An optimistic UI has two basic ingredients that are worth psychological analysis:
the fast response to the user’s action;
the handling of potential failures on the server, on the network and elsewhere.
When we talk about optimistic UI design, we’re talking about an optimal response time in human-computer interaction. And recommendations for this type of communication have been around since as far back as 1968. Back then, Robert B. Miller published his seminal piece “Response Time in Man-Computer Conversational Transactions” (PDF), in which he defines as many as 17 different types of responses a user can get from a computer. One of those types Miller calls a “response to control activation” — the delay between the depressing of a key and the visual feedback. Even back in 1968, it should not have exceeded 0.1 to 0.2 seconds. Yes, the RAIL model is not the first to recommend this — the advice has been around for about 50 years. Miller notes, though, that even this short delay in feedback might be far too slow for skilled users. This means that, ideally, the user should get acknowledgement of their action within 100 milliseconds. This is getting into the range of one of the fastest unconscious actions the human body can perform — an eye blink. For this reason, the 100-millisecond interval is usually perceived to be instant. “Most people blink around 15 times a minute and a blink lasts on average 100 to 150 milliseconds,” says Davina Bristow, of University College London’s Institute of Neurology, adding that this “means that overall we spend at least 9 days per year blinking.”
Because of its instant visual response (even before the actual request has finished), an optimistic UI is one of the examples of the early-completion techniques used in psychological performance optimization. But the fact that people like interfaces that respond in the blink of an eye should not come as a surprise to most of us, really. And it’s not hard to achieve either. Even in the old days, we disabled buttons instantly after they were clicked, and this was usually enough to acknowledge the user’s input. But a disabled state in an interface element means passive waiting: The user cannot do anything about it and has no control over the process. And this is very frustrating for the user. That’s why we skip the disabled state altogether in an optimistic UI — we communicate a positive outcome instead of making the user wait.
Let’s get to the second interesting psychological aspect of optimistic UI design — the handling of potential failure. In general, plenty of information and articles are available on how to handle UI errors in the best possible way. However, while we will see how to handle failure later in this article, what matters most in an optimistic UI is not how we handle errors, but when we do it.
Humans naturally organize their activity into clumps, terminated by the completion of a subjectively defined purpose or sub-purpose. Sometimes we refer to these clumps as a “train of thought,” a “flow of thought” (PDF) or simply a “flow.” The flow state is characterized by peak enjoyment, energetic focus and creative concentration. During a flow, the user is completely absorbed in the activity. A tweet by Tammy Everts nicely illustrates this:
On the web, the durations of such clumps of activity are much shorter. Let’s revisit Robert B. Miller’s work for a moment. The response types he cites include:
a response to a simple inquiry of listed information;
a response to a complex inquiry in graphic form;
a response to “System, do you understand me?”
He ties all of these to the same 2-second interval within which the user should get the relevant type of response. Without digging deeper, we should note that this interval also depends on a person’s working memory (referring to the span of time within which a person can keep a certain amount of information in their head and, more importantly, be able to manipulate it). To us, as developers and UX specialists, this means that within 2 seconds of interacting with an element, the user will be in a flow and focused on the response they are expecting. If the server returns an error during this interval, the user will still be in “dialogue” with the interface, so to speak. It’s similar to a dialogue between two people, where you say something and the other person mildly disagrees with you. Imagine if the other person spent a long time nodding in agreement (the equivalent of our indication of a success state in the UI) but then finally said “no” to you. Awkward, isn’t it? So, an optimistic UI must communicate failure to the user within the 2 seconds of the flow.
Armed with the psychology of how to handle failure in an optimistic UI, let’s finally get to those 1 to 3% of failed requests.
By far, the most common remark I hear is that optimistic UI design is a kind of dark pattern — cheating, if you will. That is, by employing it, we are lying to our users about the result of their interaction. Legally, any court would probably support this point. Still, I consider the technique a prediction or hope. (Remember the definition of “optimistic”? Here is where we allow some room for the “hopeful” part of it.) The difference between “lying” and “predicting” is in how you treat those 1 to 3% of failed requests. Let’s look at how Twitter’s optimistic “like” button behaves offline.
First, in line with the optimistic UI pattern, the button switches to the success state right after being clicked — again, without the user pressing or hovering over the button any longer, exactly as the button behaves when the user is online.
But because the user is offline, the request fails.
So, as soon as possible within the user’s flow, the failure should be communicated. Again, 2 seconds is usually the duration of such a flow. Twitter communicates this in the subtlest way possible, simply by reverting the button’s state.
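In code, that subtle revert can be as small as remembering the previous state and restoring it if the request fails. This sketch extends the earlier optimistic handler; the endpoint and class names remain illustrative placeholders rather than any real API.

async function likeOptimistically(button: HTMLButtonElement): Promise<void> {
  const wasLiked = button.classList.contains('is-liked');

  // Optimistic success state, shown immediately.
  button.classList.add('is-liked');

  try {
    const response = await fetch('/api/like', { method: 'POST' });
    if (!response.ok) {
      throw new Error(`Unexpected status ${response.status}`);
    }
  } catch {
    // Offline or server error: quietly restore the previous state while the
    // user is still within the roughly 2-second window of flow. No modal,
    // no page-level error message.
    button.classList.toggle('is-liked', wasLiked);
  }
}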
The conscientious reader here might say that this failure-handling could be taken one step further, by actually notifying the user that the request could not be sent or that an error has occurred. This would make the system as transparent as possible. But there is a catch — or, rather, a series of issues:
Any sort of notification that appears suddenly on screen would switch the user’s context, prompting them to analyze the reason behind the failure, a reason that would probably be presented in the error message.
As with any error message or notification, this one should guide the user in this new context by providing actionable information.
That actionable information would set yet another context.
OK, by now we can all agree that this is getting a bit complicated. While this error-handling would be reasonable for, say, a large form on a website, for an action as simple as clicking a like button, it’s overkill — both in terms of the technical development required and the working memory of users.
So, yes, we should be open about failure in an optimistic UI, and we should communicate it as soon as possible so that our optimism does not become a lie. But it should be proportional to the context. For a failed like, subtly reverting the button to its original state should be enough — that is, unless the user is liking their significant other’s status, in which case the thing better work all the time.
One other question might arise: What happens if the user closes the browser tab right after getting a success indicator but before the response is returned from the server? The most unpleasant case would be if the user closes the tab before a request has even been sent to the server. But unless the user is extremely nimble or has the ability to slow down time, this is hardly possible.
If an optimistic UI is implemented properly, and interactions are applied only to those elements that never wait longer than 2 seconds for a server response, then the user would have to close the browser tab within that 2-second window. That’s not particularly difficult with a keystroke; however, as we’ve seen, in 97 to 99% of cases, the request will be successful, whether the tab is active or not (it’s just that a response won’t be returned to the client).
So, this problem might arise only for those 1 to 3% who get a server error. Then again, how many of those rush to close the tab within 2 seconds? Unless they’re in a tab-closing speed competition, I don’t think the number will be significant. But if you feel this is relevant to your particular project and might have negative consequences, then employ some tools to analyze user behavior; if the probability of such a scenario is high enough, then limit optimistic interaction to non-critical elements.
I intentionally haven’t mentioned cases in which a request is artificially delayed; these do not generally fall under the umbrella of optimistic UI design. Moreover, we have spent more than enough time on the pessimistic side of things, so let’s summarize some main points about implementing a good optimistic UI.
I sincerely hope this article has helped you to understand some of the main concepts behind optimistic UI design. Perhaps you’re interested in trying out this approach in your next project. If so, here are some things to keep in mind before you begin:
A prerequisite to everything we’ve talked about so far: Make sure the API you’re relying on is stable and returns predictable results. Enough said.
The interface should catch potential errors and problems before a request is sent to the server. Better yet, totally eliminate anything that could result in an error from the API. The simpler a UI element is, the simpler it will be to make it optimistic.
Apply optimistic patterns to simple binary-like elements for which nothing more than a success or failure response is expected. For example, if a button click assumes a server response such as “yes,” “no” or “maybe” (all of which might represent success to varying degrees), such a button would be better off without an optimistic pattern.
Know your API’s response times. This is crucial. If you know that the response time for a particular request never goes below 2 seconds, then working on your API’s performance before adding any optimism to the UI is probably best. As mentioned, an optimistic UI works best for server response times of less than 2 seconds. Going beyond that could lead to unexpected results and a lot of frustrated users. Consider yourself warned.
An optimistic UI is not just about button clicks. The approach could be applied to different interactions and events during a page’s lifecycle, including the loading of the page. For example, skeleton screens follow the same idea: You predict that the server will respond with success in order to fill out placeholders to show to the user as soon as possible.
Optimistic UI design is not really a novelty on the web, nor is it a particularly advanced technique, as we have seen. It is just another approach, another mental model, to help you manage the perceived performance of your product. Being grounded in the psychological aspects of human-computer interaction, optimistic UI design, when used intelligently, can help you to build better, more seamless experiences on the web, while requiring very little to implement. But, in order to make the pattern truly effective and to keep our products from lying to users, we must understand the mechanics of optimistic UI design.
We all recognize emoji. They’ve become the global pop stars of digital communication. But what are they, technically speaking? And what might we learn by taking a closer look at these images, characters, pictographs… whatever they are? 🤔 (Thinking Face) We will dig deep to learn about how these thingamajigs work.
Please note: Depending on your browser, you may not be able to see all of the emoji featured in this article (especially the Tifinagh characters). Different platforms also vary in how they display emoji. That’s why the article always provides textual alternatives. Don’t let that discourage you from reading, though!
Now, let’s start with a seemingly simple question. What are emoji?
What we’ll find is that they are born from, and depend on, the same technical foundation, character sets and document encoding that underlie the rest of our work as web-based designers, developers and content creators. So, we’ll delve into these topics using emoji as motivation to explore this fundamental aspect of the web. We’ll learn all about emoji as we go, including how we can effectively work them into our own projects, and we’ll collect valuable resources along the way.
There is a lot of misinformation about these topics online, a fact made painfully clear to me as I was writing this article. Chances are you’ve encountered more than a little of it yourself. The recent release of Unicode 9 and the enormous popularity of emoji make now as good a time as any to take a moment to appreciate just how important this topic is, to look at it afresh and to fill in any gaps in our knowledge, large or small.
By the end of this article, you will know everything you need to know about emoji, regardless of the platform or application you’re using, including the distributed web. What’s more, you’ll know where to find the authoritative details to answer any emoji-related question you may have now or in the future.
21 June 2016 brought the official release of Unicode Version 9.0, and with it 72 new emoji. What are they? Where do new emoji come from anyway? Why aren’t your friends seeing the new ROFL, Fox Face, Crossed Fingers and Pancakes emoji you’re sending them?! 😡 (Pouting Face emoji) Keep reading for the answers to these and many other questions.
A question to get us started: What is the plural form of the word “emoji”?
It’s a question that came up in the process of reviewing and editing this article. The good news is that I have an answer! The bad news (depending on how bothered you are by triviality) is that the answer is that there is no definitive answer. I believe the most accurate answer that can be given is to say that, currently, there is no established correct form for the plural of emoji.
An article titled “What’s the Plural of Emoji?” by Robinson Meyer, published by The Atlantic on 6 January 2016, discusses exactly this issue. The author turns up recent conflicting uses of both forms “emoji” and “emojis,” even within the same national publications:
In written English right now, there’s little consensus on this question. National publications have not settled on a regular style. The Atlantic, for instance, used both (emoji, emojis) in the last quarter of 2015. And in October alone in The New York Times, you could find the technology reporter Vindu Goel covering Facebook’s “six new emoji,” despite, two weeks later, Austin Ramzy detailing the Australian foreign minister’s “liberal use of emojis.” …
The Unicode Emoji Subcommittee, which, as we will see, is the group responsible for emoji in the Unicode Standard, uses “emoji” as the plural form. This plural form appears in passages of documentation quoted in this article. Consider, for example, the very first sentence of the first paragraph of the Emoji Subcommittee’s official homepage at unicode.org:
Emoji are pictographs (pictorial symbols) that are typically presented in a colorful form and used inline in text. They represent things such as faces, weather, vehicles and buildings, food and drink, animals and plants, or icons that represent emotions, feelings, or activities.
I have chosen the plural “emoji”, for the sake of consistency if nothing else. At this point in time, you can confidently use whichever form you prefer, unless of course the organization or individual for whom you’re writing has strong opinions one way or the other. You can and should consult your style guide if necessary.
We’ll start at the beginning, with the basic building blocks not just of emoji, nor even digital communication, but of all written language: characters and character sets.
A dictionary definition for a character will do to get us started: “A character is commonly a symbol representing a letter or number.”
That’s simple enough. But like so many other concepts, for it to be meaningful, we need to consider the broader context and put it into practice. Characters, in and of themselves, are not enough. I could draw a squiggle with a pencil on a piece of paper and rightfully call it a character, but that wouldn’t be particularly valuable. Not only that, but it is difficult to convey a useful amount of information using a single character. We need more.
Character Sets
A character set is “a set of characters.”
Expanding on that a bit, we can take a step back and consider a slightly more precise but still general description of a set: a group or collection of things that belong together, resemble one another or are usually found together.
Because we’re dealing with sets in the context of computing, we can be a little more precise. In the field of computer science a set is: a collection of a finite number of values in no particular order, with the added condition that none of the values are repeated.
What’s this about a collection? Technically speaking, a collection is a grouping of a number of items, possibly zero, that have some shared significance.
So, a character set is a grouping of some finite number of characters (i.e. a collection), in no particular order, such that none of the characters are repeated.
That’s a solid, precise, if pedantic, definition.
The World Wide Web Consortium (W3C), the international community of member organizations that work together to develop standards for the web, has its own definition, which is not far from the generic one we’ve arrived at on our own.
A character set or repertoire comprises the set of characters one might use for a particular purpose — be it those required to support Western European languages in computers, or those a Chinese child will learn at school in the third grade (nothing to do with computers).
So, any arbitrary set of characters can be considered a character set. There are, however, some well-known standardized character sets that are much more significant than any random grouping we might put together. One such standardized character set, unarguably the most important character set in use today, is Unicode. Again, quoting the W3C:
Unicode is a universal character set, i.e. a standard that defines, in one place, all the characters needed for writing the majority of living languages in use on computers. It aims to be, and to a large extent already is, a superset of all other character sets that have been encoded.
Text in a computer or on the Web is composed of characters. Characters represent letters of the alphabet, punctuation, or other symbols.
In the past, different organizations have assembled different sets of characters and created encodings for them — one set may cover just Latin-based Western European languages (excluding EU countries such as Bulgaria or Greece), another may cover a particular Far Eastern language (such as Japanese), others may be one of many sets devised in a rather ad hoc way for representing another language somewhere in the world.
Unfortunately, you can’t guarantee that your application will support all encodings, nor that a given encoding will support all your needs for representing a given language. In addition, it is usually impossible to combine different encodings on the same Web page or in a database, so it is usually very difficult to support multilingual pages using ‘legacy’ approaches to encoding.
The Unicode Consortium provides a large, single character set that aims to include all the characters needed for any writing system in the world, including ancient scripts (such as Cuneiform, Gothic and Egyptian Hieroglyphs). It is now fundamental to the architecture of the Web and operating systems, and is supported by all major web browsers and applications. The Unicode Standard also describes properties and algorithms for working with characters.
This approach makes it much easier to deal with multilingual pages or systems, and provides much better coverage of your needs than most traditional encoding systems.
We just learned that the Unicode Consortium is the group responsible for the Unicode Standard. From their website:
The Unicode Consortium enables people around the world to use computers in any language. Our freely-available specifications and data form the foundation for software internationalization in all major operating systems, search engines, applications, and the World Wide Web. An essential part of our mission is to educate and engage academic and scientific communities, and the general public.
Unicode provides a unique number for every character,
no matter what the platform,
no matter what the program,
no matter what the language.
Fundamentally, computers just deal with numbers. They store letters and other characters by assigning a number for each one. Before Unicode was invented, there were hundreds of different encoding systems for assigning these numbers. No single encoding could contain enough characters: for example, the European Union alone requires several different encodings to cover all its languages. Even for a single language like English no single encoding was adequate for all the letters, punctuation, and technical symbols in common use.
These encoding systems also conflict with one another. That is, two encodings can use the same number for two different characters, or use different numbers for the same character. Any given computer (especially servers) needs to support many different encodings; yet whenever data is passed between different encodings or platforms, that data always runs the risk of corruption.
Unicode provides a unique number for every character, no matter what the platform, no matter what the program, no matter what the language. … The emergence of the Unicode Standard, and the availability of tools supporting it, are among the most significant recent global software technology trends.
In short, Unicode is a single (very) large set of characters designed to encompass “all the characters needed for writing the majority of living languages in use on computers.” As such, it “provides a unique number for every character, no matter what the platform, no matter what the program, no matter what the language.”
Both the W3C and Unicode Consortium use the term “encoding” as part of their definitions. Descriptions like that, helpful as they may be, are a big part of the reason why there is often confusion around what are in fact simple concepts. Encoding is a more involved, difficult-to-grasp concept than character sets, and one we’ll discuss shortly. Don’t worry about encoding quite yet; before we get from character sets to encoding, we need one more step.
A coded character set is a set of characters for which a unique number has been assigned to each character. Units of a coded character set are known as code points. A code point value represents the position of a character in the coded character set. For example, the code point for the letter ‘à’ in the Unicode coded character set is 225 in decimal, or E1 in hexadecimal notation. (Note that hexadecimal notation is commonly used for referring to code points…)
Note: There is an unfortunate mistake in the passage above. The character displayed is “à” and the location given for that symbol in the Unicode coded character set is 225 in decimal, or E1 hexadecimal notation. But 225 (dec) / E1 (hex) is the location of “á,” not “à,” which is found at 224 (dec) / E0 (hex). Oops! 😒 (Unamused Face emoji)
That isn’t too difficult to understand. Being able to describe any one character with a numeric code is convenient. Rather than writing “the Latin script letter ‘a’ with a diacritic grave,” we can say xE0, the hexadecimal notation for the numeric location of that symbol (“à”) in the coded character set known as Unicode. Among other advantages of this arrangement, we can look up that character without having to know what “Latin script letter ‘a’ with a diacritic grave” means. The natural-language way of describing a character can be awkward for us, even more so for computers, which are both much better at looking up numeric references than we are and much worse at understanding natural-language descriptions.
So, a coded character set is simply a way to assign a numeric code to every character in a set such that there is a one-to-one correspondence between character and code. With that, not only is the Unicode Consortium’s description of Unicode more understandable, but we’re ready to tackle encoding.
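If you want to check these numbers yourself, JavaScript and TypeScript can report a character’s code point directly. A small sketch you can run in a browser console or Node.js:

const aGrave = 'à';

console.log(aGrave.codePointAt(0));                // 224 (decimal)
console.log(aGrave.codePointAt(0)!.toString(16));  // "e0" (hexadecimal)

// And the other direction, from code point back to character:
console.log(String.fromCodePoint(0xe0));      // "à"
console.log(String.fromCodePoint(0x1f60c));   // Relieved Face emoji (U+1F60C)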
Encoding
We’ve quickly reviewed characters, character sets and coded character sets. That brings us to the last concept we need to cover before turning our attention to emoji. Encoding is both the hardest concept to wrap our heads around and also the easiest. It’s the easiest because, as we’ll see, in a practical sense, we don’t need to know all that much about it.
We’ve come to an important point of transition. Character sets and coded character sets are in the human domain. These are concepts that we must have a good grasp of in order to confidently and effectively do our work. When we get to encoding, we’re transitioning into the realm of the computing devices and, more specifically, the low-level storage, retrieval and transmission of data. Encoding is interesting, and it is important that we get right what little of it we are responsible for, but we need only a high-level understanding of the technical details in order to do our part.
The first thing to know is that “character sets” and “encodings” (or, for our purpose here, “document encodings”) are not the same thing. That may seem obvious to you, especially now that we’re clearly discussing them separately, but it is a common source of confusion. The relationship is a little easier to understand, and keep straight, if we think of the latter as “character set encodings.”
It’s back to the W3C’s “Character Encodings: Essential Concepts” for a definition of encoding to get us started:
The character encoding reflects the way the coded character set is mapped to bytes for manipulation by a computing device.
In the table below, which reproduces the same information from a graphic appearing in the W3C document, the first 4 characters and corresponding code points are part of the Tifinagh alphabet, and the fifth is the more familiar exclamation point.
The table shows, from left to right, the symbol itself, the corresponding code point and the way the code point maps to a sequence of bytes using the UTF-8 encoding scheme. Each byte in memory is represented by a two-digit hexadecimal number. So, for example, in the first row we see that the UTF-8 encoding of the Tifinagh letter ya (ⴰ) requires 3 bytes of storage (E2 B4 B0).
There are two important points to take away from the information in this table:
First, encodings are distinct from the coded character sets. The coded character set is the information that is stored, and the encoding is the manner in which it is stored. (Don’t worry about the specifics.)
Secondly, note how under the UTF-8 encoding scheme the Tifinagh code points map to three bytes, but the exclamation point maps to a single byte.
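You can verify both points with the standard TextEncoder API, which always encodes strings to UTF-8. A quick sketch:

const encoder = new TextEncoder(); // TextEncoder always produces UTF-8

const ya = encoder.encode('\u2D30'); // Tifinagh letter ya (ⴰ), code point U+2D30
console.log(ya.length);                                          // 3
console.log(Array.from(ya, (b) => b.toString(16).toUpperCase())); // ["E2", "B4", "B0"]

const bang = encoder.encode('!'); // the exclamation point, code point U+0021
console.log(bang.length);                                        // 1
console.log(bang[0].toString(16).toUpperCase());                 // "21"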
Although the code point for the letter à in the Unicode coded character set is always 225 (in decimal), in UTF-8 it is represented in the computer by two bytes. … there isn’t a trivial, one-to-one mapping between the coded character set value and the encoded value for this character. … the letter à can be represented by two bytes in one encoding and four bytes in another.
The encoding forms that can be used with Unicode are called UTF-8, UTF-16, and UTF-32.
The W3C’s explanation is accurate, concise, informative and, for many readers, clear as mud. At this point, we’re dealing with pretty low-level stuff. Let’s keep pushing ahead; as is often the case, learning more will give us the context we need to better understand what we’ve already seen.
UTF is a set of encodings specifically created for the implementation of Unicode. It is part of the core specification of Unicode itself.
The Unicode Consortium maintains an official website for Unicode 9.0 (as well as all previous versions of the specification). A PDF of the core specification was just recently published to the website in August 2016. You’ll find the discussion of UTF in “Section 2.5: Encoding Forms.”
Computers handle numbers not simply as abstract mathematical objects, but as combinations of fixed-size units like bytes and 32-bit words. A character encoding model must take this fact into account when determining how to associate numbers with the characters.
Actual implementations in computer systems represent integers in specific code units of particular size—usually 8-bit (= byte), 16-bit, or 32-bit. In the Unicode character encoding model, precisely defined encoding forms specify how each integer (code point) for a Unicode character is to be expressed as a sequence of one or more code units. The Unicode Standard provides three distinct encoding forms for Unicode characters, using 8-bit, 16-bit, and 32-bit units. These are named UTF-8, UTF-16, and UTF-32, respectively. The “UTF” is a carryover from earlier terminology meaning Unicode (or UCS) Transformation Format. Each of these three encoding forms is an equally legitimate mechanism for representing Unicode characters; each has advantages in different environments.
Note: These encoding forms are consistent from one version of the specification to the next. In fact, their stability is vital to maintaining the integrity of the Unicode standard. Whatever we read about the encoding forms in the Version 9.0 specification was true of Version 8.0 as well, and will hold going forward.
The Unicode specification discusses at length the pros and cons and preferred usage of these three forms — UTF-8, UTF-16 and UTF-32 — endorsing the use of all three as appropriate. For the purposes of this brief discussion of UTF encoding, it’s enough to know the following:
UTF-8 uses 1 byte to represent characters in the ASCII set, 2 bytes for characters in several more alphabetic blocks, 3 bytes for the rest of the BMP (the Basic Multilingual Plane), and 4 bytes as needed for supplementary characters.
UTF-16 uses 2 bytes for any character in the BMP, and 4 bytes for supplementary characters.
UTF-32 uses 4 bytes for all characters.
From the brief description of storage requirements for the various UTF encodings above, you might guess that UTF-8 is more complicated to implement (owing to the fact that it is not fixed-width) but more space-efficient than, say, UTF-32, which is more regular but less space-efficient, with every character taking up exactly 4 bytes.
Ignoring the Berber characters and focusing on the exclamation point in the rightmost column, we see that the same character would take up a single byte in UTF-8, 2 bytes (two times the storage) in UTF-16, and 4 bytes (four times the storage) in UTF-32. That’s three very different amounts of storage to convey the exact same information. Multiply that difference in storage requirements by the size of the web, estimated to be at least 4.83 billion pages currently, and it’s easy to appreciate that the storage requirements of these encodings are not an inconsequential consideration.
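Here is a rough way to see those storage differences from Node.js with TypeScript. Buffer can encode to UTF-8 and UTF-16 directly, and because UTF-32 is fixed-width, its size is simply four bytes per code point; the sample characters below are chosen for illustration.

// Byte counts for the same text under the three Unicode encoding forms.
function encodedSizes(text: string): { utf8: number; utf16: number; utf32: number } {
  return {
    utf8: Buffer.byteLength(text, 'utf8'),
    utf16: Buffer.byteLength(text, 'utf16le'),
    utf32: [...text].length * 4, // spreading a string iterates by code point
  };
}

console.log(encodedSizes('!'));          // { utf8: 1, utf16: 2, utf32: 4 }
console.log(encodedSizes('à'));          // { utf8: 2, utf16: 2, utf32: 4 }
console.log(encodedSizes('\u{1F60C}'));  // { utf8: 4, utf16: 4, utf32: 4 } (Relieved Face)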
Whether or not that all made sense to you, here’s the good news…
When dealing with HTML, the character set we’ll use is Unicode, and the character encoding is always UTF-8. It turns out that that’s all we’ll ever need to concern ourselves with. 😌 (Relieved Face emoji, U+1F60C) Regardless, it’s no less important to be aware of the general concepts, as well as the simple fact that there are other character sets and encodings.
Now, we can bring all of this to the context of the web, and start working our way toward emoji.
⛅ ? ? ? ? ?? ?
Declaring Character Sets And Document Encoding On The Web
We need to tell user agents (web browsers, screen readers, etc.) how to correctly interpret our HTML documents. In order to do that, we need to specify both the character set and the encoding. There are two (overlapping) ways to go about this:
utilizing HTTP headers,
declaring within the HTML document itself.
A quick aside about the HTML specification itself, which is maintained by two standards bodies: the W3C and the WHATWG. After a period of officially working together, these two standards bodies have parted ways. However, there is still an awkward collaboration of sorts on the HTML5 standard itself. The WHATWG works on its specification, rolling in changes continually. Much like a modern evergreen operating system (OS) or application with an update feature, the latest changes are incorporated without waiting for the next official release. This is what the WHATWG means by “living standards,” which it describes as follows:
This means that they are standards that are continuously updated as they receive feedback, either from Web designers, browser vendors, tool vendors, or indeed any other interested party. It also means that new features get added to them over time, at a rate intended to keep the specifications a little ahead of the implementations but not so far ahead that the implementations give up.
Despite the continuous maintenance, or maybe we should say as part of the continuing maintenance, a significant effort is placed on getting the specifications and the implementations to converge — the parts of the specification that are mature and stable are not changed willy nilly. Maintenance means that the days where the specifications are brought down from the mountain and remain forever locked, even if it turns out that all the browsers do something else, or even if it turns out that the specification left some detail out and the browsers all disagree on how to implement it, are gone. Instead, we now make sure to update the specifications to be detailed enough that all the implementations (not just browsers, of course) can do the same thing. Instead of ignoring what the browsers do, we fix the spec to match what the browsers do. Instead of leaving the specification ambiguous, we fix the the [sic] specification to define how things work.
For its part, the W3C will from time to time package these updates (at least some of them), possibly along with its own changes, to arrive at a new version of its HTML 5.x standard.
Assuming that the WHATWG process works as advertised — and that may be a pretty good assumption considering that many of the people directly involved with the WHATWG also work for the organizations responsible for the implementation of the standard (e.g. Apple, Google, Mozilla and Opera) — the best strategy is probably to refer to the WHATWG spec first. That is what I have done in this article. Where I quote from an HTML5 spec, I am referencing the WHATWG specification. I do, however, make use of informational documents from the W3C throughout the article because they are helpful and not inconsistent with either spec.
Honestly, for our purposes here, it hardly matters. The sections I pull from are nearly (though not strictly) identical. But I suppose that’s really part of the problem, rather than evidence of cohesiveness. To get a sense of just how messy the situation is, take a look at the “Fork Tracking” page on the WHATWG’s wiki70.
Content-Type HTTP Header Declaration
As long as we’re talking about the web, there’s good reason to believe that the W3C has something to say71 about the topic:
When you retrieve a document, a web server sends some additional information. This is called the HTTP header. Here is an example of the kind of information about the document that is passed as part of the header with a document as it travels from the server to the client.
HTTP/1.1 200 OK
Date: Wed, 05 Nov 2003 10:46:04 GMT
Server: Apache/1.3.28 (Unix) PHP/4.2.3
Content-Location: CSS2-REC.en.html
Vary: negotiate,accept-language,accept-charset
TCN: choice
P3P: policyref=http://www.w3.org/2001/05/P3P/p3p.xml
Cache-Control: max-age=21600
Expires: Wed, 05 Nov 2003 16:46:04 GMT
Last-Modified: Tue, 12 May 1998 22:18:49 GMT
ETag: "3558cac9;36f99e2b"
Accept-Ranges: bytes
Content-Length: 10734
Connection: close
Content-Type: text/html; charset=UTF-8
Content-Language: en
If your document is dynamically created using scripting, you may be able to explicitly add this information to the HTTP header. If you are serving static files, the server may associate this information with the files. The method of setting up a server to pass character encoding information in this way will vary from server to server. You should check with the documentation or your server administrator.
Without getting too heavily into the details, while still coming away from the discussion with some sense of how this works, let’s clarify some of the terminology.
HTTP is the network application protocol underlying communication on the web. (The same protocol is also used in other contexts, but was originally designed for the web.) HTTP is a client-server protocol and facilitates communication between the software making a request (the client) and the software fulfilling or responding to the request (the server) by exchanging request and response messages. These messages all must follow a well-defined, standardized structure so that they can be anticipated and interpreted properly by the recipient.
Part of this structure is a header providing information about the message itself, about the capabilities or requirements of the originator or of the recipient of the message, and so on. The header consists of a number of individual header lines. Each line represents a single header field comprising one key-value pair. One of the 45 or so defined fields is Content-Type, which identifies both the encoding and character set of the content of the message.
In the example above, among the header lines, we see:
Content-Type: text/html; charset=UTF-8
The field contains two pieces of information.
The first is the media type74, text/html, which identifies the content of the message as an HTML document, which a web browser can process directly. There are other media types, like application/pdf (a PDF document), which generally need to be handled differently.
The second piece of information is the document encoding and character set, charset=UTF-8. As we’ve already seen, UTF-8 is exclusively used with Unicode. So UTF-8 alone is enough to identify both the encoding and character set.
You can view these HTTP headers yourself. Options for doing so include:
a browser's developer tools
web-based tools
Checking HTTP Headers Using A Browser’s Developer Tools
Checking HTTP Headers with Firefox (Recent Versions)
Open the “Web Console” from the “Tools” → “Web Developer” menu.
Select the “Network” tab in the pane that appears (at the bottom of the browser window, if you haven’t changed this default).
Navigate to a web page you’d like to inspect. You’ll see a list consisting of all resources that contribute to the page fill-in, including the root document, at the top (with component resources listed underneath).
Select any one of these resources from the list for which you’d like to look at the accompanying headers. (The pane should split.)
Select the “headers” tab in the new pane that appears.
You’ll see response and request headers corresponding to both ends of the exchange, and among the response headers you should find the Content-Type field.
Checking HTTP Headers with Chrome (Recent Versions)
Open the “Developer Tools” from the “View” → “Developer” menu.
Select the “Network” tab in the pane that appears (at the bottom of the browser window, if you haven’t changed this default).
Navigate to a web page you’d like to inspect. You’ll see a list of all resources that contribute to the page fill-in, including the root document, at the top (with component resources listed underneath).
Select any one of these resources from the list for which you’d like to look at the accompanying headers. (The pane should split.)
Select the “headers” tab in the new pane that appears.
You’ll see response and request headers corresponding to both ends of the exchange, and among the response headers you should find the Content-Type field.
Checking HTTP Headers Using Web-Based Tools
Many websites allow you to view the HTTP headers returned from the server for any public website, some much better than others. One reliable option is the W3C’s own Internationalization Checker75.
Simply type a URL into the provided text-entry field, and the page will return a table with information related to the internationalization and language of the document at that address. You should see a section titled “Character Encoding,” with a row for the “HTTP Content-Type” header. You’re hoping to see a value of utf-8.
Using A Meta Element With charset Attribute
We can also declare the character set and encoding in the document itself. More specifically, we can use an HTML meta element to specify the character set and encoding.
There are two different, widely used, equally valid formats (both interpreted in exactly the same way).
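A minimal sketch of the two forms in question (the exact quoting and attribute casing shown here are just one common convention) — the older http-equiv style first, then the newer charset attribute:
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta charset="utf-8">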
The latter form is shorter, which makes it easier to type, harder to get wrong accidentally, and lighter in the document. For these reasons, it's the one we should use.
It might occur to you to ask (or not), “If the browser needs to know the character set and encoding before it can read the document, how can it read the document to find the meta element and get the value of the charset attribute?”
That’s a good question. How clever of you. ? (Octopus emoji, U+1F419 — widely considered to be among the cleverest of all animal emoji76) I probably wouldn’t have thought to ask that. I had to learn to ask that question. (Sometimes it’s not just the answers we need to learn, but the questions as well.)
It would be convenient if you could put the Content-Type of the HTML file right in the HTML file itself, using some kind of special tag. Of course this drove purists crazy… how can you read the HTML file until you know what encoding it’s in?! Luckily, almost every encoding in common use does the same thing with characters between 32 and 127, so you can always get this far on the HTML page without starting to use funny letters:
But that meta tag really has to be the very first thing in the <head> section because as soon as the web browser sees this tag it’s going to stop parsing the page and start over after reinterpreting the whole page using the encoding you specified.
Notice the use of the older-style meta element. This post is from 2003 and, therefore, predates HTML5.
The HTML5 specification makes a similar demand:
The element containing the character encoding declaration must be serialized completely within the first 1024 bytes of the document.
In order to satisfy this condition as safely as possible, it’s best practice to have the meta element specifying the charset as the first element in the head section of the page.
This means that every HTML5 document should begin very much like the following (the only differences being the title text, possibly the value of the lang attribute, the use of white space, single versus double quotation marks around attribute values, and capitalization).
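A minimal sketch of such an opening (the title text and the lang value here are just placeholders):
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>An HTML5 Document</title>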
Now you might be thinking, “Great, I can specify the character set and encoding in the HTML document. I’d much rather do that than worry about HTTP header field values.”
I don’t blame you. But that begs the question, “What happens if the character set and encoding are set in both places?”
Maybe somewhat surprisingly, the information in the HTTP header takes precedence. Yes, that's right, the meta element and charset attribute do not override the HTTP headers. (I think I remember being surprised by this.) 🙁 (Slightly Frowning Face emoji, U+1F641)
If you have access to the server settings, you should also consider whether it makes sense to use the HTTP header. Note however that, since the HTTP header has a higher precedence than the in-document meta declarations, content authors should always take into account whether the character encoding is already declared in the HTTP header. If it is, the meta element must be set to declare the same encoding.
There’s no getting away from it. To avoid problems, we should always make sure the value is properly set in the HTTP header as well as in the HTML document.
An Encoding By Any Other Name
Before we finish with character sets and encoding and move on to emoji, there are two other complications to consider. The first is as subtle as it is obvious.
It’s not enough to declare a document is encoded in UTF-8 — it must be encoded in UTF-8! To accomplish this, your editor needs to be set to encode the document as UTF-8. It should be a preference within the application.
Say, I have a joke for you… What time is it when you have an editor that doesn’t allow you to set the encoding to UTF-8?
Punchline: Time to get a new editor! 🙄 (Face With Rolling Eyes emoji, U+1F644)
So, we’ve specified the character set and encoding both in the HTTP headers and in the document itself, and we’ve taken care that our files are encoded in UTF-8. Now, thanks to Unicode and UTF-8, we can know, without any doubt, that our documents will be interpreted and displayed properly for every visitor using any browser or other user agent on any device, running any software, anywhere in the world. But is that true? No, it’s not quite true.
There is still a missing piece of the puzzle. We'll come back to this. Building suspense, that's good writing! 😃 (Smiling Face With Open Mouth emoji, U+1F603)
⛅ ? ? ? ? ?? ?
What Were We Talking About Again? Oh Yeah, Emoji!
So What Are Emoji?
We’ve already mentioned the Unicode Consortium, the non-profit responsible for the Unicode standard.
There’s a subcommittee of the Unicode Consortium dedicated to emoji, called, unsurprisingly, the Unicode Emoji Subcommittee80. As with the rest of the Unicode standard, the Unicode Consortium and its website (unicode.org81) are the authoritative source for information about emoji. Fortunately for us, it provides a wealth of accessible information, as well as some more formal technical documents, which can be a little harder to follow. An example of the former is its “Emoji and Dingbats” FAQ82.
Q: What are emoji?
A: Emoji are “picture characters” originally associated with cellular telephone usage in Japan, but now popular worldwide. The word emoji comes from the Japanese 絵 (e ≅ picture) + 文字 (moji ≅ written character).
Note: See those Japanese characters in this primarily English-language document? Thanks Unicode!
Emoji are often pictographs — images of things such as faces, weather, vehicles and buildings, food and drink, animals and plants — or icons that represent emotions, feelings, or activities. In cellular phone usage, many emoji characters are presented in color (sometimes as a multicolor image), and some are presented in animated form, usually as a repeating sequence of two to four images — for example, a pulsing red heart.
Q: Do emoji characters have to look the same wherever they are used?
A: No, they don’t have to look the same. For example, here are just some of the possible images for U+1F36D LOLLIPOP, U+1F36E CUSTARD, U+1F36F HONEY POT, and U+1F370 SHORTCAKE:
In other words, any pictorial representation of a lollipop, custard, honey pot or shortcake respectively, whether a line drawing, gray scale, or colored image (possibly animated) is considered an acceptable rendition for the given emoji. However, a design that is too different from other vendors’ representations may cause interoperability problems: see Design Guidelines86 in UTR #5187.
Read through just that one FAQ — and this article, of course 😁 (Grinning Face With Smiling Eyes emoji, U+1F601) — and you'll have a better handle on emoji than most people ever will.
How Do We Use Emoji?
The short answer is, the same way we use every other character. As we’ve already discussed, emoji are symbols associated with code points. What’s special about them is just semantics — i.e. the meaning we ascribe to them, not the mechanics of them.
If you have a key on a keyboard mapped to a particular character, producing that character is as simple as pressing the key. However, considering that, as we’ve seen, more than 120,000 characters are currently in use, and the space defined by Unicode allows for more than 1.1 million of them, creating a keyboard large enough to assign a character to each key is probably not a good strategy.
When we exhaust the reach of our keyboards, we can use a software utility to insert characters.
Recent versions of macOS include an “Emoji & Symbols” panel that can be accessed from anywhere in the OS (via the menu bar or the keyboard shortcut Control + Command + Space). Other OSes offer similar capabilities for browsing emoji and inserting them into text-entry fields by clicking or by copying and pasting. Applications may offer additional app-specific features for inserting emoji beyond the system-wide utilities.
Lastly, we can take advantage of character references to enter emoji (and any other character we like, for that matter) on the web.
Character References
Character references are commonly used as a way to include syntax characters as part of the content of an HTML document, and also can be used to input characters that are hard to type or not available otherwise.
Note: I’m sure many of you are familiar with character references. But if you keep reading this section, it wouldn’t surprise me if you learn something new.
HTML is a markup language, and, as such, HTML documents contain both content and the instructions describing the document together as plain text in the document itself. Typically, the vast majority of characters are part of the document’s content. However, there are other “special” characters in the mix. In HTML, these form the tags corresponding to the HTML elements that define the structure and semantics of the document. Moreover, it’s worth taking a moment to recognize that the syntax itself — i.e. its implementation — creates a need for additional markup-specific characters. The ampersand is a good example. The ampersand (&) is special because it marks the beginning of all other character references. If the ampersand itself were not treated specially, we’d need another mechanism altogether.
Syntax characters are treated as special, and should never be used as part of the content of a document, because they are always interpreted specially, regardless of the author’s intent. Mistakenly using these characters as content makes it difficult or impossible for a browser or other user agent to parse the document correctly, leading to all sorts of structural and display issues. But aside from their special status, markup-related characters are characters like any other, and very often we need to use them as content. If we can’t type the literal characters, then we need some other way to represent them. We can use character references for this, referred to as “escaping” the character, as in getting outside of (i.e. escaping) the character’s markup-specific meaning.
What characters need to be escaped? According to the W3C three characters should always be escaped88: the less-than symbol (<, &lt;), the greater-than symbol (>, &gt;) and the ampersand (&, &amp;) — just those three.
Two others, the double quote (", &quot;) and single quote (', &#39;), are often escaped based on context; in particular, when they appear as part of the value of an attribute and the same character is used as the delimiter of the attribute value.
Note: Even this is a bit of a fib, though it comes from a typically reliable source. To be safe, we can always escape those characters. But in fact, because of the way document parsing works, we can often get away without escaping one or more of them.
If you’re interested in a more detailed run-through of just how complicated and fiddly the exceptions to the syntax-character rules can get in practice, have a look at the blog post “Ambiguous Ampersands110,” in which Mathias Bynens considers precisely when these characters must be escaped and when they don’t need be in practice.
But here’s the thing: These types of liberties can, and frequently do, cascade through our markup when we inevitably make other mistakes. For that reason alone, you may want to stick to the advice from the W3C. Not only is it the safer approach, it’s a lot easier to remember, and that’s not a bad thing.
In addition to these few syntax characters, there are similar references for every other Unicode character as well, all 120,000+ of them. These references come in two types:
named character references (named entities)
numeric character references (NCRs)
Note: It is perfectly confusing that the term “numeric character reference” is abbreviated NCR, which could just as easily be used as the abbreviation for named character reference. 😩 (Weary Face, U+1F629)
Named Character References
Named character references (also known as named entities, entity references or character entity references) are pre-defined word-like references to code points. There are quite a few of them. The WHATWG provides a handy comprehensive table of character reference names supported by HTML5113, listing 2,231 of them. That’s far, far more than you will ever likely see used in practice. After all, the idea is that these names will serve as mnemonics (memory aids). But it’s difficult to remember 2,231 of anything. If you don’t know that a named reference exists, you won’t use it.
But where does this information come from? How can we be certain that the information in the table referenced above, which I describe as “comprehensive,” is in fact comprehensive? Not only are those perfectly valid questions, it’s exactly the type of questioning we need more of, to cut through the clutter of hearsay and other misinformation that is all too common online.
The very best sources of authoritative information are the specifications themselves, and that “handy, comprehensive table” is in fact a link to section 12.5 of the WHATWG's HTML5 spec114.
Let’s say you wanted to include a greater-than symbol (>) in the content of your document.
As we have just seen, and as you probably already knew, we can't just press the key bearing that symbol on our keyboard. That literal character is special and may be treated as markup, not content. There is a named character reference for the greater-than symbol. It looks like &gt;. Typing that in your document will get you the character you're looking for every time, >.
So, in summary, named entities are predefined mnemonics for certain Unicode characters.
Essentially, a group of people thought it would be nice if we could refer to & as &amp;, so they arranged to make that possible, and now &amp; corresponds to the code point for &.
Numeric Character References
There isn’t a named reference for every Unicode character. We just saw that there are 2,231 of them. That’s a lot, but certainly not all. You won’t find any emoji in that list, for one thing. But while there are not always named references, there are always numeric character references. We can use these for any Unicode character, both those that have named references and those that don’t.
An NCR is a reference that uses the code point value in decimal or hexadecimal form. Unlike the names, there is nothing easy to remember about these.
We’ve looked at the ampersand (&), for which there is a named character reference, &. We can also write this as a numeric reference in decimal or hexadecimal form:
decimal: &#38; (&)
hexadecimal: &#x26; (&)
Note: The # indicates that what follows is a numeric reference, and #x indicates that the numeric reference is in hexadecimal notation.
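To see both notations in actual markup (a made-up example), take the Hot Beverage character, U+2615, whose code point is 9749 in decimal. Either of these lines produces ☕:
<p>Coffee &#9749; anyone?</p>
<p>Coffee &#x2615; anyone?</p>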
You’ll find a table containing a complete list of emoji in the document “Full Emoji Data182117,” maintained by the Unicode Consortium’s Emoji Subcommittee. It includes several different representations of each emoji for comparison and, importantly, the Unicode code point in hexadecimal form, as well as the name of the character and some additional information. It’s a great resource and makes unnecessary any number of other websites that often contain incomplete, out of date or otherwise partially inaccurate information.
Note: If you look carefully, you might spot some unfamiliar emoji in this list. As I write this, the list includes new emoji from the just recently released Unicode Version 9.0. So, you can find 🤠 Face With Cowboy Hat (U+1F920), 🤷 Shrug (U+1F937), 🤳 Selfie (U+1F933), 🥔 Potato (U+1F954) and 68 others from the newest version of Unicode118.
To look at just a few:
&#x1F4A9; (💩): Pile of Poo
&#x1F369; (🍩): Doughnut (that's one delicious-looking emoji)
&#x1F938; (🤸): Person Doing a Cartwheel (new with version 9)
Are you seeing a box or other generic symbol, rather than an image for the last emoji in the list (and the preceding paragraph as well)? That makes perfect sense, if you're reading this around the time I wrote it. In fact, it would be more remarkable if you were seeing emoji there. That last emoji, Person Doing a Cartwheel (U+1F938), is new as of Unicode Version 9.0119, released on 21 June 2016. It isn't surprising at all if your platform or application doesn't yet support an emoji released within the past several days (or weeks). Of course, as this newest release of Unicode begins to roll out, Version 9.0's 72 new emoji120 will begin to appear, including 🤸 (Person Doing a Cartwheel, U+1F938), the symbol for which will automatically replace the blank here and in the list above.
Do we really have to type in these references? No, if the platform or application you’re using allows you to enter emoji in some other way, that will work just fine. In fact, it’s preferred. Emoji are not syntax characters and so can always be entered directly.
Here’s a basketball emoji I inserted into this document from OS X’s “Emojis and Symbols” panel: ? (Basketball, U+1F3C0).
This brings up a more general question, “If there’s a reference for every Unicode character, when should we use them?”
The W3C provides us with a perfectly reasonable, well-justified answer to this question in a Q&A document titled “Using Character Escapes in Markup and CSS121“. The short version of its answer to the question of when we should use character references is: as little as possible.
When not to use escapes
It is almost always preferable to use an encoding that allows you to represent characters in their normal form, rather than using character entity references or NCRs.
Using escapes can make it difficult to read and maintain source code, and can also significantly increase file size.
Many English-speaking developers have the expectation that other languages only make occasional use of non-ASCII characters, but this is wrong.
Take for example the following passage in Czech.
Jako efektivnější se nám jeví pořádání tzv. Road Show prostřednictvím našich autorizovaných dealerů v Čechách a na Moravě, které proběhnou v průběhu září a října.
If you were to require NCRs for all non-ASCII characters, the passage would become unreadable, difficult to maintain and much longer. It would, of course, be much worse for a language that didn’t use Latin characters at all.
Jako efektivn&#283;j&#353;&#237; se n&#225;m jev&#237; po&#345;&#225;d&#225;n&#237; tzv. Road Show prost&#345;ednictv&#237;m na&#353;ich autorizovan&#253;ch dealer&#367; v &#268;ech&#225;ch a na Morav&#283;, kter&#233; prob&#283;hnou v pr&#367;b&#283;hu z&#225;&#345;&#237; a &#345;&#237;jna.
As we said before, use characters rather than escapes for ordinary text.
So, we should only use character references when we absolutely must, such as when escaping markup-specific characters, but not for “ordinary text” (including emoji). Still, it is nice to know that we can always use a numeric character reference to input any Unicode character at all into our HTML documents. Literally, a world 🌎 (Earth Globe Americas, U+1F30E) of characters is open to us.
The final point I will make before moving on has to do with case-sensitivity. This is another one of those issues about which there is much confusion and debate, despite the fact that it is not open to interpretation.
Character References and Case-Sensitivity
Named character references are case-sensitive and must match the case of the names given in the table of named character references supported by HTML122 that is part of the HTML5 spec. Having said that, if you look at the named references in that table carefully, you will see more than one name that maps to the same code point (and, so, the same character).
Take the ampersand (U+00026). You'll find the following four entries in the table for this single character:
AMP; — U+00026 (&)
AMP — U+00026 (&)
amp; — U+00026 (&)
amp — U+00026 (&)
This may be where some of the confusion originates. One could easily be fooled into assuming that this simulated, limited case-insensitivity is the real thing. But it would be a mistake to think that these kinds of variations are consistent. They definitely are not. For example, the one and only valid named reference for the set-minus symbol ∖ (which looks rather like a backslash) is Backslash;.
Backslash; — U+02216 (∖)
You won’t find any other references to U+02216 in that table, and so no other form is valid.
If we look closely at the four named entities for the ampersand (U+00026) again, you'll see that half of them include a trailing semicolon (;) and the other half don't. This, too, may have led to confusion, with some people mistakenly believing that the semicolon is optional. It isn't. There are some explicitly defined named character references, such as AMP and amp, without it, but the vast majority of named references and all numeric references include a semicolon. Furthermore, none of the named references without the trailing semicolon can be used in HTML5. 😮 (Face With Open Mouth, U+1F62E) Section 12.1.4 of the HTML5 specification123 tells us (emphasis added):
Character references must start with a U+0026 AMPERSAND character (&). Following this, there are three possible kinds of character references:
Named character references
The ampersand must be followed by one of the names given in the named character references124 section, using the same case. The name must be one that is terminated by a U+003B SEMICOLON character (;).
Decimal numeric character reference
The ampersand must be followed by a U+0023 NUMBER SIGN character (#), followed by one or more ASCII digits125, representing a base-ten integer that corresponds to a Unicode code point that is allowed according to the definition below. The digits must then be followed by a U+003B SEMICOLON character (;).
Hexadecimal numeric character reference
The ampersand must be followed by a U+0023 NUMBER SIGN character (#), which must be followed by either a U+0078 LATIN SMALL LETTER X character (x) or a U+0058 LATIN CAPITAL LETTER X character (X), which must then be followed by one or more ASCII hex digits126, representing a hexadecimal integer that corresponds to a Unicode code point that is allowed according to the definition below. The digits must then be followed by a U+003B SEMICOLON character (;).
“The ampersand must be followed by one of the names given in the named character references section, using the same case. The name must be one that is terminated by a U+003B SEMICOLON character (;).”
That answers that.
By the way, the alpha characters in hexadecimal numeric character references (a to f, A to F) are always case-insensitive.
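A few contrasting examples may help (they follow directly from the rules above):
&amp; and &AMP; are both valid (each appears in the table, with its trailing semicolon).
&Amp; is not valid (no such name appears in the table).
&#x1F4A9; and &#X1f4a9; are both valid (the x or X, and the hex digits themselves, are case-insensitive).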
⛅ ? ? ? ? ?? ?
Glyphs
At the end of the encoding section, I asked whether Unicode and UTF-8 encoding are enough to ensure that our documents, interfaces and all of the characters in them will display properly for all visitors to our websites and all users of our applications. The answer was no. But if you remember, I left this as a cliffhanger. 😯 (Hushed Face, U+1F62F) There's just one thing left to complete our picture of emoji (pun intended) 😬 (Grimacing Face, U+1F62C).
Unicode and UTF-8 do most of the heavy lifting, but something is missing. We need a glyph associated with every character in order to be able to see a representation of the character.
In typography, a glyph /’ɡlɪf/ is an elemental symbol within an agreed set of symbols, intended to represent a readable character for the purposes of writing and thereby expressing thoughts, ideas and concepts. As such, glyphs are considered to be unique marks that collectively add up to the spelling of a word, or otherwise contribute to a specific meaning of what is written, with that meaning dependent on cultural and social usage.
For example, in most languages written in any variety of the Latin alphabet the dot on a lower-case i is not a glyph because it does not convey any distinction, and an i in which the dot has been accidentally omitted is still likely to be recognized correctly. In Turkish, however, it is a glyph because that language has two distinct versions of the letter i, with and without a dot.
The relationship between the terms glyph, font and typeface is that a glyph is a component of a font that is composed of many such glyphs with a shared style, weight, slant and other characteristics. Fonts, in turn, are components of a typeface (also known as a font family), which is a collection of fonts that share common design features but each of which is distinctly different.
Essentially, we need a font that includes a representation for the code point of the emoji we want to display. If we don’t have that symbol we’ll get a blank space, an empty box or some other generic character as an indication that the symbol we’re after isn’t available. You’ve probably seen this. Now you understand why, if you didn’t before.
What would happen if you tried to insert those Berber characters from earlier into a web page using numeric character references? Let's see:
&#x2D30; (ⴰ)
&#x2D63; (ⵣ)
&#x2D53; (ⵓ)
&#x2D4D; (ⵍ)
&#x21; (!)
Chances are you can see the exclamation point, but some of the others might be missing, replaced by a box or other generic symbol.
As has already been mentioned, those symbols are Tifinagh characters. Tifinagh130 is “a series of abjad and alphabetic scripts used to write Berber languages.” According to Wikipedia131:
Berber or the Amazigh languages or dialects (Berber name: Tamaziɣt, Tamazight, ⵜⴰⵎⴰⵣⵉⵖⵜ [tæmæˈzɪɣt], [θæmæˈzɪɣθ]) are a family of similar and closely related languages and dialects indigenous to North Africa. They are spoken by large populations in Algeria and Morocco, and by smaller populations in Libya, Tunisia, northern Mali, western and northern Niger, northern Burkina Faso, Mauritania, and in the Siwa Oasis of Egypt. Large Berber-speaking migrant communities have been living in Western Europe since the 1950s. In 2001, Berber became a constitutional national language of Algeria, and in 2011 Berber became a constitutionally official language of Morocco, after years of persecution.
Not long ago, it would have been surprising if your platform displayed any of the Tifinagh characters. But internationalization support has improved dramatically and is getting better all the time, thanks in large part to Unicode. There is now a good chance you'll see them.
We can still reliably stump our platforms with references to characters that have been only very recently released, as we saw earlier with 🤸 (Person Doing a Cartwheel, U+1F938).
For those of you who really appreciate missing characters, here are some others:
🤤 — (Drooling Face, U+1F924)
🤷 — (Shrug, U+1F937)
🤦 — (Face Palm, U+1F926)
🤳 — (Selfie, U+1F933)
🦉 — (Owl, U+1F989)
🥕 — (Carrot, U+1F955)
🥘 — (Shallow Pan of Food, U+1F958)
🛒 — (Shopping Trolley, U+1F6D2)
Note: If you are seeing all eight emoji in that list, then it's safe to say you have support for the newest emoji introduced with Unicode Version 9.0. As we'll see, that support could be coming from your OS or application, or it could even be loaded via JavaScript for a particular website (though that wouldn't help you anywhere else).
Regardless of what you or I do or do not see, Unicode is doing its part.
OK, so emoji are symbols (i.e. glyphs) that correspond to a specific code point — that’s a clean, easy-to-understand arrangement… well, it’s not quite so simple. There is one more thing we need to cover — the zero-width joiner.
Zero-Width Joiner: The Most Important Character You’ll Never See
The zero-width joiner (ZWJ) has a code point but no corresponding symbol. It is used to connect two or more other Unicode code points to create a new “compound character” with a unique glyph all its own.
The U+200D ZERO WIDTH JOINER (ZWJ) can be used between the elements of a sequence of characters to indicate that a single glyph should be presented if available. An implementation may use this mechanism to handle such an emoji zwj sequence as a single glyph, with a palette or keyboard that generates the appropriate sequences for the glyphs shown. So to the user, these would behave like single emoji characters, even though internally they are sequences.
When an emoji zwj sequence is sent to a system that does not have a corresponding single glyph, the ZWJ characters would be ignored and a fallback sequence of separate emoji would be displayed. Thus an emoji zwj sequence should only be supported where the fallback sequence would also make sense to a recipient. …
So, a ZWJ is exactly what it says it is: It does not have any appearance (i.e. it is “zero width”), and it joins other characters.
Before we put the ZWJ to work, let's revisit the skin tone modifiers that have already been mentioned. (Strictly speaking, a skin tone modifier simply follows its base character and doesn't need a ZWJ; the ZWJ comes into play for the multi-person sequences we'll build afterwards.) We've said that these skin tones were added to Unicode in version 8.0 to bring diversity to the appearance of emoji depicting human beings by allowing for a range of skin color. More specifically, Unicode has adopted the Fitzpatrick scale, a numeric classification scheme for skin tones specifying six broad groups or “types” of skin (Type I to Type VI) that represent in a general way at least a majority of people. From the Wikipedia entry for the Fitzpatrick scale138:
It was developed in 1975 by Thomas B. Fitzpatrick, a Harvard dermatologist, as a way to estimate the response of different types of skin to ultraviolet (UV) light. It was initially developed on the basis of skin and eye color, but when this proved misleading, it was altered to be based on the patient’s reports of how their skin responds to the sun; it was also extended to a wider range of skin types. The Fitzpatrick scale remains a recognized tool for dermatological research into human skin pigmentation.
…
Type I (scores 0–6) always burns, never tans (pale white; blond or red hair; blue eyes; freckles).
Type II (scores 7–13) usually burns, tans minimally (white; fair; blond or red hair; blue, green, or hazel eyes)
Type III (scores 14–20) sometimes mild burn, tans uniformly (cream white; fair with any hair or eye color)
Type IV (scores 21–27) burns minimally, always tans well (moderate brown)
Type V (scores 28–34) very rarely burns, tans very easily (dark brown)
Type VI (scores 35–36) Never burns, never tans (deeply pigmented dark brown to darkest brown)
Let's take a closer look at all of this in practice using the Boy emoji (U+1F466). To illustrate that this really works as described, I'm going to write out all of the emoji using numeric references. (You can take a look at the page source if you want to confirm this.)
So, we’ll start with our base emoji:
&#x1F466; — (Boy, 👦)
Although the Fitzpatrick scale specifies six skin types, the depiction of the first two types using emoji is combined under Unicode. So, only five skin tone modifiers are actually available to us:
&#x1F3FB; — Emoji Modifier Fitzpatrick Type-1-2 (🏻)
&#x1F3FC; — Emoji Modifier Fitzpatrick Type-3 (🏼)
&#x1F3FD; — Emoji Modifier Fitzpatrick Type-4 (🏽)
&#x1F3FE; — Emoji Modifier Fitzpatrick Type-5 (🏾)
&#x1F3FF; — Emoji Modifier Fitzpatrick Type-6 (🏿)
These skin tone modifiers appear in the list “Full Emoji Data144,” along with all of the other emoji that are part of Unicode. They are represented as a square or some other shape of the appropriate color (i.e. skin tone) when used alone. That’s what you should be seeing in the list above.
We can manually build up the skin-type variants of the Boy emoji by placing each of the skin tone modifiers immediately after the base emoji:
&#x1F466;&#x1F3FB; — Boy, Type 1-2 (👦🏻)
&#x1F466;&#x1F3FC; — Boy, Type 3 (👦🏼)
&#x1F466;&#x1F3FD; — Boy, Type 4 (👦🏽)
&#x1F466;&#x1F3FE; — Boy, Type 5 (👦🏾)
&#x1F466;&#x1F3FF; — Boy, Type 6 (👦🏿)
Note: No ZWJ is needed in the list above; a skin tone modifier attaches directly to the character it follows. The ZWJ itself, which is not visible, comes into play in the multi-person sequences below, where I've written it out as the numeric reference &#x200D;.
Family and other groups are built up in much the same way, one character at a time, but with ZWJs in between. But don't forget, we've already learned that we should use character references only when we absolutely must. So, although we could build up the group “Family, Man, Woman, Girl, Boy” manually…
&#x1F468;&#x200D;&#x1F469;&#x200D;&#x1F467;&#x200D;&#x1F466; — 👨‍👩‍👧‍👦
…it’s preferable (and also easier and less error-prone) to use the emoji symbol itself. So, of course that is what we should do. — ????
We could also write the Latin lowercase “a” as either simply “a” or &#x61; (a). Imagine writing an entire HTML document out using only numeric character references, and you will probably appreciate the pointlessness of the exercise.
How Do We Know If We Have These Symbols?
As with any other character, in order to display a specific emoji, a glyph for it must be included in some font available on the device you are using. Otherwise, the OS has no way to graphically represent the code point.
An OS, be it a desktop operating system such as Apple's macOS, or a mobile OS like iOS or Android, ships with quite a few preinstalled fonts. (Many more can be acquired and installed separately.) Across localized versions, the available fonts may differ according to the geographic region or primary language of the intended users.
Many of these fonts will overlap, offering the same characters but presenting them in different styles. From one font to the next, these differences may be subtle or extreme. However, not all fonts overlap. Fonts intended for use with Western languages, for example, will not contain the symbols for Chinese, Japanese or Korean characters, and the reverse is also true. Even overlapping fonts may have extended characters that differ.
It is the responsibility of the font designer to decide which characters will be included in a given font. It is the responsibility of the developers of an OS to make sure that the total collection of fonts covers all of the intended languages and provides a wide range of styles and decorative, mathematical and miscellaneous symbols and so on, so that the platform is as expressively powerful as possible.
The current versions of all major platforms (Windows, macOS, Linux, iOS and Android) support emoji. Precisely what that means differs from one platform to the next, version to version and, in the case of Linux, from distribution to distribution.
The Great Emoji Proliferation Of 2016
In this article we’ve covered a little about the history of emoji through the current version of the Unicode Standard, Version 9.0 (released 21 June 2016). We’ve seen that Version 9.0 introduced 72 entirely new emoji, not counting variants. (The number is 167 if we add skin tone modifers.)
Before moving on I should mention that the emoji included in Unicode 9.0 are referred to separately as “Unicode Emoji Version 3.0”. That is to say that “Unicode Version 9.0 emoji” and “Unicode Emoji Version 3.0” are the same set of characters.
As we turn to emoji support in the next section, and considering everything we've learned about emoji so far, it would seem relatively straightforward to know what we're looking for. Ideally, we would like to see full support for all Version 3.0 emoji, including skin tone modifiers and multi-person groupings. After all, a platform can't possibly do better than full support for the current version of the Standard. That would be madness! Have you heard the Nietzsche quote?
There is always some madness in technology. But there is also always some reason in madness.
Nietzsche was talking about love, but isn't it just as true of tech? 😉 (Winking Face, U+1F609)
It turns out you can do “better” than 100% support, depending on your definition of better. Let's say that you can do more than 100% support. This isn't a unique concept in technology, or even in web design and development. For example, implementers going beyond 100% support for CSS is what led to vendor prefixes. Somehow, more than 100% always seems good at first but tends to lead to problems. It's a little like how spending more than 100% of the money you have starts out as a solution to a problem but tends to eventually lead to more problems 😦 (Frowning Face With Open Mouth, U+1F626).
What does more than 100% support look like in the context of emoji?
First, let’s agree that simply calling something an emoji does not make it an emoji. You may remember (or not) Pepsi’s global PepsiMoji campaign145. A PepsiMoji is not an emoji, it’s an image. We won’t concern ourselves with gimmicky marketing ploys like that. There are still a couple of ways platforms are exceeding 100% support:
Using Zero Width Joiners to create non-standard symbols by combining standard emoji.
Rolling out proposed emoji before they are officially released.
An example of the former is Microsoft's somewhat silly (IMO) Ninja Cat emoji, which I discuss in the section on Windows' emoji support. But cat-inspired OS mascots are not the only place we see this sort of unofficial emoji sequence. More practical uses have to do with multi-person groupings and diversity, i.e. gender and skin tones.
Emoji for multi-person groupings present some special challenges:
Gender combinations. Some multi-person groupings explicitly indicate gender: MAN AND WOMAN HOLDING HANDS, TWO MEN HOLDING HANDS, TWO WOMEN HOLDING HANDS. Others do not: KISS, COUPLE WITH HEART, FAMILY (the latter is also non-specific as to the number of adult and child members). While the default representation for the characters in the latter group should be gender-neutral, implementations may desire to provide (and users may desire to have available) multiple representations of each of these with a variety of more-specific gender combinations.
Skin tones. In real multi-person groupings, the members may have a variety of skin tones. However, this cannot be indicated using an emoji modifier with any single character for a multi-person grouping.
The basic solution for each of these cases is to represent the multi-person grouping as a sequence of characters—a separate character for each person intended to be part of the grouping, along with characters for any other symbols that are part of the grouping. Each person in the grouping could optionally be followed by an emoji modifier. For example, conveying the notion of COUPLE WITH HEART for a couple involving two women can use a sequence with WOMAN followed by an emoji-style HEAVY BLACK HEART followed by another WOMAN character; each of the WOMAN characters could have an emoji modifier if desired.
This makes use of conventions already found in current emoji usage, in which certain sequences of characters are intended to be displayed as a single unit.
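To make the quoted guidance concrete, here is a sketch of the two-women COUPLE WITH HEART sequence written out as numeric character references (WOMAN, ZWJ, HEAVY BLACK HEART with the emoji variation selector, ZWJ, WOMAN):
&#x1F469;&#x200D;&#x2764;&#xFE0F;&#x200D;&#x1F469;
This renders as 👩‍❤️‍👩 where the sequence is supported, and falls back to the separate characters (👩 ❤️ 👩) where it isn't.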
We’re told, “the default … should be gender-neutral, implementations may desire to provide … multiple representations of each of these with a variety of more-specific gender combinations.”
This is in fact what implementations are doing, and pretty aggressively. As we will see, with its Windows 10 Anniversary Update, Microsoft now allows for as many as 52,000 combinations of multi-person groupings. Is this a good thing?
It is certainly a laudable effort. Having said that, it's worth keeping in mind what it means from a standards perspective. Though Microsoft has greatly increased the universality of its emoji through its flexibly diverse emoji handling, that comes at the expense of compatibility with other platforms. Be aware that users of every other platform, including every version of Windows other than Windows 10 Anniversary Update, will not see many of these groupings as intended.
Microsoft is not the only implementor to do this, but they’ve taken it further than others.
That leaves the idea of rolling out proposed emoji before they are officially released. Everyone is getting in on this, and it is a new phenomenon. First, let’s take a brief moment to understand the situation.
Like most other standards bodies, the Unicode Consortium doesn’t wait for the release of one version of a standard to begin working on the next. As soon as the door closes on inclusion for one release, changes and new proposals are evaluated for the next.
Generally speaking, the Unicode Consortium invites outside parties to participate in selecting future emoji, the process for which is outlined in the document “Submitting Emoji Character Proposals147“. As that page describes, the process is somewhat drawn out, and proposals can be rejected for many reasons, but those that fare well are eventually added as “candidates”.
From the same page:
…proposals that are accepted as candidates are added to Emoji Candidates148, with placeholder code points…
So proposed Emoji are made public in some cases well before inclusion in the Unicode Emoji Standard. However the proposals carry this warning:
Candidates are tentative: they may be removed or their code point, glyph, or name changed. No code point values for candidates are final, until (and if) the candidates are included as characters in a version of Unicode. Do not deploy any of these.
That would seem pretty cut and dried. Promising proposals are added to a publicly available candidates list with the disclaimer that they should not be used until officially released. But rarely are these kinds of issues so simple for long. Seemingly more often than not, the exception is the rule, and by that measure, emoji are following the rules.
Consider the following, from one recent proposal:
Google wants to increase the representation of women in emoji and would like to propose that Unicode implementers do the same. Our proposal is to create a new set of emoji that represents a wide range of professions for women and men with a goal of highlighting the diversity of women's careers and empowering girls everywhere.
That proposal, well worth reading, was submitted in May 2016, a little over a month before the release of Unicode Version 9.0 with its 72 new emoji. It has led to a flurry of activity resulting in a Proposed Update to UTR #51150, and making way for Unicode Emoji Version 4.0 much more quickly than might have been expected. How quickly — right about now.
The original proposal led to another by the Unicode Emoji Subcommittee, “Gender Emoji ZWJ Sequences151“, published on 14 July 2016, which fast-tracked new, officially recognized sequences providing greater gender parity among existing emoji characters as well as new profession emoji.
From the proposal:
This document describes how vendors can support a set of both female and male versions of many emoji characters, including new profession emoji. Because these emoji use sequences of existing Unicode characters composed according to UTR#51: Unicode Emoji, vendors can begin design and implementation work now and can deploy before the end of 2016, rather than waiting for Unicode v10.0 to come out in June of 2017.
Unicode itself does not normally specify the gender for emoji characters: the emoji character is RUNNER, not MAN RUNNER; POLICE OFFICER not POLICEMAN. Even where the name may appear to be exclusively one gender, such as U+2603 SNOWMAN or U+1F482 GUARDSMAN the character can be treated as neutral regarding gender.
To get a greater sense of realism for these characters, however, vendors typically have picked the appearance of a particular gender to display. This has led to gender disparities in the emoji that people can use. There is also a lack of emoji representing professions and roles, and the few that are present (like POLICE OFFICER) do not provide for both genders; a vendor has to choose one or the other, but can’t represent both.
How big a change are we talking about? There are a total of 88 new sequences combining existing emoji in new ways to provide gender alternates for current characters, as well as professions, for which there are male and female representations. However, adding skin tone variants to these new alternate characters and professions means that, in total, the number of emoji has increased from 1,788 in Version 3.0 to 2,243 in the proposed Version 4.0. That's 455 new emoji in approximately two months. 🚀 (Rocket, U+1F680)
So why am I bothering to discuss unreleased, “beta” emoji? After all, the review period for the new proposed standard doesn't close until 24 October 2016 (coincidentally, the date this article is scheduled to be published). I'm covering all of this because implementations are already rolling out these changes. It's no longer possible to accurately describe the current level of emoji support on these platforms without mentioning the proposed Version 4.0. Now that we have covered all of the officially-official emoji through the current version of the Standard, and the unofficially-official emoji included in the post-current Standard, we can make sense of emoji support across popular platforms. 😵 (Dizzy Face, U+1F635)
Emoji OS Support
Emoji Support: Apple Platforms (macOS and iOS)
The current version of Apple’s Mac operating system, “macOS Sierra” (10.12) released on 20 September 2016, includes well over 100 fonts, and among them one named “Apple Color Emoji”. It’s this font that contains all of the symbols for the platform’s native emoji. The same font is used by the current version of Apple’s mobile OS, iOS 10.
Users of Apple’s OS’ have typically enjoyed very good emoji support, and the newest versions continue the trend. To begin with, iOS 10 and macOS Sierra support all emoji through Unicode Version 9.0. Apple’s OS release cycle is nicely timed as far as emoji are concerned. Unicode updates are happening in the Summer, and bring with them changes to Emoji. Apple is able to roll them out across all of their platforms in the Fall.
Beyond Unicode Emoji Version 3.0, macOS Sierra and iOS 10 support the gendered ZWJ sequences that are a key part of the proposed Version 4.0. However, the new professions and the sequences that add skin tone modifiers to existing multi-person groupings didn't make the update. As a bonus, Apple threw in the rainbow flag sequence from the Version 4.0 draft specification: 🏳️‍🌈 (White Flag, U+1F3F3 + emoji variation selector, U+FE0F + ZWJ, U+200D + Rainbow, U+1F308).
In total, Apple’s newest OS updates include 632 changes and additions. Some of these changes are minor and reflect nothing more than an evolution of design sensibilities of those involved at Apple. Others are more dramatic however, most notably ? (pistol, U+1F52B), which has been changed from a realistic looking weapon to a cartoonish sci-fi water gun.
Emoji Support: Microsoft Windows
Windows 8 included a limited set of black-and-white emoji with the “Segoe UI Symbol” font. This same font was eventually added to Windows 7, providing that version of the OS with its basic emoji symbols.
Windows 8.1 was the first Windows OS to support color emoji by default, shipping with the “Segoe UI Emoji” font, which provides Windows with its unique set of color emoji symbols.
Windows 10 continues to build on this increasingly good support, adding all Unicode Version 8.0 emoji, including skin tone modifiers156. However, Windows 10 did not include symbols for national flags 🇺🇸 (Flag of the United States of America, U+1F1FA, U+1F1F8) 🇩🇪 (Flag of Germany, U+1F1E9, U+1F1EA), which are displayed as two-character country-code identifiers instead.
Note: Each flag emoji is composed of a pair of characters drawn from the 26 “regional indicator symbols.” These combinations are referred to as an “emoji_flag_sequence”: it is the pair of code points together that produces a single flag emoji. For more information about flag symbols, refer to “Annex B: Flags157” in the document “Unicode Technical Report #51158.”
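For instance (an illustrative snippet), the two regional indicator symbols mentioned above, written as numeric references, combine into a single flag where supported:
<p>&#x1F1FA;&#x1F1F8;</p>
This displays as 🇺🇸 on platforms with flag support (and, on Windows 10, as the letters US instead).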
Windows 10 Anniversary Update159, released on 2 August 2016 (and available now via the Windows Update facility on your Windows 10 PC), brings with it a wealth of changes to emoji on Windows. The update (officially, version 1607, and the second major update to Windows 10) includes all of the new Unicode Version 9.0 emoji, but that's just the beginning.
The Microsoft Design Language Team embarked on Project Emoji, redesigning the emoji set from scratch in under a year. From early sketches to creating a new scripting method, the team knew only emoji. Illustrators, graphic designers, program managers, font technicians, production designers, and scripting gurus all worked with an impressive singular focus.
It’s a testament to the acknowledged significance of emoji that Microsoft would make this kind of an effort.
The update includes “over 1700 new glyphs, with a possible 52,000 combinations of diverse women, men, kids, babies, and families.” As we’ve already discussed, Unicode 9.0 adds only 72 new emoji. The fact that Microsoft added 1700 new symbols clearly demonstrates that diversity was a critical focus of the update. Support for diversity begins with the same skin tone modifiers available under previous editions of Windows 10, now extended to more emoji. But the most ambitious effort is expansive support for family and other multi-person groups.
From the same “Project Emoji” blog post:
So if you’re a single mother with three kids, you’ll be able to create that image. If your husband is dark-toned and you’re light-toned and your two kids are a blend of both, you can apply all of those modifiers to create your own personal family emoji, one that’s sincerely representative. It extends to the couple emoji, where you can join a woman, a heart, and a woman — both with unique skin tones — for a more inclusive emoji. Because they’re created dynamically, there are tens of thousands of permutations. And no other platform supports that today.
Emoji in Windows 10 Anniversary Update gives us a good sense of the scale of expanding skin tone modifiers across flexible multi-person groupings. However, this effort seems to be largely independent of Unicode Emoji Version 4.0: missing are all of the gendered and profession emoji that are the hallmark of the proposed version.
The first thing you might notice are the bold outlines surrounding each of the new emoji. But the changes go much further than that. On the minor end of the scale, virtually all emoji have a more geometric look, which contributes to an overall stronger, more readable appearance. Beyond this, many emoji are drastically different, going so far as to be essentially new interpretations. Generally speaking, the new emoji are less generic, willowy and neutral, which is to say that they are more iconic, bold and dynamic. They’ve gone from looking like designs you might see on the wallpaper in a nursery to what you’d expect of the signage in a modern building. Some examples will give you a better sense of what I’m trying to describe:
For more information on all things emoji in Windows 10 Anniversary Update, I highly recommend the Windows Experience Blog162 post “Project Emoji: The Complete Redesign163161,” written by Danielle McClune (4 August 2016). It does a good job of covering the changes introduced with the Anniversary Update and also provides a bit of an insider’s perspective. For those of you not particularly interested in Windows, the article offers some useful general information, including a brief illustrated discussion of the emoji skin tone modifiers and the Fitzpatrick scale.
If you still can’t get enough of Windows 10 emoji goodness, the next place to turn to is Emojipedia’s blog post “Avalanche of New Emojis Arrive on Windows164,” which nicely displays an overview of the changes to emoji in Anniversary Update. Beyond that, the Emojipedia page dedicated to the Anniversary Update165 lists all of the emoji for this version of Windows, with an option to narrow the list to just the new symbols.
If all of this wasn’t enough, Microsoft has added a new emoji keyboard to improve the experience of working with its updated emoji.
It’s fair to say Microsoft has really upped its emoji game with the Windows 10 Anniversary Update. Are there any notable gaps or other issues related to Windows’ emoji support other than the absent Version 4.0 symbols? I’ll mention two…
First, the flag emoji are still missing. You will continue to see country-code identifiers, as in earlier versions of Windows 10.
Second is something of an oddity that’s specific to Windows 10 Anniversary Update (and incompatible with every other platform): Ninja Cat.
Ninja Cat is a character that started out as something of an unofficial mascot for Windows 10 among developers, making its first appearance in a presentation about the OS in mid-2014 (before its release). Apparently, Ninja Cat has proven to be popular and enduring enough over the past couple of years to justify some desktop wallpapers and an animated GIF166, coinciding with the Anniversary Update, and — you know what’s coming — there are ninja cat emoji as well.
I’m making a point of mentioning Ninja Cat because it is another example of the use of zero-width joiners. All Ninja Cat emoji (yes, there’s more than one) are sequences of the 🐱 (Cat Face, U+1F431) emoji in combination with other standard emoji, connected with the ZWJ character and resulting in new unofficial symbols.
Note: If skin tone modifiers and flexible multi-person groups are among the more important uses of ZWJ, then non-standardized mascots and gimmicks have to be among the worst, as fun as they may be.
The basic Ninja Cat is a combination of 🐱 (Cat Face, U+1F431) and 👤 (Bust in Silhouette, U+1F464). The other combinations pair the Cat Face with different standard emoji to produce the variants shown below:
Here is what those symbols look like in Windows 10 Anniversary Update (the only place you will see them):
Figure 6: Ninja Cat emoji in Windows 10 Anniversary Update (Ninja Cat, Astro Cat, Dino Cat, Hacker Cat, Hipster Cat, Stunt Cat)
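As a quick illustration of what such a sequence looks like in code, here is a minimal JavaScript sketch of the basic Ninja Cat combination described above; on any platform other than Windows 10 Anniversary Update, the two base emoji will simply appear side by side:

```javascript
// The basic Ninja Cat is Cat Face + zero-width joiner + Bust in Silhouette.
// Platforms without a glyph for the sequence fall back to showing the two
// base emoji next to each other, which is the defined ZWJ fallback behavior.
const CAT_FACE = '\u{1F431}';            // 🐱 Cat Face
const BUST_IN_SILHOUETTE = '\u{1F464}';  // 👤 Bust in Silhouette
const ZWJ = '\u200D';                    // zero-width joiner

const ninjaCat = CAT_FACE + ZWJ + BUST_IN_SILHOUETTE;
console.log(ninjaCat); // a single Ninja Cat glyph only on Windows 10 Anniversary Update
```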
Emoji Support: Linux
If you’re a Linux user, you’ll know that these kinds of things tend to be distribution-dependent. The Unicode underpinnings are there. What may be missing is a font providing the symbols for displaying the emoji. Adding an emoji font is a relatively simple matter, and good options are available, including some that we’ll see shortly.
Emoji Support: Android
Android has supported emoji since Jelly Bean (4.1) and color emoji since KitKat (4.4). The latest version of Android, the recently released “Nougat” (version 7.1), includes substantial changes. Like Microsoft, Google has made a big push toward ensuring its emoji support is second to none. The prior version of Android, “Marshmallow” (6.0.1), supported all of the official emoji through Unicode Version 8, with the notable exception of skin tone modifiers.
“Noto” and “Roboto” are the standard font families on recent versions of Android and Chrome. Unlike “Apple Color Emoji” and Windows’ “Segoe UI Emoji,” Noto167 and Roboto168 are freely available for download, including “Noto Color Emoji,” the primary emoji font on Android.
For starters, Google is beginning to move away from the amorphous blobs that Android users are familiar with. However, Android is keeping the same gumdrops for generic faces:
Figure 8: Generic face emoji in Android Nougat (version 7.0): Slightly Smiling Face, U+1F642 (look familiar? Yep, it’s unchanged from Android 6.0.1); Smiling Face With Open Mouth, U+1F603; Face With Stuck-Out Tongue, U+1F61B; Drooling Face, U+1F924 (new with Unicode 9.0); Rolling on the Floor Laughing, U+1F923 (new with Unicode 9.0)
Those last two, both introduced in Unicode 9.0, are proof that Google is not entirely abandoning its gumdrops. But emoji are changing where it matters most, with human-looking depictions for less generic faces, actions and groups, complete with skin tone modifiers (conspicuously absent from Android to date).
Here are a few examples to give you some sense of how much the situation has improved:
Figure 9: Comparison of Android emoji from 6.0.1 to 7.0 Dev Preview 2
Man With Turban: Android 6.0.1: Man With Turban, U+1F473. Android N Dev Preview 2: Man With Turban, U+1F473 (U+200D U+1F3FB, U+200D U+1F3FC, U+200D U+1F3FD, U+200D U+1F3FE, U+200D U+1F3FF).
Girl: Android 6.0.1: Girl, U+1F467. Android N Dev Preview 2: Girl, U+1F467 (U+200D U+1F3FB, U+200D U+1F3FC, U+200D U+1F3FD, U+200D U+1F3FE, U+200D U+1F3FF).
Happy Person Raising One Hand: Android 6.0.1: Happy Person Raising One Hand, U+1F64B. Android N Dev Preview 2: Happy Person Raising One Hand, U+1F64B (U+200D U+1F3FB, U+200D U+1F3FC, U+200D U+1F3FD, U+200D U+1F3FE, U+200D U+1F3FF).
Man and Woman Holding Hands: Android 6.0.1: Man and Woman Holding Hands, U+1F46B. Android N Dev Preview 2: Man and Woman Holding Hands, U+1F46B.
Couple With Heart: Android 6.0.1: Woman, Woman (U+1F469 U+200D U+2764 U+FE0F U+200D U+1F469); Man, Man (U+1F468 U+200D U+2764 U+FE0F U+200D U+1F468); Woman, Man (U+1F469 U+200D U+2764 U+FE0F U+200D U+1F468). Android N Dev Preview 2: the same three sequences.
Nougat includes all of the new emoji introduced with Unicode Version 9169. However, despite the proposal originally being Google’s, and despite Google’s continued close involvement with the beta standard, Version 4.0 emoji, including gendered emoji and professions, didn’t make the original Nougat (7.0) update.
However, just in the past few days, on 20 October 2016, Google released Android 7.1 with support for those Emoji Version 4.0 sequences. The 7.1 update is the first version of Android to include the new profession and gendered emoji, as well as an expansion of multi-person groupings to include single-parent families. For good measure, Android 7.1 also includes 🏳️‍🌈 (Rainbow flag sequence — White Flag, U+1F3F3 + Emoji variation selector, U+FE0F + ZWJ, U+200D + Rainbow, U+1F308).
Once again, Emojipedia is a good resource for platform-specific emoji information. A page dedicated to Android Nougat 7.1170 shows all of the emoji for the most recent version of the OS. You can find pages for earlier releases as well.
Emoji On The Web
The person viewing your website or application must have emoji support to see the intended symbols. We’ve specified the character set and encoding, and written emoji into our documents and UIs. The user agent renders the page and all of the characters on it, and that’s all emoji are, of course: characters.
You’ll recognize that this is no more than the usual arrangement. However, emoji are newer than the other elements we’re used to working with, and what’s more, they occupy an odd space somewhere between text and images. Also, many of us begin with a shaky understanding of character sets and encodings. Altogether, it leads to confusion and consternation 😕 (Confused Face, U+1F615) in the way that only something similar to, but not exactly the same as, what we already know well can. Though this should come as no surprise, it’s critically important, and for that reason I’m mentioning it here at the end of the article.
A code point without a glyph is just a code point. U+2D53 (Tifinagh Letter Yu) is clearly different from U+1F32E (Taco). But, ultimately, we care only about the corresponding symbols ⵓ (U+2D53) and 🌮 (U+1F32E). As usual, we’re at the mercy of our audience.
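If you want a rough idea of whether your audience will actually see a glyph, one common trick is to draw the character to a canvas and check whether any pixels appear. The helper below is a sketch of that heuristic, not a library API, and it can be fooled by missing-glyph boxes, so treat its answer as a hint:

```javascript
// Heuristic sketch: render the character to an offscreen canvas and see
// whether anything opaque was drawn. A platform with no glyph at all often
// paints nothing; a replacement "tofu" box will still paint pixels, which
// this simple check cannot distinguish.
function probablyRendersEmoji(emoji) {
  const canvas = document.createElement('canvas');
  canvas.width = canvas.height = 20;
  const ctx = canvas.getContext('2d');
  ctx.textBaseline = 'top';
  ctx.font = '16px sans-serif';
  ctx.fillText(emoji, 0, 0);
  const pixels = ctx.getImageData(0, 0, 20, 20).data;
  for (let i = 3; i < pixels.length; i += 4) { // every fourth value is alpha
    if (pixels[i] !== 0) return true;
  }
  return false;
}

if (!probablyRendersEmoji('\u{1F32E}')) {
  // Fall back to an image-based set such as Emoji One or Twemoji (see below).
}
```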
What about polyfills, shims and fallbacks? Is there anything like that we can use to improve the situation? I’m happy to say that the answer is yes.
Emoji One
The Emoji One171 project is aimed at creating a collection of up-to-date, high-quality and universal emoji resources. The largest component of the project by far is a comprehensive set of emoji that can be freely used by anyone for any purpose, both non-commercial and commercial (with attribution).
Figure 10: Emoji One set sampler
In the words of the people responsible for the project, Emoji One is:
The web’s first and only complete open source emoji set. It is 100% free and super easy to integrate.
In December 2015, Emoji One announced the upcoming release of an updated and redesigned set of all of their emoji, branded “The 2016 Collection” (Q1 2016 Version 2.1.0, January 29, 2016172). More importantly, they committed to quarterly design updates173, a promise they have made good on to date.
On May 30th, Emoji One officially released its 2nd quarterly update (Q2 2016 update, Version 2.2.0174), with a total of 624 “design upgrades,” encompassing changes to existing symbols and entirely new (to the set) emoji.
Shortly thereafter, on June 21st, and coinciding with the official release of Unicode 9.0, Emoji One released version 2.2.4175, updating its set to include the 72 new Unicode emoji along with associated skin tone sequences.
Version 2.2.4 is the current version, though Emoji One has recently begun promoting the next major update, version 3.0, teasing “The revolution begins this December.”
Currently, the Emoji One set comprises 1834 emoji, organized in nine categories, all of which can be browsed on the Emoji One website in the Emoji Gallery176 and searched via The Demo177.
emojicopy provides a responsive interface for searching emoji and copying selected symbols, with optional text, to paste into other apps.
emoji.codes offers a number of useful features, including a “Cheat Sheet180” for browsing and searching Emoji One short codes, which are a method of inputting emoji in applications and websites with built-in support for Emoji One.
emoji.codes’ “Family Tree181” is a table comparing emoji symbols across as many as 10 platforms (Symbola, Emoji One, Apple, Google, Windows, Twitter, Mozilla, LG, Samsung and Facebook). It’s essentially a prettier version of the same sort of table on the Unicode Consortium’s “Full Emoji Data182117” page previously mentioned.
The Family Tree is certainly much cleaner and provides a beneficial search interface. Just keep in mind that, when in doubt, the Unicode Consortium is the authoritative source.
Lastly, an Emoji Rolodex183 lists links to quite a few resources: information, libraries, scripts, plugins, standalone apps and more. There are valuable tools here and some purely fun links, too.
All Emoji One art files are available for personal and commercial use under a Creative Commons license (CC BY 4.0184). Emoji are available as PNG, SVG and font files (for Android, macOS and iOS) and can be downloaded as a complete set from the Emoji One developer page185 or individually from the gallery186.
In addition to the emoji art itself, the developer resources include a toolkit of conversion scripts, style sheets, sprite files and more.
The project also offers an extension for Google Chrome, called, fittingly, “EmojiOne for Chrome187” which is described as “four game-changers in one,” providing:
a panel in Chrome for inputting emoji
emoji character search
set-once toggling between skin tones
well, the last “game changer” is Emoji One itself (which is cheating, but I think we can let it slide)
Emoji One is, without a doubt, an important (assuming you think emoji are important), well-executed, well-maintained and ambitious project. If all of that wasn’t enough, Emoji One has paid up to become a voting member of the Unicode Consortium.
Let’s hope the project has a bright future.
We can include Emoji One in our websites and applications and know that not only will visitors see the emoji we intend, but also that the emoji we use will maintain a consistent, high-quality appearance across devices, OSes and browsers. Come on, that’s pretty great!
To get started, you’ll want to read through the information in Emoji One’s GitHub repository188, or download the complete developer toolkit189. Presently, the toolkit is around 60 MB and includes all emoji symbols as PNG files in three sizes (64, 128 and 512 pixels), and SVG vector images as well.
If you just want to read through the instructions for getting started, the Emoji One readme on GitHub190 is probably a better option.
You will learn that Emoji One has partnered with jsDelivr191, a “free super-fast CDN for developers and webmasters” (their words, not mine), to make it easy to install on any JavaScript-capable website. It’s a matter of adding script and link elements that reference the CDN-hosted JavaScript and CSS files in the usual way.
There are also options for installing via npm, Bower, Composer and Meteor package managers. If any of those are relevant to you, then you should have no trouble at all.
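Once the CDN-hosted script and stylesheet are in place, usage typically boils down to a single call on the emojione global described in the project’s readme. The sketch below assumes that global and an element with the (hypothetical) ID comment; check the readme for the current function names and CDN paths:

```javascript
// Convert native emoji characters and :shortnames: in a string into <img>
// tags that point at the Emoji One artwork, then inject the result.
const raw = 'Tacos tonight? \u{1F32E} :smile:';
const html = emojione.toImage(raw); // markup with <img> elements for each emoji
document.querySelector('#comment').innerHTML = html;
```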
Twemoji
I would be remiss if I didn’t mention that Emoji One is not your only option for open source emoji sets. In November 2014, Twitter announced on its blog that it was “Open Sourcing Twitter emoji for Everyone192.”
Figure 11: Twemoji set sampler
From the blog post:
The project ships with the simple twemoji.js library that can be easily embedded in your project. We strongly recommend looking at the preview.html source code193 to understand some basic usage patterns and how to take advantage of the library, which is hosted by our friends at MaxCDN194.…
For more advanced uses, the twemoji library has one main method exposed: parse. You can parse a simple string, that should be sanitized, which will replace emoji characters with their respective images.
Twemoji195 is hosted on GitHub and includes, among other resources, a set of emoji images as PNG files at 16 × 16, 36 × 36, 72 × 72 sizes, and SVG vectors as well.
After the initial release, version 2 added emoji from Unicode Version 8 (as well as other changes). Subsequently, version 2.1 added symbols for all of the Unicode 9 emoji, for a total of over 1,830 symbols. The current version of Twemoji is 2.2, which includes support for the gendered and profession emoji from the Unicode Emoji Version 4.0 draft, bringing the total number of symbols to 2,477. A complete download of the current version of the project is an over 400 MB ZIP file. You can read about the latest updates in the project’s ReadMe file196.
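In practice, the usage pattern mentioned in the blog post is very small. Assuming the twemoji global has been loaded from the CDN, a minimal sketch looks like this:

```javascript
// parse() accepts either a DOM node or a string and swaps native emoji for
// <img> elements that point at the Twemoji artwork.
twemoji.parse(document.body); // replace emoji across the whole page

const snippet = twemoji.parse('I like \u{1F355}'); // string in, HTML string out
// `snippet` now contains an <img> tag in place of the pizza emoji.
```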
Conclusion
⛅ ? ? ? ? ?? ?
Are we done? Could it be? Do we know everything there is to know about emoji? 😫 (Tired Face, U+1F62B)
It might be a stretch to say we know absolutely everything there is to know, but we know what we need to know to find and understand everything there is to know. 😤 (Face With Look of Triumph, U+1F624)
These days, I’ve been pondering what purpose we as developers have in our world. I’m not able to provide you with an answer here, but instead want to encourage you to think about it, too. Do you have an opinion on this? Are we just pleasing other people’s demands? Or are we in charge of advising the people who demand solutions from us if we think they’re wrong? A challenging question, and the answer will be different for everyone here. If you want to let me know your thoughts, I’d be happy to hear them.
Bear with me, this week’s list is a large one. Too many good resources popped up, explaining technical and design concepts, how to use new JavaScript methods to write smarter applications, how to use CSS Grid Layouts, and how to take care of your happiness.
The Safari Technology Preview 171 adds support for Custom Elements v1, rel=noopener, and stylesheet loading via a link element inside Shadow DOM subtrees. Furthermore, preloading behavior was changed — it now matches iOS where resources like images get less priority when loading.
Already available in Nightly Builds, the feature to emulate throttled network connections2 in Firefox’s Developer Tools will soon be added to the stable release, too.
How do you design a simple, usable registration form for a tax reform? @jelumalai explains the process from a designer’s perspective6, diving deep into the challenge of asking for a lot of information while maintaining a clear workflow for the user.
How can you master the balancing act between asking for a lot of information and keeping a form simple and usable for the user? @jelumalai shares his lessons learned8. (Image credit: @jelumalai9)
Stefan Judis explains when to use and when not to use aria-selected12. Applying it to the current active navigation item, for example, isn’t correct, but applying it to the current active tab in a tablist, on the other hand, would be.
JavaScript’s requestIdleCallback method14 will soon come to Firefox 52. If you don’t want to wait, good news: It can already be tested in Nightly Builds and is also supported in Chrome where it adds great value to scheduling tasks in cooperation with the browser environment.
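For anyone who hasn’t used it yet, here is a minimal sketch of the requestIdleCallback pattern: the browser calls you back when it has spare time, and the deadline object tells you how much of that idle period remains, so a queue of non-urgent work can be processed in slices.

```javascript
const tasks = []; // non-urgent work items (functions), filled elsewhere

function processTasks(deadline) {
  // Do as much deferred work as fits into the current idle period.
  while (deadline.timeRemaining() > 0 && tasks.length > 0) {
    const task = tasks.shift();
    task();
  }
  if (tasks.length > 0) {
    requestIdleCallback(processTasks, { timeout: 2000 }); // don't wait forever
  }
}

requestIdleCallback(processTasks, { timeout: 2000 });
```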
Oliver Williams shares what he learned about CSS Grid Layout16. Once you realize that it’s designed to be used alongside Flexbox and not as a replacement, you’ll slowly grasp how powerful the new technology really is.
JP de Vries shares the challenges of unfolding critical CSS17 and why most websites are better off without it.
Mike Monteiro gave an impactful talk at this year’s Beyond Tellerrand conference in Berlin. “Let Us Now Praise Ordinary People21” opens our eyes to how we can change the world and why we need to get over-hyped startups that only claim to change something to actually do meaningful work. If I can make you watch one thing this week, take 45 minutes, sit back and listen to Mike Monteiro.
selfcare.tech22 wants to help developers take better care of their health. It shows some great methods for solving common problems every one of us will face at some point.
These solar panels are certainly a cool invention23: They can pull drinking water straight from the air, up to 5 liters per day per panel. A very nice way to source water when you don’t have traditional water resources.
Every developer knows that just because a website looks and works as intended on the latest iPhone doesn’t mean it will work across every mobile device. In this article, we’ll highlight some of the many open device labs out there — fantastic and helpful initiatives by the community that deserve support and attention.
Open device labs (ODLs) are a response to the myriad of operating systems, browsers and devices that litter our technical landscape. They offer developers a (usually) free space to go to test their web systems, websites and apps on a range of software and hardware. This premise forms the core of the OpenDeviceLab.com1 initiative, which is a community movement to help people locate the right ODL for the job and to drum up further support for these testing centers.
Testing on a variety of real devices is crucial for ensuring the intended user experience, especially looking over the boundaries of each one’s own cultural and/or economic area. But, by far not everybody has access to the necessary test bed. Think of people producing content/apps for the first world, not taking less developed regions or those with a completely different mobile device landscape into account — the business will fail on one side, since UX or even plain accessibility will suck on the other.
He adds:
ODLs even provide ways to get stakeholders and customers to live the urge for necessary budget decisions on quality assurance, accessibility and I18n and L10n.
There are now more than 150 ODLs in 35+ countries, but ODLs may have reached the peak of their popularity, given the growing attention on in-house device labs (IDLs).
While new ODLs are opening at a constant rate, a handful of ODLs have closed down due to a lack of demand. Penn State University, for example, recently closed its ODL, despite it being the only one in the region, and replaced it with a virtual-only offering due to a lack of funding. This virtual testing environment now mainly uses BrowserStack and similar virtual machine (VM) platforms.
Greg O’Toole, a lecturer at Penn State University, says:
We have had fairly consistent interest and inquiry levels from groups and individual software developers from around the Penn State community and the PA community at large looking for ways to get involved and/or utilize the offerings of the ODL@PSU over the past 24 months. I am optimistic still. We have had several offers from these community members to donate Fitbit devices, old laptops, etc. which sort of orbit around the target set of devices we are looking for.
IDLs do seem to be a popular option. Many of the labs we spoke with originated on a company’s property, eventually opening up as ODLs to support the development work of others.
Christian Schaefer, freelance front-end developer from the ODL initiative, says:
I don’t think ODLs are important in today’s technical landscape, IDLs are. ODLs served the purpose of steering the people’s interest towards the topic of real device testing. And ODLs may also be blueprints for how to set up such a thing yourself, inhouse. But reality shows that ODLs are often only used by their owners/administrators.
ODLs not being used by external people has to do with convenience and also with access to development infrastructure, which is mostly only available within a company. That’s why companies started assembling their own device labs — sometimes opening it up to others, thereby creating a new ODL.
Andre adds:
In general, ODLs located at companies see way lower public usage as compared to ODLs in public places, like co-working spaces, libraries, or places that have been built solely for running the ODL. Privacy and competition are to be taken into account, this is where some companies posted excellent upfront statements and provided physical space separate from their regular offices — others didn’t.
ODLs are particularly popular in the UK and Germany, with other such labs popping up around the world. Whether you want to call them ODLs or IDLs, the need to test on physical devices seems unlikely to lose momentum.
Here are some of the best ODLs for the developer community:
This stylish lab is the only one in the Augsburg region of Germany, and you can get a free coffee if you work here. It supports its users with a balance of professional knowledge and consulting in fields such as design and UX (as a member of the German Berufsverband User Experience und Usability (UPA) group) and development (with Magento certification and Drupal Association membership).
The lab also works with the University of Applied Sciences to support its students in classes and projects.
Address: Alter Teichweg 25a, 22081 Hamburg, Germany
All devices in this test lab run with purchased OS versions and offer a virtual environment for each major Internet Explorer (IE) version and Windows operating system. You can use one of the lab’s computers equipped with a Ghost Lab license and test pages simultaneously on multiple devices. Plus, there’s free coffee and water, as well as a friendly team to chat about the latest trends and operating systems.
Address: LAUNCH/CO Hamburg, Neuer Kamp 3, 20359 Hamburg, Germany
This lab maintains a wide range of setups and devices, and it is open for ad-hoc testing sessions; that is, people can usually drop in the same day to test, without booking ahead.
It is run by fellow developers, and someone is always there to help if a question arises or advice is needed. Devices are also lent out on request.
This long-standing lab has worked in the mobile space for more than 15 years. Its work has spanned many technologies, including WAP and J2ME, CHTML (aka i-mode) and XHTML/MP, the W3C Mobile Web Initiative, the app store (thanks to the iPhone and Android devices), HTML5 and responsive web design, and it’s adapting to a (potential) future with progressive web apps.
This lab doesn’t just focus on mobile devices, though — it also houses some specialist devices, such as Oculus Rift and Google Glass.
Address: c/o BNT.DE, Löwengasse 27E, 60385 Frankfurt am Main, Germany
With continued five-star ratings24, this lab is a popular choice with its users and engages in regular meetups. It has 32 devices and offers paid testing sessions for larger companies. This busy lab also requests users to make an appointment before turning up.
This is one of the few labs in the world that uses a charging model to keep the facility up to date with OS and hardware releases, with all the money made going back into keeping the lab open. This ODL is also exploring other tasks it can help the community with, like adding usability testing facilities for guerrilla user testing, session recording and remote observation.
The lab offers free access for open-source and community projects, ad-hoc rates from £25 per hour for small businesses, and monthly subscriptions that help it maintain recurring revenue and stay open.
The team also built a private test lab for the Scottish government’s web team and is currently working on some interesting analytics tools for identifying the right test coverage for your audience.
Address: The Old Treasury, 7a Kings Road, Southsea, PO5 4DJ, UK
This lab may have been open only a few months, but its dedicated private space is already home to several key devices. All devices are laid out and labelled on shelves, with a work area where you can use browser sync on your laptop to experiment with the layout.
The ODL space can also be booked for usability testing31. It houses dedicated Windows and Mac laptops that stream user website tests to a screen outside the room for observation and analysis. This facility allows developers and designers to ensure that every project delivers a seamless and engaging user experience.
Address: 12 Basepoint, Andersons Road, Southampton, Hampshire, SO14 5FE, UK
With more than 25 devices, ranging from an iPhone 6 Plus to an old HTC Desire rocking Android 2.2, this ODL allows the local community to test on real devices, old and new, to make sure their projects work for as many people as possible.
This ODL is a big fan of progressive enhancement and encourages people to build resilient and inclusive websites. The relaxed and collaborative environment in the office also means that team members are on hand for advice.
Adam Tomat from the lab says, “We started the ODL because the local community is important to us. We recognised how difficult it is to test responsive websites without access to real devices and also how unfeasible it is for every individual or organisation to acquire a large enough selection. It’s been a great way for us to connect with new people, start conversations and help to build on the existing digital community here in Southampton.”
Address: We Are Base, 65 Seamoor Rd, Bournemouth, BH4 9AE, UK
Believed to be the largest collection of open devices in the world, this ODL has more than 440 devices available to use for web and app testing. With everything from brand new devices to old niche devices available, you can book for 1 to 2 hours per day, and only 10 devices can be booked out at a time, for security reasons.
Aside from being the largest in the world, this lab is right in the heart of one of the UK’s fastest-growing tech clusters in Bournemouth and Poole and provides an essential resource for local agencies, product teams, freelancers and students.
Address: Harella House, 90-98 Goswell Road, London, EC1V 7DF, UK
This popular London-based device lab has four hot seats in its studio, so you get access to a lot of kit, and you can feel like part of the studio as you see the lab’s practitioners at work.
Former managers have given each of the lab’s devices a name under their own naming convention. The current manager names devices after movie characters. Its latest phone is “Johnny the iPhone 7 Plus,” which you might recognize from the Short Circuit films. If you’d like to use Johnny or any of his 52 companions, write to device.lab@foolproof.co.uk41.
Cover-Up has an impressive lineup of devices, all of which the lab originally used to test its own products. (View large version44)
Address: Unit 1, George Street, Bridgend, CF31 3TS, UK
Cover-Up’s core business is creating innovative accessories for a selection of phones, tablets and laptops. As a result, for each new product range that it creates, it always buys the device, so that it can test firsthand. So, the company has a huge selection of gadgets that lay around for many years gathering dust — until its ODL was born.
The ODL now houses dozens of devices and is open 9:00 to 17:00, Monday to Friday. Just drop the team an email to book a spot at this ODL serving South Wales.
This lab holds a variety of screen sizes, from televisions down to watches, and also has devices of varying quality, from old monitors to HD screens. A selection of input types are available, such as trackpads, mice, styluses and screenreaders, installed to test for issues beyond just screen layout, which is a nice touch.
This balance of old and new devices is important, as Paul Burgess from the Indylab explains: “We shouldn’t assume that everyone is using the latest technology. Many people are using devices that haven’t been updated in a long time, so we include some older operating systems and pre-smartphone devices, as well as an iPod running a vintage iOS 2.2.1.”
Studio Massis is a coworking studio created by and for creative freelancers located in Gent, near the Dampoort railway station.
The space holds a range of mobile devices, and testing can be done free of charge on the premises. You can take devices to test at your own office, but there is a fee and warranty request for this service.
This new lab is just starting out, so not everything is set up yet. This ODL not only provides mobile devices for app and web development, but also has a green key, motion-capture suits and a broad range of media devices, including VR headsets and cool stuff like Leap Motion.
The lab depends on Erasmus University College Brussels, so priority goes to students during weekdays. The lab is reserved every Wednesday, serving visitors from 13:00 to 21:00.
This ODL contains a lot of obscure devices with non-mainstream operating systems such as Firefox OS, BlackBerry Playbook and even a few webOS, Ubuntu Phone and Tizen devices. Access to the device lab is free, and coffee is even provided.
The lab recently received a great number of donated devices that have not yet been unpacked, tested or listed on the website. So, more to come!
In this closed-off device lab, you can focus on testing and debugging your application. (View large version59)
Address: Incentro Rotterdam (building Koffie, Unit 2429, 4th Floor), Van Nelleweg 1, 3044BC Rotterdam, The Netherlands
This lab is located in Van Nellefabriek, a UNESCO World Heritage Site. It houses a broad variety of devices, from the more common iPhone devices to old BlackBerrys, Google Glass and Oculus Rift. Visitors can also work in a closed-off space to focus on debugging and testing applications.
The lab’s goal is to help people build better software. It is used for the internal projects of digital service company Incentro60, but anyone is welcome to use this space.
Users can come here to test devices, or the lab can test a solution on your behalf (for a fee). With 18 devices and more added all the time, you can see your application on all devices simultaneously.
MODL is situated in a “glass box” in the middle of the inUse office, with a privacy screen available. (View large version69)
Address: C/O inUse, Gustav Adolfs Torg 10B, 4th floor, Malmö, Sweden
This lab is hosted in a cozy office, populated mostly by experienced digital designers — but it is run by a couple of passionate front-end developers. There’s always free coffee, and you can also chat with others in the same industry while you take a break from testing. Despite being in an open office, you can pull the drapes on the glass box for some privacy while testing.
This ODL only allows for websites to be tested, not native apps, due to administrative difficulties with developer units.
Address: Rue de Sébeillon 9B, 1004 Lausanne, Switzerland
With free coffee, this ODL is the first one in the French-speaking part of Switzerland. The team takes the time to help users set up and use their devices. The lab is now also available in a portable format (a suitcase), which you can take with you (free of charge) to test from your place of work.
The local multimedia school (Eracom) often comes to look at its projects on old devices, and user-testing sessions are also organized in the lab twice a month.
Address: 3635 Perkins Ave, Cleveland, Ohio 44114, USA
The thunder::tech User Experience Lab provides an array of devices that cover a wide range of operating systems and screen sizes, for testing most real-world scenarios. It also houses some unusual items, including a 3D printer for rapid prototyping, wearables such as Google Glass, Samsung Gear VR, Apple Watch, Android Wear watch and interactive devices such as Leap Motion.
Over the last year, this ODL has hosted a Cleveland UXPA event, in which the lab discussed its process for testing clients’ websites. Madison Letizia, social-media team manager at thunder::tech, says, “That’s a big step from where our lab was started many years ago as it originally used only employees’ personal smartphones to fill the need of website testing for client projects. It quickly evolved into the MTL (Mobile Testing Lab) with dedicated devices and today it is officially a User Experience Lab, providing countless opportunities to experience tech, express creativity and learn something new.”
Address: 505 S. Rosa Road, Suite 225, Madison, WI 53719, USA
This completely community-supported lab has been built up by various members over time. It’s free to any member in the coworking space who wants to use it.
The ODL is expanding soon, so anyone considering using the space should come and check it out.
Aside from being the only Open Device Lab in Montreal, this lab is not just a library of devices: It’s run by a friendly team of professional software testers who will happily provide advice and share their knowledge with visitors over coffee.
This ODL is located in the heart of downtown Montreal, so it’s easily accessible by public transportation. It is open on the usual business days between 8:00 and 18:00. If you can give the team a heads-up before visiting, this gives them a chance to prepare a comfortable workspace for the day.
Address: 208 SW 1st Ave, Suite 240, Portland, OR 97204, USA
This friendly community device lab provides a quiet space where people in Portland can come to test their work on a variety of devices.
Of the dozens of mobile devices, cofounder Jason Grigsby says, “We’d love for more Portlanders to come use it. We enjoy sharing our space and learning what people are working on.”
Here is Monkop at work, crawling this application on several devices at once, automatically, in the device lab. (View large version87)
Address: Jose Ellauri 1126, Montevideo, Uruguay
Abstracta is a software company focused on QE, so it knows how to help ODL users. The organization has partnered with Monkop88 to have many bots automatically run test flows in parallel, while measuring application performance on real devices.
A few extra shelves similar to the one in this picture are to be donated to other ODLs in Bangkok. (View large version91)
Address: 20 Satri Witthaya 2 Soi 17, Lat Phrao, 10230 Bangkok, Thailand
Thailand’s first ODL was originally started by a team for its own testing needs, before they decided to share their resources with the web and development community in Bangkok.
According to the lab, traffic in Bangkok can be a nightmare, so more ODLs are required in other parts of town to combat this issue. BK Device Lab has built a few extra shelves to store the devices, which can be donated to these other labs based around Bangkok to help them get started.
The devices in this lab are chosen with its local (South African) users in mind and are updated every now and then based on research and statistics, which means the selection leans towards feature phones and lower-end smartphones.
The devices are purposefully kept relatively low in number but cover a wide range of variables (input method, form factor, quality and condition of hardware, device capabilities). The high-level categories that a device belongs to (for example, old Android, cheap smartphone) are more important than the device itself: The lab is aiming for support across a wide range of these variables. Its focus is to help people build “future-friendly things.”
A couple of devices in the lab might seem odd to include (like the Amazon Kindle Touch, the Sony PlayStation Vita and the Nintendo 3DS XL) because these aren’t particularly popular devices for web browsing in South Africa, but they are a good way to test your website on an old device and browser.
Address: Level 1, 346 William Street Perth, Western Australia
As Western Australia’s first (and only) ODL, this space includes a free workspace with a library full of web-related books and a rotating collection of records.
The space is open to all digital professionals, game developers, makers, freelancers and students alike. You can also book for intimate industry and networking events.
Address: UXBERT Usability Lab, Al Nemer Center 1, Olaya Street, Riyadh 11372, Saudi Arabia
This state-of-the-art ODL is the only one in Saudi Arabia and one of only three in the Middle East. It’s located in the only commercial usability testing lab in Saudi Arabia and is equipped with the latest technology, including an Oculus Rift, eye-tracking tools and brain EEG control devices.
Large corporate, government and other businesses pay for Arabic UX and usability-testing consulting services based on a project’s requirements, but use of the ODL is completely free for unfunded startups and entrepreneurs. All the lab asks is that you book in advance to ensure that the lab and devices are available.
As well as testing for quality or bugs, visitors can also take advantage of its usability testing lab (depending on equipment availability) to test with real users and record those tests, but you need to bring your own users.
Nadeem Bakhsh, a senior UX, usability and ecommerce consultant at the UXBERT Usability Lab, says, “Saudi’s tech startup community is young, but it’s full of ambitious people with innovative ideas. We started this device and usability lab to help support these startups to turn their ideas into reality. Using the lab, they not only get access to devices for testing, but can bounce ideas and get insights from our team of Arab UX, usability and ecommerce experts to help develop their ideas into wireframes, user journeys and prototypes.”
This lab has a great mix of new technology and some really ancient devices. The oldest in its collection is a Nokia N85, and it’s got an Oculus Rift and Leap Motion Controller as well. Run by Foolproof104, an experience design agency, it does a lot of usability testing alongside other research and also houses a testing and observation lab.
Located at Seah Street in Singapore, a short walk from City Hall MRT station, this ODL is open during usual business hours.
Have more open device labs you want to share? Please let us know in the comments. Thank you!
I have been drawing desktop wallpapers for Smashing Magazine’s monthly collections for over a year now, and every time it’s a very fun and challenging mission. In this article, I would like to share how I approach all stages of the process and provide general techniques for creating vector illustrations in Adobe Illustrator. Hopefully, you will find these techniques useful.
While referring to a particular drawing — the illustration for the “Understand Yourself” desktop wallpaper, which was featured in May’s wallpaper collection this year1 — I’ll also highlight key takeaways from my experience as an illustrator and designer.
The “Understand Yourself” illustration, featured in the May 2016 desktop wallpapers collection (View large version3)
The idea for “Understand Yourself” derived from my curiosity about the future relationship between robots and human beings (artificial intelligence has become a thing recently). How would a robot go about understanding human emotions? By doing the same things that people do, of course. Hence, a pensive robot staring at the sunset.
Let’s take a closer look at it and see how it was made.
Although vector artwork is scalable without compromising quality, you have to decide on the aspect ratio. I prefer 4:3 and 16:9 because these are fairly common standards for most screens. Also, bear in mind that, despite the perfect scalability of vector graphics, working with curve anchors and colors in small areas is sometimes onerous.
Composition
Rules are made to be broken. But we should know which ones are supposed to get broken, right? One that I really like is the rule of thirds4. It is easy and it works well. The key idea is that the main objects should be located at the intersections of the grid lines. If you want to learn more about composition, I can’t recommend anything better than the book Framed Ink5.
Depth
To make an illustration look more natural, create depth. You can achieve this by placing some objects closer to the viewer and some farther.
Framing
Don’t fret that some of your artwork will get trimmed; account for it while drawing. The rule of thumb is to think of your illustration as a clipping from a much bigger picture. While drawing, don’t try to squeeze all objects into the canvas; let them hang out. This is even more relevant if you are planning to turn your artwork into a wallpaper with multiple versions.
Detail
Adding detail is a great way to make your illustration more attractive. The more thorough the work is, the more one will want to explore it, and the more truthful it will look. On the other hand, adding detail might (and most of the time does) take a lot more time than creating a decent illustration that you are satisfied with.
Perfection
Don’t be afraid to make mistakes. There is always someone (future you, as well) who is better at composition and coloring. Your drawing won’t be flawless, and over time you will notice a lot of things you didn’t pay attention to or just missed. At the same time, the only way to learn something is to make mistakes. That’s how it works.
Since the dawn of the human race, storytelling has been one of the most exciting forms of communication. It teaches, it captivates, it makes us think.
An illustration might look static, but it doesn’t have to be. Creating a story within a still image is easier than you might think. All you have to do is to imagine that your artwork is a middle frame of a movie. Technically, a movie is a sequence of images played at high speed, so that the eye doesn’t notice the change of frames.
Think about what happened before the frame you are working on and what might happen after. Let’s think about what’s happening at the moment as well. What led to our frame? What are the causes and consequences?
The art of storytelling is not about what you tell the viewer, but rather how people perceive what you are telling. A good story sources its power from people’s emotions and memories; it resonates with the viewer.
In my opinion, the most important part of the idea-generation process is doodling. This fun and simple activity creates plenty of ideas fast. Of course, you have to sift through them later, but quantity is what matters at this point. All you have to do is start drawing random things. The beauty of doodling is that you don’t have to think hard — your subconscious does all the work. Almost all of my illustrations, logo concepts and comic strips have evolved from doodles.
Try not to tie your artwork to a specific topic if it’s not absolutely necessary. Strong illustration works on its own. In our case, while the concept is connected to the nice weather in May and the beginning of a new season, it could easily be deprived of that context without losing its meaning.
Observe the world around you; get inspired. Think outside the box, because every new idea is a combination of old ones. Jack Foster’s How to Get Ideas9 is a wonderful read on the topic.
A paper sketch will capture your initial idea (materialize it, if you will). A loose paper sketch will help you to evaluate proportions and composition as well. I prefer not to trace my sketches later but to draw, peeking at the sketch from time to time. If you do not stick to the sketch 100%, you will have more freedom to experiment with details and to see where the illustration takes you.
The background is extremely important because it sets the mood and affects the colors you will pick later for the hero and the surroundings.
Open Adobe Illustrator, and create a new document by hitting Cmd/Ctrl + N. Type 2560px in the “Width” field and 1440px in the “Height” field. Choose RGB color mode, because we are creating an illustration that will be used on digital screens only. (Note: Shift + O activates the artboard editing mode, so you can change the dimensions of the artboard if you want to alter them or in case you typed them in wrong.)
Hit M to select the Rectangle tool, and click anywhere on the artboard. Type in the same width and height values as your artboard’s (2560px and 1440px).
The safest way to align our rectangle is to use the “Align to Artboard” option from the dropdown menu in the top control bar. Alternatively, you can move the rectangle around and wait for the live guides to help you align it.
Let’s use a gradient as the background to represent the sky. Select the Gradient tool from the toolbar (if the Gradient tool is missing in the toolbar, go to the top menu and select Window → Gradient). By default, a gradient is white to black.
If you would like your colors to look more real, go ahead and search for some reference pictures of your subject. Get some insight into perspective, lighting, composition, depth and everything else. Pick the colors from the image, and play around with them until you are satisfied with the result.
You can adjust the colors by selecting the respective peg located right under the gradient preview in the Gradient panel. I prefer HSB color mode because it enables me to control the hue, saturation and brightness more predictably than RGB or CMYK do.
Select “Radial” as the gradient type from the “Type” dropdown list located at the top of the Gradient panel.
Gradient shape values can be modified by hitting G. Stretch, resize and move the gradient around until the desired effect is reached. In our illustration, I want the sunlight to go from the bottom-right corner all the way to the top-left in a circular manner.
I recommend hitting Cmd/Ctrl + 2 as soon as you are fine with the values, so that we lock the background graphic and don’t accidentally select it later. Plus, we can select multiple objects on the artboard much more easily by clicking and dragging the cursor over these objects.
Once the background is in place, we can move on to adding more objects to the scene. Using an iterative approach, we’ll start by “blocking” colors of our shapes. Then, we’ll gradually add more and more detail.
Tip: Save versions of your artwork. It will help you to track your progress and even to revert if you got stuck at some point.
In Adobe Illustrator, you can choose between several drawing tools. I recommend drawing with the Pencil tool (N) and modifying paths with the Pen tool (P). The Pen tool is more precise and enables you to add, delete and convert anchor points on a path.
I always start by drawing shapes and filling them with a plain color. This technique is called blocking32. Blocking colors within shapes gives you a rough idea of how the illustration will look color-wise. Also, with the primary color in place, it’s much easier to determine which colors to use for highlights and shadows.
Let’s add some mountain peaks to our scene. As we know from sourcing reference images, objects that are closer to us are darker. I am going to make them not black, though, but dark-blue instead. We’ll save black for objects that are even closer.
If you hold Shift while drawing with the Pencil tool (N), the line will be perfectly straight. Let’s draw a cloud and see how a straight line is helpful sometimes. I will use BD5886 for the cloud. Playing around with an object’s opacity is all right, but I prefer to adjust the color manually. (In most cases, lowering the opacity is not enough because real objects tend to reflect colors around them.)
I am always tempted to clone already drawn shapes, but this is a bad habit. Try to avoid copying and pasting as much as you can. Copying the same type of object (another cloud for instance) seems like a quick win. But you won’t save a lot of time, and viewers will spot the clone and smirk. We don’t need that.
In some cases, though, cloning is acceptable. Drawing each leaf independently to create foliage, for example, can be painful. Instead, create as many leaves as you can, and then resize, flip or rotate copies to make them look different.
Copying and pasting elements of an illustration is acceptable in some cases, but handle with care. There’s nothing wrong with cloning foliage, for example. (View large version37)
For the robot’s body, let’s pick cold colors. But keep in mind that the overall atmosphere is warm, so we’ll mix cold gray with a bit of red.
Hit Cmd/Ctrl + G to group multiple layers belonging to the same object (like a head or foot). It will be easier to rotate, resize or change their position later if required. Send groups to the back or bring them to the front using Cmd/Ctrl + [ or Cmd/Ctrl + ], respectively.
As I mentioned, the Pencil tool is a great simulation of a real pencil (especially if you are using a graphic pen tablet). And the Pen tool comes in handy for tweaking curves.
Another helpful tool is the Smooth tool, which enables you to smoothen curves.
Another nice thing about the Pencil tool (N) is that you can easily modify an existing path simply by drawing on top of the curve. This feature is very helpful for closing an open path, smoothening corners and adding areas without having to draw an additional shape.
To make objects more realistic, let’s add shadows (darker areas), where the light barely reaches the surface. Obviously, some tree bark and some leaves on the branch will need to be darker than the rest of the foliage.
Did you notice that the drawn path automatically becomes smoother? You can adjust the smoothness by double-clicking on the Pencil tool. This will show a dialog containing “Fidelity” and some other options.
Quick tip: Almost every tool has dedicated options. Just try double-clicking on it. (View large version48)
Add more shadows along the branch shape, the robot’s body and the foliage, using the same drawing technique.
Highlights (i.e. areas where light reflects off the surface of an object) are just as important as shadows. Let’s add some bright patches along the curve of the tree branch.
Draw a shape along the branch. Hit Cmd/Ctrl + C to copy the branch shape and Cmd/Ctrl + Shift + V to paste the shape in the same place on top of all other objects. Now, select both shapes (the branch and the highlight), go to the Pathfinder panel, and hit “Unite.” “Unite” merges two shapes into one where they overlap. Thus, we’ll have the exact same curve where the highlight follows the branch shape. Holding Shift while using the color picker allows you to pick a single color from a gradient. If you are not holding Shift, the shape will be filled with a gradient of the source object.
We’ll use the same technique for every highlight or shadow that “touches” the border of the shape beneath it. This effect can be achieved using masks; however, masks keep both shapes intact. Selecting masked shapes later might be difficult if you have multiple shapes with the same mask (in our case, the branch is a mask, and the highlights and shadows are masked shapes).
It’s time to add details such as a backpack, a green light on the robot’s head and a reflection on his face. We can also fine-tune some shapes and lines, remove leftovers, and fix inconsistencies. As soon as you like the look of your illustration, stop.
Sometimes I’ll put some grain on top of an illustration by making a layer with monochrome noise in Adobe Photoshop. It adds a little texture to the illustration and smoothens the gradients. It’s especially useful when gradients have noticeable step wedges.
To import your vector art into Adobe Photoshop, select all of your graphics by hitting Cmd/Ctrl + A, and drag and drop them into Photoshop. Embed them as a “Smart Object,” which will enable you to scale the vector artwork up and down without losing quality.
Create a new layer with Cmd/Ctrl + Shift + N, and fill it with white. Then, go to Filter → Noise → Add Noise in the main menu. Set the noise level to 100%, and hit “OK.” In the Layers panel, set the “Blending mode” to “Overlay” and the “Opacity” to your liking (I usually go with 3 to 5%).
Adding a noise layer gives the illustration some extra texture and smoothens the gradients. Left: our illustration without the noise layer. Right: after adding the noise layer. (Larger version57)
Now we can correct the colors. Hit Cmd/Ctrl + M in Photoshop to open the dialog for curves. Select the “Red,” “Green” or “Blue” channel from the dropdown and play around with the curves.
While most artists, designers and illustrators are eager to develop their own distinctive style, always think of the purpose, the objective, the “why.” Style is merely a means of achieving your objective. Style sells, no doubt — clients will recognize you by your style. At the same time, it will limit the viewer’s expectations of you as an artist, designer or illustrator.
While picking colors from a real image is sometimes reasonable, it depends greatly on the style you’re going for. Black and white with acid color spots here and there? Pale and subdued? Each style demands its own approach to color. What works for a book cover (catchy and provocative) might not work for a wallpaper (imagine staring at extremely bright colors every day).
I always run into the dilemma of which is more important: the idea or the execution of the idea. Your illustration might contain an interesting idea, yet if it’s poorly drawn, it won’t be compelling enough. On the contrary, if your artwork is superb and rich in detail but lacks an idea, is it doing its job? Is it moving people?
Nothing is perfect except pizza, so don’t get stuck in pursuit of perfection. Let the dust settle, and return to your artwork a day or two after finishing it. But don’t leave it unseen for too long. Would you prefer to get it done and move on, or meticulously improve it pixel by pixel?
Illustration is a great way to boost many of your skills and to experiment with drawing techniques, colors and composition. These skills will make you a better specialist in any creative field (such as animation and web design, to name a couple). Just remember that a solid illustration requires patience and is rarely done quickly. The good news is that it pays off.
Buttons are a common element of interaction design. While they may seem like a very simple UI element, they are still one of the most important ones to create.
In today’s article, we’ll be covering the essential items you need to know in order to create effective controls that improve user experience. If you’d like to take a go at prototyping and wireframing your own designs a bit more differently, you can download and test Adobe XD1 for free.
How do users understand an element is a button? The answer is simple. Visual cues help people determine clickability. It’s important to use proper visual signifiers on clickable elements to make them look like buttons.
A safe bet is to make buttons square or square with rounded corners, depending on the style of the site or app. Rectangular shaped buttons were introduced into the digital world a long time ago, and users are very familiar with them.
2 Windows 95 at first run: notice that every button, including the famous ‘Start’ button, has a rectangular shape. Image credit: Wikipedia3. (Large preview4)
You can, of course, be more creative and use other shapes (circles, triangles, or even custom shapes), but keep in mind that unique ideas can prove to be a bit riskier. You need to ensure that people can easily identify each varying shape as a button.
5 Here, the Floating Action Button (FAB), which represents the primary action in an Android application, is shaped like a circular icon.
No matter what shape you choose, be sure to maintain consistency throughout your interface controls, so the user will be able to identify and recognize all UI elements as buttons.
Why is consistency so important? Well, because users remember the details, whether consciously or not. For example, users will associate a particular element’s shape as the “button.” Therefore, being consistent won’t only contribute to a great-looking design, but it’ll also provide a more familiar experience for users.
The picture below illustrates this point perfectly. Using three different shapes in one part of your app (e.g. system toolbar) is not only confusing to the user, it’s incorrect design practice.
6 There’s nothing wrong with creativity and experimentation, but keep the design coherent. (Large preview7)
Shadows are valuable cues, telling users which UI element they are looking at. Drop shadows make an element stand out against the background and make it easily identifiable as a tappable or clickable element, because objects that appear raised look like they could be pressed down (tapped or clicked). Even with flat buttons (almost flat, to be exact), there is still a place for these subtle cues.
8 If a button casts a subtle shadow, users tend to understand that the element is interactive.
Users avoid interface elements without a clear meaning. Thus, each button in your UI should have a proper label or icon9. It’s a good idea to base this selection on the principle of least astonishment: if a necessary button has a label or icon with a high astonishment factor, it may be necessary to change the label or icon.
The label on actionable interface elements, such as a button, should always tie back to what it will do for the user. Users will feel more comfortable when they understand what action a button does. Vague labels like ‘Submit,’ or abstract labels like in the example below, don’t provide enough information about the action.
10 Avoid designing interface elements that make people wonder what they do. Image credit: uxmatters11
The action button should affirm what that task is, so that users know exactly what happens when they click that button. It’s important to indicate what a button does using action verbs. For example, if a user is signing up for an account, a button that says, ‘Create Account,’ tells them what the outcome will be after pressing the button. It’s clear and specific to the task. Such explicit labels serve as just-in-time help, giving users confidence in selecting the correct action.
12 A button’s label should say exactly what will happen when the user presses it. Image credit: Amazon13
If you’re designing a native app, you should follow platform GUI guidelines when choosing a proper location and order for buttons. Why? Because applying consistent design that follows user expectations saves people time.
You should consider how large a button is in relation to the other elements on the page. At the same time, you need to ensure the buttons you design are large enough for people to interact with.
18 Smaller touch targets are harder for users to tap than larger ones. Image credit: Apple19. (Large preview20)
When a tap is used as a primary input method for your app or site, you can rely on the MIT Touch Lab21 study to choose a proper size for your buttons. This study found that the average finger pad is 10–14mm across and the average fingertip is 8–10mm, making 10mm x 10mm a good minimum touch target size. When a mouse and keyboard are the primary input methods, button measurements can be slightly reduced to accommodate dense UIs.
22 10mm x 10mm is a good minimum touch target size. Image credit: uxmag23
You should consider the size of button elements, as well as the padding between clickable elements, as padding helps separate the controls and gives your user interface enough breathing space.
This requirement isn’t about how the button initially looks to the user; it’s about the interaction experience with the UI element. A button usually isn’t a one-state object. It has multiple states, and providing visual feedback to users to indicate the current state should be a top priority. This helpful illustration from Material Design27 makes it clear how to convey different button states:
28 Make sure you consider the hover, tap, and active states of the button. Image credit: Material Design29.
30 This animation shows the button’s behavior in action. Image credit: Behance31. (Large preview32)
Visually Highlight The Most Important Buttons Link
Ensure the design puts emphasis on the primary or most prominent action. Use color and contrast to keep user focus on the action, and place the button in prominent locations where users are most likely to notice it.
Important buttons (such as CTAs) are meant to direct users toward taking the action you want them to take. To create an effective call-to-action button, one that grabs the user’s attention and entices them to click, use colors with high contrast against the background and place the button in the user’s path.
If we look at Gmail’s UI33, the interface is very simple and almost monochromatic, with the exception of the ‘Send’ button. As soon as users finish writing a message, they immediately notice this nice blue button.
34 Adding one color to a grayscale UI draws the eye simply and effectively.
The same rule works for websites. If you take a look at the Behance35 example below, the first thing that will catch your attention is the “Sign Up” call-to-action button. The color and the position, in this case, are more important than the text.
36 The most important call-to-action button stands out against the background. (Large preview37)
Visual Distinctions for Primary and Secondary Buttons Link
You can find another example of grabbing the user’s attention with buttons in forms and dialogues. When choosing between primary and secondary actions, visual distinctions are a useful method for helping people make solid choices:
The primary positive action associated with a button needs to carry a stronger visual weight. It should be the visually dominant button.
Secondary actions (e.g. options like ‘Cancel’ or ‘Go Back’) should have the weakest visual weight, because reducing the visual prominence of secondary actions minimizes the risk of potential errors and further directs people toward a successful outcome.
38 Notice how the primary action is stronger in color and contrast. Image credit: Apple39. (Large preview40)
While every design is unique, every design also has a set of items in common. That’s where having a good design checklist comes in. To ensure your button design is right for your users, you need to ask a few questions:
Are users identifying your element as a button? Think about how the design communicates affordance. Make a button look like a button (use size, shape, drop-shadows and color for that purpose).
Does a button’s label provide a clear message as to what will happen after a click? It’s often better to name a button, explaining what it does, than to use a generic label, (like “OK”).
Can your user easily find the button? Where on the page you place the button is just as important as its shape, color and the label on it. Consider the user’s path through the page and put buttons where users can easily find them or expect them to be.
If you have two or more buttons in your view (e.g. a dialog box), does the button with the primary action have the strongest visual weight? Make the distinction between the two options clear by using a different visual weight for each button.
When looking at the visual distinction, the ‘Submit’ button should be visually dominant over the other button. Image credit: Lukew
Buttons are a vital element in creating a smooth user experience, so it’s worth paying attention to the best essential practices for them. A quick recap:
Make buttons look like buttons.
Label buttons with what they do for users.
Put buttons where users can find them or expect them to be.
Make it easy for the user to interact with each button.
Make the most important button clearly identifiable.
When you design your own buttons, start with the ones that matter most, and keep in mind that button design is always about recognition and clarity.
This article is part of the UX design series sponsored by Adobe. The newly introduced Experience Design app47 is made for a fast and fluid UX design process, creating interactive navigation prototypes, as well as testing and sharing them — all in one place.
You can check out more inspiring projects created with Adobe XD on Behance48, and also visit the Adobe XD blog49 to stay updated and informed. Adobe XD is being updated with new features frequently, and since it’s in public Beta, you can download and test it for free50.
Editor’s note: Please note that this article is quite lengthy, and contains dozens of CodePen embeds for an interactive view. The page might take a little while to load, so please be patient.
Layout on the web is hard. The reason it is so hard is that the layout methods we’ve relied on ever since using CSS for layout became possible were not really designed for complex layout. While we were able to achieve quite a lot in a fixed-width world with hacks such as faux columns, these methods fell apart with responsive design. Thankfully, we have hope, in the form of flexbox — which many readers will already be using — CSS grid layout and the box alignment module.
In this article, I’m going to explain how these fit together, and you’ll discover that by understanding flexbox you are very close to understanding much of grid layout.
CSS grid layout is currently behind a flag or available in the developer and nightly builds of Firefox, Safari, Chrome and Opera. Everything you see here works if you use a developer or nightly build, or enable the flag in a mainstream browser that has flagged support. I am trying to keep an up-to-date list of support for grid layouts1.
Both grid and flexbox are new values for the display property. To make an element a flex container, we use display: flex; to make it a grid container, we use display: grid.
As soon as we do so, the immediate children of our flex or grid container become flex or grid items. Those immediate children take on the initial values of flex or grid items.
In the first example, we have three elements in a wrapper element set to display: flex. That’s all we need to do to start using flexbox.
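As a minimal sketch of that first example (the .container class name is my own assumption, not taken from the original demo):

.container {
  display: flex; /* the three direct children become flex items, laid out in a row */
}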
Unless we add the following properties with different values, the initial values for the flex container are:
flex-direction: row
flex-wrap: nowrap
align-items: stretch
justify-content: flex-start
The initial values for our flex items are:
flex-grow: 0
flex-shrink: 1
flex-basis: auto
We’ll look at how these properties and values work later in this article. For now, all you need to do is set display: flex on a parent, and flexbox will begin to work.
To lay out items on a grid, we use display: grid. In order that we can see the grid’s behavior, this example has five cards to lay out.
Adding display: grid won’t make a dramatic change; however, our child items are all now grid items. They have fallen into a single-column track grid, displaying one below the other, the grid creating implicit row tracks to hold each item.
We can take our grid a step further and make it more grid-like by creating some columns. We use the grid-template-columns property to do this.
In this next example, I’ve created three equal-width column tracks using a new unit that has been created for grid. The fr unit is a fraction unit signifying the fraction of available space this column should take up. You can see how our grid items have immediately laid themselves out on the grid, one in each created cell of our explicitly defined columns. The grid is still creating implicit rows; as we fill up the available cells created by our columns, new rows are created to hold more items.
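A rough sketch of the grid just described, assuming the container has a class of .wrapper (the class name is mine):

.wrapper {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr; /* three equal-width column tracks; rows are still created implicitly */
}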
Once again, we have some default behavior in evidence. We haven’t positioned any of these grid items, but they are placing themselves onto our grid, one per cell of the grid. The initial values of the grid container are:
grid-auto-flow: row
grid-auto-rows: auto
align-items: stretch
justify-items: stretch
grid-gap: 0
These initial values mean that our grid items are placed one into each cell of the grid, working across the rows. Because we have a three-column track grid, after filling the grid cell of the third column, a new row is created for the rest of the items. This row is auto-sized, so will expand to fit the content. Items stretch in both directions, horizontal and vertical, filling the grid area.
Box Alignment
In both of these simple examples, we are already seeing values defined in the box alignment module in use. “Box Alignment Module Level 3” essentially takes all of the alignment and space distribution defined in flexbox and makes it available to other modules. So, if you already use flexbox, then you are already using box alignment.
Let’s look at how box alignment works in flexbox and grid, and the problems that it helps us solve.
Equal-Height Columns
Something that was very easy to create with old-school table-based layouts, yet fiendishly difficult using positioning and floats, is equal-height columns. In the floated example below, our cards contain unequal amounts of content. We have no way of indicating to the other cards that they should visually take on the same height as the first card — they have no relationship to each other.
As soon as we set the display property to grid or flex on a parent, we give the children a relationship to each other. That relationship enables the box-alignment properties to work, making equal-height columns simple.
In the flex example below, our items have unequal amounts of content. While the background on each lines up, it doesn’t sit behind the content as it would for floated elements. Because these items are displayed in a row, the property that controls this behavior is align-items. Creating equal-height columns requires that the value be stretch — the initial value for this property.
We see the same with grid layouts. Below is the simplest of grid layouts, two columns with a sidebar and main content. I’m using those fraction units again; the sidebar has 1 fraction of the available space, and the main content 3 fractions. The background color on the sidebar runs to the bottom of the content. Once again, the default value of align-items is stretch.
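A sketch of that sidebar-and-content grid, with assumed class names. Nothing extra is needed for the equal heights, because align-items defaults to stretch:

.wrapper {
  display: grid;
  grid-template-columns: 1fr 3fr; /* the sidebar gets one fraction of the space, the main content three */
}
/* the two children fall into the two column tracks via auto-placement and stretch to the same height */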
We’ve seen how the default value of align-items for both grid and flexbox is stretch.
For flexbox, when we use align-items, we are aligning them inside the flex container, on the cross axis. The main axis is the one defined by the flex-direction property. In this first example, the main axis is the row; we are then stretching the items on the cross axis to the height of the flex container. The height of the flex container is, in this case, determined by the item with the most content.
We can use other values, instead of the default stretch:
flex-start
flex-end
baseline
stretch
To control the alignment on the main axis, use the justify-content property. The default value is flex-start, which is why our items are all aligned against the left margin. We could instead use any of the following values:
flex-start
flex-end
center
space-around
space-between
The space-between and space-around keywords are especially interesting. With space-between, the space left over after the flex items have been displayed is distributed evenly between the items.
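As a quick sketch (class name assumed), space-between on the main axis looks like this:

.container {
  display: flex;
  justify-content: space-between; /* leftover space is distributed evenly between the flex items */
}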
We can display flex items as a column rather than a row. If we change the value of flex-direction to column, then the main axis becomes the column, and the cross axis is along the row — align-items is still stretch by default, and so stretches the items across row-wise.
We can use justify-content, too, including space-between and space-around. The container needs to have enough height for you to see each in action, though!
In a grid layout, the behavior is similar, except that we are aligning items within the defined grid area. In flexbox, we talk about the main and cross axis; with grids, we use the terms “block” or “column axis” to describe the axis defining our columns, and “inline” or “row axis” to describe the axis defining our rows, as defined in the specification43.
We can align content inside a grid area using the properties and values described in the box alignment specification.
A grid area is one or more grid cells. In the example below, we have a four-column and four-row track grid. The tracks are separated by a grid gap of 10 pixels, and I have created three grid areas using line-based positioning. We’ll look at this positioning properly later in this guide, but the value before the / is the line that the content starts on, and the value after is the line it ends on.
The dotted border is on a background image, to help us see the defined areas. So, in the first example, each area uses the defaults of stretch for both align-items on the column axis and justify-items on the row axis. This means that the content stretches to completely fill the defined area.
In the second example, I have changed the value of align-items on the grid container to center. We can also change this value on an individual grid item using the align-self property. In this case, I have set all items to center, but item two to stretch.
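A sketch along the lines of that second example, with assumed class names. The container centers items on the column axis, and item two opts back into stretching:

.wrapper {
  display: grid;
  grid-template-columns: repeat(4, 1fr);
  grid-template-rows: repeat(4, 100px);
  grid-gap: 10px;
  align-items: center;    /* content is centered on the column (block) axis */
  justify-items: stretch; /* and stretched on the row (inline) axis */
}
.item2 {
  grid-column: 1 / 3; /* a grid area defined with line-based positioning */
  grid-row: 2 / 4;
  align-self: stretch; /* overrides the container’s align-items value for this one item */
}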
In all of the examples above, I have aligned the content of the grid areas, the areas defined by the start and end grid lines.
We can also align the entire grid inside the container, if our grid tracks are sized so that they take up less space than the container that has been set to display: grid. In this case, we use the align-content and justify-content properties, as with flexbox.
In the first example, we see the default alignment of a grid where the columns and rows have been defined in absolute units and take up less space than the fixed-sized wrapper allows for. The default values for both are start.
Just as with flexbox, we can use space-around and space-between. This might cause some behavior that we don’t want as the grid gaps essentially become wider. However, as you can see from the image below and in the third example in the CodePen, we get the same space between or around the tracks as we see with flexbox.
The fixed-sized tracks will gain additional space if they span more than one track. Elements two and four in our example are wider, and element three is taller, because they are given the extra space assigned to the gaps they span over.
We can completely center the grid by setting both values to center, as shown in the last example.
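A sketch of centering the whole grid inside a fixed-sized container (the sizes here are illustrative, not taken from the demo):

.wrapper {
  display: grid;
  width: 500px;
  height: 400px;
  grid-template-columns: 100px 100px 100px; /* the tracks take up less space than the wrapper */
  grid-template-rows: 100px 100px;
  grid-gap: 10px;
  align-content: center;   /* center the whole grid on the column axis */
  justify-content: center; /* and on the row axis */
}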
We have very nice alignment abilities in both flexbox and grid, and they work in a generally consistent way. We can align individual items and groups of items in a way that is responsive and prevents overlap — something the web has lacked until now!
In the last section, we looked at alignment. The box-alignment properties as used in grid and flexbox layouts are one area where we see how these specifications have emerged in a world where responsive design is just how things are done. Values such as space-between, space-around and stretch allow for responsiveness, distributing content equally among or between items.
There is more, however. Responsive design is often about maintaining proportion. When we calculate columns for a responsive design using the target ÷ context approach introduced in Ethan Marcotte’s original article on fluid grids64, we maintain the proportions of the original absolute-width design. Flexbox and grid layouts give us far simpler ways to deal with proportions in our designs.
Flexbox gives us a content-out approach to flexibility. We see this when we use a keyword value of space-between to space our items evenly. First, the amount of space taken up by our items is calculated, and then the remaining space in the container is divided up and used evenly to space out the items. We can get more control of content distribution by way of properties that we apply to the flex items themselves:
flex-grow
flex-shrink
flex-basis
These three properties are more usually described by the shorthand flex property. If we add flex: 1 1 300px to an item, we are stating that flex-grow should be 1 so that items can grow, flex-shrink should be 1 so that items can shrink, and the flex-basis should be 300 pixels. Applying this to our cards layout gives us the example below.
Our flex-basis here is 300 pixels, and we have three cards in a row. If the flex container is wider than 900 pixels, then the remaining space is divided into three and distributed between the items equally. This is because we have set flex-grow to 1 so that our items can grow from the flex-basis. We have also set flex-shrink to 1, which means that, where we don’t have space for three 300-pixel columns, space will be removed equally.
If we want these items to grow in different proportions, then we can change the flex-grow value on one or more items. If we would like the first item to get three times the available space distributed to it, we would set flex-grow to 3.
The available space is distributed after the amount needed for flex-basis has been taken into account. This is why our first item is not three times the size of our other items, but instead gets a share of three parts of the remaining space. You will see a bigger change by setting the value for flex-basis to 0, in which case we wouldn’t have a starting value to remove from the overall container. Then, the entire width of the container could be distributed in proportion to our items.
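Put together, the cards example might be sketched like this (the .card class name is assumed):

.card {
  flex: 1 1 300px; /* flex-grow: 1, flex-shrink: 1, flex-basis: 300px */
}
.card:first-child {
  flex-grow: 3; /* three shares of whatever space is left after each flex-basis is accounted for */
  /* for fully proportional sizing, use flex: 3 1 0 so the whole width is distributed instead */
}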
A very useful tool to help you understand these values is Flexbox Tester74. Pop the different values into the tester, and it calculates the actual sizes at which your items will end up, and explains why they end up at that size.
If you use auto as your flex-basis value, it will use any size set on the flex item as the flex-basis value. If there is no size, then it defaults to be the same as the value of content, which is the content’s width. Using auto is, therefore, very useful for reusable components that might need to have a set size on an item. You can use auto and be sure that if the item needs to be around a size defined on it, flexbox will respect it.
In the next example, I have set the flex-basis on all cards to auto. I then gave the first card a width of 350 pixels. So, the flex-basis of that first card is now 350 pixels, which is used to work out how to distribute space. The other two cards have a flex-basis based on their content’s width.
If we go back to our original flex: 1 1 300px, add more items to our example and set flex-wrap: wrap on the container, the items will wrap in order to maintain as near as possible the flex-basis value. If we have five images and three fit onto one row, then the next two will wrap onto a new row. As the items are allowed to grow, they both grow equally, and so we get two equal-sized items on the bottom row and three in the row above.
The question then asked is often, “How can I get the items on the bottom row to line up with the ones on the top, leaving a gap at the end?” The answer is that you don’t, not with flexbox. For that kind of behavior you need a grid layout.
Keeping Things in Proportion With Grid Layout
Grid layouts, as we have already seen, have a concept of creating column and row tracks into which items can be positioned. When we create a flexible grid layout, we set the proportions when defining the tracks on the grid container — rather than on the items, as with flexbox. We encountered the fr unit when we created our grids earlier. This unit works in a similar way to flex-grow when you have a flex-basis of 0. It assigns a fraction of the available space in the grid container.
In this code example, the first column track has been given 2fr, the other two 1fr. So, we divide the space into four and assign two parts to the first track and one part each to the remaining two.
Mixing absolute units and fr units is valid. In this next example, we have a 2fr track, a 1fr track and a 300-pixel track. First, the absolute width is taken away, and then the remaining space is divided into three, with two parts assigned to track 1 and one part to track 2.
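The two track listings just described might be written like so (the wrapper class names are my own):

.wrapper {
  display: grid;
  grid-template-columns: 2fr 1fr 1fr; /* the space is divided into four parts: 2 + 1 + 1 */
}
.wrapper-mixed {
  display: grid;
  grid-template-columns: 2fr 1fr 300px; /* the 300px is taken away first, then the rest is split 2:1 */
}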
What you can also see from this example is that our items fit into the defined tracks — they don’t distribute across the row, as they do in the flexbox example. This is because, with grid layouts, we are creating a two-dimensional layout, then putting items into it. With flexbox, we get our content and work out how much will fit in a single dimension in a row or column, treating additional rows or columns as entirely new flex containers.
What would be nice, however, is to still have a way to create as many columns of a certain size as will fit into the container. We can do this with grid and the repeat syntax.
In the next example, I will use the repeat syntax to create as many 200-pixel columns as will fit in our container. I am using the repeat syntax for the track listing, with a keyword value of auto-fill and then the size that I want the repeated tracks to be.
(At the time of writing, this was not implemented in Chrome, but works in Firefox Developer Edition.)
We can go a step further than that and combine fraction units and an absolute width to tell the grid to create as many 200-pixel tracks as will fit in the container and to distribute the remainder equally.
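Both repeat patterns could be sketched like this (class names assumed):

.wrapper {
  display: grid;
  grid-template-columns: repeat(auto-fill, 200px); /* as many 200-pixel tracks as will fit */
}
.wrapper-flexible {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(200px, 1fr)); /* fit 200-pixel tracks, then share out the remainder */
}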
In this way, we get the benefits of a two-dimensional layout but still have flexible quantities of tracks — all without using any media queries. What we also see here is the grid and flexbox specifications diverging. Where flexbox ends with distributing items in one dimension, grid is just getting started.
A Separation of Source Order and Visual Display
With flexbox, we can’t do a lot in terms of positioning our flex items. We can choose the direction in which they flow, by setting flex-direction to row, row-reverse or column, column-reverse, and we can set an order, which controls the visual order in which the items display.
With grid layouts, we get to properly position child items onto the grid we have defined. In most of the examples above, we have been relying on grid auto-placement, the rules that define how items we have not positioned are laid out. In the example below, I am using line-based positioning to position the items on the grid.
The grid-column and grid-row properties are a shorthand for grid-column-start, grid-row-start, grid-column-end and grid-row-end. The value before the / is the line that the content starts on, while the value after is the line it ends on.
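For instance, a hypothetical item positioned by line number might look like this:

.item {
  grid-column: 1 / 3; /* start at column line 1, end at column line 3, spanning two column tracks */
  grid-row: 2 / 4;    /* start at row line 2, end at row line 4 */
}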
You can also name your lines. This happens when you create your grid on the grid container. Name the lines in brackets, and then position the items as before but using the names instead of the line index.
You can have multiple lines with the same name, and then target them by line name and index.
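A sketch of both approaches, using made-up line names:

.wrapper {
  display: grid;
  /* line names go in square brackets within the track listing */
  grid-template-columns: [main-start] 1fr [content-start] 1fr [content-end] 1fr [main-end];
}
.item {
  grid-column: main-start / content-end; /* position by line name instead of line number */
}
.wrapper-repeating {
  display: grid;
  grid-template-columns: repeat(3, [col] 1fr); /* three lines that all share the name "col" */
}
.item-indexed {
  grid-column: col 2 / col 3; /* from the second line named "col" to the third */
}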
You can use the span keyword, spanning a number of lines or, for example, spanning to the third line named col. This type of positioning is useful for creating components that sit in various places in the layout. In the example below, I want some elements to span six column tracks and others to span three. I am using auto-placement to lay out the items, but when the grid encounters an item with a class of wide, the start value will be auto and the end value will be span 2; so, it will start on the line it would normally start on based on the auto-placement rules, but span two lines.
Using auto-placement with some rules in this way will likely leave some gaps in our grid as the grid encounters items that need two tracks and has space for only one. By default, the grid progresses forward; so, once it leaves a gap, it doesn’t go back to place things in it — unless we set grid-auto-flow to a value of dense, in which case, the grid will actually backfill the gaps left, taking the content out of DOM order.
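A sketch of the wide items and the dense packing behavior described above (class names are my own):

.wrapper {
  display: grid;
  grid-template-columns: repeat(6, 1fr);
  grid-auto-flow: dense; /* optional: backfill gaps, at the cost of taking items out of DOM order */
}
.wide {
  grid-column-end: span 2; /* start wherever auto-placement puts the item, then span two column tracks */
}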
There is also a whole different method of positioning items using the grid layout — by creating a visual representation of our layout right in the value of the grid-template-areas property. To do this, you first need to name each direct child of the grid container that you want to position.
We then lay the items out in this ASCII art manner as the value of grid-template-areas. If you wanted to entirely redefine the layout based on media queries, you could do so just by changing the value of this one property!
As you can see from the example, to leave a cell empty, we use a full stop or a series of full stops with no white space between them. To cause an element to span a number of tracks, we repeat the name.
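A sketch of the named-areas approach: each direct child is given a grid-area name, and the layout is then drawn in the value of grid-template-areas (the names here are my own):

.header  { grid-area: header; }
.sidebar { grid-area: sidebar; }
.content { grid-area: content; }
.footer  { grid-area: footer; }

.wrapper {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  grid-template-areas:
    "header  header  header"   /* repeating a name makes the element span those tracks */
    "sidebar content content"
    ".       footer  footer";  /* the full stop leaves that cell empty */
}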
Accessibility Implications of Reordering
For flexbox and even more so for grid layouts, we need to take great care when using these methods to reorder content. The specification for flexbox states111:
Authors must use order only for visual, not logical, reordering of content. Style sheets that use order to perform logical reordering are non-conforming.
The grid layout specification makes a similar statement:
Grid layout gives authors great powers of rearrangement over the document. However, these are not a substitute for correct ordering of the document source. The order property and grid placement do not affect ordering in non-visual media (such as speech). Likewise, rearranging grid items visually does not affect the default traversal order of sequential navigation modes (such as cycling through links).
In both cases, as currently defined, the reordering is only visual. It does not change the logical order of the document. In addition, we need to take great care in considering sighted keyboard users. It would be incredibly simple to cause someone tabbing through the document to tab along the navigation at the top and then suddenly be taken to the bottom of the document due to a grid item that appears early in the source being positioned there.
A New System For Layout
I’ve not covered every aspect of grid and flexbox in this guide. My aim has been to show the similarities and differences between the specifications, to throw box alignment into the mix, and to demonstrate that what these specifications bring us is a new layout system, one that understands the kind of sites and applications we are building today.
At present, flexbox is all we have in production browsers; grid, behind a flag, is shaping up in Chrome, Opera, Safari and Firefox. Flexbox initially emerged prefixed, was used in production by developers, and then changed, making us all feel as if we couldn’t rely on it. Grid is being developed behind a flag, and if you take a look at the examples in this article in a browser with the flag enabled, you’ll find that the implementations are already very interoperable. With the specification now at Candidate Recommendation status, it is all but finished. So when grid lands, possibly early next year, it is going to land in a very cross-browser-compatible state.
Do play with the examples in this article; there is a whole host of other material detailed in the resources below. I would be especially interested in use cases that you can’t achieve with these layout methods. If you find one, let me know: I’m keen to find the edges of the specifications we have, so I welcome a challenge.
Resources
Here are some resources to help you explore these specifications further.
I realized something the other day: I’ve been designing apps for nine years now! So much has changed since the early days, and it feels like developers and designers have been through a rollercoaster of evolutions and trends. However, while the actual look and functionality of our apps have changed, along with the tools we use to make them, there are some things that have very much stayed the same, such as the process of designing an app and how we go through the many phases that constitute the creation of an app.
Sure, collectively you could argue that we’ve become a lot better at the process. We’ve invented new terminology and even completely new job titles to better facilitate the process of designing mobile applications. But, at its core, the process remains largely unchanged.
And while this approach has become a truism in most of our industry, it’s far from obvious for anyone entering the field. A lot of articles have been written about all the different aspects of this process, but that doesn’t seem to change the fact that I encounter very basic questions from clients and new designers alike. How do you go about designing an app? So, here’s an article about just that. A top level, somewhat simplified, and very honest overview of the steps involved in designing an app. This is an account of how most of the apps I work on are born, complete with shameless links to the tools I use (several of them my own3).
It might be different from how you do it. The steps might be named differently or the tools might vary. In fact, if you’re a seasoned designer, you’ll know most of this. You might even think the content is trivial or, “something everyone knows.” In that case, you’re most likely part of the bubble we live in. But, if you’re a new designer or someone trying to understand what you’re paying other people to do, this will hopefully give you a down-to-earth overview.
Now, when people think of ‘designing’ something, their thoughts often circle around the visual aspects of a product: pixel pushing in Photoshop or laying out grids in Sketch. But that’s a common misconception. Design, in the context of this article, covers the entire process. It is every deliberate action meant to produce something. The truth is that from the moment you get an idea, you are designing.
Everything starts with an idea. This might be your idea or an idea that a client has approached you with. Ideas are great, but they’re also a dime a dozen. The sooner you realize that ideas are nothing but passing phantoms of something that might one day turn into a product, the better you’ll be able to handle this phase.
We tend to put way too much stock in ideas, as getting the idea ‘right’ is far less important than people think. Ideas are sealed up and protected by NDAs, paraded around in pitch decks, and tend to take on a very defined state much too early.
Keeping your idea malleable and changeable for as long as possible is, in my experience, much healthier and leads to far better end results. Ideas should be subjected to a healthy dose of Darwinism before being pursued – survival of the fittest. You want your idea to evolve into the best version of itself, and to that end, it can make sense to talk about a circular process within this phase.
Depending on the type of idea, there are different questions to ask. In the case of apps, these are some of the most often asked questions:
Is this idea financially viable?
Making anything takes time, effort and money. Can you recoup your investment? Does the idea have a solid plan for revenue? A realistic one?
Is this idea technically feasible?
Can this be made? Who should make it? How would we go about making it? What sort of tools do we use? What sort of data/API/touch points do we need? What are the obstacles facing the implementation of the idea?
Is someone else already doing this?
Most things today are a remix. While that’s cool, what can we do better? What components of this idea differentiate it from existing ideas? How are we adding something new to the market?
Could this be made simpler/differently?
Are there other ways to accomplish the same goals? Or, are there other methods that could be more effective and take less time to execute?
Those are just a handful of the tough questions you need to ask while you or your clients’ idea is taking shape. To be honest, 90% of app ideas I’ve ever been pitched or have come up with myself, fall flat on the first question. People always underestimate how long it takes to make something and always overestimate how much they stand to gain.
Idea workshops are great ways to force the evolution of your ideas. You can use things like Trello6 to track aspects of your idea in an environment where you can move around and prioritize concepts. Collaboration helps promote the strong aspects of the concept, the ones that resonate with participants. At the same time, collaboration helps identify and eliminate what is detracting from the idea.
7 Generate, challenge, and revise ideas with others. (Large preview8)
Once you’re satisfied with an idea, it’s time to put things in writing.
A ‘Specification’ or a ‘Spec’ is the piece of paper (or papers) that declares what your app does and how it is accomplished. It’s the blueprint, if you will. There are quite a few ways to do a spec, ranging from the lighter version (also sometimes called a ‘brief’) to the complete, all-enveloping breakdown. No matter which way you choose to go about it, always do a spec. I repeat: always do a spec.
In client projects, specs are often contracts on which estimates can be based – the mother document that dictates to all parties involved what needs to be made and (roughly) how. In personal or in-house projects they’re not as commonly seen as a priority, but they should be.
You’d be surprised how much of an idea is further developed, changed or refined when you’re asked to put everything in writing. Areas of uncertainty are naturally brought forward and further questions are raised. In a sense, the act of creating a spec is the first conscious and calculated ‘design’ of the solution. A lot of initial ideas and assumptions are explored and illuminated in this document, which keeps everyone involved in tune with what is being built. It can also be beneficial to periodically revisit a spec and update it retroactively while the project moves into its next phases.
A program like Pages, Word, or any other simple markup editor will be fine for this phase. The real trick is deciding what to include and what to leave out of a spec. It is best to keep things short and concise under the assumption that the more you write, the more can be misinterpreted. List both functional and non-functional requirements. Explain what your app is, and not how it needs to be done. Use plain language. In the end, the best spec is the one that is agreed upon by all parties.
Many articles could be written about the art of a good spec, however, this is not one of those articles.
Wireframes, or low-fidelity mockups, can be either part of the spec or built from the spec. Information Architects (IAs) and User Experience Designers (UX designers) usually take ownership of this phase, but the truth is that it’s important for everyone on the team to discuss and understand how the product is put together and how the app is structured.
If you’re a single designer working on the product, you’re likely the one holding the marker here. Draw on your experience of the platform conventions, knowledge of controls and interface paradigms, and apply that knowledge to the challenges you’re trying to solve alongside those who may have domain-specific knowledge. The fusion of knowledge on how to best achieve things on the platform, with knowledge about the target audience or the goal of the product, creates a strong foundation for the architecture of the app.
9 An example from one of our whiteboarding sessions (Large preview10)
We tend to do workshops either internally or, ideally, with the client, and go through the spec, screen by screen, and whiteboard the wireframes. The wireframes are then brought into a tool to be digitized, shared and revised. Many people prefer applications like Omnigraffle or Sketch, while some of us still use Photoshop.
There are many tools out there that will help you wireframe. On applypixels.com231911 I’ve got a wireframe UI kit12 that I use. Below is an example of how that can be done:
Here I mock up a quick low fidelity prototype of a photo sharing app.
Wireframes are the first deliberate design made in a project. Everything not caught in the production of the spec usually becomes painfully obvious during this phase. Inconsistencies in navigation, entire missing sections of the app or counterintuitive flows are brought forth, discussed and fixed. I like to think of it as the great ironing out of all the wrinkles the original idea has left behind.
13 Digital tools for wireframing can speed up the process (Large preview14)
Armed with a spec and a wireframe, you’re now ready to get serious. These are also the materials that I advise you have ready when you contract other people to work on your project. Specs and wireframes can vary greatly in quality, but showing that you’ve made those initial preparations makes all of the difference. You had an idea, you’ve committed it to a document, and you’ve thought through a proposed solution.
You’d be surprised how many people do not do this. Why? Because it’s difficult and laborious work. You need to be specific about what you want and how you propose it could be done. The single reason why most apps I’m pitched don’t get off the ground is because it is entirely more compelling to talk about the overall idea of an app instead of asking the tough questions and getting into the gritty details of how to execute it.
The next step varies greatly. In fact, the next three steps are all entwined and often run alongside each other. With the spec and the wireframe in hand, we are now ready to attempt a prototype. The word prototype in this context covers many different things, but ultimately it’s about creating a bare-bones version of the app with the goal of testing your hypotheses and getting early feedback. Some people use tools like Invision15 or Marvel3216, where you can convert your low-fidelity mockups into interactive “apps” that allow you to tap through the design. Often, designers go straight for a native prototype written in Swift.
There are pros and cons to each approach. More complex and “bigger” apps with larger teams influencing the product (or with more loose specs) may benefit more from this intermediate step, where iterations are quickly and more easily shared. For smaller teams with a more solid spec, going directly to code allows for a quicker turnaround, while laying the foundation for the actual app. This also often confronts implementation issues directly as you’re actually building the real product right from the start.
There are many tools popping up that span the divide between visual design and functional prototype, while aiming to take collaborative design to new interactive heights. Framer3017 and Figma18 are two options worth looking at.
How you choose to prototype depends on many different factors. The most important thing in this step is getting early validation of your idea. A bad experience with a prototype might cause you to uncover issues with your wireframes, your spec, or even the very core of your idea. You can then make changes, or abandon the idea entirely, before you’ve invested time in the next two phases.
Now, you’ve been ‘designing’ all along, but in this phase you get to what is more traditionally considered ‘design’. Visual design deals with the appearance of the app. It is not just making things look nice, but also making sure that there’s a consistent and identifiable visual language throughout. Here, design helps not only to tell a story, communicate your brand and guide users through challenging parts of the app, but also to make particular aspects of the experience more enjoyable.
Proper visual design should build on top of all of the experiences you’ve made in the previous stages. It should support the overall ethos of the idea, the goals defined in the specs, the flows laid out in the wireframes, and the lessons learned from the prototype.
Visual design is not just a ‘skin’. It’s not a coat of paint applied to make things look pretty. It is the visual framework you use to create a coherent and consistent experience, tell an engaging story, and differentiate your product from others’. Great visual design elevates the mundane, clarifies the unclear and leaves a lasting impression with the user.
Rules defined in the great unwritten design manual of your product inform every choice on every visual solution you may encounter. The work in this stage consists of the designer defining a series of rules and conventions and then applying those to every challenge he or she encounters when putting the interface together. Luckily, you don’t need to reinvent the wheel every time (even though sometimes we do just that). iOS and Android have a ton of existing rules, conventions, and expressions on which we can lean. “UI Kit with a Twist” is an expression being thrown around that covers the idea of a visual design that leans on the standard iOS UI components, but with a sassy colored navbar or other minor customizations.
There is no right way of creating a visual design for an app and this stage is probably the phase that has the most tools and approaches. If you’ve been diligent and digitized (and updated) your wireframes you could start to add embellishment to those and build it from there in Sketch or Photoshop. Another place to start is to base your design off existing iOS UI elements and then tweak from there.
I usually start my visual design based on a UI kit, like the one available from applypixels.com231911 (available for both Sketch20 & Photoshop21). This lets me lean on iOS conventions while attempting to break the mold and experiment in subtle but meaningful ways. There are many other UI kits out there, and places like ui8.net22 can be particularly great sites to find a pre-made style.
Visual design doesn’t end when you hand something off to the developer. It is a continued and constantly evolving process of evaluating your visual rulebook and the choices it dictates. You may find yourself delivering and redelivering assets or tweaking interactions, right up until you ship.
Next up, or as is sometimes the case, alongside, is the development of the app. In an ideal world, the person responsible for developing the app has been part of all previous phases, chiming in with his or her experience, deliberating on the difficulty level of implementation of various proposed designs, and discussing best practices in terms of structure, tools, libraries, and so on.
I am familiar with the desire to clearly separate development from design, both from an organizational and a cultural perspective. It is my clear conviction that the best products are built by teams made up of professionals from various disciplines who have a mutual understanding of each other. Development shouldn’t be devoid of a design presence, and design shouldn’t be without development know-how.
There’s an obvious balance to this. Designers probably shouldn’t have much say in the choice of an API implementation, just as developers probably shouldn’t get to veto a color scheme. However, during a wireframing session, a developer could save a designer from making a disastrous proposal that would cause the time for implementation to increase tenfold. Likewise, a designer overseeing the implementation of navigation could steer the interaction towards a much more enjoyable experience that better fits the consistency and feel of the app. Use your best judgment, but don’t rob your team of their combined expertise by putting up imagined barriers.
A lot could be written about the iterative nature of development as well, but once again, I will let someone else write that article.
The real truth that seems to catch many people off guard is that you’re never actually done designing. In most good projects, designers have product ownership from spec to ship. You don’t want design becoming a relay race where you hand off something to another department or group of people where you don’t have a say. Even just listing the individual steps like I’ve done, I run the risk of misleading you, as it can very easily be understood as a progression that runs from A to B. Designing apps, or anything for that matter, is rarely a straight line or a clear succession of stages.
While our tools, as well as our products, have changed a lot over these past years, the underlying process of making apps remains largely the same.
Get an idea
Write it down
Build a prototype
Enter into the dance between design and development until something comes out of it
As you progress down this narrowing funnel of bringing an app through development, you make assumptions. Then, you challenge and revise until some nugget of truth makes it into the next stage (or the next build).
People tend to think about building apps the way they think about building a house. First, you lay the foundation, then the walls come up and the appliances are installed. It seems straightforward. There is a blueprint, a design, and a team building it. This fallacy is the source of much grief in the world of making software. It’s why clients expect you to be able to tell them how much their idea costs to develop. It’s why estimates are almost always wrong, and frankly, why we have so many terrible products. It implies that we know the outcome of the process – that right from the start we’re working in a mostly controlled environment with a clearly defined goal.
But if the process and the stages outlined above teach us anything, it is that we don’t know the outcome from the start, and we shouldn’t pretend to. The process is there to help us explore the potential by challenging our assumptions and iteratively executing each step – to bring the best nuggets from the idea into the hands of people.
Rather than building a house, designing apps is probably more like composing a symphony. Each profession a separate instrument. In the beginning, it sounds odd, hollow and out of tune. Slowly, however, as we move through the acts and apply our experience and skill, iteratively it finds its direction and becomes some version of the music described in the original idea.
In this article I use several tools available to subscribing members at applypixels.com231911, including the Wireframe kit24 and the iOS 10 UI kit.25 These are my own tools that I care greatly for, but there are many other design templates out there. Worth mentioning is the free tools on Bjango’s26 site and the ones you can purchase on ui827.
For both wireframing and visual design, I recommend you take a look at Photoshop28 or Sketch29.
How can you be sure you’re moving your design problem in a straight line? That you’re moving directly to a solution? From client to payment, from product to audience?
How certain are you of what the second step in your process is? Or the third? Or how long each will take, or if any should be removed? Are they all useful? Do any need improvement? Is each done with aim and purpose? How often do you fall forward with momentum, rather than move with reason?
These horribly uncomfortable questions are awkward for one reason:
Most of us carry the dead weight of an undefined process.
If you’re like me, your process is a mix of tools, some picked up when studying, a few from colleagues, maybe one or two from an idol, a few oddities taken too seriously, and some wrapped up in the often stale notion of history and tradition – those “because we’ve always done it that way” steps.
We’re too passive, or ignorant, or foolish, or dismissive, or proud when it comes to our workflow. And that’s where we lose.
Being ignorant of our process is to be ignorant of how long things really take, or where opportunities for improvement, in skill or outcome, are hidden. We can’t provide insightful timelines or adjust our process to suit new projects. In all this we lose control of our intellectual and creative growth, letting too many opportunities slip to become the designers (or developers or writers) we aspire to be.
“Discipline is hard–harder than trustworthiness and skill. We are by nature flawed and inconstant creatures. We are not built for discipline. We are built for novelty and excitement, not for careful attention to detail. Discipline is something we have to work at.”
– Atul Gawande, The Checklist Manifesto
A well-defined process is an ordered list of the tasks that get your work done – each given a timeframe, a rating of importance, and a note of your level of skill.
A well-defined process gives you an insight into how your projects take shape. Such footing helps you recognise how each step impacts the final outcome, your relationships with your clients and colleagues, and helps you see if the skills you want to develop are being ignored.
1 An example of what a well-defined process might look like. The details of your own process will vary greatly, depending on your interests and career goals. (Large preview2)
Let’s have a closer look at a few helpful benefits that come from defining your process:
Master Your Time and Schedule
Make Better Decisions with Clarity and Focus
Have More Control to Make More (and Better) Choices
What’s more important to a designer or developer (or pretty much anyone who works with clients, budgets, and timelines) than to use their time well? In this section, you’ll learn how to:
Take advantage of a predictable schedule
Warp time to handle the unexpected
Comfortably separate creative and non-creative work
Easily handle a client’s schedule disruptions and demands
As designers we work to schedules and deadlines. Schedules help us manage our workload, timeline, and especially if freelancing, helps us know how much to charge. But it’s easy to have schedule-creep when we don’t know what our process looks like. I’ve been guilty (far too many times) of giving an overly optimistic progress report or timeframe for projects. Hearing the deadline scream past leaves us looking unprofessional, placing both designer and client in a foul mood.
3 Part of our role is to work to deadlines, but it’s common for designers to rely on memory and gut instinct to get them through their process. (Large preview4)
Knowing how long it takes to complete each step in our process, and what step we’re up to, allows us to set realistic deadlines. We can also, at any point, tell our clients how much is left to be done and how long it’s likely to take.
The client calls and tells you they’ve stuffed up. They noted the date of their launch wrong and the website you’re designing needs to be finished a month earlier than agreed.
That’s okay. You have a super power. You can warp time.
When you know every step in your process, how long each one takes, and how important individual steps are to the final outcome, you can speed things up.
5 Having a clear guide as to how long each step in your process (should) take, realistically allows you to give some tasks more attention, while knowing that you can race through others. (Large preview6)
Let’s say you were going to design custom icons and draw a custom map. You can adjust these minor steps and speed up the process by using a set of purchased icons and sticking with Google Maps to give people directions. These moves are worth making because they allow you to focus on more valuable tasks, such as optimizing the product pages for sales or the home page for email sign-ups.
Knowing each step in your process allows you to calmly adjust them, and their expected outcomes, as needed.
But warping time can do more than save your client from themselves. It can help you produce better work. If you need to spend some extra time learning a new skill or gaining a deeper knowledge about the audience, you should feel comfortable doing so, remembering that the steps you know inside out will allow you to catch up. You will have learned a new skill, addressed the audience more directly, and produced something great for the client – that’s a whole lot of winning!
Comfortably Separate Creative And Non-Creative Work
Who’s a fan of paperwork? Or quoting? Or bug hunting?
Creative projects aren’t solely made of creative tasks. There is always other stuff, which is never fun but inescapably important.
A designer’s process is a mix of the creative and the practical. When you know when, why, and how long each creative and non-creative task in your process takes, you can rearrange them to better suit your day or week.
Knowing each step in your process allows you to schedule your days more efficiently. You can schedule your most productive time for your most important work, and leave the autopilot stuff to the after-lunch slump. You can shuffle around your process without much worry because you can trust that every step is going to be ticked off.
Easily Handle A Client’s Schedule Disruptions And Demands
We all have an order we prefer to work in. First, I want my sketches approved, a couple of weeks later we’ll start talking copy, and then down the road we will figure out the photography.
What if the client’s Brand Coordinator is going away and she’s the one approving the copy and the photography, but doesn’t care about sketches? What if the Coordinator is heading overseas for three months, and when you’d normally be presenting sketches, you need to be thinking about words and photos?
Sometimes we rely on nothing but habits to get our work done – habits that a client can easily disrupt and throw us off our game. A defined process allows for easy shuffling without losing momentum.
Having a process means being able to rearrange on the fly without skipping any steps. You can line up the copywriter and photographer much earlier than you normally would. Once the copy and photos are taken, instead of putting them into a polished design, you can go back to what needs to be done (getting those sketches approved!).
Obviously, if this kind of situation comes up you’ll do what needs to be done, regardless of whether or not you know your process well. But, by having that knowledge, you can shuffle things around without stress and manage your timeline easily.
Any product we produce is the result of a thousand small decisions. Everything from how we communicate with the client to what the product will look like and do, comes down to this-or-that choices. Having a strong understanding of your process will allow you to:
Relax and enjoy the reliability,
Keep your focus and ideas on track,
Build stronger relationships with your clients, and
Kill the steps that aren’t carrying their weight.
A defined process becomes a roadmap that ensures you visit each step. You will always know how far into the project you truly are or how much is left to do, and can take a more educated stab at how many hours are still needed.
We can too easily forget important and necessary steps when we’re in the middle of a whirlwind project. Checking off the steps in a defined process helps us get everything done.
The worth of this knowledge shines through during conversations with your client and colleagues. You’re able to show them where your time (and their money) has been spent, while also being able to judge how much more time (and, again, their money) might be needed to reach the finish line. We’re always better off approaching the deadline when we know how much work is still left to do.
As creatives, we deliberately keep our eyes open to new ideas and methods. We do so in the hope of finding a more effective way of grabbing the audience’s attention and communicating with them in a way that is both clear and interesting.
When we’re planning how to finish a new project, we give ourselves a clear set of ideas. (“I’m going to use this kind of grid system with this typography” or “I’m going to use this JavaScript library to add those features.”) But, when we stumble over a new idea that we’re excited about, we sometimes apply it because it’s new, not because it’s better.
Knowing where each step in our process begins and ends gives us the opportunity to simply ask, “Have I gotten lost?”
Our attention can easily wander, but having defined outlines for a project and process means we can explore while ensuring we stay on track.
Accidental discoveries are a marvelous aspect of creative work and can sometimes yield results we never would have planned for. But, if we want to ensure that we’re hitting the right targets and doing so before we run out of time or money, taking a moment to make sure we’re keeping our focus on the outcome rather than our own curiosity is essential.
Build Stronger Relationships With Your Clients
Clients who have been brought along in the design process tend to be a lot easier to work with. Regular contact helps them understand where our time is being spent and what progress has been made.
Completing established milestones creates natural moments to get in contact with clients, helping to build relationships as well as their trust in our process and professionalism.
Moments between steps give us a great opportunity to fire off an email or two. Often it will be good news (“We’ve finished the wireframing and it’s going well! That thing we were worried about was easily managed, and we’re now slightly ahead of schedule,”) and it helps the client put more trust in our professionalism.
This comes in handy when things go wrong. Imagine how a client feels when they only ever get the “Here’s a proof” or “Give me content” emails, and then suddenly a “We broke something and will miss the deadline” email. Now imagine how that same email would go over when we’ve been in regular contact and built a relationship that can genuinely handle a bump in the road.
Kill The Steps That Aren’t Carrying Their Weight
We can pick from a wide range of tools, methods, and ideas to get our work finished. For each project we do, we choose what will best help us achieve our goals. But sometimes there are steps in our process that exist for no other reason than tradition. This is especially true at bigger or older businesses, or in-house studios. Useful steps which have turned stale can sometimes linger in our process.
Tradition, routine, habit, and ‘just because’ often lead to steps that chew up our time without much of a return.
More often than not, they’re probably harmless, but take up time and energy — the print designer who makes all their font outlines even though their printer’s RIP can handle fonts just fine; the developer who manually converts and compresses images into weaker formats when there are build systems and better formats available.
By keeping track of how long each step takes and its impact on the final product, we can ensure our process is deliberate and lean.
Have More Control To Make More (And Better) Choices
Once you know your process well, you can start to make higher-level decisions. These are powerful choices — they seem simple and small, but they can have a huge impact on how you manage your time, your professionalism, and how deliberately your skill set develops. Here we will look at how you can outsource or kill off the tasks you don’t enjoy, fold in the tasks you keep putting off, and make targeted improvements to your skill set.
All this knowledge allows you to ask the insanely rich question: “Do I even like doing all of these things?”
Defined spaces around each of the steps in your process mean you can more easily outsource aspects of a project you don’t enjoy doing, or haven’t got the time for. Knowing your process well means you’ll understand exactly what the person you’ve outsourced to will need, as well as what they will have to give back to you for things to run smoothly.
Especially for entrepreneurs and freelancers, there are always going to be boring tasks. As valuable and essential as they may be, they still manage to bore us while constantly sending off reminders of how we could better spend our time.
So why not swap tasks with a colleague? Or outsource the duds? Or even kill them off completely? I’m sure it’s possible for any of us to learn the legal skills to punch out an air-tight contract, but we’d rather hire a lawyer, wouldn’t we? Same goes for accounting work and server (hardware) maintenance.
What about development work? If that’s your weak spot, why not outsource it? Or maybe you love to art direct but hate to do the grunt work of designing a thousand different ads for a thousand different markets? Or maybe you love taking the photos but despise doing the touch ups?
Knowing what the edges around these tasks look like (where they start, where they end, what’s needed for them to work, and what the outcome should be), makes it a lot easier to start justifying outsourcing, so you can focus your effort on what matters.
You know those tasks that you never do, even though you know they’ll improve your skills or business?
Archiving, reviews (of skill, process, client interactions, outcome), follow-up emails (“How did we do?”, “How is the audience responding to the campaign?”, “Have sales improved?”, “What is and isn’t working?”, “Thank you for working with us”), planning follow-up work, uploading samples to Dribbble and Behance, plus a thousand other little I-should-but-never-do tasks can be added to your process.
These are the little things that can make our projects, our relationships, and even our opportunities significantly better. If we embed them into our process as a way of closing a job, we can be sure we will get to them and enjoy the benefits they bring.
Rolling those (sometimes) dull but essential tasks into your process will eventually build the habit of making sure they’re done before your project is finished.
It’s good to see where the deadweight in your process is, but making targeted improvements is better. This is why weighing up the importance and skill of each step is most beneficial.
If setting type seems to take too long, and you rate the importance of it highly but your skills at it low, then it’s probably worth investing some time into deliberately practicing what’s found in The Elements of Typographic Style. Or maybe your HTML/CSS skills are tight, but your jQuery is loose? Great. Time to load up some tutorials or enroll in an online course.
Forcing yourself to grade how well you perform each step in your process lets you make targeted improvements to your skillset.
In the middle of a project, such realisations are of little use, because there’s no real opportunity to act on them. Reflecting on your process at the end of a project lets you see which of your skills are weak, and it’s a great time to plan what you’re going to do to strengthen them. Even a day or two of practice can make the outcome of your next project better.
We can only take charge of things we understand. If you want to steer the direction of your skills and professionalism, then act like the designer you want to be.
You will start to gain this understanding by reflecting on your process. Then, you can do more than simply use your knowledge – you can act with wisdom.
Leaps of skill are easily noticeable in our early careers – every few days we add another tool to our belts. Soon, it’s every few months, and before long we know enough to keep our clients happy, so we plateau.
I’m sure most of us aren’t that way inclined, at least not those of us who take the time to read a few thousand words on something as niched-within-a-niche as improving the process of our design work. If you’re reading this, then clearly you’re one of those designers, and I’m willing to bet that the idea of having a stale and just good enough set of skills eats you up inside.
So, take the smallest of small steps and think about what you do, why you do it, and how well it all really works. Then take joy in figuring out how to do it all better.
We gain peace of mind when we have a clear view of where a project is heading. Even more invigorating is knowing the capacity of our ability, and being able to make improvements where we see fit.
We can work better with clients, provide increasingly more services, deliver better results, and best of all, find genuine enjoyment in how we spend our days. We can produce work that isn’t simply done, but deliberately crafted.
It’s easier to start watching what you’re doing than it is to awkwardly fit your effort into some “ideal” imaginary process.
It can be done in as little as four easy steps:
Watch how you work. Note down each step as you move through them. This isn’t the time to worry about whether you’re doing the right or wrong thing, what can be improved, or what is best.
Grade the importance and your skill level, so you can see what needs work and what you might be able to get rid of or replace with an automated or outsourced process.
Think of your ideal process. Once you’ve finished your project, write out another list – the way you think you should have worked, grading the importance of each step.
Compare the two lists. Look for where they don’t line up, where you have holes, what doesn’t work, how much time was spent on each task and if it correlates with how important you think each is.
That’s all there is to it.
Try scheduling your time to mimic your ideal process for your next project, focusing on the order of the steps, including those you don’t do often enough, while removing what wastes your time. Then, keep track of how it actually works out, compare your new process to your ideal process, and adjust the schedule for each new project until you hit your mark.
Simply being aware of how you want to work and the realities of how you actually work can be enough to start making changes.
Once you have a well-defined process that’s a realistic view of how you work, start making improvements and doing experiments, one step at a time.
We all have visions and dreams, whether about our personal lives, our work, or complex concepts that target issues which are hard to grasp. The important thing is to listen to your ideas, to write them down, and, if they stir strong feelings, to pursue them.
Sometimes this is easy to achieve, sometimes it’s not. A nice technique is to start small and take small steps instead of going 100% all-in or doing nothing at all. We like to play with new things, we like to try out new technology, and our minds want to explore new paths — let’s do it!
FlyWeb1 is a new experimental Web API that allows web pages to host local web servers for exposing content and services to nearby browsers. It also adds the ability to discover and connect to nearby local web servers to the web browser itself. This might be a bit hard to grasp now, but imagine it in combination with a decentralized service picking the nearest edge server via FlyWeb: you wouldn’t need complex external CDN solutions that choose the “nearest” edge server via geolocation resolution or similarly unreliable techniques anymore. Another use case could be an “off-grid, on-the-fly network” of devices that use FlyWeb together with Bluetooth and WiFi chips to find other devices in the vicinity, thereby introducing a whole new area of network reliability. As FlyWeb is a technology experimentally developed by Mozilla, you need to have Firefox Nightly installed to test it out.
Coding a line chart isn’t a big deal anymore thanks to libraries like D3 or Highcharts. But what seems so easy actually isn’t: there are quite a few pitfalls that can distort the results2 and lead to false assumptions about the presented data. Line-smoothing is one of them.
Monica Dinculescu just spent a week traveling Taiwan with a 2G roaming plan and now reminds us that, if we use web fonts, we need to care more about proper lazy font loading8 so we don’t annoy our users.
Afshin Mehrabani illustrates the impact of Web Workers9 when sorting a 50K array with JavaScript on the main thread or in a background task. Great to see why we should consider using asynchronous worker tasks for complex and non-time-critical operations.
Tobias Tom wrote about doing what you think is right for you12 and why it matters to slowly change your habits and take small steps to reach your goals instead of seeing only black and white.
Kate Lunau from Vice Motherboard wrote an article explaining why the only good future of commuting is no more commuting14. It’s nice to see someone pick up the topic of how we can work together, and still be social and productive, without sitting in the same room.
Jonathan MacDonald’s article “The Paradox of Life Balance15” targets a social problem we all face: While our connected devices offer great things, they also make us neglect real life and social communication. A great piece on why innovation is important and why it’s equally important to balance our activity and prioritize the non-digital reality.
People are researching what types of economy our society could transition to after the current form of capitalism. Recently, I learned about The Venus Project20, a trial balloon for a resource-based economy. As Albert Einstein said decades ago: “We cannot solve our problems with the same thinking we used when we created them.” I’m excited to see this being tested out and curious whether other proposals and tests will lead to a transformation of our current economy in the future. Not only do we as developers need to play and test in search of better technology and better solutions, but we as human beings need to do this in all areas of our lives.
With autumn starting to show its full glory, there is really no reason to stay inside and drink your hot cacao. No, it’s time to go outside and soak up all those warm colors nature has to offer, especially the vibrant golden-yellow leaves that can now be found almost everywhere you look. It’s the season of hazy mornings, and beautiful warm color palettes. In this month’s collection, I’ve gathered a couple of illustrations and photos that express this seasonal feeling.
These collections of cutout items in combination with real ones are so beautiful if done right. It’s mostly something that is done when there’s animation4 involved as well.
Poster for a conference that discusses the social, environmental, and organizational issues plaguing the fashion industry. Some beautiful gradients in an inspiring organic shape.
Bob Lundberg is an illustrator from Sweden who draws inspiration from objects he comes across in everyday life. The result is a harmonious testimony to design objects.
Concert poster that is part of the design/color system Scott Hansen has had going since the release of his latest album ‘Epoch’. Be sure to read the story23 about the meaning and origin of the artwork of Tycho.
This one is created for a fashion editorial in Marie Claire Italia. It’s part of a series that explores the lives of four sisters, their relationships with each other, and their individuality. The eyes are the attention grabbers, maybe even a little creepy. Wonderful water-color work. Be sure to check out the rest36.
Great to see how the mountain roads have been translated here with all the different structures. The fluo wheels are a nice touch, and also the way the pink is applied everywhere to add this sunset feeling.
You don’t have to be a baseball fan to appreciate everything in here. The typography is so on point. These are also hard colors to pull off just right. Be sure to admire them all.
Sad that summer is over but one comfort is that light is usually very pretty this time of the year. This photo proves it. Just look at this. Quite spectacular imho. It’s just like a painting.
So what does a cozy Sunday look like when it’s raining outside? Exactly as illustrated here. Some nice shading going on. The light creating this warm feeling is so perfectly done.
Created as a tribute to mid-century modernism in California for Focus Magazine. Superb style and color usage. Beautiful shadow and highlight at play. You feel the light of the sun.
Admiring the soft color pencil style in this illustration. Oscar’s illustrations are a reaction to Funkis, a typeface by Göran Söderström, designer and founder of the type foundry Letters from Sweden85. Funkis is influenced by the aesthetics of the early years of Scandinavian functionalism.
There are still some lingering thoughts that won’t accept fall. Cue a beautiful summer day with a magical view like you see here. Such a perfect scene with that tiny sail boat in the centre.
This illustration was submitted to Type Hike118, a collaborative design project that includes 60 designers and typographers, all celebrating the National Parks Service centennial in the USA. Love how the yellow touches are applied to create this moonlight effect.
What an inspiring color palette! The texture of the wood is also refreshing. All textures are in fact very well done. Love the disproportion of their bodies too.
Added another one to “the places I want to ride my bicycle” list. My friends are not lying when they say, “It’s not LIKE riding through a painting — it IS riding through a painting — up a very, very steep and twisting painting”.
Speaking of brush strokes, this lovely cover for the Autumn Books section of the Wall Street Journal is also very well executed. Perfect autumn feeling.
It has been a while since I last had a look at the ‘Windows of New York’ project. Time to rectify this with this beauty from 137 Second Avenue in East Village.
New work in the swimming pool series from Maria Svarbova. It’s such an inspiring thing to watch. Brilliant in its simplicity. That red accent is just brilliant.
I love illustrations that tackle the future. It’s such a great way to see imagination at work. Here you have a concept of how an airport terminal of the future could look. Love the color palette and subtle gradient shades.
Design is more than just good looks – something all designers should know. Design also covers how users engage with a product. Whether it’s a site or app, it’s more like a conversation. Navigation is a conversation. It doesn’t matter how good your site or app is if users can’t find their way around.
In this post, we’ll help you better understand the principles of good navigation for mobile apps, then show you how it’s done using two popular patterns. If you want to take a go at prototyping your own navigation, you can download and test Adobe’s Experience Design CC1 for free and get started right away.
Navigation UI patterns are a shortcut for good usability. When you examine the most successful interaction navigation designs of recent years, the clear winners are those who execute fundamentals flawlessly. While thinking outside the box is usually a good idea, there are some rules that you just can’t break. Here are four important rules for creating a great mobile navigation:
First, and most importantly, a navigation system must be simple. Good navigation should feel like an invisible hand that guides the user. An approach to this is to prioritize content and navigation for mobile apps according to the tasks a mobile user is most likely to carry out.
As Jakob Nielsen says2, recognizing something is easier than remembering it. This means that you should minimize the user’s memory load by making actions and options visible. Navigation should be available at all times, not just when we anticipate a user needs it.
Navigation function must be self-evident. You need to focus on delivering messages in a clear and concise manner. Users should know how to go from point A to point B at first glance, without any outside guidance. Think of the shopping cart icon; it serves as an identifier to check out or view items. Users don’t have to think about how to navigate to make a purchase; this element directs them to the appropriate action.
The navigation system for all views should be the same. Don’t move the navigation controls to a new location on different pages. Do not confuse your user — keep words and actions consistent. Your navigation should use “The Principle of Least Surprise.”3 Navigation should inspire users to engage and interact with the content you are delivering.
In his research4 on mobile device usage, Steven Hoober found that 49% of people rely on a single thumb to accomplish things on their phones. In the figure below, the diagrams on the mobile phones’ screens are approximate reach charts, in which the colors indicate what areas of a screen a user can reach and interact with using their thumb. Green indicates the area a user can reach easily; yellow, an area that requires a stretch; and red, an area that requires users to shift the way they’re holding a device.
Representation of the comfort of a person’s one-handed reach on a smartphone. Image source: uxmatters6
When designing, take into account that your app will be used in several contexts; even people who prefer a two-handed grip will not always be in a situation where they can use more than one finger, let alone both hands, to interact with your UI. It’s very important to place top-level and frequently used actions at the bottom of the screen. This way, they can be comfortably reached with one-handed and one-thumb interactions.
Another important point — bottom navigation should be used for the top-level destinations of similar importance. These are destinations that require direct access from anywhere in the app.
Last but not least, pay attention to the size of targets. Microsoft suggests7 you set your touch target size to 9 mm square or greater (48×48 pixels on a 135 PPI display at a 1.0x scaling). Avoid using touch targets that are less than 7 mm square.
Touch targets shouldn’t be smaller than 44px to 48px (or 11mm to 13mm), padding included.
Many apps use the tab bar for an app’s most important features. Facebook makes main pieces of core functionality available with one tap, allowing rapid switching between features.
Three Crucial Moments For Bottom Navigation Design
Navigation is generally the vehicle that takes users where they want to go. Bottom navigation should be used for the designated top-level destinations of similar importance. These are destinations requiring direct access from anywhere in the app. Good bottom navigation design follows these three rules.
Avoid using more than five destinations in bottom navigation as tap targets will be situated too close to one another. Putting too many tabs in a tab bar can make it physically difficult for people to tap the one they want. And, with each additional tab you display, you increase the complexity of your app. If your top-level navigation has more than five destinations, provide access to the additional destinations through alternative locations.
Partially hidden navigation seems like an obvious solution for small screens — you don’t have to worry about the limited screen real estate, just place your navigation options into a scrollable tab. However, scrollable content is less efficient, since users may have to scroll before they’re able to see the option they want, so it’s best avoided if at all possible.
Out of sight, out of mind. You should avoid placing too many items in the tab bar to prevent users from scrolling before they can click on the option they want.
The single most common mistake seen on app menus is failing to indicate the user’s current location. “Where am I?” is one of the fundamental questions users need to answer to successfully navigate. Users should know how to go from point A to point B based on their first glance and without any guidance from the outside. You should use the proper visual cues (icons, labels, and colors), so the navigation doesn’t require any explanation.
Bottom navigation actions should be used for content that can be suitably communicated with icons. There are a handful of universal icons that users know well, but they mostly represent functionality like search, email, and print; truly “universal” icons are rare. Unfortunately, app designers often hide functionality behind icons that are actually pretty hard to recognize.
In this previous version of the Bloom.fm app for Android, it’s hard to understand the user’s current location.
Avoid using different colored icons and text labels in your bottom tab bar. Instead, follow this simple rule – tint the current bottom navigation action (including the icon and any text label present) with the app’s primary color.
Left: Different colored icons make your app look like a Christmas tree. Right: Use only one primary color instead. This is the bottom bar menu in the Twitter app for iOS; the Messages view is active.
If the bottom navigation bar is colored, make sure to use black or white for the icon and text label of the current location.
Left: Avoid pairing colored icons with a colored bottom navigation bar. Right: Use black or white iconography.
Make targets big enough to be easily tapped or clicked. To calculate the width of each bottom navigation action, divide the width of the view by the number of actions. Alternatively, make all bottom navigation actions the width of the largest action.
Android guidelines suggest the following dimensions for the bottom navigation bar on mobile.
This shows a fixed bottom navigation bar on mobile with the units in density-independent pixels (dp). Source: Material Design.
Good navigation should feel like an invisible hand that guides the user along their journey. After all, even the coolest feature or the most compelling content is useless if people can’t find it.
Each bottom navigation icon must lead to a target destination, and should not open menus or other pop-ups. Tapping on a bottom navigation icon should take the user directly to the associated view or refresh the currently active view. Don’t use a tab bar to give users controls that act on elements in the current screen or app mode. If you need to provide controls, use a toolbar instead.
Each bottom navigation icon must lead to a target destination.17 To provide on-screen controls, use a toolbar instead of the bottom navigation.
As much as possible, display the same tabs in every orientation. It’s best when you can give users a sense of visual stability.
Don’t remove a tab when its function is unavailable. If you remove a tab in some cases but not in others, you make your app’s UI unstable and unpredictable. The best solution is to ensure that all tabs are enabled, but explain why a tab’s content is unavailable. For example, if the user doesn’t have offline files, the Offline tab in the Dropbox app displays a screen that explains how to obtain them. This pattern is called an empty state18.
If the screen is a scrolling feed, the tab bar can be hidden when people are scrolling for new content and revealed when they start heading back to the top.
The upper tab navigation can disappear dynamically upon scrolling.
The size of the display is a major challenge in communicating your point to the user. Using pictorial icons as menu elements is one of the most interesting solutions to the problem of saving mobile screen space. The shape of an icon explains where it will take you, which makes icons space-efficient. They can make navigation simple and easy to use, while still leaving enough freedom to set your app apart from others.
Google Material Design uses the term Floating Action Buttons22 for this type of navigation. They are distinguished by a circled icon floating above the UI and have motion behaviors. Apps like Evernote simplified these controls by using a floating action button for the most important user actions.
However, this pattern has one major downside — the floating action button conceals content. From a UX point of view29, users shouldn’t have to take an action to discover what other actions they can take.
Also, many researchers30 have shown that icons are hard to memorize and are often highly inefficient. Only universally understood icons work well (e.g. print, close, play/pause, reply, tweet). That’s why it’s important to make your icons clear and intuitive, and introduce text labels next to your icons.
Navigation is generally the vehicle that takes users where they want to go. Always think about your user persona, and the goals they have when using your app. Then, tailor your navigation to help them meet those goals. You’re designing for your users. The easier your product is for them to use, the more likely they are to use it.
This article is part of the UX design series sponsored by Adobe. The newly introduced Experience Design app31 is made for a fast and fluid UX design process, creating interactive navigation prototypes, as well as testing and sharing them — all in one place.
You can check out more inspiring projects created with Adobe XD on Behance32, and also visit the Adobe XD blog33 to stay updated and informed. Adobe XD is being updated with new features frequently, and since it’s in public Beta, you can download and test it for free34.
Shaders are a key concept if you want to unleash the raw power of your GPU. I will help you understand how they work and even experiment with their inner power in an easy way, thanks to Babylon.js371.
Before experimenting, we must see how things work internally.
When dealing with hardware-accelerated 3D, you will have to deal with two CPUs: the main CPU and the GPU. The GPU is a kind of extremely specialized CPU.
The GPU is a state machine that you set up using the CPU. For instance, the CPU will configure the GPU to render lines instead of triangles; it will define whether transparency is on; and so on.
Once all of the states are set, the CPU can define what to render: the geometry.
The geometry is composed of:
a list of points that are called vertices and stored in an array called vertex buffer,
a list of indexes that define the faces (or triangles) stored in an array named index buffer.
The final step for the CPU is to define how to render the geometry; for this task, the CPU will define shaders in the GPU. Shaders are pieces of code that the GPU will execute for each of the vertices and pixels it has to render. (A vertex — or vertices when there are several of them — is a “point” in 3D).
There are two kinds of shaders: vertex shaders and pixel (or fragment) shaders.
Before digging into shaders, let’s step back. To render pixels, the GPU will take the geometry defined by the CPU and will do the following:
Using the index buffer, three vertices are gathered to define a triangle.
The index buffer contains a list of vertex indexes. This means that each entry in the index buffer is the number of a vertex in the vertex buffer.
This is really useful for avoiding duplicated vertices.
For instance, the following index buffer is a list of two faces: [1 2 3 1 3 4]. The first face contains vertex 1, vertex 2 and vertex 3. The second face contains vertex 1, vertex 3 and vertex 4. So, there are four vertices in this geometry:
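As a rough sketch, here is what that CPU-side data could look like in JavaScript (the coordinates are made up; note that the prose above counts vertices from 1, while WebGL index buffers count from 0):

// A minimal sketch of the geometry described above.
// Each vertex holds a position (x, y, z); the index buffer references
// vertices by their position in the vertex buffer (0-based in practice),
// so the faces [1 2 3 1 3 4] become [0, 1, 2, 0, 2, 3].
var vertexBuffer = new Float32Array([
    -1, -1, 0,   // vertex 1
     1, -1, 0,   // vertex 2
     1,  1, 0,   // vertex 3
    -1,  1, 0    // vertex 4
]);
var indexBuffer = new Uint16Array([0, 1, 2, 0, 2, 3]); // two faces sharing two vertices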
The vertex shader is applied to each vertex of the triangle. The primary goal of the vertex shader is to produce a pixel for each vertex (the projection on the 2D screen of the 3D vertex):
Using these three pixels (which define a 2D triangle on the screen), the GPU will interpolate all values attached to the pixel (at least their positions), and the pixel shader will be applied to every pixel included in the 2D triangle in order to generate a color for every pixel:
We have just seen that to render triangles, the GPU needs two shaders: the vertex shader and the pixel shader. These shaders are written in a language named Graphics Library Shader Language (GLSL). It looks like C.
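To make the explanations below concrete, here is a rough sketch of what such a basic vertex shader could look like in GLSL (a sketch only; the attribute, uniform and varying names match the ones discussed in the following paragraphs):

precision mediump float;

// Attributes: per-vertex data supplied by the CPU
attribute vec3 position;
attribute vec2 uv;

// Uniform: set from JavaScript on the CPU side
uniform mat4 worldViewProjection;

// Varying: passed on to the pixel shader
varying vec2 vUV;

void main(void) {
    // Project the 3D vertex position to 2D screen space
    gl_Position = worldViewProjection * vec4(position, 1.0);
    vUV = uv;
}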
Attributes
An attribute defines a portion of a vertex. By default, a vertex should at least contain a position (a vector3: x, y, z). However, as a developer, you can decide to add more information. For instance, in the former shader, there is a vector2 named uv (i.e. texture coordinates that allow you to apply a 2D texture to a 3D object).
Uniforms
A uniform is a variable used by the shader and defined by the CPU. The only uniform we have here is a matrix used to project the position of the vertex (x, y, z) to the screen (x, y).
Varying
Varying variables are values created by the vertex shader and transmitted to the pixel shader. Here, the vertex shader will transmit a vUV (a simple copy of uv) value to the pixel shader. This means that a pixel is defined here with a position and texture coordinates. These values will be interpolated by the GPU and used by the pixel shader.
Main
The function named main is the code executed by the GPU for each vertex and must at least produce a value for gl_Position (the position of the current vertex on the screen).
We can see in our sample that the vertex shader is pretty simple. It generates a system variable (starting with gl_) named gl_Position to define the position of the associated pixel, and it sets a varying variable called vUV.
The interesting thing about our shader is that we have a matrix named worldViewProjection, and we use this matrix to project the vertex position to the gl_Position variable. That is cool, but how do we get the value of this matrix? It is a uniform, so we have to define it on the CPU side (using JavaScript).
This is one of the complex parts of doing 3D. You must understand complex math (or you will have to use a 3D engine such as Babylon.js, which we will see later).
The worldViewProjection matrix is the combination of three different matrices: the world matrix, the view matrix, and the projection matrix.
Using the resulting matrix enables us to transform 3D vertices to 2D pixels, while taking into account the point of view and everything related to the position, scale and rotation of the current object.
This is your responsibility as a 3D developer: to create and keep this matrix up to date.
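As a sketch of what that responsibility looks like in code (here using Babylon.js’s math helpers, which we’ll meet properly later; the camera position, rotation angle and field of view are illustrative values):

// Illustrative values only
var canvas = document.getElementById("renderCanvas");
var angle = Math.PI / 4;
var aspectRatio = canvas.width / canvas.height;

var world = BABYLON.Matrix.RotationY(angle);            // position/rotation/scale of the object
var view = BABYLON.Matrix.LookAtLH(
    new BABYLON.Vector3(0, 5, -10),                     // eye
    BABYLON.Vector3.Zero(),                             // target
    BABYLON.Vector3.Up()                                // up vector
);
var projection = BABYLON.Matrix.PerspectiveFovLH(0.8, aspectRatio, 0.1, 100.0); // the "lens"

// The single matrix our vertex shader expects as a uniform
var worldViewProjection = world.multiply(view).multiply(projection);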
Once the vertex shader is executed on every vertex (three times, then), we will have three pixels with a correct gl_Position and a vUV value. The GPU is going to interpolate these values on every pixel contained in the triangle produced from these pixels.
Then, for each pixel, it will execute the pixel shader:
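As a rough sketch, a matching basic pixel shader could look like this (assume the vertex shader above provides vUV; the textureSampler name is the same one used later in the Babylon.js code):

precision mediump float;

// Varying: interpolated value received from the vertex shader
varying vec2 vUV;

// Uniform: the texture to sample, set from JavaScript
uniform sampler2D textureSampler;

void main(void) {
    // Fetch the texture color at the interpolated texture coordinates
    gl_FragColor = texture2D(textureSampler, vUV);
}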
The structure of a pixel shader is similar to that of a vertex shader:
Varying
Varying variables are values created by the vertex shader and transmitted to the pixel shader. Here, the pixel shader will receive a vUV value from the vertex shader.
Uniforms
A uniform is a variable used by the shader and defined by the CPU. The only uniform we have here is a sampler, which is a tool used to read texture colors.
Main
The function named main is the code executed by the GPU for each pixel and that must at least produce a value for gl_FragColor (i.e. the color of the current pixel).
This pixel shader is fairly simple: It reads the color from the texture using texture coordinates from the vertex shader (which, in turn, gets it from the vertex).
The problem is that when shaders are developed, you are only halfway there, because you then have to deal with a lot of WebGL code. Indeed, WebGL is really powerful but also really low-level, and you have to do everything yourself, from creating the buffers to defining vertex structures. You also have to do all of the math, set all of the states, handle texture-loading, and so on.
Too Hard? BABYLON.ShaderMaterial To The Rescue
I know what you’re thinking: “Shaders are really cool, but I do not want to bother with WebGL’s internal plumbing or even with the math.”
And you are right! This is a perfectly legitimate concern, and it is exactly why I created Babylon.js!
To use Babylon.js, you first need a simple web page that contains a canvas element (with the id renderCanvas below) and references the babylon.js script, plus the following JavaScript:
"use strict"; document.addEventListener("DOMContentLoaded", startGame, false); function startGame() { if (BABYLON.Engine.isSupported()) { var canvas = document.getElementById("renderCanvas"); var engine = new BABYLON.Engine(canvas, false); var scene = new BABYLON.Scene(engine); var camera = new BABYLON.ArcRotateCamera("Camera", 0, Math.PI / 2, 10, BABYLON.Vector3.Zero(), scene); camera.attachControl(canvas); // Creating sphere var sphere = BABYLON.Mesh.CreateSphere("Sphere", 16, 5, scene); var amigaMaterial = new BABYLON.ShaderMaterial("amiga", scene, { vertexElement: "vertexShaderCode", fragmentElement: "fragmentShaderCode", }, { attributes: ["position", "uv"], uniforms: ["worldViewProjection"] }); amigaMaterial.setTexture("textureSampler", new BABYLON.Texture("amiga.jpg", scene)); sphere.material = amigaMaterial; engine.runRenderLoop(function () { sphere.rotation.y += 0.05; scene.render(); }); } };
You can see that I use BABYLON.ShaderMaterial to get rid of the burden of compiling, linking and handling shaders.
When you create BABYLON.ShaderMaterial, you have to specify the DOM element used to store the shaders or the base name of the files where the shaders are. If you choose to use files, you must create a file for each shader and use the following pattern: basename.vertex.fx and basename.fragment.fx. Then, you will have to create the material like this:
var cloudMaterial = new BABYLON.ShaderMaterial("cloud", scene, "./myShader", {
    attributes: ["position", "uv"],
    uniforms: ["worldViewProjection"]
});
You must also specify the names of attributes and uniforms that you use.
Then, you can directly set the values of your uniforms and samplers using setTexture, setFloat, setFloats, setColor3, setColor4, setVector2, setVector3, setVector4, setMatrix functions.
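For instance, something along these lines (the uniform and sampler names here are purely illustrative and must match what your own shaders declare):

// Illustrative uniform and sampler names; they must exist in your shader code
cloudMaterial.setFloat("time", 0);
cloudMaterial.setVector3("cameraPosition", camera.position);
cloudMaterial.setTexture("cloudSampler", new BABYLON.Texture("cloud.jpg", scene));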
Pretty simple, right?
And do you remember the worldViewProjection matrix from before? Using Babylon.js and BABYLON.ShaderMaterial, you just don’t have to worry about it! BABYLON.ShaderMaterial will automatically compute it for you because you declare it in the list of uniforms.
BABYLON.ShaderMaterial can also handle the following matrices for you:
world,
view,
projection,
worldView,
worldViewProjection.
No need for math anymore. For instance, each time you execute sphere.rotation.y += 0.05, the world matrix of the sphere will be generated for you and transmitted to the GPU.
Now, let’s go bigger and create a page where you can dynamically create your own shaders and see the result immediately. This page is going to use the same code that we discussed previously and is going to use the BABYLON.ShaderMaterial object to compile and execute shaders that you will create.
Incredibly simple, right? The material is ready to send you three pre-computed matrices (world, worldView and worldViewProjection). Vertices will come with position, normal and texture coordinates. Two textures are also already loaded for you:
Texture coordinates (uv) are transmitted unmodified to the pixel shader.
Please note that we need to add precision mediump float on the first line for both the vertex and pixel shaders because Chrome requires it. It specifies that, for better performance, we do not use full precision floating values.
The pixel shader is even simpler: we just need to use the texture coordinates and fetch a texture color, exactly as in the basic pixel shader described earlier.
Let’s continue with a new shader: the black and white shader. The goal of this shader is to use the previous one but with a black and white-only rendering mode.
To do so, we can keep the same vertex shader. The pixel shader will be slightly modified.
The first option we have is to take only one component, such as the green one:
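A sketch of what that pixel shader could look like (reusing the vUV varying and textureSampler uniform from the basic shader):

precision mediump float;

varying vec2 vUV;
uniform sampler2D textureSampler;

void main(void) {
    // Take only the green channel and reuse it for red, green and blue
    float luminance = texture2D(textureSampler, vUV).g;
    gl_FragColor = vec4(luminance, luminance, luminance, 1.0);
}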
Please note that the cell-shading vertex shader also uses the world matrix: position and normal are stored without any transformation, so we must apply the world matrix to take the object’s rotation into account.
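A sketch of such a vertex shader (the varying names vPositionW and vNormalW, for the world-space position and normal, are assumptions made for this example):

precision mediump float;

// Attributes
attribute vec3 position;
attribute vec3 normal;
attribute vec2 uv;

// Uniforms
uniform mat4 world;
uniform mat4 worldViewProjection;

// Varying
varying vec3 vPositionW;
varying vec3 vNormalW;
varying vec2 vUV;

void main(void) {
    gl_Position = worldViewProjection * vec4(position, 1.0);

    // World-space position and normal, needed for lighting in the pixel shader
    vPositionW = vec3(world * vec4(position, 1.0));
    vNormalW = normalize(vec3(world * vec4(normal, 0.0)));

    vUV = uv;
}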
The goal of this shader is to simulate light, and instead of computing smooth shading, we will apply the light according to specific brightness thresholds. For instance, if the light intensity is between 1 (maximum) and 0.95, the color of the object (fetched from the texture) would be applied directly. If the intensity is between 0.95 and 0.5, the color would be attenuated by a factor of 0.8. And so on.
There are mainly four steps in this shader.
First, we declare thresholds and levels constants.
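In GLSL, that first step (inside the pixel shader’s main function) might look like the following sketch; the prose above only pins down the 0.95 and 0.5 thresholds and the 0.8 level, so the remaining values are illustrative:

// Brightness thresholds and the level applied below each one (illustrative values)
float ToonThresholds[4];
ToonThresholds[0] = 0.95;
ToonThresholds[1] = 0.5;
ToonThresholds[2] = 0.2;
ToonThresholds[3] = 0.03;

float ToonBrightnessLevels[5];
ToonBrightnessLevels[0] = 1.0;
ToonBrightnessLevels[1] = 0.8;
ToonBrightnessLevels[2] = 0.6;
ToonBrightnessLevels[3] = 0.35;
ToonBrightnessLevels[4] = 0.2;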
Then, we compute the lighting using the Phong equation (we’ll consider that the light is not moving):
We already used the diffuse part in the previous shader, so here we just need to add the specular part. You can find more information about Phong shading on Wikipedia25.
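A sketch of those diffuse and specular terms inside the pixel shader (vLightPosition and cameraPosition are assumed to be uniforms set from JavaScript; vPositionW and vNormalW are the world-space varyings from the vertex shader; the exponent of 64 is an arbitrary shininess value):

// Light and view directions in world space
vec3 lightVectorW = normalize(vLightPosition - vPositionW);
vec3 viewDirectionW = normalize(cameraPosition - vPositionW);

// Diffuse term: how directly the surface faces the light
float ndl = max(0., dot(vNormalW, lightVectorW));

// Specular term: the shiny highlight, using the half-way vector
vec3 angleW = normalize(viewDirectionW + lightVectorW);
float specComp = max(0., dot(vNormalW, angleW));
specComp = pow(specComp, 64.) * 2.;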
We’ve played a lot with pixel shaders, but I also want to let you know that we can do a lot of things with vertex shaders.
For the wave shader, we will reuse the Phong pixel shader.
The vertex shader will use the uniform named time to get some animated values. Using this uniform, the shader will generate a wave with the vertices’ positions:
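A sketch of such a wave vertex shader (the amplitude and frequency values are arbitrary; time is the uniform mentioned above and is expected to be incremented from JavaScript on every frame, for instance with setFloat):

precision mediump float;

// Attributes
attribute vec3 position;
attribute vec3 normal;
attribute vec2 uv;

// Uniforms
uniform mat4 world;
uniform mat4 worldViewProjection;
uniform float time;

// Varying
varying vec3 vPositionW;
varying vec3 vNormalW;
varying vec2 vUV;

void main(void) {
    vec3 v = position;
    // Displace the vertex along x with a sine wave driven by the time uniform
    v.x += sin(2.0 * position.y + time) * 0.5;

    gl_Position = worldViewProjection * vec4(v, 1.0);

    // World-space values for the Phong pixel shader we are reusing
    vPositionW = vec3(world * vec4(v, 1.0));
    vNormalW = normalize(vec3(world * vec4(normal, 0.0)));
    vUV = uv;
}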
I would like to conclude this article with my favorite: the Fresnel shader.
This shader is used to apply a different intensity according to the angle between the view direction and the vertex’s normal.
The vertex shader is the same one used by the cell-shading shader, and we can easily compute the Fresnel term in our pixel shader (because we have the normal and the camera’s position, which can be used to evaluate the view direction):
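A sketch of that Fresnel computation inside the pixel shader (cameraPosition is assumed to be a uniform; vPositionW, vNormalW and vUV come from the vertex shader, and textureSampler is the usual texture uniform):

vec3 color = texture2D(textureSampler, vUV).rgb;
vec3 viewDirectionW = normalize(cameraPosition - vPositionW);

// Fresnel term: grazing angles (view direction nearly perpendicular
// to the normal) get a stronger intensity than head-on angles
float fresnelTerm = clamp(1.0 - dot(viewDirectionW, vNormalW), 0., 1.);

gl_FragColor = vec4(color * fresnelTerm, 1.);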
Sometimes the best inspiration lies right in front of us. With that in mind, we embarked on a special creativity mission1 eight years ago: to provide you with inspiring and unique desktop wallpapers every month. Wallpapers that are a little more distinctive than the usual crowd and that are bound to fuel your ideas.
We are very thankful to all artists and designers who have contributed and are still diligently contributing to this mission, who challenge their artistic abilities each month anew to keep the steady stream of wallpapers flowing. This post features their artwork for November 2016. Both versions with and without a calendar can be downloaded for free. It’s time to freshen up your desktop!
Please note that:
All images can be clicked on and lead to the preview of the wallpaper,
You can feature your work in our magazine2 by taking part in our Desktop Wallpaper Calendars series. We are regularly looking for creative designers and artists to be featured on Smashing Magazine. Are you one of them?
Welcome Home Dear Winter
“The smell of winter is lingering in the air. The time to be home! Winter reminds us of good food, of the warmth, the touch of a friendly hand, and a talk beside the fire. Keep calm and let us welcome winter.” — Designed by Acodez IT Solutions3 from India.
“It’s Bonfire Night on the 5th, and the nursery rhyme we learn at school in the UK starts ‘Remember, remember the Fifth of November’, so this is my tribute!” — Designed by James Mitchell86 from the United Kingdom.
“I love the changing seasons — especially the autumn colors and festivals here around this time of year!” — Designed by Rachel Litzinger132 from the United States.
“I designed some Halloween characters and then this idea came into my mind – a bat family hanging around in the moonlight. A cute and scary mood is just perfect for autumn.” — Designed by Carmen Eisendle159 from Germany.
“Thanksgiving is coming and I think one of the things we should be thankful for are the unlikely friendships.” — Designed by Maria Keller188 from Mexico.
“Look at the baby’s smiling face! Babies are always happy. There is an untouched innocence behind their smile that we adults fail to enjoy and cherish. When a baby’s smile can present you with hope, dreams and the aspiration to spread happiness and love, then why can’t you? Yes, let us spread the dream of love, happiness and peace all over the world.” — Designed by Krishnankutty KN321 from India.
“No man’s your enemy. No man’s your slave. Looking at people in an Apartheid way is a thing of the past. My blood is the same as yours, so why must we hurt our brothers and sisters when we are all the same? Believe in love, it will show wonders.” — Designed by Faheem Nistar364 from Dubai.
“It’s the time for being at home and enjoying the small things… I love having a coffee while watching the rain outside my window.” — Designed by Veronica Valenzuela399 from Spain.
“One of the first things that everyone thinks of when they think of November is Thanksgiving. When I think of Thanksgiving, I think of my dad cooking tons of delicious food all day while my sisters and I sit in front of the fire and watch the Macy’s Day Parade. This is my favorite time in November.” — Designed by Gabi Minnich420 from the United States.
“As an assignment for my Digital Theory & Skills class, we were asked to design a wallpaper for Smashing Magazine. I centered my idea around the idea of hot drinks because, in the U.S., we have three National days during the month of November dedicated to beverages that warm our cold, ‘sloth-like’ selves up!” — Designed by Rachel Keslosky433 from the United States.
Please note that we respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists full freedom to explore their creativity and express emotions and experiences in their works. This is also why the themes of the wallpapers weren’t in any way influenced by us, but rather designed from scratch by the artists themselves.
Is a person who is sitting by herself in a room alone? From an outside perspective, it might seem so, but the human brain is way more interesting in these regards. We carry a map of relationships inside ourselves, and it depends on this map if the person actually does feel alone or not.
I just read “Stress and the Social Self: How Relationships Affect Our Immune System1”, and I feel that we can learn a lot from it. In fact, I might see social media from a different perspective now. We’re social beings, and I love sharing good content with you, so, without further ado, here’s this week’s web dev reading list.
Opera 41 and Chrome 54 are out2, and they come with some interesting new features. The updates now support Custom Elements v1 as well as some new and convenient JavaScript methods like ParentNode.prototype.append() or unprefixed CSS user-select. On the other hand, they removed TouchEvent.prototype.initTouchEvent (you’ll need to use the constructor from now on), and KeyboardEvent.prototype.keyIdentifier has been replaced by KeyboardEvent.prototype.key.
Following a suggestion by other major browser vendors, Mozilla will distrust WoSign and StartCom certificates3 from January 1st, 2017 due to backdated certificates and non-disclosure and denial of an acquisition of the two companies. A great step for better CA security.
With the upcoming Chrome 556 (now in beta), the browser will finally get support for Pointer Events. It will also support JavaScript async/await functions and revive the CSS hyphens property after years of absence in Chromium browsers. The once event listener option will also be added and, to improve load times and prevent failed navigations, cross-origin and parser-blocking scripts injected using document.write() will no longer load over 2G connections (which also means that third-party fallbacks as used by the HTML5 Boilerplate7 won’t work anymore in upcoming Chrome versions).
Last week, Smashing Magazine had to deal with an expiring SSL certificate. While this is usually an easy thing to renew, problems may arise if you have HTTP Public Key Pinning (HPKP) enabled and set to a long expiry date (which usually is intended). Mathias Biilmann Christensen now wrote about the lessons learned from this and why you should be aware (and afraid!) of HPKP15 and how to issue a new certificate with an old key16 so that the site won’t break for your users with HPKP enabled.
Brian Armstrong from Canopy explains why you shouldn’t rely on default DNS settings19, as the recent Dyn DNS outage has shown. He covers how to configure DNS the right way, why a longer TTL is important, and why having different nameservers from different providers can save your service’s uptime.
Having multiple nameservers is good, but make sure that they come from different DNS providers so that requests can be resolved by others if one fails. (Image credit: Brian Armstrong21)
Roman Komarov wrote about conditions in CSS Custom Properties23, about solutions, challenges, and how you can benefit from preprocessors when it comes to more complex conditions. The article also mentions a couple of interesting ideas on how the web standard could be extended.
It’s really interesting to see this kind of back-story: Katie Singer reveals the real amount of energy used to power the internet25 and puts these figures into perspective by showing how much power any one of us would need to generate to power a website.
After a few years of designing products for clients, I began to feel fatigued. I wondered why. Turns out, I’d been chasing metric after metric. “Increase those page views!” “Help people spend more time in the app!” And it kept coming. Still, something was missing. I knew that meeting goals was part of what a designer does, but I could see how my work could easily become commoditized and less fulfilling unless something changed.
I thought of how bored I’d be if I kept on that path. I needed to build some guiding principles that would help me find my place in design. These principles would help me grow and would shape my career in a way that fits me best.
What I’d like to share here is how I found my principles and regained a sense of fulfillment. I’ll also discuss one of them and hopefully convince you that it’s worth considering when we design products. Speaking of convincing, I’d also like to help you convince your boss that these things are important.
One small string that began to tie it together was watching Bret Victor’s talk “Inventing on Principle1.” The first half is mostly a code demo; then, he gets philosophical and talks about how goals and principles help you. I believe that living by principles can lead you to some really interesting places — for me, they’ve helped me to find the right ways (and places) to work and the right projects to take on (like designing a typeface), and they’ve helped me to identify which areas of my life need to be nurtured so I don’t burn out.
I’ve worked on projects whose goals varied from increasing email signups by 10% to boosting ad impressions by 30%. It was honest work, to be sure. It’s important that our designs meet the needs of the product owners and our clients — this isn’t art school, and there are real constraints and requirements we need to address.
However, it’s not enough to do metrics-based design. That in itself is a bit too clinical and detached, and where’s the fun in that? We need more.
Validating and then meeting a project’s requirements should be the minimum of what we set out to do. Once we set those metrics as our baseline, we’re allowed to be more impactful and thoughtful as we get to the root of a design problem.
What we need to shoot for is to help people fall in love with our products. That means pushing to give our designs a soul.
Here’s what “emotionally connecting” means: It means you’ve created a product that stands out in someone’s heart. The product becomes what people reach for because it’s the most helpful. People might not be able to understand what you’ve done, but they’ll perceive that it’s better. This is one way to make a product that’s indescribably good.
I usually ask these two questions, which get at part of what helps people fall in love with a product:
What’s going to help someone really find this useful?
What’s going to make them care about it?
For years, we’ve focused on making our websites and products be functional, reliable and usable. These qualities are the bedrock of any good product, but it’s when we add a soul to a product that it really comes alive.
My first hint of a design having soul came back in 2005 when I logged into Flickr2 for the first time. Sure, Flickr has undergone many, many changes since then, but I’d like to explain how it helped me, as someone who hadn’t shared much online before. I wasn’t sure how to share something, but I noticed right away that the website greeted me with a friendly “Hello.” The website helped me breeze through the process, and the friendly tone was really assuring.
My Flickr experience was like a pal gently leading me through the process, making it easy to succeed. It was a warm experience that made me want to return. Incidentally, I can say “hello” in a lot more languages now, thanks to what Flickr taught me.
Moving forward to newer examples, we can also consider Slack3, its competitors and email. All of these options help people communicate, but Slack has a personality that helps you feel more connected with it. Slackbot helps you get started by asking you questions in a conversation, much like a real human would when you meet them for the first time. The makers of Slack eschewed the standard idea of filling in a registration form in favor of something more conversational — this makes other services feel stale and unfriendly by comparison. Slack has soulful flourishes everywhere: from smooth animations to a cute little emoji that is shown when you’ve scrolled all the way to the newest message in your group.
To be fair, Slack and Flickr (which, by the way, share cofounders) weren’t the first to try for something more human — that desire has spanned centuries. Lovers of typographic history may recall that Gutenberg wanted the movable type he created to mimic the look of handwriting. He used blackletter-style letters, similar to the Bible manuscripts that monks illuminated.
These examples make a strong case for design having a soul. The personality that develops from having one is what wins someone’s heart and makes competitors feel like poor (or, at best, passable) copies. Consider this statement by John Medina in “Brain Rules7“:
Emotionally arousing events tend to be better remembered than neutral events… Emotionally charged events persist much longer in our memories and are recalled with greater accuracy than neutral memories.
In other words, we’re wired to remember products with a soul. Let’s use that to our advantage.
Next, let’s get a little more specific and see how that can play out.
One example of a way to give a product a soul is by adding a “fiddle factor.” A fiddle factor is a playful part of a product that imparts a sense of joy or playfulness when used. I first heard this term in Jony Ive’s unofficial biography, Jony Ive: The Genius Behind Apple’s Greatest Products. Ive had a new take on, of all things, a pen. Noticing that people tend to fiddle with their pens when not writing, he added something to his pen design to give people something to play with when they were idle. Of course, Ive started by making the best possible pen, then added the fiddle factor.
This was a new idea at the time — putting something on a pen purely so that people could fiddle with it. He was really thinking differently. The pen’s design was not just about shape; there was an emotional side to it as well.
Fiddle factors invite people to idly toy with them and form a deeper connection with what you’ve made. They become that warm little blanket that wraps a product around your heart and makes you want it more.
I’ve described one already with Slack and its emoji use, but here are a few more digital fiddle factors:
That pull-to-refresh spinner with the really cool spinning animation? A fiddle factor.
That fun animation you click to like something on Twitter? Fiddle factor.
In MailChimp, when you find out that Freddy’s arm can extend almost forever when you’re testing an email’s responsive breakpoints? That’s a fiddle factor (albeit a cruel one).
To give a project a soul is to cultivate a relationship with it. You need to know what it needs and understand its nature. In this sense, the relationship is the same as a potter’s with clay or an architect’s with wood and steel. Once you understand the nature of your materials, you will know what they can become and what their limits are. This will help you to mold a soul in the right ways. Not doing this will ultimately cause your project to feel inauthentic and fail.
Let’s say you’ve built a playful iOS app. It’s meant to send short, fun replies to friends. In the app, you’ve got an overview page showing the latest emails, and the user can go into a detail view to read a particular message. You could go the standard route of sliding in the email from the right — it’s a simple thing to do, and it’s built right into iOS.
The drawback with built-in transitions is well covered: Anyone can use them. Sure, there are definite benefits to them (namely, that they’re cheaper and faster to implement), but it’s difficult to build something that’s soulful if you only use stock components and animation.
Instead, consider an alternative kind of transition. Think about it like this: Consider the personality you think the app should have. Think about the people who will use this app. I use this chart to help me determine the tone of a project:
Back to our email app. Let’s say it’s a fun email client. On this chart, it registers most strongly in the casual, energetic and easygoing categories. If we think about the animations we’ll use here, it makes sense to be more playful.
So, let’s animate the email message to come up from the bottom of the screen with a little spring in its step. When you’re finished with it, you can swipe it away or pull it back down.
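If you were prototyping that transition on the web, a minimal sketch might look something like this. The class name and timing are placeholders; the overshoot in the easing curve is what gives the movement its spring:

/* The y-values above 1 in cubic-bezier() make the panel overshoot slightly before settling. */
.message-detail.is-open {
  animation: spring-up 450ms cubic-bezier(0.2, 0.9, 0.3, 1.2) both;
}

@keyframes spring-up {
  from { transform: translateY(100%); }
  to { transform: translateY(0); }
}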
Let’s take that even further, based on the chart and animation:
Maybe the animation could be the basis for how you approach other animations in the app. Your other animations could be similarly fun (but don’t overdo it).
Maybe it will affect which typefaces you choose.
What’s important here is to avoid forcing something in where it doesn’t belong. Don’t be different just for the sake of being different, and don’t overdo it. If we saw the President of the United States deliver the State of the Union address in a Hawaiian shirt, we’d probably feel like something’s amiss and might not take him as seriously as we should. Same here — what we do has to feel natural.
Any interaction, be it with a button or a scroll, is a perfect place to explore adding a fiddle factor. Explore what might happen when the user scrolls to the bottom of the content. Or perhaps you could come up with something unexpected when the user hovers over a photo for a long time. Maybe you could make a neat hover or focus animation.
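As a tiny, hypothetical example of such a flourish, a button could wiggle briefly when hovered or focused (the selector and timing are illustrative):

/* A small "fiddle factor": the button wiggles briefly on hover or focus. */
.like-button:hover,
.like-button:focus {
  animation: wiggle 400ms ease-in-out;
}

@keyframes wiggle {
  25% { transform: rotate(-6deg) scale(1.05); }
  75% { transform: rotate(6deg) scale(1.05); }
}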
Adding soul isn’t limited to animation, either. It goes much deeper!
How does it sound? Each person’s voice is totally unique, and your product’s should be, too.
How does it look? We need to stand out and be ourselves; so do the things we make.
How does it act? Could your product know the user on a deeper level and anticipate their needs? That would be deeply soulful.
It’s all well and good for designers to talk about giving their products a soul, but here’s where it gets real: You have a deadline, and a budget. Your boss might not want to go for it, and your engineers might be resistant because it would take them extra time. Let’s talk about a framework for those conversations.
The framework I use to have these discussions centers on the effort required to implement an idea and the idea’s impact on the customer and the business.
While there will inevitably be high-impact, high-effort items in a project, the sweet spot is low-effort, high-impact ideas. These types of ideas help the user in meaningful ways, without significantly affecting your timeline and budget.
This way of looking at ideas also helps me to let go of ideas that might be a little too egocentric. Those usually have low levels of impact and high levels of effort. Mapping them out in this way helps me to focus on what matters most.
I’ve found this approach effective because it enables us to differentiate our products, while making the most of our time.
Let’s go back to custom animations for a moment. If we’re talking about adding fiddle factors and animation to our email app, we can’t build something great by assembling it entirely from off-the-shelf components. If we use only basic components and built-in animations, the product won’t be memorable enough to matter to people. Plus, it will make it difficult for us to fall in love with what we’re building and to give the product a soul.
The soul of a product lives in what the Kano model calls the attractive needs — the delighters. The model invites us to think of one or two features that set the product apart from its competition. Framing your high-impact, low-effort ideas with this model will help you make a strong case for soul.
At our core, we’re people who care about our craft. We want to build great products, and the products need to have those nice touches. Being able to fully ply our trade also helps with employee retention. We’re happier when we’re able to do our best work. If we’re able to fully stretch ourselves to make something great, we’re going to keep giving our best in the future. Conversely, if we’re prevented from doing our best work, we’ll become disconnected and disinterested, motivating us to go elsewhere when the opportunity presents itself.
If your boss or company doesn’t give you this freedom and you think it’s important, it might be time to plan your next transition.
It’s not enough to simply design something and meet a goal. This is a surefire way to burnout and boring products. We’ve got to do more for ourselves, our products and our industry. Finding our principles will help us find the right place to work and to do our best work.
Giving our products a soul will make them better, more engaging products. The next time you’re designing, ask yourself what would make someone find your product useful, and what would make them care about it more than another product? Once you do that, you’ll be well on your way to cultivating a healthy relationship with your products and building things that people really love.
Also, it’s not enough for us to have these ideas; convincing our team members and bosses to come along with us is important. Once we test and articulate the value of what we do, we’ll have a much easier and more rewarding time.
One of the most popular children’s television heroes here in the Czech Republic is The Little Mole1, an innocent, speechless and cheerful creature who helps other animals in the forest.
TV heroes often fight against people who destroy their natural environment. When watching The Little Mole with my kids, I sometimes picture him as a mobile website user. Do you want to know why?
We, as web designers, often treat our users the same way the “bad guys” treat The Little Mole, especially on mobile websites.
One episode of the series is particularly dramatic. An old man tries to get rid of the mole in the garden by any means and eventually tries to poison him. Web designers do the same thing when they make it difficult to use the mobile version of a website and try to “poison” the user, eventually making them leave the website.
So let’s be a little sarcastic today and try to poison the mobile user. How does that sound? Just follow my instructions.
Let’s make a slow website, disable zooming, hide the navigation and fill up the page with fixed-positioned elements. I’ll bet the poor mobile user won’t be able to survive this.
Making the website load slowly is the best weapon against the mobile user. Can the visitor go to and return from the post office before the website has finished loading? Then you’re doing a great job! You are poisoning the mobile user effectively.
Now, let’s be serious. The transmission speed of mobile networks is slow, and even though networks are being upgraded to 3G and 4G, coverage isn’t everywhere2, and mobile networks just can’t compete with wired connections.
Various tests and surveys show that website speed has a significant impact3 on conversions and a website’s overall effectiveness. The user shouldn’t have to wait more than a couple of seconds for a website to render, even on an EDGE connection.
Moreover, don’t forget that website speed is one of the criteria4 that Google considers for search results and AdWords campaigns. Therefore, it affects not only conversions but also whether users will land on your website at all.
The solution is quite simple: Think about speed when you are developing a website’s concept. Start with a performance budget5.
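A performance budget can be as simple as a list of byte limits that the whole team agrees on before design starts. The figures below are purely illustrative — yours will depend on your audience and target networks:

{
  "goal": "page usable within 5 seconds on a 3G connection",
  "budget": {
    "html": "30 KB",
    "css": "60 KB",
    "javascript": "150 KB",
    "images": "250 KB",
    "web fonts": "80 KB"
  }
}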
Don’t have time to read this now? I completely understand. Save the text for later. Fortunately, tools are available to tell you what is wrong with your website. First, test your website with PageSpeed Insights16, and then continue to WebPagetest17.
It is true that various studies on carousels do not explicitly say they are inappropriate. However, carousels are complicated both in implementation and for the user experience. So, using them is risky.
Nike’s carousel (left) does not make it clear that the content continues to the right. Best Buy’s (right) does it better: Subsequent items are visible, and therefore it is evident you can scroll to the right.
It is highly probable that, by using carousels, you will be hiding some content, instead of promoting it. According to some surveys, the vast majority of users see only the first image20, and banner-based carousels are usually just ignored because of “banner blindness.”
Don’t use the carousel just as eye candy or to hide unnecessary content.
Carousels are great at advertising secondary content that is related to the main content.
Use the first slide to announce the other slides.
The main purpose of that first slide is to entice the user to view the second and third slides.
Make the navigation usable on small phones.
Those small dots used as navigation on the desktop do not count as “usable” on mobile phones!
Make sure custom gestures don’t conflict with default browser gestures.
Are you using the swipe gesture? Make sure it does not conflict with a built-in browser gesture.
Don’t slow down the website.
This has to do primarily with data demand and implementation of the carousel.
Newegg’s carousel (left) represents a conventional approach. B&H’s (right) is a good example, saving vertical space and enticing the user to browse additional slides by showing the next one.
Make the navigation easily accessible? Come on, get serious! You could end up with thousands of users.
When you hide the menu on a website, people will stop using it. In a recently published study, Nielsen Norman Group found24 that hidden navigation on mobile has a negative effect on content discoverability, task completion and time spent on task.
If there is something important in the navigation, and you can display it, do it. If you can’t show the whole menu, then simplify it, or at least show the important parts of it. For this reason, I tend to recommend the “priority plus” navigation pattern25.
If the navigation also carries content, always display at least a few items.
What if you can’t show the important items? OK, then, hide it under a hamburger icon, label it “Menu”28, and make sure users can use the website without the menu.
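For the “priority plus” pattern recommended above, the mechanics can be sketched roughly like this. The class names are placeholders, and the CSS is assumed to keep the list on a single line and to hide the overflow dropdown until the “More” item is tapped:

var list = document.querySelector('.priority-nav__list');
var moreItem = list.querySelector('.priority-nav__more');
var overflow = moreItem.querySelector('.priority-nav__overflow');

function updateNav() {
  // Put everything back into the visible list first…
  while (overflow.firstChild) {
    list.insertBefore(overflow.firstChild, moreItem);
  }
  // …then move items into the overflow list until nothing sticks out.
  while (list.scrollWidth > list.clientWidth && moreItem.previousElementSibling) {
    overflow.insertBefore(moreItem.previousElementSibling, overflow.firstChild);
  }
  moreItem.hidden = overflow.children.length === 0;
}

window.addEventListener('resize', updateNav);
updateNav();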
I regard less common gestures as risky for a mobile UI, for two reasons29:
Custom gestures might conflict with the browser’s default gestures.
If your carousel supports swipe gestures, for example, the user might accidentally perform an “edge swipe” (a gesture very similar to a regular swipe), which some mobile browsers interpret as a command to navigate the browsing history.
Less common gestures are unknown to many users.
Therefore, you’ll have to teach the user. This makes sense for apps that are used daily, but not for websites.
Using a carousel does not have to be such a problem. However, I have seen news websites support swipe gestures for navigating between articles. For the user, this is unusual and confusing.
The swipe gesture is not the only problem here. Tapping the bottom part of the Safari browser on iOS will reveal a hidden menu. Therefore, if you stick navigation elements at the bottom, the user might be forced to tap twice30.
Before using any uncommon gesture, test that it doesn’t conflict with any browser’s built-in gestures.
OK, let’s be serious. Are your tap targets big enough that a basketball player could easily hit them with their thumb?
In his book Designing For Touch31, Josh Clark refers to a study by Steven Hoober and Patti Shank32. The researchers found that, if placed at the center of the mobile screen, a tap target can be as small as 7 millimeters; however, if placed at the top or bottom, it should be at least 11 millimeters.
However, millimeters are rather impractical for web use. We use pixels, right? So, how do we deal with the variety of DPIs on mobile devices? Probably to most readers’ surprise, Josh Clark says in his book:
Nearly all mobile browsers now report a device-width that sizes virtual pixels at roughly the same pixel density: 160 DPI is the de facto standard for touchscreen web pixels.
Again, all you need to do is set the viewport meta tag correctly:
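In its standard form, that’s simply:

<meta name="viewport" content="width=device-width, initial-scale=1">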
There is one more step: Use em or rem units that best suit the responsive design. The default font size for most browsers is 16 pixels, so we can use the following conversion:
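Assuming Clark’s 160 DPI figure and the 16-pixel default font size, the arithmetic works out roughly like this:

7 mm ≈ 7 × (160 / 25.4) ≈ 44 px ≈ 2.75 rem
11 mm ≈ 11 × (160 / 25.4) ≈ 69 px ≈ 4.3 rem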
Even if your website is not “meant” for mobile devices, there is no reason not to let mobile users get a sneak peek. Some websites don’t adapt to particular viewport sizes. What a shame!
We can’t assume that smartphone screens are around 320 pixels, that tablets are around 768 pixels and that desktops are over 1024 pixels. A page should seamlessly adjust to screens that are 768 pixels and lower.
So, what resolutions should we take into account? All of them, my friend.
In my development career, I have been testing responsive websites from 240 to approximately 2600 pixels wide. I admit that making all sizes look perfect is sometimes not humanly possible, but the bottom line is that the layout should not fall apart — assuming your intention is not to scare away mobile users.
Like most of you, I simply expand the browser window (or use the developer tools’ responsive mode) from the smallest size to full width. It is a kind of “Hay! mode”, found in Brad Frost’s testing tool39.
Also, Don’t Change the Design When the Phone Switches from Portrait to Landscape Mode Link
I think that users expect the same, or at least a very similar, look when browsing a website, regardless of how they hold their phone. I remember one of my lecture participants telling me a story. When his company redesigned a website, a lot of people started calling the support desk. They were all complaining about a particular error: The website menu was disappearing. After a while, they discovered that these were tablet users. When they viewed the website in landscape mode, the menu was there. If the tablet was rotated into portrait mode, the menu was hidden under a “hamburger” icon.
Phone numbers on a mobile website should be clickable tel: links, so that the user can dial with a single tap. We have a problem, though: People usually can’t make calls from a desktop browser41. However, instead of ignoring phone links, desktop browsers open an incomprehensible dialog box that invites the user to select an app to make the call. In most cases, no such app is available.
Dear friend, we don’t want to poison desktop users either. So, on this rare occasion, I recommend using device detection and inserting an active link for mobile users only.
In the HTML, the phone number would be inactive. We’ll just wrap it in a span tag with a phone-number class and apply JavaScript later:
Phone: <span class="phone-number">123456789</span>
Using jQuery and the isMobile42 detection library, we’ll replace the element with a phone-number class and a link:
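A sketch of that replacement could look like the following, assuming jQuery and the isMobile library are already loaded on the page:

// Activate the link on phones only; desktop visitors keep the plain text.
if (isMobile.phone) {
  $('.phone-number').each(function () {
    var number = $(this).text();
    $(this).replaceWith('<a href="tel:' + number + '">' + number + '</a>');
  });
}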
Disable the zoom if you really want to stick it to users. It’s inhumane — and very effective.
But seriously, by disabling zooming, you are not only making life a little more complicated for users with poor eyesight. Even users with good eyesight43 zoom on mobile devices, for various reasons:
to see an image up close,
to more easily select text,
to magnify content with low contrast.
Zooming is actually disabled on a sizeable proportion of mobile websites. Consider the importance of viewing image details in an online store. Zooming is disabled on 40% of e-commerce websites44 tested by the Baymard Institute. Mind-boggling, isn’t it?
Just as desktop users can’t do without the back button and scrolling, so too do mobile users need zooming.
The WCAG’s accessibility guidelines tell us that users must be able to resize text up to 200%.45
Sure, there are cases when you have to disable zooming — for fixed elements, for example. But zooming is sometimes disabled by accident, such as by insertion of the wrong meta viewport tag. The one below is the only correct one, whereas incorrect tags contain parameters such as maximum-scale=1 and user-scalable=no.
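<!-- width and initial-scale only; no maximum-scale, no user-scalable=no -->
<meta name="viewport" content="width=device-width, initial-scale=1">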
9. Set * { user-select: none }, And All Is Good Link
Some users will visit your beloved website and copy all of the text. This is shocking and must be stopped.
Dear friends, setting the user-select property46 to none can be useful, but only in parts of an interface that you expect users to interact with, where selection might do no good.
Therefore, I recommend using user-select: none for the following elements only:
icon navigation items,
carousels with overlaid text,
control elements such as dropdowns and navigation.
Please, never ever disable the selection of static text and images.
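A minimal CSS sketch of that scoping — the selectors here are placeholders for your own interface elements:

/* Prevent accidental selection on interface chrome only, never on content. */
.icon-nav a,
.carousel .slide-caption,
.dropdown-toggle {
  -webkit-user-select: none;
  user-select: none;
}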
If the user lives to see the page load, kill them for good by making the fonts flicker or by hiding the content completely.
Using web fonts is not wrong, but we have to make sure they don’t block the content from displaying. Some browsers wait for web fonts to load before displaying any text at all, which is known as a flash of invisible text (FOIT). Other browsers (Edge and Internet Explorer) show a system font right away and swap it once the web font arrives, which is known as a flash of unstyled text (FOUT).
There is a third possibility, flash of faux text47 (FOFT). Here, content is rendered with a regular cut of the web font, and then bold and italic cuts are displayed right after that.
FOUT in practice: Showing system fonts is better than showing a blank screen while the web fonts load.
My projects are usually content-based websites, so I prefer to display the content as quickly as possible using a system font (FOUT). This is when I like Microsoft browsers. I also use a small library named Font Face Observer50. Let’s look at the code. First, the JavaScript:
var font = new FontFaceObserver('Webfont family');

font.load().then(function () {
  document.documentElement.className += ' webfont-loaded';
});
And here is the CSS:
body {
  font-family: sans-serif;
}

.webfont-loaded body {
  font-family: 'Webfont Family';
}
11. Clutter The Page With Social Media Buttons Link
If you can’t poison them with your own concoction, use your neighbor’s.
Facebook, Twitter and Google buttons are a burden for mobile users, for two reasons:
They download a huge amount of data and slow the loading and rendering of websites.
Tests show that when the official social sharing buttons are used, visitors will download 300 KB more over more than 20 requests.
They are usually useless. Social sharing is often integrated in the operating system.
A Moovweb study carried out over the course of one year across 61 million mobile sessions showed that only 0.2% of mobile users do any social sharing.
The vast majority of social buttons are useless, even on desktop. Sharing is particularly useless in an online store, because a low sharing count is demotivating52 for the buyer. But let’s not go there. We are trying to poison the mobile beast.
If you don’t want to poison the mobile user but you need social sharing buttons, try using social sharing URLs53 or a plugin such as Social Likes54, which implements them with less impact on loading speed.
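Plain sharing URLs are just links pointing at the networks’ share endpoints, so they cost nothing extra to load. For example (the shared URL is a placeholder):

<a href="https://twitter.com/intent/tweet?url=https%3A%2F%2Fexample.com%2Farticle">Share on Twitter</a>
<a href="https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fexample.com%2Farticle">Share on Facebook</a>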
12. Faulty Redirection From Desktop To Mobile Link
A “killer” developer who has an m-dot version of a website has one more way to poison the user. Hooray!
We see faulty redirects55 on practically every other website with an m-dot version.
The correct implementation looks something like this:
If a mobile visitor goes to www.example.com/example, the server detects their device and redirects them to m.example.com/example (not to m.example.com). The same happens in reverse when a desktop visitor lands on a mobile URL.
If that URL does not exist, then leaving the user on the desktop version is better than redirecting them to the m-dot home page.
Let search engines know about the two versions of the website by using <link rel="alternate"> or by indicating it in the sitemap.xml file. A detailed guide56 is in Google’s help section for webmasters.
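Following Google’s guide, the desktop page points to its mobile counterpart and the mobile page points back with a canonical link — roughly like this:

<!-- On http://www.example.com/example -->
<link rel="alternate" media="only screen and (max-width: 640px)" href="http://m.example.com/example">

<!-- On http://m.example.com/example -->
<link rel="canonical" href="http://www.example.com/example">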
The ideal solution is a responsive website that serves the same URLs to all devices. An m-dot version of a website increases development and maintenance costs, and it is not the only type of website that can be optimized for a strong smartphone UX or for mobile network speeds.
Read what Karen McGrane says in her book Going Responsive57, referring to a study by Doug Sillars58, the technical lead for performance in AT&T’s Developer Program:
It’s a myth that the only way to make a fast-loading site on mobile is an m-dot. Good coding and decision-making practices can serve up responsive sites that are every bit as fast as any other method.
Now, the only thing left to do is hide what is not necessary — the content, for example.
Hide content from the mobile user. They don’t need it anyway.
Whether you like it or not, people visit websites to see the content. Yes, we are forced to live among such spiteful creatures.
The user seeks content. Give it to them as quickly as possible. Then, you can force them to download an app or submit their contact details.
Unfortunately, a lot of websites hide content, for reasons I don’t understand. Perhaps the content is not worthwhile, but I find that hard to believe. Numerous elements can cause content to be hidden:
Cookie bar
Some European websites are obliged to show the unfortunate cookie consent61 notification. And we can do nothing about it. However, this doesn’t mean that a cookie bar should be fixed and take up half the screen.
Online chat window or newsletter subscription ad
Positioning elements as fixed is very unfortunate on mobile devices. You are hiding content that the user came to see and are displaying content that they are not interested in. Using these elements is OK, but avoid fixing their position on mobile devices.
App-download interstitials
These are painful. Some websites invite you to install the accompanying app, instead of showing you the content. But users came to see the website! Instead, use smart app banners62 on iOS or native app install banners63 on Android to advertise your native app.
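Both banner types amount to a line or two of markup rather than a full-screen interstitial. For iOS, a meta tag in the page head (the app ID is a placeholder):

<meta name="apple-itunes-app" content="app-id=123456789">

For Android, an entry in the web app manifest that points to the Play Store listing (the package name is a placeholder):

{
  "prefer_related_applications": true,
  "related_applications": [
    { "platform": "play", "id": "com.example.app" }
  ]
}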
Google has decided64 that, effective January 2017, intrusive interstitials that obscure content on mobile websites will be penalized:
[Content that is visually obscured by an interstitial] can frustrate users because they are unable to easily access the content that they were expecting when they tapped on the search result.
Pages that show intrusive interstitials impair the user experience more than pages whose content is immediately accessible.
For the record, Google will not penalize websites that show interstitials, such as cookie bars or age confirmations on adult websites.
How Many Mobile Users Have You Poisoned Today? Link
That’s about it. Now, let’s be serious. There wasn’t anything “new” above, was there?
All the more reason to be sorry that the vast majority of responsive websites poison the mobile user.
Let’s summarize the critical information in a short checklist:
Does your website render quickly enough on mobile?
Do less important elements block more important ones? Have you chosen the optimal strategy to render web fonts? Are third-party plugins (such as for social media) slowing down the website?
Are you hiding content?
Are fixed elements getting in the way? Are you hiding content for particular resolutions or in landscape or portrait mode?
Is the UI mobile-friendly?
Are the tap targets large enough? Are complex UI elements such as carousels implemented correctly on mobile? Are phone numbers clickable? Does content selection remain enabled? Do you make navigation visible wherever possible?
Do you respect the native browser?
Have you disabled zooming by accident? Do you support gestures that conflict with browser defaults?
Is your redirection implemented correctly (if you’re using an m-dot version)?
Be kind to mobile users. Do not be the wicked old man who tries to get rid of The Little Mole in his yard. Do you want to know how the fairy tale ends? The Little Mole survives, laughs at the old man and moves to another garden.
Editor’s note: So you’ve attended a conference, listened to some truly inspiring talks, made quite a few valuable connections, maybe even attended a hands-on workshop and learned a thing or two. What now? How do you bring back the new knowledge and ideas and connections to your team and to your work? This article highlights a practical strategy of getting there without much effort. With SmashingConf Barcelona1 taking place next week, we thought this article would come in handy.
Have you ever been to a conference with top speakers, awesome people to network with and such a great energy that you got fired up and couldn’t wait to get home to start applying everything you’ve learned? How do things look two weeks later? Did you implement all of that learning into action? How about two months later? Were you still taking action on that knowledge?
If you got off track, don’t worry. That happens to most participants at any conference, no matter how great it is. In fact, the better the conference, the more likely you’ll be overwhelmed and won’t implement everything you wish to implement — unless you create a game plan to put the knowledge you acquire into action in a way that doesn’t overload you.
Here are a few tips on how you can do that in just a few minutes between sessions.
It is easy to tell yourself, “I’ll implement this strategy in my organization as soon as I get back home,” or “Two months from now, I’ll have mastered this skill.” The hardest part is ensuring that you actually implement the strategy or that you practice enough to master the skill.
To solve this problem, at the end of each presentation, commit yourself to applying the newly acquired knowledge for a certain period of time or for a certain number of times each day. Setting aside 30 minutes to focus on that subject every day is a lot more effective than completely transforming how you work in just a week, particularly if you have trouble acquiring habits.
Thirty minutes in twenty-four hours is not much. Set them aside daily and use them wisely. (Image credit3)
Write Down The Reason Why You Are Implementing This New Action Link
Will this new action save you time in the future? Will it triple your productivity? Will it make you a better web designer? How so?
Write down your motivation right after the presentation is over, so that once you are back home and face the first setback, you can read again why you decided on it in the first place. This will help you to persevere until it works for you.
Be Clear On How This New Action Will Affect You Link
For every action there is a reaction. It would be naive to think that this new action won’t have any impact on you.
Maybe you will incur some extra costs. Maybe you will get frustrated until you master the new skill. Or maybe you will have to work a bit more in the beginning. When you become clear on those possible setbacks, you can create strategies to minimize or even eliminate them.
Also, be clear on the positive impact it will have on you. When things are in place and you start reaping the rewards of your actions, what will those rewards look like? Will you be making or saving more money? Will the quality of your work skyrocket? Once you know the rewards, you can create strategies to get the most out of them.
You don’t need to come up with those strategies at the conference. All you need is a quick brainstorm on the positive and negative impact of this new action. You can come up with the strategies later on, once the conference is over. For now, just have an idea of what might get in your way, so that you can prepare for it, and know what you might get out of it, so that you can leverage it.
Be Clear On How This New Action Will Affect Others Link
If you work on a team, your new action will directly or indirectly affect them, especially if you are the only one who attended the conference. Ideally, your whole team would go to the conference with you, but that is not always possible.
When you get back from a conference and start working harder, some of your team members might get jealous and try to undermine your efforts (even if subconsciously). You may have learned of a great idea at the conference that your colleagues don’t find so great because they don’t share the context in which you learned it; and, because of that, they might turn down your idea. That can be discouraging.
Once you’re clear on how your ideas or new work ethic might affect your colleagues, you can come up with ways to give them the context they need, so that they see the power of the idea before you present it to them.
Share recordings of talks with your colleagues (if available), or walk them through your notes explaining key concepts. (Image credit5)
If you feel comfortable presenting, you could even offer a mini-workshop for them based on what you’ve learned. That’s a double-win because, first, they will learn the key points and, secondly, by teaching them, you will assimilate about 90% of what you learned at the conference.
Once again, you don’t need to come up with strategies during breaks between sessions, but at least brainstorm on how this new action could impact those around you.
Every conference has two types of audience members, the ones who are always with their head down taking notes and the others who don’t bother to write anything. Both types miss out on important aspects of live presentations. The person who is always writing very often gets so focused on the content that they end up missing the broader context. That leads to confusion and abstract content when they get around to reviewing their notes. The person who pays attention but doesn’t take notes might understand the context and learn a lot better, but because our brains can process only seven pieces6 of information at a time (give or take two), that person will have a hard time recalling everything they learned.
The act of writing stuff down helps you to remember it. (Image credit8)
To solve this dilemma, develop the habit of taking notes quickly and effectively. Sketchnotes9, mindmaps and flowcharts are great options. Familiarize yourself with them, and apply whichever you feel is most appropriate for the moment. For instance, sketchnotes could be your standard method of note-taking; when you want to take notes about a complex strategy, you could draw a mindmap; and when you are learning about a process that has multiple steps and a certain order to those steps, you could opt for a flowchart. As you become familiar with these note-taking options, you’ll notice what works best for you in each situation. For now, just pick one and start with it.
In most cases, when you start implementing what you’ve learned, you won’t get results right away. Most likely you will go through some frustration before succeeding. Encountering challenges in implementing a new strategy or concept is perfectly natural. It simply means you are learning something new, and as a result, it will help you to become better at what you do.
According to the four-stage model of learning a new skill, created by Noel Burch (and often attributed to Abraham Maslow), when we learn any new skill we move first from unconscious incompetence (where we have no idea that we don’t know something) to conscious incompetence (where we are aware of our lack of skill). This is the time when most people might get frustrated and give up.
To get yourself excited about the initial challenges and avoid the initial discouragement, set up a reward system. This will remind you that you are on the right track.
You could simply give yourself a small prize to reward your commitment. If you have taken the action you have committed to take for the entire week, reward yourself with a massage, a few hours of playing that video game you love or going to your favorite restaurant. This will keep you going when frustration would normally set in.
By using rewards to keep you from giving up, you will carry yourself through the initial challenges of the conscious-incompetence phase, and soon you will reach conscious competence, when you can perform the new skill quite well but still need to think about it. The great news is that, once you are at the conscious-competence level and keep performing this new skill, very soon it will become automatic, and you won’t need to even think about it — you will just do. This is the fourth stage, of unconscious competence.
Rewards will facilitate this process because they will condition you to perform the action more often.
Setting up a reward system is quite simple. Think of one or more small rewards you can give yourself each time you take the action you’ve committed to take. For example, if you’ve committed to working with a particular skill set for an hour every work day, then at the end of that hour, give yourself a 30-minute break, your favorite candy bar or a bottle of your favorite beer. The reward doesn’t have to be big, but it has to feel like an acknowledgement of your effort.
You might be thinking, “This reward stuff won’t work for me; I’d rather reward myself when I finish the project.” Well, consider the famous research of Ivan Pavlov12 on conditioning. If you reward yourself only when you complete a project, you will condition yourself to complete projects more quickly, and you will most likely grow willing to sacrifice the quality of your work for it. This process happens unconsciously. On the other hand, if you reward yourself for working hard, you will condition yourself to work hard and focus. By working hard and being focused, you will automatically get more work done in a shorter period of time, without sacrificing quality.
Pro tip: Think of many different rewards, write them down, and put the paper in a jar. Every time you earn a reward, pull a paper from the jar and give yourself that reward. This relies on the same psychological trick that subscription boxes and scratch cards use to hook you in, but this time you are the one in control, and you win every single time.
During the conference, you will be among awesome people with objectives very similar to yours. You will also have plenty of opportunities to network with them. Why not take advantage of that?
When you meet a couple of fellow web designers with similar aspirations, and you are constantly in touch with them, even if only via Facebook, you can support each other to stay on track with your goals. You will also have a great source of learning because they will be implementing the same things as you.
Instead of asking the standard questions like, “Where are you from?” and “What do you do?,” ask questions that will reveal whether you could be good accountability partners for each other:
“What topic have you liked most so far, and what are you most likely to put into practice right away?”
“What inspired you to come to this conference?”
“I really liked what the last speaker said. Have you tried doing that?”
Questions like these take the pressure off because, while they still count as small talk, they allow you to talk about many subjects without stalling the conversation. And you will be able to more easily spot who among the attendees would be a good accountability partner.
Perhaps a few pressing projects are awaiting your return. Or maybe some things will distract you in the coming weeks. Or perhaps you have a history of losing enthusiasm quickly.
Whatever it is, now is the time to become aware of those things. Devote a couple of minutes to thinking of what could prevent you from following through with your actions. After the event, you can devise ways to avoid those obstacles and to follow through on the actions until they get you the result you want.
Let’s face it: You are a busy person and have a lot of things going on in your life right now. Chances are, when you get back to your daily life, your schedule will be full of activity. If you try to fit in the action “whenever you can,” you will either push it to the end of the day (causing you stress) or not do it at all.
Before you go home, schedule a specific time every day to take the action. Treat the time as if it were a meeting with someone else. Do not try to justify moving it around anytime something comes up. Move it only when something truly urgent comes up.
You will learn a ton of great content at the conference, content that could help you a lot. Most people try to implement everything at once; after all, they don’t want to miss out on the potential benefits. What happens instead is that they end up getting overwhelmed and start slacking off. This is a waste of a great conference.
If you can successfully implement just one strategy or action in your daily life, then the conference will have been more than worth your investment of time and money. That being said, you can still try to implement everything you believe would be useful to you. All you need to do is to write a list of things to implement; as soon as you have successfully implemented one item on the list, and it is now a habit or requires little conscious effort on your part, move on to the next item.
Beware! There is a popular myth that developing a new habit takes 21 days. This myth is based on a misreading of Maxwell Maltz’s book Psycho-Cybernetics. The myth was debunked when researchers at University College London found that developing a habit takes 66 days on average. You don’t have to take two months to move on to the next item in your list, though. The study just shows the average; according to the same study, some people developed habits in as little as 18 days, and others took as long as 254 days. What matters is not the time, but how much repetition and intensity you put into implementing the new skill.
Getting to the next item in the list might take a few days or a few months. Either way, you will eventually get through the list and have implemented everything you find valuable.
So, get excited to attend your next conference, because if you follow the steps in this article, you will have a concise and effective action plan, one that will bring you more results than all of the previous conferences you’ve attended combined!
As people working in front of a screen all day, we often struggle to find the right balance. I’m not talking only about work-life balance here, but about how a workday spent almost entirely in the virtual world can cause us to lose sight of real life.
We tend to forget that our bodies need something other than coding all day, and that we need to take care of our fellow human beings in real life as well. Just think about this number: The average US person will spend over 9 hours in front of a screen1 today. It’s time to become more aware of how we can keep the balance between the virtual and the real world.
Do you remember jsPerf2? It was down for years (due to spam); now it celebrates its revival. Finally, a chance to use this great, great tool again.
Automated browser testing usually involves a lot of trouble and custom-built solutions. TestCafé7 now tries to solve this with a Node.js tool that takes care of all the stages: starting browsers, running tests, gathering test results, and generating reports — without the need for a browser extension.
Jason Grigsby explains how we can use Client Hints for responsive images8. With Client Hints, the browser can tell a server via HTTP headers what types of content it prefers based on information about the device pixel ratio, viewport width, and width of the image element on the page. This allows the server to serve the most appropriate image back to the client.
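In short, the page (or server) opts in to the hints, and the browser then attaches them to image requests; the sizes attribute is what lets it report a useful width. A rough sketch, using the hint names from the proposal Grigsby describes:

<!-- Opt in to Client Hints for this page -->
<meta http-equiv="Accept-CH" content="DPR, Width, Viewport-Width">

<!-- With sizes set, the browser can send a Width header for this request,
     and the server can respond with an appropriately sized image. -->
<img src="/images/hero.jpg" sizes="100vw" alt="Hero image">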
As developers, do we really need to code all day? Is it necessary to have side projects and participate in open-source? Belén Albeza doesn’t think so. She shares why having a life away from coding matters15 and why you can be a passionate developer nonetheless. It’s important to have a balance between your computer time and other life activities (to help gather data on this matter, please fill out this survey16), and that’s also the message we have to get across to new developers17. You can do great coding in a normal workday.