When you examine the most successful interaction designs of recent years, the clear winners are those that provide excellent functionality. While the functional aspect of a design is key to product success, aesthetics and visual details are equally important — particularly in how they can improve those functional elements.
In today’s article, I’ll explain how visual elements, such as shadows and blur effects, can improve the functional elements of a design. If you’d like to try adding these elements to your designs, you can download and test Adobe XD for free and get started right away.
There’s a reason GUI designers incorporate shadows into their designs — they help create visual cues in the interface which tell human brains what user interface elements they’re looking at.
Since the early days of graphical user interfaces, screens have employed shadows to help users understand how to use an interface. Images and elements with shadows seem to pop off the page, which gives users the impression that they can physically interact with the element. Even though visual cues vary from app to app, users can usually rely on two assumptions:
Elements that appear raised look like they could be pressed down (clicked with the mouse or tapped with a finger). This technique is often used as a visual signifier for buttons.
Elements that appear sunken look like they could be filled. This technique is often used as a visual signifier for input fields.
You can see how the use of shadows and highlights helps users understand which elements are interactive in this Windows 2000 dialog box:
Create a Visual Hierarchy and Impression of Depth
Modern interfaces are layered and take full advantage of the z-axis. The positions of objects along the z-axis act as important cues to the user.
Shadows help indicate the hierarchy of elements by differentiating between two objects. Also, in some cases, shadows help users understand that one object is above another.
Why is it so important to visualize the position of an element within three-dimensional space? The answer is simple — laws of physics.
Everything in the physical world is dimensional, and elements interact in three-dimensional space with each other: they can be stacked or affixed to one another, but cannot pass through each other. Objects also cast shadows and reflect light. The understanding of these interactions is the basis for our understanding of the graphical interface.
Let’s have a look at Google’s Material Design for a moment. A lot of people still call it flat design, but its key feature is that it has dimension — the use of consistent metaphors and principles borrowed from physics helps users make sense of interfaces and interpret visual hierarchies in context.
One very important thing about shadows is that they work in tandem with elevation. The elevation is the relative depth, or distance, between two surfaces along the z-axis. Measured from the front of one surface to another, an element’s elevation indicates the distance between surfaces and the depth of its shadow. As you can see from the image below, the shadow gets bigger and blurrier the greater the distance between object and ground.
Some elements like buttons have dynamic elevation, meaning they change elevation in response to user input (e.g., normal, focused, and pressed). Shadows provide useful clues about an object’s direction of movement and whether the distance between surfaces is increasing or decreasing. For users to feel confident that something is clickable or tappable, they need immediate reassurance after clicking and tapping, which elevation provides through visual cues:
When Apple introduced iOS 8, it raised the bar for app design, especially when it came to on-screen effects. One of the most significant changes was the use of blur throughout, most notably in Control Center; when you swipe up from the bottom edge of a screen you reveal the Control Center, and the background is blurred. This blur occurs in an interactive fashion, as you control it completely with the movement of your finger.
Apple moved further in this direction with the latest version of iOS, which uses 3D Touch for the flashlight, camera, calculator and timer icons. When a user presses one of those icons, a real-time blur effect takes place.
The blur technique has the following benefits for modern mobile interfaces:
Make User Flow Obvious
Blur effects allow for a certain amount of play within the layers and hierarchy of an interface, especially for mobile apps. It’s a very efficient solution when working with layered UI since it gives the user a clear understanding of a mobile app’s user flow.
The Yahoo Weather app for iOS displays a photo of each weather location, and the basic weather data you need is immediately visible, with more detailed data only a single tap away. Rather than cover the photo with another UI layer, the app keeps you in context after you tap — the detailed information is easily revealed, and the photo remains in the background.
Direct the User’s Attention
Humans have a tendency to pay attention to objects that are in focus and ignore objects that aren’t. It’s a natural consequence of how our eyes work, known as the accommodation reflex. App designers can use it to blur unimportant items on the screen in an effort to direct a user’s attention to the valuable content or critical controls. The Tweetbot app uses blur to draw users’ attention to what needs to be focused on; the background is barely recognizable, while the focus is on information about accounts and call-to-action buttons.
Make Overlaid Text Legible
The purpose of text in your app is to establish a clear connection between the app and user, as well as to help your users accomplish their goals. Typography plays a vital role in this process, as good typography makes the act of reading effortless, while poor typography turns users off.
In order to maximize the readability of text, you need to create proper contrast between the text and the background. Blur gives designers a perfect opportunity to make overlaid text legible — they can simply blur a part of the underlying image. In the example below, you can see a restaurant feed that features the restaurants closest to the user. Your attention goes immediately to the restaurant images because they feature a darkened blur with a text overlay.
A blurred effect can also blend seamlessly into a website’s design.
Decorative Background
Together with full-screen photo backgrounds, which are frequently used to decorate websites, blurred backgrounds have found their niche in modern website design. This decorative effect also has practical value: by blurring one object, it brings focus to another. Thus, if you want to emphasize your subject and leave the background out of focus, the blurring technique is the best solution.
The website for Trellis Farm uses an iconic image of a farm to give visitors a sense of place for its website. For added interest, the photo is layered with a great typeface to grab a visitor’s attention. The blur is nice because it helps the visitor focus on the text and the next actions to take on the screen.
Progressive Image Loading
As modern web pages load more and more images, it’s worth thinking about how they load, because the loading process affects both performance and the user experience. Using a blur effect, you can create progressive image loading. One good example is Medium.com, which blurs the post’s cover image, as well as images within the post content, until the image has fully loaded. First, it loads a small blurry image (a thumbnail) and then transitions to the large image.
This technique has two benefits:
It helps you serve different image sizes depending on the device that makes the request, optimizing the weight of the page.
The thumbnails are very small (just a few kilobytes), which, combined with the blur effect, makes for a better placeholder than a solid color, without adding much to the page’s weight.
If you want to reproduce this effect on your site see the Resources and Tutorials section.
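As a rough illustration of the swap itself, here is a minimal JavaScript sketch (not Medium’s actual implementation). It assumes each image starts out pointing at a tiny blurred thumbnail and carries the full-size URL in a made-up data-full attribute, with CSS handling the blur and the fade:

// Sketch: swap a blurred thumbnail for the full image once it has downloaded.
// Assumed markup: <img class="progressive" src="tiny-blurry.jpg" data-full="large.jpg">
document.querySelectorAll('img.progressive').forEach(function (img) {
  var fullImage = new Image();
  fullImage.onload = function () {
    img.src = fullImage.src;         // replace the thumbnail with the sharp image
    img.classList.add('is-loaded');  // let a CSS transition remove the blur
  };
  fullImage.src = img.dataset.full;  // start downloading the large image
});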
Testing Websites’ Visual Hierarchy
The blur effect can be used not only as a visual design technique but also as a way to test a page’s visual hierarchy.
A blur test is a quick technique to help you determine whether your user’s eye is truly going where you want it to go. All you need to do is take a screenshot of your site and add a 5–10 px Gaussian blur in Photoshop. Look at a blurred version of your page (like the Mailchimp example below) and see which elements stand out. If you don’t like what stands out, you need to go back and make some revisions.
Mailchimp’s homepage passes the blur test because the prominent items are the sign-up button and text copy which states the benefits of using the product.
The blur effect isn’t exactly free. It costs something — graphics performance and battery usage. Because blurring is a memory-bandwidth- and power-intensive effect, it can affect system performance and battery life. Overused blurs result in slower apps and largely degraded user experiences.
We all want to create a beautiful design, but at the same time, we can’t make users suffer from long loading times or a drained battery. Blur effects should be used wisely and sparingly — you need to find a balance between great appearance and resource utilization. So, when using blur effects, always check the CPU, GPU, memory and power usage of your app (see the Resources and Tutorials section for more information).
Blur Effect and Text Readability Issues
Another factor to remember is that blurring isn’t dynamic. If your image ever changes, make sure the text always sits over the blurred area. In the example below, you can see what happens when you forget this.
Blur Effect and Content-Heavy Pages
A blurred background can cause problems when it is used on screens filled with a lot of content. Compare the two examples below: the screen on the left, which uses a blur effect, looks dirty, and the text is hard to read. The screen without the blur effect is much clearer.
In his article “How Medium Does Progressive Image Loading,” José M. Pérez explains how to implement progressive image loading with a blur effect using CSS filters or the HTML canvas element.
The article “Creating A Blurring Overlay View” provides examples of applying the blur effect to images in Apple iOS 8+ using the UIVisualEffectView class, with both Objective-C and Swift code samples. This is a native API that has been fine-tuned for performance and long battery life.
Shadows and blur effects provide visual cues that allow users to understand what is happening more easily. In particular, they allow the designer to inform users about objects’ relationships with each other, as well as the potential interactions with those objects. When carefully applied, such elements can (and should) improve the functional aspects of a design.
This article is part of the UX design series sponsored by Adobe. The newly introduced Experience Design app is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype and share — all in one app.
You can check out more inspiring projects created with Adobe XD on Behance, and also visit the Adobe XD blog to stay updated and informed. Adobe XD is being updated with new features frequently, and since it’s in public Beta, you can download and test it for free.
JavaScript module bundling has been around for a while. RequireJS had its first commits in 2009, then Browserify made its debut, and since then several other bundlers have spawned across the Internet. Among that group, webpack has jumped out as one of the best. If you’re not familiar with it, I hope this article will get you started with this powerful tool.
In most programming languages (including ECMAScript 2015+, which is one of the most recent versions of the standard for JavaScript, but isn’t fully supported across all browsers yet), you can separate your code into multiple files and import those files into your application to use the functionality contained in them. This wasn’t built into browsers, so module bundlers were built to bring this capability in a couple forms: by asynchronously loading modules and running them when they have finished loading, or by combining all of the necessary files into a single JavaScript file that would be loaded via a <script> tag in the HTML.
Without the module loaders and bundlers, you could always combine your files manually or load your HTML with countless <script> tags, but that has several disadvantages:
You need to keep track of the proper order in which the files should load, including which files depend on which other files and making sure not to include any files you don’t need.
Multiple <script> tags means multiple calls to the server to load all of your code, which is worse for performance.
Obviously, this entails a lot of manual work, instead of letting the computer do it for you.
Most module bundlers also integrate directly with npm or Bower to easily allow you to add third-party dependencies to your application. Just install them and throw in a line of code to import them into your application. Then, run your module bundler, and you’ll have your third-party code combined with your application code. Or, if you configure it correctly, you can have all of your third-party code in a separate file, so that when you update the application code, users who already have the vendor code cached won’t need to download it again.
Now that you have basic knowledge of the purpose of webpack, why should you choose webpack over the competition? There are a few reasons:
Its relative newness gives it a leg up because it is able to work around or avoid the shortcomings and problems that have popped up in its predecessors.
Getting started is simple. If you’re just looking to bundle a bunch of JavaScript files together without any other fancy stuff, you won’t even need a configuration file.
Its plugin system enables it to do so much more, making it quite powerful. So, it might be the only build tool you need.
I’ve seen only a few other module bundlers and build tools that can say the same thing, but webpack seems to have one thing over those: a large community that can help when you get stuck. Browserify’s community is probably just as big, if not larger, but it lacks a few of the potentially essential features that come with webpack. With all the praise I’ve given webpack, I’m sure you’re just waiting for me to move on and show some code, right? Let’s do that, then.
Before we can use webpack, we need to install it. To do that, we’re going to need Node.js and npm, both of which I’m just going to assume you have. If you don’t have them installed, then the Node.js website is a great place to start.
Now, there are two ways to install webpack (or any other CLI package, for that matter): globally or locally. If you install it globally, you can use it no matter what directory you’re in, but then it won’t be included as a dependency for your project, and you can’t switch between versions of webpack for different projects (some projects might need more work to upgrade to a later version, so they might have to wait). So, I prefer to install CLI packages locally and either use relative paths or npm scripts to run the package. If you’re not used to installing CLI packages locally, you can read about it in a post I wrote about getting rid of global npm packages.
We’re going to be using npm scripts for our examples anyway, so let’s just forge ahead with installing it locally. First things first: Create a directory for the project where we can experiment and learn about webpack. I have a repository on GitHub that you can clone and whose branches you can switch between to follow along, or you can start a new project from scratch and maybe use my GitHub repository for comparison.
Once you’re inside the project directory via your console of choice, you’ll want to initialize the project with npm init. The information you provide really isn’t that important, though, unless you plan on publishing this project on npm.
Now that you have a package.json file all set up (npm init created it), you can save your dependencies in there. So, let’s use npm to install webpack as a dependency with npm install webpack -D. (-D saves it in package.json as a development dependency; you could also use --save-dev.)
Before we can use webpack, we should have a simple application to use it on. When I say simple, I mean it. First, let’s install Lodash just so that we have a dependency to load into our simple app: npm install lodash -S (-S is the same as --save). Then, we’ll create a directory named src, and in there we’ll create a file named main.js with the following contents:
var map = require('lodash/map');

function square(n) {
  return n*n;
}

console.log(map([1,2,3,4,5,6], square));
Pretty simple, right? We’re just creating a small array with the integers 1 through 6, then using Lodash’s map to create a new array by squaring the numbers from the original array. Finally, we’re outputting the new array to the console. This file can even be run by Node.js, which you can see by running node src/main.js, which should show this output: [ 1, 4, 9, 16, 25, 36 ].
But we want to bundle up this tiny script with the Lodash code that we need and make it ready for browsers, which is where webpack comes in. How do we do that?
The easiest way to get started with using webpack without wasting time on a configuration file is just to run it from the command line. The simplest version of the command for webpack without using a configuration file takes an input file path and an output file path. Webpack will read from that input file, tracing through its dependency tree, combining all of the files together into a single file and outputting the file at the location you’ve specified as the output path. For this example, our input path is src/main.js, and we want to output the bundled file to dist/bundle.js. So, let’s create an npm script to do that (we don’t have webpack installed globally, so we can’t run it directly from the command line). In package.json, edit the "scripts" section to look like the following:
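Here’s a sketch of what that section might look like, using the input and output paths mentioned above (the rest of package.json stays as npm init generated it):

"scripts": {
  "build": "webpack src/main.js dist/bundle.js"
},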
Now, if you run npm run build, webpack should get to work. When it’s done, which shouldn’t take long, there should be a new dist/bundle.js file. Now you can run that file with Node.js (node dist/bundle.js) or run it in the browser with a simple HTML page and see the same result in the console.
Before exploring webpack some more, let’s make our build scripts a little more professional by deleting the dist directory and its contents before rebuilding, and also adding some scripts to execute our bundle. The first thing we need to do is install del-cli so that we can delete directories without upsetting the people who don’t use the same operating system as us (don’t hate me because I use Windows); npm install del-cli -D should do the trick. Then, we’ll update our npm scripts to the following:
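Something along these lines (a sketch; the del-cli arguments mirror the ones used later in this article):

"scripts": {
  "prebuild": "del-cli dist -f",
  "build": "webpack src/main.js dist/bundle.js",
  "execute": "node dist/bundle.js",
  "start": "npm run build -s && npm run execute -s"
},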
We kept "build" the same as before, but now we have "prebuild" to do some cleanup, which will run prior to "build" every time "build" is told to run. We also have "execute", which uses Node.js to execute the bundled script, and we can use "start" to do it all with one command (the -s bit just makes it so that the npm scripts don’t output as much useless stuff to the console). Go ahead and run npm start. You should see webpack’s output, quickly followed by our squared array, show up in your console. Congratulations! You’ve just finished everything in the example1 branch of the repository I mentioned earlier.
As fun as it is to use the webpack command line to get started, once you start using more of webpack’s features, you’re going to want to move away from passing in all of your options via the command line and instead use a configuration file, which will have more capability but which will also be more readable because it’s written in JavaScript.
So, let’s create that configuration file. Create a new file named webpack.config.js in your project’s root directory. This is the file name that webpack will look for by default, but you can pass the --config [filename] option to webpack if you want to name your configuration file something else or to put it in a different directory.
For this tutorial, we’ll just use the standard file name, and for now we’ll try to get it working the same way that we had it working with just the command line. To do that, we need to add the following code to the config file:
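A minimal sketch of that file, mirroring the paths we passed on the command line (it uses Node’s path module to build an absolute output path; treat the details as illustrative):

var path = require('path');

module.exports = {
  entry: './src/main.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js'
  }
};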
We’re specifying the input file and the output file, just like we did with the command line before. This is a JavaScript file, not a JSON file, so we need to export the configuration object — hence, the module.exports. It doesn’t exactly look nicer than specifying these options through the command line yet, but by the end of the article, you’ll be glad to have it all in here.
Now we can remove those options that we were passing to webpack from the scripts in our package.json file. Your scripts should look like this now:
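Roughly like this (only the "build" script changes, because webpack now reads its options from the configuration file):

"scripts": {
  "prebuild": "del-cli dist -f",
  "build": "webpack",
  "execute": "node dist/bundle.js",
  "start": "npm run build -s && npm run execute -s"
},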
We have two primary ways to add to webpack’s capabilities: loaders and plugins. We’ll discuss plugins later. Right now we’ll focus on loaders, which are used to apply transformations or perform operations on files of a given type. You can chain multiple loaders together to handle a single file type. For example, you can specify that files with the .js extension will all be run through ESLint and then will be compiled from ES2015 down to ES5 by Babel. If ESLint comes across a warning, it’ll be outputted to the console, and if it encounters any errors, it’ll prevent webpack from continuing.
For our little application, we won’t be setting up any linting, but we will be setting up Babel to compile our code down to ES5. Of course, we should have some ES2015 code first, right? Let’s convert the code from our main.js file to the following:
import { map } from 'lodash';

console.log(map([1,2,3,4,5,6], n => n*n));
This code is doing essentially the same exact thing, but (1) we’re using an arrow function instead of the named square function, and (2) we’re loading map from 'lodash' using ES2015’s import. This will actually load a larger Lodash file into our bundle because we’re asking for all of Lodash, instead of just asking for the code associated with map by requesting 'lodash/map'. You can change that first line to import map from 'lodash/map' if you prefer, but I switched it to this for a few reasons:
In a large application, you’ll likely be using a pretty large chunk of the Lodash library, so you might as well load all of it.
If you’re using Backbone.js, getting all of the functions you need loaded individually will be very difficult simply because there is no documentation specifying how much of it is needed.
In the next major version of webpack, the developers plan to include something called tree-shaking, which eliminates unused portions of modules. So, this would work the same either way.
I’d like to use it as an example to teach you the bullet points I just mentioned.
(Note: These two ways of loading work with Lodash because the developers have explicitly created it to work that way. Not all libraries are set up to work this way.)
Anyway, now that we have some ES2015, we need to compile it down to ES5 so that we can use it in decrepit browsers (ES2015 support is actually looking pretty good in the latest browsers!). For this, we’ll need Babel and all of the pieces it needs to run with webpack. At a minimum, we’ll need babel-core (Babel’s core functionality, which does most of the work), babel-loader (the webpack loader that interfaces with babel-core) and babel-preset-es2015 (which contains the rules that tell Babel to compile from ES2015 to ES5). We’ll also get babel-plugin-transform-runtime and babel-polyfill, both of which change the way Babel adds polyfills and helper functions to your code base, although each does it a bit differently, so they’re suited to different kinds of projects. Using both of them wouldn’t make much sense, and you might not want to use either of them, but I’m adding both of them here so that no matter which you choose, you’ll see how to do it. If you want to know more about them, you can read the documentation pages for the polyfill and runtime transform.
Anyway, let’s install all of that: npm i -D babel-core babel-loader babel-preset-es2015 babel-plugin-transform-runtime babel-polyfill. And now let’s configure webpack to use it. First, we’ll need a section to add loaders. So, update webpack.config.js to this:
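A sketch of the updated file (the new module section is the part that matters; everything else is unchanged):

var path = require('path');

module.exports = {
  entry: './src/main.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js'
  },
  module: {
    rules: [
      // loader configurations go here
    ]
  }
};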
We’ve added a property named module, and within that is the rules property, which is an array that holds the configuration for each loader you use. This is where we’ll be adding babel-loader. For each loader, we need to set a minimum of these two options: test and loader. test is usually a regular expression that is tested against the absolute path of each file. These regular expressions usually just test for the file’s extension; for example, /\.js$/ tests whether the file name ends with .js. For ours, we’ll be setting this to /\.jsx?$/, which will match .js and .jsx, just in case you want to use React in your app. Now we’ll need to specify loader, which specifies which loaders to use on files that pass the test.
This can be specified by passing in a string with the loaders’ names, separated by an exclamation mark, such as 'babel-loader!eslint-loader'. webpack reads these from right to left, so eslint-loader will be run before babel-loader. If a loader has specific options that you want to specify, you can use query string syntax. For example, to set the fakeoption option to true for Babel, we’d change that previous example to 'babel-loader?fakeoption=true!eslint-loader'. You can also use the use option instead of the loader option, which allows you to pass in an array of loaders if you think that’d be easier to read and maintain. For example, the previous example would be changed to use: ['babel-loader?fakeoption=true', 'eslint-loader'], which can always be changed to multiple lines if you think it would be more readable.
Because Babel is the only loader we’ll be using, this is what our loader configuration looks like so far:
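Something like this (showing just the module section):

module: {
  rules: [
    {
      test: /\.jsx?$/,
      loader: 'babel-loader'
    }
  ]
}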
If you’re using only one loader, as we are, then there is an alternative way to specify options for the loader, rather than using the query strings: by using the options object, which will just be a map of key-value pairs. So, for the fakeoption example, our config would look like this:
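A sketch of the rule with the made-up fakeoption, followed by the options we actually want for Babel (the es2015 preset and the transform-runtime plugin we installed), which the next paragraph explains:

// Illustration only: "fakeoption" is not a real Babel option.
{
  test: /\.jsx?$/,
  loader: 'babel-loader',
  options: {
    fakeoption: true
  }
}

// Our actual Babel configuration:
{
  test: /\.jsx?$/,
  loader: 'babel-loader',
  options: {
    presets: ['es2015'],
    plugins: ['transform-runtime']
  }
}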
We need to set the presets so that all of the ES2015 features will be transformed into ES5, and we’re also setting it up to use the transform-runtime plugin that we installed. As mentioned, this plugin isn’t necessary, but it’s there to show you how to do it. An alternative would be to use the .babelrc file to set these options, but then I wouldn’t be able to show you how to do it in webpack. In general, I would recommend using .babelrc, but we’ll keep the configuration in here for this project.
There’s just one more thing we need to add for this loader. We need to tell Babel not to process files in the node_modules folder, which should speed up the bundling process. We can do this by adding the exclude property to the loader to specify not to do anything to files in that folder. The value for exclude should be a regular expression, so we’ll set it to /node_modules/.
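The complete rule then looks something like this:

{
  test: /\.jsx?$/,
  loader: 'babel-loader',
  exclude: /node_modules/,
  options: {
    presets: ['es2015'],
    plugins: ['transform-runtime']
  }
}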
Alternatively, we could have used the include property and specified that we should only use the src directory, but I think we’ll leave it as it is. With that, you should be able to run npm start again and get working ES5 code for the browser as a result. If you decide that you’d rather use the polyfill instead of the transform-runtime plugin, then you’ll have a change or two to make. First, you can delete the line that contains plugins: ['transform-runtime'], (you can also uninstall the plugin via npm if you’re not going to use it). Then, you need to edit the entry section of the webpack configuration so that it looks like this:
entry: [
  'babel-polyfill',
  './src/main.js'
],
Instead of using a string to specify a single entry point, we use an array to specify multiple entry files, the new one being the polyfill. We specify the polyfill first so that it’ll show up in the bundled file first, which is necessary to ensure that the polyfills exist before we try to use them in our code.
Instead of using webpack’s configuration, we could have added a line at the top of src/main.js, import 'babel-polyfill';, which would accomplish the exact same thing in this case. We used the webpack entry configuration instead because we’ll need it to be there for our last example, and because it’s a good example of how to combine multiple entries into a single bundle. Anyway, that’s it for the example3 branch of the repository. Once again, you can run npm start to verify that it’s working.
Let’s add another loader in there: Handlebars. The Handlebars loader will compile a Handlebars template into a function, which is what will be imported into the JavaScript when you import a Handlebars file. This is the sort of thing that I love about loaders: you can import non-JavaScript files, and when it’s all bundled, what is imported will be something useable by JavaScript. Another example would be to use a loader that allows you to import an image file and that transforms the image into a base64-encoded URL string that can be used in the JavaScript to add an image inline to the page. If you chain multiple loaders, one of the loaders might even optimize the image to be a smaller file size.
As usual, the first thing we need to do is install the loader with npm install -D handlebars-loader. If you try to use it, though, you’ll find that it also needs Handlebars itself: npm install -D handlebars. This is so that you have control over which version of Handlebars to use without needing to sync your version with the loader version. They can evolve independently.
Now that we have both of these installed, we need a Handlebars template to use. Create a file named numberlist.hbs in the src directory with the following contents:
<ul>
  {{#each numbers as |number i|}}
    <li>{{number}}</li>
  {{/each}}
</ul>
This template expects an array (of numbers judging by the variable names, but it should work even if they aren’t numbers) and creates an unordered list with the contents.
Now, let’s adjust our JavaScript file to use that template to output a list created from the template, rather than just logging out the array itself. Your main.js file should now look like this:
import { map } from 'lodash';
import template from './numberlist.hbs';

let numbers = map([1,2,3,4,5,6], n => n*n);

console.log(template({numbers}));
Sadly, this won’t work right now because webpack doesn’t know how to import numberlist.hbs, because it’s not JavaScript. If we want to, we could add a bit to the import statement that informs webpack to use the Handlebars loader:
import { map } from 'lodash';
import template from 'handlebars-loader!./numberlist.hbs';

let numbers = map([1,2,3,4,5,6], n => n*n);

console.log(template({numbers}));
By prefixing the path with the name of a loader and separating the loader’s name from the file path with an exclamation point, we tell webpack to use that loader for that file. With this, we don’t have to add anything to the configuration file. However, in a large project, you’ll likely be loading in several templates, so it would make more sense to tell webpack in the configuration file that we should use Handlebars so that we don’t need to add handlebars! to the path for every single import of a template. Let’s update the configuration:
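A sketch of the updated module section (only the second rule is new):

module: {
  rules: [
    {
      test: /\.jsx?$/,
      loader: 'babel-loader',
      exclude: /node_modules/,
      options: {
        presets: ['es2015'],
        plugins: ['transform-runtime']
      }
    },
    {
      test: /\.hbs$/,
      loader: 'handlebars-loader'
    }
  ]
}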
This one was simple. All we needed to do was specify that we wanted handlebars-loader to handle all files with the .hbs extension. That’s it! We’re done with Handlebars and the example4 branch of the repository. Now when you run npm start, you’ll see the webpack bundling output, as well as this:
Plugins are the way, other than loaders, to install custom functionality into webpack. You have much more freedom to add them to the webpack workflow because they aren’t limited to being used only while loading specific file types; they can be injected practically anywhere and are, therefore, able to do much more. It’s hard to give an impression of how much plugins can do, so I’ll just send you to the list of npm packages that have “webpack-plugin” in the name, which should be a pretty good representation.
We’ll only be touching on two plugins for this tutorial (one of which we’ll see later). We’ve already gone quite long with this post, so why be excessive with even more plugin examples, right? The first plugin we’ll use is HTML Webpack Plugin, which simply generates an HTML file for us — we can finally start using the web!
Before using the plugin, let’s update our scripts so that we can run a simple web server to test our application. First, we need to install a server: npm i -D http-server. Then, we’ll change our execute script to the server script and update the start script accordingly:
… "scripts": { "prebuild": "del-cli dist -f", "build": "webpack", "server": "http-server ./dist", "start": "npm run build -s && npm run server -s" }, …
After the webpack build is done, npm start will also start up a web server, and you can navigate to localhost:8080 to view your page. Of course, we still need to create that page with the plugin, so let’s move on to that. First, we need to install the plugin: npm i -D html-webpack-plugin.
When that’s done, we need to hop into webpack.config.js and make it look like this:
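A sketch of how the file might look now (the two additions are the require at the top and the plugins array at the bottom; the rest is unchanged):

var path = require('path');
var HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  entry: [
    'babel-polyfill',
    './src/main.js'
  ],
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js'
  },
  module: {
    rules: [ /* the loader rules from before stay the same */ ]
  },
  plugins: [
    new HtmlWebpackPlugin()
  ]
};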
The two changes we made were to import the newly installed plugin at the top of the file and then add a plugins section at the end of the configuration object, where we passed in a new instance of our plugin.
At this point, we aren’t passing in any options to the plugin, so it’s using its standard template, which doesn’t include much, but it does include our bundled script. If you run npm start and then visit the URL in the browser, you’ll see a blank page, but you should see that HTML being outputted to the console if you open your developer’s tools.
We should probably have our own template and get that HTML spit out onto the page rather than into the console, so that a “normal” person could actually get something from this page. First, let’s make our template by creating an index.html file in the src directory. By default, the plugin will use EJS for the templating; however, you can configure it to use any template language available to webpack. We’ll use the default EJS because it doesn’t make much difference. Here are the contents of that file:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title><%= htmlWebpackPlugin.options.title %></title>
</head>
<body>
  <h2>This is my Index.html Template</h2>
  <div id="app-container"></div>
</body>
</html>
You’ll notice a few things:
We’re using an option passed to the plugin to define the title (just because we can).
There’s nothing to specify where the scripts should be added. This is because the plugin will add the scripts to the end of the body tag by default.
There’s a random div with an id in there. We’ll be using this now.
We now have the template we want; so, at the very least, we won’t have a blank page. Let’s update main.js so that it appends that HTML to that div, instead of putting it into the console. To do this, just update the last line of main.js to document.getElementById("app-container").innerHTML = template({numbers});.
We also need to update our webpack configuration to pass in a couple options to the plugin. Your config file should now look like this:
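Showing just the plugins section, with an illustrative title string:

plugins: [
  new HtmlWebpackPlugin({
    template: './src/index.html',
    title: 'Intro to webpack' // any title you like; it ends up in the <title> tag
  })
]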
The template option specifies where to find our template, and the title option is passed into the template. Now, if you run npm start, you should see the following in your browser:
That brings us to the end of the example5 branch of the repository, in case you’re following along in there. Each plugin will likely have very different options and configurations of their own, because there are so many of them and they can do a wide variety of things, but in the end, they’re practically all added to the plugins array in webpack.config.js. There are also many other ways to handle how the HTML page is generated and populated with file names, which can be handy once you start adding cache-busting hashes to the end of the bundle file names.
If you look at the example project’s repository, you’ll see an example6 branch where I added JavaScript minification via a plugin, but that isn’t necessary unless you want to make some changes to the configuration of UglifyJS. If you don’t like the default settings of UglifyJS, check out the repository (you should only need to look at webpack.config.js) to figure out how to use the plugin and configure it. But if you’re good with the default settings, then all you need to do is pass the -p argument when you run webpack on the command line. That argument is the “production” shortcut, which is equivalent to using --optimize-minimize and --optimize-occurence-order arguments, the first of which minifies the JavaScript and the second of which optimizes the order in which the modules are included in the bundled script, making for a slightly smaller file size and slightly faster execution. The repository has been done for a while, and I learned about the -p option later, so I decided to keep the plugin example for UglifyJS in there, while informing you of an easier way. Another shortcut you can use is -d, which will show more debugging information from the webpack output, and which will generate source maps without any extra configuration. You can use plenty more command line shortcuts if that’s easier for you.
One thing that I really enjoyed with RequireJS and couldn’t quite get to work with Browserify (though it may be possible) is lazy-loading modules. One massive JavaScript file will help by limiting the number of HTTP requests required, but it practically guarantees that code will be downloaded that won’t necessarily be used by the visitor in that session.
Webpack has a way of splitting a bundle into chunks that can be lazy-loaded, and it doesn’t even require any configuration. All you need to do is write your code in one of two ways, and webpack will handle the rest. Webpack gives you two methods to do this, one based on CommonJS and the other based on AMD. To lazy-load a module using CommonJS, you’d write something like this:
require.ensure(["module-a", "module-b"], function(require) {
  var a = require("module-a");
  var b = require("module-b");
  // …
});
Use require.ensure, which will make sure the module is available (but not execute it) and pass in an array of module names and then a callback. To actually use the module within that callback, you’ll need to require it explicitly in there using the argument passed to your callback.
Personally, this feels verbose to me, so let’s look at the AMD version:
require(["module-a", "module-b"], function(a, b) { // … });
With AMD, you use require, pass in an array of module dependencies, then pass a callback. The arguments for the callback are references to each of the dependencies in the same order that they appear in the array.
Webpack 2 also supports System.import, which uses promises rather than callbacks. I think this will be a useful improvement, although wrapping this in a promise shouldn’t be hard if you really want them now. Note, however, that System.import is already deprecated in favor of the newer specification for import(). The caveat here, though, is that Babel (and TypeScript) will throw syntax errors if you use it. You can use babel-plugin-dynamic-import-webpack, but that will convert it to require.ensure rather than just helping Babel see the new import function as legal and leave it alone so webpack can handle it. I don’t see AMD or require.ensure going away any time soon, and System.import will be supported until version 3, which should be decently far in the future, so just use whichever one you fancy the best.
Let’s augment our code to wait for a couple seconds, then lazy-load in the Handlebars template and output the list to the screen. To do that, we’ll remove the import of the template near the top and wrap the last line in a setTimeout and an AMD version of require for the template:
import { map } from 'lodash';

let numbers = map([1,2,3,4,5,6], n => n*n);

setTimeout( () => {
  require(['./numberlist.hbs'], template => {
    document.getElementById("app-container").innerHTML = template({numbers});
  })
}, 2000);
Now, if you run npm start, you’ll see that another asset is generated, which should be named 1.bundle.js. If you open up the page in your browser and open your development tools to watch the network traffic, you’ll see that after a 2-second delay, the new file is finally loaded and executed. This, my friend, isn’t all that difficult to implement but it can be huge for saving on file size and can make the user’s experience so much better.
Note that these sub-bundles, or chunks, contain all of their dependencies, except for the ones that are included in each of their parent chunks. (You can have multiple entries that each lazy-load this chunk and that, therefore, have different dependencies loaded into each parent.)
Let’s talk about one more optimization that can be made: vendor chunks. You can define a separate bundle to be built that will store “common” or third-party code that is unlikely to change. This allows visitors to cache your libraries in a separate file from your application code, so that the libraries won’t need to be downloaded again when you update the application.
To do this, we’ll use a plugin that comes with webpack, called CommonsChunkPlugin. Because it’s included, we don’t need to install anything; all we need to do is make some edits to webpack.config.js:
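A sketch of how the configuration might end up (the exact file in the repository may differ slightly, but the relevant pieces are the import on line 3, the object-literal entry and the new plugin at the end):

var path = require('path');
var HtmlWebpackPlugin = require('html-webpack-plugin');
var CommonsChunkPlugin = require('webpack').optimize.CommonsChunkPlugin;

module.exports = {
  entry: {
    vendor: ['babel-polyfill', 'lodash'],
    main: './src/main.js'
  },
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js'
  },
  module: {
    rules: [ /* the loader rules from before stay the same */ ]
  },
  plugins: [
    new HtmlWebpackPlugin({
      template: './src/index.html',
      title: 'Intro to webpack'
    }),
    new CommonsChunkPlugin({
      name: 'vendor',
      filename: 'vendor.bundle.js'
    })
  ]
};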
Line 3 is where we import the plugin. Then, in the entry section, we use a different setup, an object literal, to specify multiple entry points. The vendor entry marks what will be included in the vendor chunk — which includes the polyfill as well as Lodash — and we put our main entry file into the main entry. Then, we simply need to add the CommonsChunkPlugin to the plugins section, specifying the “vendor” chunk as the chunk to base it on and specifying that the vendor code will be stored in a file named vendor.bundle.js.
By specifying the “vendor” chunk, this plugin will pull all of the dependencies specified by that chunk out of the other entry files and only place them in this vendor chunk. If you do not specify a chunk name here, it’ll create a separate file based on the dependencies that are shared between the entries.
When you run webpack, you should see three JavaScript files now: bundle.js, 1.bundle.js and vendor.bundle.js. You can run npm start and view the result in the browser if you’d like. It seems that webpack will even put the majority of its own code for handling the loading of different modules into the vendor chunk, which is definitely useful.
And that concludes the example8 branch, as well as the tutorial. I have touched on quite a bit, but it only gives you a tiny taste of what is possible with webpack. Webpack enables easy CSS modules, cache-busting hashes, image optimization and much, much more — so much that even if I wrote a massive book on the subject, I couldn’t show you everything, and by the time I finished writing that book, most (if not all) of it would be outdated! So, give webpack a try today, and let me know if it improves your workflow. God bless and happy coding!
Front page image credit: webpack (official site)
The mobile app market is growing faster than a beanstalk. The industry is huge and growing daily, and there is no end in sight. Expectedly, the mobile developer population has boomed, and the number of mobile apps in the market has hit new heights. The revenue generated by the global mobile app industry has skyrocketed.
Hybrid monetization models, such as in-app ads and in-app purchases, are quickly gaining popularity in the business world. Most studies show that in-app advertising is set to be a key driver of mobile growth over the coming years (see Statista’s, IHS Markit’s and Forbes’s reports).
In this article, we’ll shed some light on the following questions:
On average, how much revenue does a mobile app generate?
Is the average revenue truly growing?
What are the biggest challenges facing the mobile app industry today?
What are the most popular monetization models in the market today? Which ones will be driving growth tomorrow? Which models have outlived their time?
I’ll try to present comprehensive answers, backed by statistical reports and expert opinion.
The Mobile App Market Still Has A Lot Of Room To Grow
App Annie reports that in 2015, the mobile app industry generated a whopping $41.1 billion in gross annual revenue and that this figure will rise to $50.9 billion. Gross annual revenue is projected to exceed $189 billion by 2020, according to Statista. Though the figures differ somewhat between researchers, the overall picture is that the market is far from saturated. App Annie’s predictions corroborate reports from Forrester that only 46% of the world’s population will own smartphones by the end of 2016. This goes to show that the much-discussed mobile revolution is just starting.
According to another Forrester statistic, there is a huge gap between leading companies that regard mobile devices as a catalyst to transforming their business and companies that consider mobile devices to be just another development channel. As of early 2016, only 18% of companies surveyed were in the first category. This number is expected to pass 25% by next year.
Consumers are evolving more rapidly than businesses. Today, the mobile Internet has clearly become a necessity for many users.
As for app popularity, aggregator apps are likely to move to the forefront. These are tools that pull content from multiple online sources and compile it into one easy-to-follow interface. The content could range from breaking news to niche subjects of interest. Aggregators are meant for those who don’t have the time or desire to visit numerous websites or install numerous apps. Some popular aggregator apps are Flipboard, News360, Feedly and IFTTT.
Aggregator apps tend to become user favorites when they are convenient or enhance the shopping experience. For instance, Facebook has done this with its Messenger app, which lets users read their feeds and order Uber rides.
Rich And Poor Platforms
Two mobile platform giants, Android and iOS, dominate the global smartphone market. A study by Gartner found that by Q3 of 2016, 87.8% of smartphones sold worldwide were Android. This figure is 3.1% higher than a year ago. iOS’ market share is 11.5% — 2.5% less than in 2015. Though this growth is negligible next to Android’s already huge market share, it further weakened the positions of the other market players. Windows, which accounted for 0.4% of all smartphones sold, came third in the mobile platform race, with its share decreasing 2.5% over the year.
Apple and Google have the largest and most popular app stores. For now, it seems as if no other competitor could dream of catching up to their variety of applications and number of developers.
InMobi estimates that 55% of app developers make less than $1,000. Moreover, a third of app developers worldwide haven’t managed to reach 10,000 total downloads of their products. Income stratification is more pronounced among Android developers, whereas income distribution is more balanced among iOS developers.
Since 2016, over 25% of iOS developers have generated over $5,000 in monthly revenue. Only 16% of Android developers have achieved a similar feat.
There are interesting statistics on the average monthly revenue earned by apps on each mobile operating system. Forbes estimates that an iOS app earned an average of $4,000 per month, pushing Android to second place with $1,125. An outlier, Windows Phone came third with just $625.
However, this situation changed dramatically in 2016. According to Statista, a Windows Phone app now fetches $11,400 on average per month, whereas an iOS app generates $8,100. An Android app makes $4,900 in average monthly revenue. Even so, about 75% of developers favor Android the most; they plan to boost their revenues by making Android-based products.
The Mobile App Spectrum
High-performance mobile CPUs, powerful graphics, quality displays and fast Internet connections have turned smartphones into gaming devices. According to reports by App Annie, mobile games, which accounted for less than 50% of total mobile app revenue in 2011, generated 85% of mobile app market revenue in 2015. This figure represents a total of $34.8 billion worldwide.
There has been a sharp increase in the time spent by users in different app categories. Non-gaming apps have overtaken games in the rapid rise of app usage. By late 2015, the mobile app market, according to the Flurry Analytics Blog, recorded the following new heights in app usage:
Customization apps, such as launchers, icons, wallpapers, and lock-screen and other device-customization apps, topped the list, with a staggering 332% rise in session usage.
Mobile versions of newspapers and magazines came in second, with a huge growth of 135%.
Productivity tools and apps came in third in the usage list, with a 125% rise.
Lifestyle and shopping solutions recorded an 81% growth and were ranked fourth.
Travel, sport, health and fitness utilities, along with messengers and social apps, grew by 53% to 54%.
Games turned out to be the only outlier, with a 1% decline in users’ time.
Monetization Models
Currently, there are six popular app monetization models.
If a paid app has not yet been purchased, then only screenshots, a description and a video will be available for preview. These items are meant to convince people to buy the app and show that they will get exactly what they see. However, this type of model makes it difficult for the user to make up their mind, which may have contributed to the disappointing statistic that no paid app is among the list of applications that have generated the highest revenues. Paid apps are the only ones rapidly losing the popularity battle, although they are still proving their worth in some cases.
Minecraft Pocket Edition is the top paid app in the Google Play Store. Officially released in 2011, Minecraft is a sandbox video game, and it goes for $6.99 per download.
Under the freemium model (a combination of “free” and “premium”), users get basic features at no cost and can access richer functionality for a one-time or subscription fee. Typically, the number of people willing to pay is relatively low. Consequently, apps using this model are focused on securing the highest possible number of downloads.
Heavily criticized for its potentially exploitative mechanisms, the freemium model performs remarkably when used thoughtfully. A perfect example is Clash of Clans.
The subscription model is similar to freemium. The difference is that users pay to access all of the content — not only certain features. This model generates predictable, long-term revenue flow for the owner by keeping customer loyalty high.
An excellent example of a subscription app is Lumosity, which has over 50 different exercises designed by cognitive psychologists to train the brain. Lumosity offers both a monthly ($11.99 per month) and a yearly ($59.99 per year) subscription. With an average rating of five stars, coming from over 93,000 users, Lumosity is a phenomenal success in the subscription app sector.
Monetization through in-app purchases is especially common in mobile games, as well as various product catalog apps, which charge a service fee for every item sold. This model is so flexible that some games go too far in encouraging users to make purchases. As of February 2016, about 1.9% of mobile gamers made in-app purchases, as reported by Tech Times, and this number is rising steadily.
An example is MeetMe, a social app from which users can also buy certain goods and services. In MeetMe, you can pay a certain amount to increase your profile views. The developers generate decent income thanks to a clear sales model.
Crowdfunding is a relatively young monetization model. Developers present the idea for the app they want to develop on a popular online platform, such as Kickstarter or Indiegogo, requesting donations. Some interesting projects attract funding several times higher than the amount originally requested, whereas mediocre ones do not get anywhere near the desired amount.
Tech startup Shadow is an ideal example. The project achieved impressive crowdfunding success, with the developers generating $82,577 from 3,784 backers. Shadow approached this app-based crowdfunding challenge for its sleep- and dream-tracking software by adding a level of exclusivity to the rewards and project.
When delivered with the exclusive Shadow membership card, the app would have been free, but when the price rose to $8, it still fetched about $20,000 from this tier alone.
Sponsorship is a rather new monetization model. Advertisers sponsor users, rewarding them for completing certain actions in the app, and a share of each reward goes to the developers. The model is still in its infancy, and the marketing strategy behind it needs to be polished.
RunKeeper, with a community of over 45 million users, is a great example of the sponsorship business model. It rewards users for covering a certain distance by running or riding a bicycle. Advertisers then pay the users. Credit here goes to the developer for not including any annoying ads.
Monetization Through Advertising
This is the most popular monetization model and needs to be examined more closely. The reason behind its popularity is obvious: Users like to download free apps, and the higher the number of downloads, the greater the developer’s revenue. A report released by IHS Markit shows that, by 2020, in-app advertising will attract $53.4 billion in total revenue per year. That translates to almost 63% of mobile display advertising revenue.
As in other sectors, a few major developers of advertising-based apps are generating the bulk of the revenue. All other developers are forced to settle for leftovers. Klick Health reports that the indisputable leader is Facebook, with 44.3% of all mobile ads shown being in Facebook apps. Others in the ranking are, in order, Alibaba, Google, Tencent, Twitter, Pandora and Yahoo.
Companies that generate the highest advertising revenue often end up becoming major advertising sponsors themselves. This trend is especially prominent in mobile games. The largest vendors, such as Rovio, Gameloft and Disney, enlist hundreds of small indie studios and advertise their own game products in those studios’ less popular games.
The Bottom Line
As you can see, the explosive growth in the mobile app market isn’t stopping anytime soon. Despite increasingly strong competition in the industry, developers are applying new monetization methods and creating more interesting and useful solutions for users. At least two new monetization models have been shown to be very effective, gaining popularity in the last couple of years.
None of the models covered above could be described as inefficient. Rather, developers and publishers have gained skill in deploying these models in particular cases. For instance, the subscription model works only for certain niches but is the most profitable of all. At the same time, the freemium model, much criticized for being potentially unscrupulous, shows remarkable results when used thoughtfully, Clash of Clans being a perfect example. Paid apps are the only ones rapidly falling out of favor, although they are still proving their worth in some cases.
Hybrid monetization models, such as in-app ads and in-app purchases, are clearly gaining popularity in the business world. Most studies show that in-app advertising is set to be a key driver of mobile growth over the coming years.
Today, iOS and Android are the leading mobile operating systems, and tech giants Apple and Google own the biggest mobile app stores.
Time will tell how the mobile app market will develop. Market trends show that the market will continue to generate higher and higher revenue in the foreseeable future. So, it is quite clear that the much-discussed mobile app revolution is just beginning.
As web developers, we need to rely on our knowledge, and choosing solutions we’re already familiar with is often the most convenient approach to solving a problem. However, it’s not only technology that is evolving but also our knowledge of how to use it.
For a while, we thought it was best to use base64 encoding for inlining assets into CSS files, for example, and that loading JavaScript asynchronously would make websites faster. With more evidence and research, however, we came to realize that we were wrong. We should take this as an occasion to remind ourselves to question our habits and from now on ask ourselves if the solution we have in mind for a given problem really is still the best one we could choose.
Jack Franklin explains context in ReactJS applications13 — something that’s discussed a lot in the community and often defined in many different ways. Jack explains why it exists, when to make use of it, and how you can avoid using it.
With all the talk about CSS in JavaScript and markup in JavaScript, there have been quite a few discussions lately. However, these topics were mostly discussed in the context of React applications. Now there’s a holistic approach to keeping everything in one place for vue.js15: your markup, your logic, your styles. An interesting concept that could be very useful for vue.js web applications.
So how can you be sure to not leak data from your phone or computer to authorities when you cross the border20? According to The Verge, the only reliable way is to delete it upfront. An interesting look at the options you have and why it can indeed make sense to delete your data on the phone and restore it later from the Cloud, if you want to avoid any hassle.
Have you ever dreamt of exploring the deep sea and getting up close to its fascinating, weird creatures? Or maybe you’ve dreamt of boarding a spacecraft to experience the beauty of our planet from above? The desire to leave the beaten tracks and explore unfamiliar terrain is human nature.
To celebrate mankind’s urge to explore, the creative folks at Vexels163 created a set of 30 adventurous icons that take you on a journey from the bottom of the sea right up to outer space. The set offers all the building blocks you’ll need to create your own little universe and become an explorer yourself: cute jellyfish, strange deep-sea fellows, a submarine, trees, a helicopter, hot air balloons, satellites, planets, meteors, and much more. Nature and technology beautifully united — I’m sure you’ll agree.
Please note that the set is released under a Creative Commons Attribution 3.0 Unported8 license. This means that you may modify the size, color and shape of the icons (more details in the readme.txt file). Attribution is required, so if you would like to spread the word in blog posts or anywhere else, please do remember to credit the designers and provide a link to this article.
“Nature is always inspiring. Whether it’s space, with its stars and Sun, or our home planet with its waterfalls and mountains, our design team is always motivated by what surrounds us. And that includes technology. From planes to submarines, we can explore our world with their help. We wanted to create a freebie that showed it so we made this elements set, called From Space to Earth.
It contains elements and icons with everything from rockets and asteroids to planes, fishes and even some dinosaur bones. We hope you get inspired to create some art of your own. Grab everything you need from this freebie and start creating!”
There’s a technique for improving one’s user interface design skills that is the most efficient way I know of expanding one’s visual vocabulary, yet I’ve rarely heard it mentioned by digital designers.
What’s going on here?
I’m talking about copywork. Copywork is a technique that writers and painters have been using for centuries. It is the process of recreating an existing work as closely as possible in order to improve one’s skill. In our case, this means recreating a user interface (UI) design pixel for pixel.
It’s not as pointless as it sounds, I promise. The trick is to pick a design that is better than what you are currently capable of. By copying something outside of your wheelhouse, you will be expanding your skills.
So, if you want to improve your use of color, copy something with some crazy gradients or a bold palette. If you want to get better at luxury branding, copy a preeminent website with a ritzy look and feel.
Obviously, this technique is not rocket science. Actually, it would be hard to think of a more mundane exercise. But it is the most effective way I know to improve my UI design skills.
I first heard about copywork on the blog The Art of Manliness4, where Brett McKay gives a long history of those who’ve used copywork to develop their writing skill.
Jack London copied swaths of Rudyard Kipling’s writing to adapt his forebear’s world-class cadence and phrasing.
Robert Louis Stevenson would meticulously study sections of writing he found particularly beautiful, then reproduce them word for word from memory.
Benjamin Franklin followed a variant of copywork, writing notes about each sentence in an essay and then, a few days later, trying to recreate the essay by reading his notes — and comparing the results.
The list goes on. I know that Raymond Chandler, the famous mystery writer, used a technique similar to Benjamin Franklin’s, rewriting a novelette from a detailed description, and then comparing his with the original to study the flow.
He actually wrote to the original author later in life, telling him how instructive the exercise was. Pay attention to his analysis:
I found out that the trickiest part of your technique was the ability to put over situations, which verged on the implausible but which in the reading seemed quite real. I hope you understand I mean this as a compliment. I have never even come near to doing it myself. Dumas had this quality in very strong degree. Also Dickens. It’s probably the fundamental of all rapid work, because naturally rapid work has a large measure of improvisation, and to make an improvised scene seem inevitable is quite a trick.
This is not a rote exercise. Chandler is extremely thoughtful about the differences between his work and the original and is well versed in the subtleties of style of many authors. Can you speak this articulately about UI design? If not, may I recommend copywork?
Just as a writer copying the greats before him unconsciously absorbs the tiniest choices those authors made — the word choice, phrasing, cadence and so on — a designer doing copywork also absorbs the subtlest choices in the designs they study — the spacing, layout, fonts, decorative elements. Therein lies its power.
Let’s take a quick look at copywork in one other art form, though, one with a remarkably long history.
If you’ve wandered through an art museum, you’ve probably seen copywork in action. Apart from my own desk, it’s the only place I’ve seen it.
Painters have an even longer history than writers of copying the masters. Leonardo da Vinci developed his art (one of his arts, anyhow) by copying his teacher, Andrea Del Verrocchio — a common practice among Renaissance apprentice artists. Da Vinci actually prescribed copywork as practice numero uno for art students:
The artist ought first to exercise his hand by copying drawings from the hand of a good master.
Why? Because copying directly from a master provides a controlled setting in which to train your eye.
When you’re painting a live scene, on the other hand, there’s a lot to worry about — the model will move, the wind will pick up, the sun will set. Until your brain can think naturally in shape and color, painting in the real world will be tough. But in the studio, you can take all the time you need to absorb the basics.
While UI designers do not model anything after a natural scene in the same way as painters, copywork provides a useful way to eliminate variables and distractions while honing your skill.
But although it was once a foundational exercise of some of the world’s greatest artists, copywork has fallen out of favor. Nowadays, it’s viewed as rote, uncreative and reeking of plagiarism.
The gist is this: When you recreate a design, pixel for pixel, you’re forced to remake every decision the original designer made. Which font? How big? How are things laid out? Which images and background and decorations? You immerse yourself in the small design decisions made by awesome designers.
You might argue that you’d be missing out on all of the choices the designer considered, and the rationale for why they picked what they did. Fair enough — but that’s missing the point. Done right, copywork exposes you to design decisions you simply wouldn’t have made on your own.
Let’s take an example. One of the most vocabulary-expanding pieces I’ve copied is Dann Petty’s wonderful Epicurrence10 website. I internalized three things from the header alone:
Insanely large font size
My copy of the original included the Hawaii initials “HI” in size 365 font. Never in my years of professional work had I even considered making text that big. Yet he uses it as a visual element, aligning it with the other header elements, even putting an image between the letters. Very cool.
Paint stroke as “shadow”
A watercolor smudge runs across the bottom of the seal, the header and the pineapple. It’s in the spot where a shadow might be, as if the shadow were painted on the page. Whoa — that’s not the usual way of doing it!
Uppercase type with generous letter-spacing
No doubt, that uppercase text adds a strong element of alignment, and pumping up the letter-spacing is a textbook way to add some classiness to type, but I find myself getting self-conscious about doing it much. It was cool to see that all of the text here is capitalized, and basically all of it has modified letter-spacing, too.
Now, I’d seen Dann Petty’s design before deciding to copy it. I thought, “Wow, this looks great.” And even as my eyes glossed over the design, it’s not like I immediately internalized every technique he used. Only when I copied it did I start to consciously adopt those things in my UI toolkit.
Here’s another example, the Skedio icon set14 by master icon designer Vic Bell. (Her originals are in blue, my copywork in red.)
This was a fascinating exercise for me, particularly because Vic’s icons are a step or two more detailed than most of what I make for the apps I work on. She added this complexity in two subtle ways:
A second, lighter shade of blue
Compare the fill color of the “Settings” icon (row 2, icon 1) to the outline color. Most icons I’ve designed are one color only.
A second, lighter line width
You can see it in the “text” of the “Tags” icon (row 1, icon 2) and in the arrow on the “Upgrades” icon (row 1, icon 3). I’ve lived by the rule that consistency is paramount in icon design, so seeing Vic’s use of 3-pixel accent lines in a primarily 4-pixel line set was fascinating.
But the strength of copywork is not just in seeing these details at a superficial level, but also in becoming intimately familiar with how they are used across the design.
Let’s take the idea of the second, lighter shade. It’s one thing to decide to use a second shade as an accent color. Here are four ways Vic has used the lighter shade in this icon set:
As a shadow
The trash can lid of the “Delete” icon (row 2, icon 3) has this secondary blue in its shadow. You can see a similar but even subtler shadow beneath the medallion in the “Upgrades” icon (row 1, icon 3).
As a gleam of light
The lighter shade is used as a reflection of light in the magnifying glass of the “Search” icon (row 3, icon 5).
For color contrast
Vic uses white and light blue as two contrasting colors in the life-preserver ring of the “Help and feedback” icon (row 1, icon 4). Same story with the pencil in the “Rename” icon below it (row 2, icon 4).
For visual weight contrast
This one was the subtlest for me. Notice how the background cards — but not the foreground card — of the “All sketches” icon (row 1, icon 1) and the “Layers” icon (row 3, icon 5) are light blue. The foreground card in both is white, giving it more contrast with the rest of the icon. If the background cards had white fills, then the sharp contrast between their borders and fills would have distracted the eye — as it is, the eye is directed straight to the front card.
These strategies are more detailed than anything a class on icons would get into. They’re one-off tips and techniques that go straight from the mind of a master to yours, if you’re willing to put the effort into doing copywork.
All right, let’s cover one more example here.
I saw Taylor Perrin’s Día de los Muertos design not long ago, and it blew me away. He does a fantastic job of using elements that I struggle with, such as beautiful photography, rich textures and panoramic layouts.
A lot of this is due to what I spend my time designing — mostly information apps for businesses. The look is clean and simple, the branding staid.
I’ve been a huge proponent of Sketch since day one, and I even teach a UI course based on Sketch19, but there are downsides to its simplicity. Namely, since switching from Photoshop to Sketch, my design reflexes have tended heavily toward everything being flat rectangles. In this exercise, I textured almost every background on the whole page, and it was a great reminder that a pass in Photoshop during the design process allows me much more versatility in style than simple colored rectangles.
Making decent assets shine
One of the first assets I had to track down and restyle was the illustration of the skeleton playing the horn. When I found it online20, I was underwhelmed. In fact, if I had been designing the original mockup and found that illustration, I probably would have passed it up. Yet it looked great in Perrin’s mockup. Comparing the original image to his mockup was a lesson in all of the cleanup work you can do with mediocre assets, and in envisioning their use beyond their immediate appearance.
Full-width layouts
Although it’s become popular to have panoramic layouts span the page’s width no matter how wide the screen gets, I’ve done very little of this myself. This exercise was a great confidence-booster for this type of layout. As I copied, I would try to guess certain details (such as the font size of a particular piece of copy) and see what felt right; then, I checked and adjusted. The next time I make a layout 1400 pixels wide, I won’t feel odd about headings being 60 pixels.
So, am I as good as Dann Petty or Vic Bell or Taylor Perrin now? I’m afraid that will take some time — they’re among the best UI designers in the game. But you better believe that every day I am pushing my set of skills and techniques to incorporate their know-how.
Copy something that pushes one of your skills beyond its current level. Copy something that exhibits a technique you’ve not honed.
For ideas, I love to browse Dribbble. I keep a Dribbble bucket of copywork images22 — things I’ve found from other designers that I want to copy (or already have).
I can also recommend a few designers who have carved out their own style and offer a lot to those who are still learning the basics:
Jonathan Quentin23 uses color and gradients really well. Check out his work to up your color chops.
Brijan Powell24 is the boss at making designs that are classy and upscale yet thoroughly masculine.
Tubik Studio25 is fantastic at everything bright, colorful and cheery.
Though I use Dribbble for copywork (it’s a great way to quickly browse and save high-quality images), live websites work great, too. For instance, Blu Homes26 is next on my list!
If you’re a professional designer, here are a few more ideas:
I sometimes copy my clients’ websites and apps if they want more UI work done in that style.
I sometimes copy my potential clients’ websites and apps, so that from the very first conversation, I have a deeper knowledge of their visual design language.
For instance, I did some design work for Soylent27, the meal-replacement company. As soon as they reached out to me, I put it on my to-do list to reproduce their home page. I wanted to be able to talk fluently about how they used color, typography and imagery.
I think copywork is subject to diminishing returns — so, no, you don’t have to copy perfectly. But (and this is important) you can’t copy it worse than the original. You have to achieve something that you view as equal or better, even if the details don’t totally line up.
You won’t always have access to the same fonts and resources, so slight differences are par for the course. I’ve found Identifont30 and WhatTheFont31 to be great resources for finding which fonts are used in images.
Do You Copy Multiple Images in Series to Improve a Certain Skill?
Copying five designs that demonstrate excellence in, say, typography is a great way to get better at that one skill. However, I much prefer to bounce around to whatever catches my eye. It’s a good counterbalance to what I’m working on at the time, and it keeps things interesting.
I copy in Sketch. I find that CSS is a poor medium for visual thinking, and copywork is a visual exercise. Writing CSS, I get bogged down thinking about the proper way to position each little element — which is exactly the kind of thing to worry about if you want to be a good CSS coder, but a terrible distraction if you’re trying to improve your visual design skill.
No, but here’s some great advice from designer Sean McCabe about avoiding plagiarism:
Soak up all the inspiration you want.
Sleep on it.
Produce your new work from memory.
Incidentally, copying from memory, rather than from the original right in front of you, is a variant of copywork that makes you far less prone to exact reproduction. However, I’d recommend this exercise for more advanced copiers. Working blind, you’ll be making both the low-level and high-level decisions without guidance. If you’re trying to reproduce something above your level, there’s plenty to be learned from direct copying.
There is remarkable consensus among artists and creative folk that creativity is fundamentally about mixing together what already exists. Nothing is entirely original.
Immature poets imitate; mature poets steal; bad poets deface what they take, and good poets make it into something better, or at least something different. The good poet welds his theft into a whole of feeling which is unique, utterly different from that from which it was torn; the bad poet throws it into something which has no cohesion.
– T.S. Eliot
I wanted to hear music that had not yet happened, by putting together things that suggested a new thing which did not yet exist.
– Brian Eno
All writing is in fact cut-ups. A collage of words read heard overheard. What else?
– William S. Burroughs
Copywork enables you to pick up inspiration and remix it into your own style. A casual glance at a great new design would only reveal the surface level of technique and style, but with copywork, you can go deep and really expand your skills.
Unless you’re known the world over for your inimitable style, you would probably benefit from it.
That covers the most frequent questions I get about copywork. It’s a simple practice but one that pays dividends. If you consistently copy pieces that impress you or are above your level, then you’ll pick up a handful — if not dozens — of techniques and tactics that you can apply to whatever you are working on. The rest of the art world has been doing this for centuries; it’s time for designers to catch up.
So, the next time you want to expand your visual vocabulary, open up a great design (for starters, you can browse my Dribbble copywork bucket32 for inspiration), put on some good music, and start cranking out the pixels.
For luxury companies and upscale lifestyle service providers, excellence in experience is an essential component of the value delivered. Conceptually different from the mass market, the luxury domain relies not only on offering the highest differentiated products and services, but on delivering experiential value.
Adopting technology and embracing a digital presence through platforms and initiatives, the luxury industry today is tackling the challenge of designing an unparalleled user experience (UX) online. In this article, we’ll present a case study and share observations on the peculiarities of the UX design of a luxury lifestyle service platform and its mobile apps.
Some time ago, 415Agency1 teamed up with VERITAMO2 to design a digital UX for its platform, which enables luxury service providers and lifestyle-management companies to deliver personalized services to their clients. Among the services offered are travel arrangements and bookings, luxury shopping experiences, gastronomy and more. A typical client of the platform is either a provider of high-end services or a lifestyle-management company that serves affluent clients. The company offers a back-office solution, together with integrated white-label mobile apps for use by its clientele.
The goal was to enable service providers to deliver a unique UX journey to a very particular type of consumer. We were extremely curious to solve the challenge of creating an upscale mobile experience in a time when digital personalization and customization are available to anyone.
According to a recent study by McKinsey & Company3, modern luxury consumers have become “highly digital, social and mobile,” with 75% already owning several digital devices. They are known for putting less value on owning physical high-end items, focusing instead on the authentic and special experiences that luxury companies offer. Moreover, they want their experience to be smooth, omnichannel and available 24/7, but at the same time only when and where they want. Based on research by Bain & Company4, the profiles of luxury consumers today are very diverse, both demographically and geographically, covering many different segments of people. With the changes and expansion in the luxury customer’s profile, mindset and habits, luxury companies and service providers have to experiment at the intersection of technology, culture and commerce to keep their devotees interested, informed and entertained.
For our project, our primary understanding of the luxury service’s end users (their demographics and psychographics) was based on insights from VERITAMO’s customers. Based on their observations, we were able to frame the initial end user’s profile and make it the baseline for our further work. The insights highlighted the following important areas:
prospective audience,
user behavior patterns and rhythms,
context of use of concierge services platforms (web, mobile),
potential drivers of user behaviors and reactions,
implicit user goals.
These initial findings covered the core user research questions: who, what, when and where. We used this data to set early hypotheses on the peculiarities of the digital luxury experience and on ways to address it in our design solution. We expected end users of luxury lifestyle services to be highly detail oriented; to be willing to learn and participate in all stages of service requests, search and the booking process; and to anticipate the highest and the most transparent level of customer service, with an exclusive and extra personal touch.
We focused on investigating the missing why and how: understanding customer incentives and the steps they needed to take within the app to reach their goals. Additional surveys and user interviews were conducted iteratively during the design process. Some of our initial assumptions and the methods we selected turned out to be incorrect or inappropriate, so we had to adjust them along the way.
Our initial assumptions about the digital luxury experience revolved around highly personalized service delivery. At the beginning, we believed that swift customer service and elevated human-to-human interactions were key to offering efficient mobile tools to connect luxury consumers with their service providers. We believed that these aspects alone were enough for service providers to lure discerning, busy consumers to the mobile apps created by VERITAMO.
As it turns out, we were wrong. App usage statistics across the spectrum of service providers showed that there was no significant difference in the rate of orders from clients who downloaded the app and those who didn’t. Furthermore, mobile user retention suffered dramatically when service providers did not make any effort to market their apps.
We formed several hypotheses to explain this. With so many apps out there competing for real estate on users’ phones, an incremental improvement to interaction with service providers was not enough. After all, consumers already had communication channels established with their service providers — even if those channels were brittle and inefficient.
Based on feedback from digital product managers and client services managers at the biggest concierge companies (including AmEx Centurion, John Paul, Quintessentially, Ten Group, LesConcierges, Aspire Lifestyles and several others), we learned that luxury service providers were seeking better management of and greater transparency with client requests. We decided to make this our key design motivation.
Initially, when working on the service discovery process, we offered mobile users a vast variety of search options, including multiple search criteria, filters and instant browsing. The initial design contained a flyout search menu (via the famous hamburger icon), which confused users about the navigation. They browsed only the current category selected, without understanding that other search options were available.
So, we changed the design to the variant below:
Using a combination of screen recording and concierge testing, we observed that users were still struggling with the discovery process. Customers expected immediate results with minimal data input. Yet they also expected several options to choose from. Some users reported being overwhelmed by choices that may or may not have been of interest to them. Additionally, the absence of expected results (such as “The restaurant that I know is hot right now”) created a negative impression of the service provided by their lifestyle manager (“They don’t even know the best restaurants in my city.”).
Mobile users relied more on the suggested offerings preselected by their service provider. Rather than desiring freedom of choice, they valued interaction with a dedicated advisor who would promptly respond to their requests with just a few relevant options. Discovery of services became a secondary feature of the mobile platform.
This observation can be further explained by the high degree of sophistication of the affluent clientele and their motivation to research available services. With limited time available, these customers have a precise reason for turning to their service providers for advice. After all, such needs are their very reason for retaining the services of a lifestyle manager in the first place.
Based on the user test results, we limited the number of service options to several categories, including “recommended / featured services,” “popular in your area,” etc. At the same time, in order to offer the richness of experience that one would encounter in close one-on-one communication, we improved the app’s navigation to enable easy access to the concierge chat feature and to allow delivery of options for review directly in the communication thread.
Using our evolutionary approach to UX design, we combined the client CRM, the content management system and interactive messaging functionality to create something quite powerful for the luxury industry. Service providers are now able to serve multiple clients simultaneously, without sacrificing personalization and exclusivity. The next step in our roadmap is to test targeted suggestions for each client in order to automate predictable and mundane tasks, freeing each service provider to concentrate on their main value proposition, which is to humanize the personal, bespoke approach to serving their clients’ needs.
We expected a transparent booking process to involve several stages for both the client and the concierge, with all steps tracked in the app — for example, order status options (booked, processed, confirmed, rejected), payment status (requested, pending, confirmed), etc. — and the possibility to send out request status notifications to clients. As mentioned before, we initially believed this information was crucial for picky luxury consumers. The reality was that we were wrong: customers were not interested in participating in and following the multi-step process. They considered it extremely important to know that someone was working on their request and what the outcome was, but any further details were considered bothersome.
They also expected a “one-button approach” and instant order confirmations. They wanted the process to be as short as possible, with immediate results, no extra information, and always five-star customer service, which implied that a concierge would handle all transitional steps, including changes, issues, and updates.
Our initial assumptions about the psychology of the perception of luxury17 helped us to create a rather sophisticated onboarding process. It included, first, limiting initial access to a mobile service in order to create artificial demand and, secondly, personalizing “invitations” for each user. We used the term “nomination” (rather than “invitation”) and implemented an approval workflow to accept new users to the service. Prospective clients had to “apply” for membership and await approval.
This approach was met with positive feedback from service providers because it enabled them to control their membership base and to weed out time-wasters. As for mobile users, our tests told us that our approach was not ideal and had to be improved.
We measured, first, the time it took for a person to respond to an onboarding questionnaire and, secondly, retention of approved users. We assumed that the demand-creation approach would outweigh the negative effect of limiting immediate access to the app.
The incentive to complete the questionnaire would be to get access to the mobile service. However, because service providers took some time approving accounts, the incentive quickly disappeared. Users who had to wait for approval were two to three times less likely to come back once the approval notice was sent to them.
We simplified the onboarding process from over 2 minutes down to an average of around 40 seconds, by asking for only the most basic information before an approval was made and then asking for the rest upon first successful entry into the app. We also introduced a pre-approval process to eliminate the wait time and to allow access to the app right away, while still properly communicating the privilege of access.
Further testing is required to assess the effect of the term “nomination” (as opposed to “invitation”) on the likelihood of referrals because existing users can “nominate” their friends to get exclusive membership with their service provider.
The visual part of UX design in the luxury industry is essential to communicating excellence and exclusivity. In order to create a holistic app experience, we strived to reflect these qualities both in the UI design and the functionality. With maximum attention to detail in the typography, color palette and iconography design, we aimed to establish solid visual cues that would determine how users experience the app. Using colors associated with luxury — gold, jet black, dark blue — we emphasized the timelessness of the experience. Thin classic typography and a minimalist design of icons added to the feel of modernity and elegance.
Luxury companies need to prove their value to customers more extensively than other brands, offering an experience that justifies the price and loyalty. With utmost customer care and exclusiveness of selection, they ensure that the transition to a sale seems to happen effortlessly.
While from the user’s perspective the process may look absolutely effortless and refined, the system behind it is truly sophisticated. In the case of VERITAMO’s platform, the interface for the service advisor and concierge had to have a highly detailed structure and yet be as simple as possible to use. It needed to contain all information about the user: preferences, recent choices, a summary of their previous experience, current requests, current order status, history of requested changes and other details. It was absolutely necessary to provide a highly personalized level of customer service and to address user inquiries, concerns and frustrations with class, swiftness, and simplicity.
The customer experience in each industry is perceived differently. Very often, when we take up a new project, our initial assumptions put us in a rigid framework of predetermined creative responses that misalign UX design solutions with real user needs. Filing our observations as “Things we wish we knew when starting the project,” we see that it is essential to do a reality check on one’s expectations of user behavior and to keep in mind that a compelling UX design goes beyond the confines of a particular industry and the user’s social standing.
These are our key observations and takeaways on UX design for the luxury domain:
Time is the ultimate asset for upscale service users. Time and convenience greatly influence their online behavior.
Luxury consumers know exactly what kind of experience they are looking for and/or anticipate advice from trusted sources.
The user journey should be as short as possible, enabling users to get what they are looking for instantly.
Transparency is expected. Moreover, it translates to superior service delivery, which is associated with immediacy and a simplified user flow, rather than an overly detailed step-by-step process.
“Less is more” applies to the process of searching services, as well as to the overall user journey. The process of creating a highly sophisticated UX needs to focus on what to leave out, rather than what to include.
Human interaction matters and is highly valued.
Fewer options are better, but those options are expected to be highly relevant and individually tailored.
The “one-button” approach is the epitome of UX design for the luxury field.
The user path should be simple and clear, with a highly sophisticated yet simple back-office support system.
Despite the fact that modern luxury lifestyle consumers are becoming highly sophisticated and tech-savvy, many of the key observations our team made during this project do not seem to be exclusive to the luxury field. Sound UX principles apply to all user groups, regardless of their social status or preferences.
Today, users anticipate a superior experience and have a strong understanding of the value delivered. They are focused on results and a one-button approach, expecting their orders to be addressed efficiently, at the highest level of service and with maximum transparency. However, more so in the luxury field, human interaction within the digital experience is not an option, but rather an undeniably powerful tool that improves communication and increases loyalty.
At the end of the day, a white-glove UX is all about delivering the right information, in the right amount, in the right place and at the right time, while maintaining a refined and confident appearance.
I’ve been thinking a lot about speech for the last few years. In fact, it’s been a major focus in several of my talks of late, including my well-received Smashing Conference talk “Designing the Conversation1.” As such, I’ve been keenly interested in the development of the Web Speech API2.
If you’re unfamiliar, this API gives you (the developer) the ability to voice-enable your website in two directions: listening to your users via the SpeechRecognition interface3 and talking back to them via the SpeechSynthesis interface4. All of this is done via a JavaScript API, making it easy to test for support. This testability makes it an excellent candidate for progressive enhancement, but more on that in a moment.
A lot of my interest stems from my own personal desire to experiment with new ways of interacting with the web. I’m also a big fan of podcasts and love listening to great content while I’m driving and in other situations where my eyes are required elsewhere or are simply too tired to read. The Web Speech API opens up a whole range of opportunities to create incredibly useful and natural user interactions by being able to listen for and respond with natural language:
– Hey Instapaper, start reading from my queue!
– Sure thing, Aaron…
The possibilities created by this relatively simple API set are truly staggering. There are applications in accessibility, Internet of Things, automotive, government, the list goes on and on. Taking it a step further, imagine combining this tech with real-time translation APIs (which also recently began to appear). All of a sudden, we can open up the web to millions of people who struggle with literacy or find themselves in need of services in a country where they don’t read or speak the language. This. Changes. Everything.
But back to the Web Speech API. As I said, I’d been keeping tabs on the specification for a while, checked out several of the demos and such, but hadn’t made the time to play yet. Then Dave Rupert finally spurred me to action with a single tweet:
Within an hour or so, I’d gotten a basic implementation together for my blog6 that would enable users to listen to a blog post7 rather than read it. A few hours later, I had added more features, but it wasn’t all wine and roses, and I ended up having to back some functionality out of the widget to improve its stability. But I’m getting ahead of myself.
I’ve decided to hit the pause button for a few days to write up what I’ve learned and what I still don’t fully understand in the hope that we can begin to hash out some best practices for using this awesome feature. Maybe we can even come up with some ways to improve it.
So far, my explorations into the Web Speech API have been wholly in the realm of speech synthesis. Getting to “Hello world” is relatively straightforward and merely involves creating a new SpeechSynthesisUtterance (which is what you want to say) and then passing that to the speechSynthesis object’s speak() method:
var to_speak = new SpeechSynthesisUtterance('Hello world!');
window.speechSynthesis.speak(to_speak);
Not all browsers support this API, although most modern ones do8. That being said, to avoid throwing errors, we should wrap the whole thing in a simple conditional that tests for the feature’s existence before using it:
if ( 'speechSynthesis' in window ) {
  var to_speak = new SpeechSynthesisUtterance('Hello world!');
  window.speechSynthesis.speak(to_speak);
}
Once you’ve got a basic example working, there’s quite a bit of tuning you can do. For instance, you can tweak the reading speed by adjusting the SpeechSynthesisUtterance object’s rate property. It accepts values from 0.1 to 10. I find 1.4 to be a pretty comfortable speed; anything over 3 just sounds like noise to me.
You can also tune things such as the pitch15, the volume16 of the voice, even the language being spoken17 and the voice itself18. I’m a big fan of defaults in most things, so I’ll let you explore those options on your own time. For the purpose of my experiment, I opted to change the default rate to 1.4, and that was about it.
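For reference, here’s a minimal sketch of what that kind of tuning could look like. The property names come from the Web Speech API; the specific values below are arbitrary examples, not recommendations:

var to_speak = new SpeechSynthesisUtterance('Hello, world!');

// arbitrary example values, not recommendations
to_speak.rate   = 1.4;      // 0.1 to 10, default 1
to_speak.pitch  = 1.2;      // 0 to 2, default 1
to_speak.volume = 0.8;      // 0 to 1, default 1
to_speak.lang   = 'en-US';  // a BCP 47 language tag

window.speechSynthesis.speak( to_speak );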
When I began working with this code on my own website, I was keen to provide four controls for my readers:
play
pause
increase reading speed
decrease reading speed
The first two were relatively easy. The latter two caused problems, which I’ll discuss shortly.
To kick things off, I parroted the code Dave had tweeted:
var to_speak = new SpeechSynthesisUtterance(
  document.querySelector('main').textContent
);
window.speechSynthesis.speak(to_speak);
This code grabs the text content (textContent) of the main element and converts it into a SpeechSynthesisUtterance. It then triggers the synthesizer to speak that content. Simple enough.
Of course, I didn’t want the content to begin reading immediately, so I set about building a user interface to control it. I did so in JavaScript, within the feature-detection conditional, rather than in HTML, because I did not want the interface to appear if the feature was not available (or if JavaScript failed for some reason). That would be frustrating for users.
I created the buttons and assigned some event handlers to wire up the functionality. My first pass looked something like this:
var $buttons = document.createElement('p'),
    $button = document.createElement('button'),
    $play = $button.cloneNode(),
    $pause = $button.cloneNode(),
    paused = false,
    to_speak;

if ( 'speechSynthesis' in window ) {

  // content to speak
  to_speak = new SpeechSynthesisUtterance(
    document.querySelector('main').textContent
  );

  // set the rate a little faster than 1x
  to_speak.rate = 1.4;

  // event handlers
  to_speak.onpause = function(){
    paused = true;
  };

  // button events
  function play() {
    if ( paused ) {
      paused = false;
      window.speechSynthesis.resume();
    } else {
      window.speechSynthesis.speak( to_speak );
    }
  }
  function pause() {
    window.speechSynthesis.pause();
  }

  // play button
  $play.innerText = 'Play';
  $play.addEventListener( 'click', play, false );
  $buttons.appendChild( $play );

  // pause button
  $pause.innerText = 'Pause';
  $pause.addEventListener( 'click', pause, false );
  $buttons.appendChild( $pause );

} else {

  // sad panda
  $buttons.innerText = 'Unfortunately your browser doesn’t support this feature.';

}

document.body.appendChild( $buttons );
This code creates a play button and a pause button and appends them to the document. It also assigns the corresponding event handlers. As you’d expect, the play button calls speechSynthesis.speak(), as we saw earlier, but because pause is also in play, I set it up to either speak the selected text or resume speaking — using speechSynthesis.resume() — if the speech is paused. The pause button controls that by triggering speechSynthesis.pause(). I tracked the state of the speech engine using the boolean variable paused. You can kick the tires of this code over on CodePen19.
I want to (ahem) pause for a moment to tuck into the speak() command, because it’s easy to misunderstand. At first blush, you might think it causes the supplied SpeechSynthesisUtterance to be read aloud from the beginning, which is why I’d want to resume() after pausing. That is true, but it’s only part of it. The speech synthesis interface actually maintains a queue for content to be spoken. Calling speak() pushes a new SpeechSynthesisUtterance to that queue and causes the synthesizer to start speaking that content if it’s not already speaking. If it’s in the process of reading something already, the new content takes its spot at the back of the queue and patiently waits its turn. If you want to see this in action, check out my fork of the reading speed demo20.
If you want to clear the queue entirely at any time, you can call speechSynthesis.cancel(). When testing speech synthesis with long-form content, having this at the ready in the browser’s console is handy.
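To make the queue behavior concrete, here’s a tiny sketch (the sentences are just placeholders):

// both utterances go into the same queue; the second waits its turn
window.speechSynthesis.speak( new SpeechSynthesisUtterance('First sentence.') );
window.speechSynthesis.speak( new SpeechSynthesisUtterance('Second sentence.') );

// calling cancel() at any point stops speech and empties the queue
// window.speechSynthesis.cancel();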
As I mentioned, I also wanted to give users control over the reading speed used by the speech synthesizer. We can tune this using the rate property on a SpeechSynthesisUtterance object. That’s fantastic, but you can’t (currently, at least) adjust the rate of a SpeechSynthesisUtterance once the synthesizer starts playing it — not even while it’s paused. I don’t know enough about the inner workings of speech synthesizers to know whether this is simply an oversight in the interface or a hard limitation of the synthesizers themselves, but it did force me to find a creative way around this limitation.
I experimented with a bunch of different approaches to this and eventually settled on one that works reasonably well, despite the fact that it feels like overkill. But I’m getting ahead of myself again.
Every SpeechSynthesisUtterance object offers a handful of events you can plug in to do various things. As you’d expect, onpause21 fires when the speech is paused, onend22 fires when the synthesizer has finished reading it, etc. The SpeechSynthesisEvent23 object passed to each of these includes information about what’s going on with the synthesizer, such as the position of the virtual cursor (charIndex24), the length of time after the current SpeechSynthesisUtterance started being read (elapsedTime25), and a reference to the SpeechSynthesisUtterance itself (utterance26).
Originally, my plan to allow for real-time reading-speed adjustment was to capture the virtual cursor position via a pause event so that I could stop and start a new recording at the new speed. When the user adjusted the reading speed, I would pause the synthesizer, grab the charIndex, backtrack in the text to the previous space, slice from there to the end of the string to collect the remainder of what should be read, clear the queue, and start the synthesizer again with the remainder of the content. That would have worked, and it should have been reliable, but Chrome kept giving me a charIndex of 0, and in Edge it was always undefined. Firefox tracked charIndex perfectly. I’ve filed a bug for Chromium27 and one for Edge28, too.
Thankfully, another event, onboundary29, fires whenever a word or sentence boundary is reached. It’s a little noisier, programmatically speaking, than onpause because the event fires so often, but it reliably tracked the position of the virtual cursor in every browser that supports speech synthesis, which is what I needed.
Here’s the tracking code:
var progress_index = 0;

to_speak.onboundary = function( e ) {
  if ( e.name == 'word' ) {
    progress_index = e.charIndex;
  }
};
Once I was set up to track the cursor, I added a numeric input to the UI to allow users to change the speed:
var $speed = document.createElement('p'),
    $speed_label = document.createElement('label'),
    $speed_value = document.createElement('input');

// label the field
$speed_label.innerText = 'Speed';
$speed_label.htmlFor = 'speed_value';
$speed.appendChild( $speed_label );

// insert the form control
$speed_value.type = 'number';
$speed_value.id = 'speed_value';
$speed_value.min = '0.1';
$speed_value.max = '10';
$speed_value.step = '0.1';
$speed_value.value = Math.round( to_speak.rate * 10 ) / 10;
$speed.appendChild( $speed_value );

document.body.appendChild($speed);
Then, I added an event listener to track when it changes and to update the speech synthesizer:
function adjustSpeed() {
  // cancel the original utterance
  window.speechSynthesis.cancel();

  // find the previous space
  var previous_space = to_speak.text.lastIndexOf( ' ', progress_index );

  // get the remains of the original string
  to_speak.text = to_speak.text.slice( previous_space );

  // math to 1 decimal place
  var speed = Math.round( $speed_value.value * 10 ) / 10;

  // adjust the rate
  if ( speed > 10 ) {
    speed = 10;
  } else if ( speed < 0.1 ) {
    speed = 0.1;
  }
  to_speak.rate = speed;

  // return to speaking
  window.speechSynthesis.speak( to_speak );
}

$speed_value.addEventListener( 'change', adjustSpeed, false );
This works reasonably well, but ultimately I decided that I was not a huge fan of the experience, nor was I convinced it was really necessary, so this functionality remains commented out in my website’s source code30. You can make up your mind after seeing it in action over on CodePen31.
At the top of every blog post, just after the title, I include quite a bit of meta data about the post, including things like the publication date, tags for the post, comment and webmention counts, and so on. I wanted to selectively control which content from that collection is read because only some of it is really relevant in that context. To keep the configuration out of the JavaScript and in the declarative markup where it belongs, I opted to have the JavaScript look for a specific class name, “dont-read”, and exclude those elements from the content that would be read. To make it work, however, I needed to revisit how I was collecting the content to be read in the first place.
You may recall that I’m using the textContent property to extract the content:
var to_speak = new SpeechSynthesisUtterance( document.querySelector('main').textContent );
That’s all well and good when you want to grab everything, but if you want to be more selective, you’re better off moving the content into memory so that you can manipulate it without causing repaints and such.
var $content = document.querySelector('main').cloneNode(true);
With a clone of main in memory, I can begin the process of winnowing it down to only the stuff I want:
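Based on the approach described below, a minimal sketch of that winnowing step might look like this (assuming the “dont-read” class mentioned earlier; the variable names are illustrative):

// create the utterance separately so we can set its text later
var to_speak = new SpeechSynthesisUtterance(),
    $content = document.querySelector('main').cloneNode(true),
    $skip = $content.querySelectorAll('.dont-read');

// empty out every element we don't want read aloud
Array.prototype.forEach.call( $skip, function( $el ){
  $el.innerHTML = '';
});

// speak only what's left
to_speak.text = $content.textContent;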
Here, I’ve separated the creation of the SpeechSynthesisUtterance to make the code a little clearer. Then, I’ve cloned the main element ($content) and built a nodeList of elements that I want to be ignored ($skip). I’ve then looped over the nodeList — borrowing Array’s handy forEach method — and set the contents of each to an empty string, effectively removing them from the content. At the end, I’ve set the text property to the cloned main element’s textContent. Because all of this is done to the cloned main, the page remains unaffected.
Sadly, the value of a SpeechSynthesisUtterance can only be text. If you pipe in HTML, it will read the tag names and slashes. That’s why most of the demos use an input to collect what you want read or rely on textContent to extract text from the page. The reason this saddens me is that it means you lose complete control over the pacing of the content.
But not all is lost. Speech synthesizers are pretty awesome at recognizing the effect that punctuation should have on intonation and pacing. To go back to the first example I shared, consider the difference when you drop a comma between “hello” and “world”:
if ( 'speechSynthesis' in window ) {
  var to_speak = new SpeechSynthesisUtterance('Hello, world!');
  window.speechSynthesis.speak(to_speak);
}
With this in mind, I decided to tweak the pacing of the spoken prose by artificially inserting commas into the specific elements that follow the pattern I just showed for hiding content:
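A minimal sketch of that idea, following the same clone-and-modify pattern as above (the class name used to mark these elements is hypothetical):

// a sketch: append a comma to elements marked with a hypothetical
// "add-comma" class so the synthesizer pauses after them
var $pauses = $content.querySelectorAll( '.add-comma' );
Array.prototype.forEach.call( $pauses, function( $el ){
  $el.innerHTML += ',';
});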
While I was doing this, I also noticed some issues with certain elements running into the content around them. Most notably, this was happening with pre elements. To mitigate that, I used the same approach to swap carriage returns, line breaks and such for spaces:
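Here’s a rough sketch of that whitespace swap, assuming it targets pre elements as described:

// a sketch: replace carriage returns, line breaks and tabs inside
// pre elements with spaces so the content doesn't run together
var $pres = $content.querySelectorAll( 'pre' );
Array.prototype.forEach.call( $pres, function( $el ){
  $el.textContent = $el.textContent.replace( /[\r\n\t]+/g, ' ' );
});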
With those tweaks in place, I’ve been incredibly happy with the listening experience. If you’d like to see all of this code in context, head over to my GitHub repository38. The code you use to drop the UI into the page will likely need to be different from what I did, but the rest of the code should be plug-and-play.
As it stands right now, the Web Speech API has not become a standard and isn’t even on a standards track39. It’s an experimental API and some of the details of the specification remain in flux. For instance, the elapsedTime property of a SpeechSynthesisEvent originally tracked milliseconds and then switched to seconds. If you were doing math that relied on that number to do something else in the interface, you might get widely different experiences in Chrome (which still uses milliseconds) and Edge (which uses seconds).
If I were granted one wish for this specification — apart from standardization — it would be for real-time speed, pitch and volume adjustment. I can understand the need to restart things to get the text read in another voice, but the others feel like they should be manipulable in real time. But again, I don’t know anything about the inner workings of speech synthesizers, so that might not be technically possible.
In terms of actual browser implementations, basic speech synthesis like I’ve covered here is pretty solid in browsers that support the API40. As I mentioned, Chrome and Edge currently fail to accurately report the virtual cursor position when speech synthesis is paused, but I don’t think that’s a deal-breaker. What is problematic is how unstable things get when you start to combine features such as real-time reading-speed adjustments, pausing and such. Often, the synthesizer just stops working and refuses to start up again. If you’d like to see that happen, take a look at a demo I set up41. Chances are that this issue would go away if the API allowed for real-time manipulation of properties such as rate because you wouldn’t have to cancel() and restart the synthesizer with each adjustment.
Long story short, if you’re looking at this as a progressive enhancement for a content-heavy website and only want the most basic features, you should be good to go. If you want to get fancy, you might be disappointed or have to come up with more clever coding acrobatics than I’ve mustered.
As with most things on the web, I learned a ton by viewing other people’s source, demos and such — and the documentation, naturally. Here are some of my favorites (some of which I linked to in context):
The sharing spirit in the design community is remarkable. Designers spend countless hours on side projects, and without asking for anything in return, they share their creations freely with the community, just to give something back, to inspire, and to support fellow folks in their work.
When working on a project yourself, freebies like these can come to the rescue when you have to get along on a tight budget but, more often than that, they are simply the missing piece that’ll make your design complete.
In this post, we hand-picked 30 fonts that are bound to give your project the finishing touch, and maybe even inspire you to create something entirely new. The fonts can all be downloaded for free. However, please note that some of them are free for personal use only and are clearly marked as such in the description. Also, please be sure to check the license agreements before using a font in your project, as they may change from time to time.
For more free font goodness, also check out the following posts:
Luis Calzadilla’s font L-7 Stencil5 is a good match for all those occasions when you want to make a bold statement while keeping the typeface itself rather sleek and slim. Characteristic of the sans-serif font are the stencil-style, fragmented letters and the rounded terminals. The font supports capital letters and numbers and can be used for free in personal projects. If you want to use it in a commercial project, please be sure to credit the designer.
It’s not only the name of the brush sans Westfalia8 that evokes associations with the famous campervan. With its hand-drawn feel, messy edges, and varied line thickness, the font also conveys a warm feeling of authenticity and adventure. Westfalia comes in one weight, with capital letters, numbers and punctuation marks, and works especially well for bold headings or on posters. It’s free to use for both personal and commercial projects.
If you’re looking for something to add a personal touch to your projects, the modern calligraphy typeface Setta Script11 might be for you. It comes with 244 glyphs and 69 alternate characters with Opentype features. Ligatures are also supported. A perfect match for greeting cards and invitations.
Inspired by the old growth forests of the West Coast, Old Growth14 is a rough sans-serif font with edges as uneven as the treetops in the woods. This one works especially well for branding, quotes, and headlines. You’re free to use the font to your liking in personal as well as commercial projects.
Inspired by the typography of the 1920s, Marius Kempken designed Moderne Sans17. The typeface is based on uppercase letters, but lowercase letters and numbers are included in the font, too. You may use Moderne Sans freely in both personal and commercial work.
The font family Octanis20 beautifully merges the new and the old. It comes in eight styles, ranging from modern, even slightly futuristic sans-serif versions to a rather vintage-inspired slab serif. A nice choice for headlines and logos, but paragraphs of text look great in it, too. You may use the typeface for free in both personal and commercial projects.
A balanced upright script with style and moxie. That’s Escafina23. Escafina is a modern interpretation of the letters you usually find in mid-century advertising and signage. It comes in three styles (high, medium, and low) and supports over 100 languages. Personal licenses are pay-as-you-want.
You know those little boxes that appear when a computer can’t render a character? Because of their shape, they are often referred to as “tofu”. Google’s answer to these little boxes is a font family that aims to support all existing languages and, thus, put an end to “tofu”. And what name could be better suited for such an undertaking as “Noto26”, which is assembled from “no more tofu”? The Noto typeface comes in multiple styles and weights and is freely available. Perfect for when your project needs to support languages that other fonts usually fail to display.
To give your project an authentic, handmade touch, Bonfire29 might be just what you were looking for. The hand-drawn brush font shines with its unique swashes. The free version includes upper and lowercase letters in one style that you may use for personal projects.
If you’re looking for a typeface with a seamless flow that still makes a bold statement, Etna32 may be the one for you. Characteristic of Etna are the pointy edges of the capital letters that majestically stand out like the tip of a mountain. While the full version covers the Latin as well as Cyrillic alphabets, the free version comes with Latin characters only. Free for personal use.
Vintii35 is a friendly and playful typeface that doesn’t take itself too seriously. With its cut-out looks, it’s a good fit for headlines and short descriptions, but it’s readable in larger blocks of text as well. The font contains all basic glyphs and characters and can be used to your liking.
To create his typeface Plume38, Krišjānis Mežulis chose a quite extraordinary approach: He used a thick brush to paint the individual letters, numbers, and punctuation marks on a plastic surface. The result: a crisp typeface with a unique splashed look.
Simple rounded shapes and a sleek overall look are the determining elements of the font Coves41. It comes in two weights (light and bold) and offers full glyph support. You’re free to use Coves in personal projects. If you’re interested in a commercial license, please be sure to contact the designer.
Zefani44 is a typeface with a strong character and an elegant, sophisticated look. The stencil version comes with uppercase letters and can be used for free in private projects.
If you’re looking for a font with personality that is humble enough not to steal the show from your content, check out Kano47. With its geometric structure and sharp edge points, it makes a statement that is ideal for logos, posters, and other typographic work. Kano is free to use in personal and commercial projects.
Ailerons50 translates to “little wings” in French, and that’s exactly where the typeface found its inspiration: in aircraft models of the 1940s. The typeface is clean and stylish and works especially well for titles. You may use it freely as long as it’s for personal use only. If you’re interested in using Ailerons in a commercial project, please contact the designer.
Do you have a soft spot for hand lettering? Then take a look at Noelan Script53. The modern calligraphy typeface comes with OpenType features that automatically connect initial and terminal swashes. And to improve the handwritten look even further, you can mix and match alternate characters for more variety. Noelan is free for personal and commercial use.
Inspired by vintage print catalogs from the early 1900s, Mark Richardson set out to create a typeface that captures the aesthetics of the era. What came out of it is the free font Phalanx56, and, well, rustic and honest are probably the words that best describe its look. Phalanx comes with a full uppercase alphabet and numbers. You’re free to use it as you wish.
How about some 90s vibes for a change? Shkoder 198959 seeks inspiration in the good things of the decade: sports, tech, and everything else that inspired a kid of the time. The typeface consists of caps, numbers, and a lot of glyphs that make it a good fit for non-English projects, too. Two weights – one light, one black – are available. You may use Shkoder 1989 for any kind of project. If you decide to use it commercially, shoot the designers an email – they’d love to hear about it.
A font that beautifully captures the aesthetic found in popular handwriting pieces is Wayward63. The uppercase alphabet pairs well with script lettering and gives branding projects a personal touch. Free to use, also commercially.
Aqua Grotesque66 is a grotesque typeface with a retro, 1940s touch. Its crisp, geometric shapes make for a fresh and unique look. Feel free to use it as you like.
“A funny font for funny people.” That’s how the font Daddy69 describes itself. Originally created for a children’s book, Daddy is bound to bring a fresh and playful twist to any kind of project. It’s free to use, even commercially.
A sharp and precise design that enables clear communication with the reader – that’s Santral72. Santral was designed with a focus on keeping the balance between visual perfection and optical impression. The complete font family includes twelve weights and italic versions; two of them (Light and Light Italic) can be downloaded for free for personal projects.
The hand-painted brush script typeface Hensa75 is a nice choice for logos, packaging, greeting cards and the like. It supports standard Latin characters (upper- and lowercase), numerals, punctuation, ligatures, and – for the extra handmade touch – a set of swashes. Free for private and commercial use.
Its high x-height and long descenders make Affogato78 an unusually expressive yet friendly typeface. It comes in five weights and a vast variety of glyphs, which makes it a good fit for diacritic-heavy languages, too. Affogato looks especially good as display type or in logos, but body copy works well, too. You may use it for free (also commercially), or you can pay what you want for a license to show the designer your appreciation.
How about something experimental for a change? Inspired by Kandinsky and Gestalt optical research, Alfonso Armenteros Parras designed Stijla81, a typeface that wants to push the boundaries of legibility. The free version comes with a standard Latin alphabet and numbers.
Another rather experimental font is Accent84. The combination of fine lines and bold geometric shapes works best for short titles and short words. You may use Accent for free in both personal and commercial projects.
Art nouveau and the modern Didot typeface were the sources of inspiration for Soria87. Soria comes with a good selection of glyphs and beautiful ligatures. A timeless piece with a unique, vintage touch.
A unique yet functional font is Orkney90. With its geometric look and a high level of readability even at small font sizes, it works well in both print and web projects. The Orkney family includes four weights with more than 400 characters and wide language support. Released under the SIL Open Font License, it may also be used commercially.
Technically speaking, Multicolore94 isn’t a font: It’s multicolored, and you cannot type with it in your favorite program either. Instead, you’ll need a vector editing application to create text with it. But that’s nothing to worry about, as the bold and playful fellow is best suited to text of only a few words anyhow. Multicolore comes in EPS, AI and PDF formats and is free even for commercial use.
Did you stumble across a free font recently that caught your attention? We’d love to hear about it in the comments!
Many criticize gestural controls as being unintuitive and unnecessary. Despite this, widespread adoption is underway already, and the UI design world is burning the candle at both ends to develop solutions that are instinctively tactile. The challenges here are those of novelty.
Even though gestural controls have been around since the early 1980s1 and have enjoyed a level of ubiquity since the early 2000s, designers are still in the beta-testing phase of making gestural controls intuitive for everyday use.
This article will explore the benefits and drawbacks of gestural controls for mobile UIs, as well as offer advice on effective implementation that avoids the gap in user familiarity.
Gestures come in all shapes and sizes. The most common are listed in the graphic below. These are the conventional controls to which most active mobile device users are accustomed. These are the most used across platforms and, in that regard, the most intuitive. At least that’s the case with people who have significant experience using gestural controls.
This level of intuition can’t be applied, however, to the diminishing population who are flying blind when confronted with a mobile interface. According to an oft-cited study8 by Dan Mauney, there is a great deal of similarity in the way people expect a mobile interface to work. The study asked participants from nine countries to create a set of 28 actions using a gestural interface.
The results were stunningly similar. There wasn’t a ton of variability between actions. Most people expected certain actions to work the same. Deleting, for example, was most often accomplished by dragging an element off of the screen. Menus were constantly consulted — despite warnings not to do this. People often drew a question mark to indicate help functionality.
Oddly enough, the standard set of controls used across most apps were these:
tap,
double-tap,
drag,
flick,
pinch,
spread,
press,
press and tap,
press and drag,
rotate.
These didn’t always account for the intuitive gestures most people in the study created when left to their own devices. This presents a big question: How intuitive are gestural interfaces? Not only that, but what are the pros and cons of implementing a gestural interface?
Regardless of the drawbacks, one thing is clear: Gestural interfaces aren’t going anywhere. That’s why it’s vital for today’s designers to firmly grasp the underlying concepts that make gestural controls effective. Otherwise, the chance that the usability of their work will suffer increases dramatically.
Gestural controls are popular because of two major factors:
the meteoric rise of mobile devices with touchscreens,
the movie Minority Report.
Kidding. It’s just about the mobile devices. The Minority Report HUD display is such a fantastic example, however, that it’s become somewhat of a trope to discuss it in conversations about touch interfaces, but we’re still a ways off from interacting with holographic projections.
Even so, this foreboding Tom Cruise vehicle did a great job of showing what will eventually be possible with UI design. And the important part of that is getting something that’s usable and intuitive. Let’s examine how that’s possible in our first tangible benefit of gestural control.
Touch UIs only feel intuitive when they approximate interaction with a physical object. This means that tactile feedback and the ability to manipulate UI elements have to work as an abstraction of a real object in order to be truly intuitive.
Even poorly designed interfaces only take a little experimentation to figure out, at least for power users. Think about how often you’ve skipped a tutorial to just interact with an app’s interface. You might miss some fine details, but it’s fairly easy to discover the primary controls for most interfaces within a few minutes of unguided interaction. Still, there’s a serious limiter on user delight if there’s no subtle guidance from the designer. So, how do you teach your users without distracting them from the application?
The best approach to creating intuitive touch-based interaction is through a process called progressive disclosure. This is a process by which a user is introduced to controls and gestures as they proceed through an interface’s flow. Start by showing users only the most important options for interaction. You can do this with visual cues, or through a tutorial-like “get started” process. I favor the former, because many users (myself included) will usually skip a tutorial12 to start interacting with an app right away.
Slight visual cues and animations that give instant feedback in response to touch are the perfect delivery method for progressive disclosure. A fantastic example of this is visible in Apple products’ “slide to unlock” commands, although the feature has since been removed.
The interface guides you with text, indicates the direction with an arrow and offers immediate feedback action with animation. You can take this same concept and draw it out further with more multifaceted applications.
In his 2013 article about gestural interfaces16, Thomas Joos, contributor to Smashing Magazine, covers this process thoroughly, pointing to YouTube’s Capture application as an example.
Both progressive disclosure and the tutorial techniques offer guidance should a user require it. The disclosure method, however, has the added benefit of respecting the user enough to expect they can figure out a process.
Because they’re completing a task with minimal guidance (achieving goals, as it were), they feel a sense of accomplishment. The design is adding to their delight in interacting with the app. This can help to create habits and obviously makes it much easier to learn related or increasingly complex operations within the application. You’ve established a pattern of minimal guidance; all you have to do is repeat it as the functions layer on in complexity.
The important thing to remember when teaching users how to use your interface is the three-part process of habit formation:
trigger,
action,
feedback.
The trigger is the inciting action, such as a push notification reminding a user to interact with the app. The action is where you leave your subtle clue as to how the user should gesticulate in order to complete the goal. Then comes the feedback, which works as a sort of reward for a job well done.
This habit-formation process is a form of user onboarding, or a way of ensuring that new users are successful when they start using your application, and then converting casual visitors into enthusiastic fans. A great example of this process (specifically, the third step20) can be seen in the Lumosity app.
The Lumosity app is a game-based brain-training application. It allows users to set up their own triggers, which manifest as push notifications.
It then progresses to the actions, the games themselves. These games are gesture-based, and each is introduced by a quick, simple tutorial.
Note the succinct instructions: a quick read is enough, and performing them gives the user instant feedback on their actions.
Finally, after the user has finished each exercise, the feedback is offered — then again, when they’ve finished a set number of exercises in a given day.
Providing these stimuli to the user reinforces their satisfaction from performing their tasks, as well as their memory of how to perform them. Gestural controls are a skill, like any other. If you can make learning them fun, the learning curve will flatten and retention will improve significantly.
Of course, easy learning is only one benefit of a gestural UI. Another big one is the fact that it promotes a minimalist aesthetic.
Screen real estate on a mobile device is a big deal. Your space is limited, and you have to use it wisely — especially if you have an abundance of features. That’s why so many interfaces are resorting to the hamburger menu icon to hide navigation controls.
Using gestures for navigation might be a bit of a tradeoff in usability, but it makes an app look pretty slick. Just take a look at the Solar app, which is highly minimalist and offers those subtle cues we talked about earlier.
Though the clarity of the actions a user is meant to take is decreased slightly, the look and feel of the app are boosted in a tangible way. Plus, delight is increased because the user is given more autonomy to figure out what to do on their own. Speaking of delight…
Something that’s easy to use and easy on the eyes is also easy to enjoy. Gestural controls enable a tactile experience for users, and that’s downright enjoyable. Using haptic feedback to indicate a successful interaction, for example, can give users a subtle sense of accomplishment. This could be as simple as a confirmative vibration upon muting the phone (as in the case of both Apple and Android products).
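On the web side of things, the same idea can be sketched with the Vibration API (supported in most Android browsers, but not on iOS); the button and handler names below are made up purely for illustration, not taken from any particular app:

```js
// Hypothetical mute toggle: confirm the action with a short, 50ms pulse.
muteButton.addEventListener("click", function () {
  toggleMute(); // hypothetical app logic

  if ("vibrate" in navigator) {
    navigator.vibrate(50);
  }
});
```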
Basically, in addition to the visual and audio appeal of a product, designers can now begin incorporating touch sensations as a way to engage users. The folks over at Disney are exploring this concept31 with no lack of zeal.
That brings us to our final point. This is unexplored territory — a whole new world of interaction for designers to bring to life in living color! While usability and industry best practices should always be considered and consulted, this is a chance to break creatively from convention. While it might not always work out to be revolutionary, experimentation in this field can’t help but be exciting.
Oddly enough, with all of the futuristic appeal and hype paid to gestural controls, the trend isn’t universally beloved. In fact, there’s a sizeable camp in the design world that considers gestural controls to be a step back in usability.
At least part of this pushback is due to the rush to implement. Many designers working on gestural interfaces are ignoring the standard UX caveats that have been shown to measurably improve a product’s usability. Moreover, the inclination towards conformity in design is always pretty high. You’re reading what is essentially a “best practices” article, after all. And it’s one of thousands.
This means that people are using the same techniques and design patterns across any number of applications and interfaces, even when they don’t make sense, due to “conventional wisdom.”
Designers sometimes duplicate the same usability problems in their work that you find in other popular gestural interfaces employed by industry giants such as Google and Facebook — for example, the preference for icons over text-based links. In an effort to save space, designers use pictures rather than text. This, in itself, isn’t exactly a cardinal sin, and it can be very helpful in moderation.
The problem is that it isn’t exactly super-intuitive. Pictures are subjective. They can mean different things to different people, and assuming that your users will know what an obscure icon is supposed to do is quite the gamble.
Check the interface of music app Bloom.fm.
There’s a lot going on here. What’s the raindrop supposed to be? Is that a warning for a snowstorm in the bottom left? A musical note overlaying a hamburger menu in the top right, right across the screen from a regular hamburger menu? What am I looking at?
Granted, some users can hit the ground running with these interfaces and learn a lot as they go. But the point is that nothing about this interface gives you immediate comprehension. It’s not intuitive.
To address this, Bloom.fm might be better served by removing these dissonant symbols from the main screen entirely. Put these functions (whatever they are) in the hidden menu. After all, if you’re on a music player screen, what more do you really need than play, pause, fast forward and rewind?
This brings us to my next point, which is the overarching problem with gestural interfaces: All of the controls and gestural functions are always hidden. You’re depending on a user’s prior familiarity with basic gestural concepts to get along.
This means that any departure from convention will be seen as unfamiliar. Even more problematic is that there’s no industry standard for gestural controls. It’s like the Wild West, but with more tapping and less shooting. Double-tapping might mean one thing in one app and something completely different in another.
Gestural controls can even change between iterations of an application. For instance, the image-sharing application Imgur used the double-tap gesture for zooming in an old version, but an update to the interface changed the gesture to something different entirely. Now it’s used to “upvote” a post (i.e. increasing its popularity and giving the poster fake Internet points).
Which leads to another problem: The learning curve, depending on your attention to usability detail, can be quite steep. While picking up gestural skills is usually pretty easy, as discussed above, the greater room to explore and implement new design patterns means that touch UIs can be highly variable. Variability is roughly equivalent to unpredictability, and that’s the opposite of intuitive.
To combat this, well-designed touch UIs stay in their lane for the most part, relying on visual cues (particularly animations) and text-based explanations in some cases, to establish a connection between a gesture and a function in the user’s mind.
As stated at the beginning of this article, despite any deficiencies that may or may not be innate in the basic concepts of gestural interfaces, the touch UI is here to stay. Its flexibility and mild learning curve (for the basics anyway) practically ensure it.
The bottom line is that, regardless of the benefits and drawbacks, touch is the dominant interface of the future. In other words, you’ll have to find a way to make it work. Proceed with caution, and stick with the familiar whenever possible. The best advice I can give is to keep it simple and to test with users above and beyond what’s required. It’s in your best interest to figure out how and when to introduce new controls, to make sure you’re not an example in someone else’s article about UI usability.
If you’d like to learn more about the implementation of touch gestures, check out these helpful resources:
CSSRooster takes your HTML code as input, including CSS styles, and then writes class names for your HTML tags by analyzing the patterns hidden in your code.
Everyone here can have a big impact on a project, on someone else. I get very excited about this when I read stories like the one about an intern at Google who ran an experiment that saves tons of traffic, or when I get an email from one of my readers who has now published an awesome complete beginner’s guide to front-end development.
We need to recognize that our industry depends on people who share their open-source code, and we should support them and their projects1 that we heavily rely on2. Finally, we also need to understand that these people perhaps don’t want a job as an employee at some big company but would rather remain independent. So if you make money with a project that uses open-source libraries or other resources, Valentine’s Day might be an occasion to show your appreciation and make the author a nice present.
So here’s something that helps beginners to start with web development and advanced devs to recap some of their knowledge: Oliver James wrote “HTML & CSS Is Hard (But It Doesn’t Have To Be)3”, a friendly web development tutorial for complete beginners.
With the Enterprise Onion Toolkit7, you can finally deploy HTTP and HTTPS onion sites at scale. While the project is still in its early days, the tool makes it easy to provide access to your web service via a hidden Tor service, which in some countries can be essential for journalists and activists.
Rembrandt.js8 is an image comparison tool based on node-canvas running on a server or in the client. Great for visual regression testing, for example.
The latest Docker release offers a great solution for storing your secrets securely in containers: Docker Secrets Management10 is a solid approach to doing so.
Facebook collects data about you in hundreds of ways, across numerous channels. It’s very hard to opt out, but by reading this article by Vicki Boykis14 on what they collect, you’ll better understand the risks of the platform so that you can choose to be more restrictive with your Facebook usage.
In only one and a half months, a gigantic crack has developed in the Antarctic ice shelf, and it’s likely to break apart in the next few months16, releasing about 2,300 square miles of ice into the sea. But the key concern is not this comparatively small piece of ice; it’s the much bigger ice shelves that could follow. A video captured by NASA back in November shows the crack in detail17.
If you haven’t read “Nineteen Eighty-Four (1984)” by George Orwell yet, here’s your chance: The entire book is available for free as PDF18 and Audio19 versions. I personally recommend it to everyone who is even slightly interested in any of these topics: social change, politics, technology.
Creating a clock in Sketch might not sound exciting at first, but we’ll discover how easy it is to recreate real-world objects in a very accurate way. You’ll learn how to apply multiple layers of borders and shadows, you’ll take a deeper look at gradients and you will see how objects can be rotated and duplicated in special ways. To help you along the way you can also download the Sketch editable file1 (139 KB).
This is a rather advanced tutorial, so if you are not that savvy with Sketch yet and need some help, I would recommend first reading “Design a Responsive Music Player in Sketch” (Part One2 | Part Two3), which covers a few key aspects of working with Sketch in detail. You can also have a look at my personal project sketchtips.info4, where I regularly provide tips and tricks about Sketch.
The first step is to create a new document, named “clock.” Now, set up a new artboard with the same name, 600 pixels in both dimensions and positioned at “0” (X) and “0” (Y). For the background color, choose a toned-down blue; I picked the one from the Global Colors in the color dialog (#4A90E2). Center the artboard on the canvas with Cmd + 3, and let’s get started.
The base of the clock is a simple white circle with a diameter of “480px.” Drag it to this size after you have pressed O on the keyboard. Align it to the center of the artboard, and name it “Face.” For the bezel, add a first Inside border with a Thickness of “16.” With just a single solid color, it would look quite dull; to give it a metallic appearance, we will add an angular gradient instead (Fig. 2). After you have picked this fill type for the border in the color dialog (last icon), click on the first color stop on the left of this dialog. Move it a bit to the right with the arrow key (press it about four times). Jump to the other color stop with Tab, and use the arrow key again to slightly move its position, but this time to the left (about six times). Change the color to “#BFBEAC”; I’ve mixed in a small amount of yellow to give it a more natural look, which also applies to some of the other light colors in the gradient. Now go back to the first stop again and change this one to a color of “#484848”.
After that, add six more color stops with a double-click each, their colors being (from left to right): “#BDBDBD”, “#A1A091”, “#C9C9C9”, “#575757”, “#C9C8B5”, “#555555”. For the positions, please refer to Fig. 2. It looks way better now, but it is still not the result I had in mind. I also want the frame to have a 3D feeling, which is achieved with two additional borders: one below (add it with a click on the “+” button), with an Inside position and a thickness of “21.” Because it is placed below, it will be covered partly, but due to its increased size, it can still be seen a little. Keep this in mind when you stack borders.
Assign the second border a linear gradient (the second icon in the color dialog), going from the top-left of the clock face to the bottom-right. For the start color at the top, choose “#929292”; for the one at the bottom, “#D6D6D6” (Fig. 3). This alone gives the clock much more depth, but another border should give us the final look. This time, add one with an Outside border, stacked between the other two and with a thickness of 5 pixels. This one also needs a linear gradient in the same direction, but from light to dark, with a color of “#BDBDBD” at the beginning and “#676767” at the end.
Now that we have taken care of the frame itself, we also want it to look slightly raised from the clock face. This is accomplished with a light Inner Shadow. Because the borders already cover a certain part of the clock, the shadow needs to be quite big so that it can be seen. To counteract this, increase the Spread of the shadow to a relatively large value of “26,” which will pull the shadow in. Setting the Blur to “10” now gives us a nice centered shadow; however, that doesn’t respect the lighting of the scene. The light is supposed to come from the top-left, so we need to correct both the X and Y positions to “3.” To echo the theme of the artboard’s background, I have chosen a darker shade of the blue, with “#162A40” at “23%” alpha. Save this color to the Document Colors for later reference.
This is not the only shadow we will use. Another one on the outside will make sure that the clock contrasts with the background and looks as if it were hanging on a wall. The shadow should be black, with an alpha value of “23%” and the remaining properties “6/6/14.” This time, we don’t need to increase the Spread because we’ve only set a slight outside border for the circle. The raised effect is further reinforced with a slight gradient on the background itself. Because we have set it directly on the artboard, we need to overlay a rectangle (press R) for this purpose.
Add one that covers the whole artboard (name it “Background shadow”) but that is behind the clock face, and change the fill to a radial gradient. Move its center to the bottom-right third of the artboard (Fig. 4, 1); to change the size, drag the indicator on the circle line to the top-left third of the artboard (Fig. 4, 2). Be sure to use the point that is connected to the center with a line (the other point would change the gradient’s shape to an ellipse). Set both color stops to black in the inspector: the one at the center should have full opacity (100%), and the one on the outside none at all (0%). The shadow would be way too strong like this, so decrease the general opacity of this layer to 24% (close the color dialog and press 2, rapidly followed by 4).
With the last step, we finished the casing of the clock, so let’s take care of the clock face itself now. To make the alignment of all of the elements easier, let’s add some custom guides first: Show the rulers with Ctrl + R, and make sure that the circle is selected. Now, add a vertical guide at its center with a click on the upper ruler. As a guide, hover over the ruler until the red line is directly above the middle handles of the shape on the canvas. Do the same for the horizontal guide on the left ruler. For the correct placement, you could also have a look at the positions of the guides when you hover over the rulers: With an artboard size of 600 pixels, this would be 300 pixels for both.
To break ground, we’ll add the scale for the hours. Create a rectangle at the top of the clock face, above the circle, for the mark of the twelfth hour. The easiest way is to add a rectangle with a random size first and then change it in the inspector. It should have the dimensions “6” (width) and “18” (height), with a black fill. Move it “31px” away from the outer edge of the circle: Hold Alt to show the smart guides, including the distance; point to the circle with the mouse; leave it there; and use the arrow keys to reposition the shape until the spacing is correct (while still holding Alt). Also, center it on the clock face horizontally by selecting both layers, right-clicking and choosing Align Horizontally. But what about the remaining hour marks? It would be quite tedious to create and rotate them by hand.
Luckily, Sketch offers a handy feature that can do both at the same time: Rotate Copies. Select it from Layer → Paths in the menu bar. The following dialog lets you define how many additional copies of the selected element to make. With a total of twelve hours, we require eleven more marks. After you have entered this value and confirmed the dialog, you will be presented with all of the lines and a circular indicator in the middle. You can drag this circle around at will; based on the position, it gives you a wealth of different patterns. Try to move it around! Also, give some other shapes (instead of a rectangle) a shot as a starting point to see what can be done with this option.
However, for the correct placement of the hour marks, move the indicator down until it is at the intersection of the guides that we added earlier (Fig. 5). That was easy! Please note that you won’t be able to alter this position anymore as soon as you click anywhere else on the canvas. But you will still be able to change the individual elements after accessing the related Boolean group with a double-click on the canvas. Rename it to “Hour marks.”
For the minutes, we can take a similar approach, but instead of lines, we will create circles for these marks. To make that easier, set the hours to “20%” opacity first with 2. Now, draw a circle with a diameter of “8px” at the same position as the current mark on the twelfth hour, which you should move “40px” from the top edge of the clock. Also, set its color to black.
The Rotate Copies option comes into play again. This time we need “59” additional copies. Like before, align the circular indicator to the intersection of the guides. At once, we’ve added all of the marks for the minutes. Rename the new Boolean group to “Minute marks,” and access it with a double-click. However, we don’t need the marks at the same positions as the hours, so we will delete them now: Click on the mark at “12” on the canvas, hold Shift, click on the other round marks that overlap, and delete all twelve of them. You can now set the hours to full opacity again.
This brings us a huge step closer to the final clock face. However, we still have some work to do. First, the digits. To give the clock a modern appearance, I have chosen the futuristic Exo 2 family from Google15. Unfortunately, you can’t use Rotate Copies to distribute text layers, but we would need to align them manually anyway due to the different shapes of the numbers, so let’s go for it.
To make the alignment easier, create a circle with a diameter of “360” at the center of the clock, and assign it a thin gray border (no fill). Add the “12” at the top, with a font size of “52,” a “Bold” weight and a black fill: Align it with the arrow keys so that its top side touches the helper circle (Fig. 6). The number should also be centered on the corresponding hour mark. Continue in the same manner for the remaining hours. Always make sure that they touch the circle on the inside. The easiest way is to drag the preceding number out while holding Alt, move it to the new place, change the content, and set the final position with the arrow keys. When you are finished, delete this helper shape. Also, create a “Digits” group for all of the numbers.
The remaining elements to take care of are the watch hands. Zoom in a bit to start with the second hand. It’s made of a simple red (#DF321E) rectangle with dimensions of “4” (width) and “200” (height), and whose lower two vector points are moved in “1px” each to form a slight trapezoid. To achieve this, press Enter to go into vector point mode, hit Tab two times to go to the lower-right point, and press the left arrow key on the keyboard to move it 1 pixel to the left. Hit Tab again to continue to the lower-left point, which you’ll move in with the right arrow key. Leave this mode again by pressing Esc two times, zoom back to 100% with Cmd + 0, and center the hand to the artboard horizontally. On the Y axis, it should be “192px” away from the top of the watch. Because it is supposed to point to the “6,” we don’t need to rotate it, but make sure that it is above the “Digits” group in the layers list. Finally, name it “Second,” but hide it for now.
You can create the minute hand in the same fashion: Add a black rectangle with the dimensions “10” (width) and “210” (height), and zoom into it with Cmd + 2. For this shape, we’ll add some points at the top and bottom. Like before, enter vector point mode, but move the lower points in “2px” each. Now hold Cmd and click on the top segment to add a point in the exact middle. Push this point up by 3 pixels. Do the same for the lower segment, but move it down by 4 pixels (Fig. 7).
Finally, give the pointer a three-dimensional appearance with a crest (Fig. 8). One way to achieve this is to add a gradient with a hard stop in the middle, consisting of two stops at the same position. Add a gradient fill on top of the existing fill, assigned black with “100%” alpha for the first color stop and white with “0%” for the last stop. Bring the gradient to a horizontal position with the left-pointing arrow in the color dialog.
Now add another point with a double-click on the gradient axis in the color dialog, moved to the exact middle with 5 on the keyboard. Give it 100% alpha, and make sure it is black. Add another one to the right, and also move it to the center with 5, but then press the right arrow key once to offset it slightly to the right. After you have changed it to white with “30%” alpha, you’ll see that this has resulted in a hard edge, thanks to the same position of the color stops. To conclude, leave the color dialog by clicking anywhere on the canvas, and name this shape “Minute.” Place it 188 pixels away from the top of the clock, centered horizontally on the artboard.
It’s quite an easy task to get to the hour hand from here. Duplicate its minute counterpart, but hide the original layer, name the new one “Hour,” and change the dimensions to “12” (width) and “162” (height). That already gives us the final shape. However we need to mirror it horizontally to bring the gradient to the opposite side: Right-click on the shape, and select Flip Horizontal from the Transform menu. After that, position it “202px” from the top of the clock face, and center it. Be sure that the order of the hands is second, hour, minute in the layers list, and combine all of them into the new group, “Hands.” It should be above the “Digits” group.
Time to set the clock. The second hand, which you can show again now, already points in the right direction, but the other two hands should read 10:07. Rotating the hour pointer in the default way doesn’t give us the correct result because it alters the position we’ve already set. You may remember that it’s possible to adjust the point around which an element rotates. For this to work, we need to use the Rotate icon in the toolbar (Fig. 9, 1), which gives us a little indicator at the center of the object (Fig. 9, 2).
Drag it to the intersection of the custom guides defined earlier, and try to perform the rotation now: The hand will move like on a real clock. Take this opportunity to set the hour hand to a little after 10:00, at about “233” degrees. Show the minute hand again, and proceed in the same manner, but rotate it until it is at the seventh minute of the hour (“–137” degrees). Please note that you need to perform the rotation on the canvas; the input field in the inspector won’t respect the altered rotation point.
For the final touch and to further strengthen the 3D effect of the watch, add some shadows to the hands. Start with the second hand: To respect that the light comes from the top-left, we need to set the properties to “2/5/4/0” with the dark blue that we saved to the Document Colors (#162A40), but at “30%” opacity. The same blur and color can be used for the shadow of the hour hand, but the X and Y positions need to be changed to “–3” and “–2.” The same goes for the minute hand, but with values of “–4” and “–2.”
To top everything off, we will add one last element: a small red circle with a diameter of 12 pixels at the center of the clock that will hold all of the hands at their positions, and named “Cover” (Fig. 10). Take over the color from the second hand with the color picker and add a second fill on top of it: a radial gradient that has the same size and position as the circle, starting with 0% black at the center and going to 20% black on the outline. Also, add a shadow to raise it slightly from the hands. Give it the properties “0/0/5/0” with 50% black.
The result is a realistic wall clock. You’ve learned not only how to stack multiple borders, but also how to apply gradients to create distinctive effects. You’ve also learned more about rotations and how to use the Rotate Copies function to add multiple copies of the same object in a very special way.
Did you find it useful? It’s just a small glimpse into The Sketch Handbook2826, written by Christian, and published by Smashing Magazine. The full book (which features many more topics27) should help you become a proficient user of Sketch in (almost) no time. No guarantees though! 😉 Happy reading!
(mb, il)
This article is an excerpt from Christian’s The Sketch Handbook2826, available in print and as eBook, published by yours truly. The book contains twelve jam-packed chapters within 376 pages. Among other things, it will teach you how to design a multi-screen mobile app, a responsive article layout as well as icons and interfaces. You’ll also learn about the most recommended plugins for Sketch and a few useful tips, tricks and best practices.
The first set of screens users interact with sets the expectations for the app. To make sure your users don’t delete your app after the first use, you should teach them how to complete key tasks and make them want to come back for more. In other words, you need to successfully onboard and engage your users during those first interactions.
The onboarding process is a critical step in setting up your users for success with your product. You only get one chance to make a first impression. In this article, we’ll provide some tips on how to approach onboarding using a simple pattern called “empty states.” If you’d like to bring your app or website to life with little effort, you can download and test Adobe XD1 for free.
Content is what provides value for most apps. Whether it’s a news feed, a to-do app, or a system dashboard, it’s why people use apps – for the content. This is why it’s critical to consider how we design empty states: those moments in a user’s journey where an app might not have content for a user yet.
An app screen whose default state is empty and that requires users to go through one or more steps to populate it with data is perfectly suited to onboarding. Besides informing the user about what content to expect on the page, empty states also teach people how to use your app. Even if the onboarding process consists of just one step, the guidance will reassure users that they are doing the right thing.
The Value Of An Empty State During Onboarding Link
Consider a “first-use” empty state as part of a cohesive onboarding experience. You should utilize the empty state screen to educate and engage your users. Use this screen as an opportunity to turn a moment of nothing into something.
First and foremost, the empty state screen should help users understand the context. Setting expectations for what will happen makes users comfortable. The best way to deliver this information is a show-or-tell format: Show the user what the screen will look like when it’s filled with content, or tell them with clear instructions.
Most empty states will tell you what they are for and why you’re seeing them. But, effective empty states will take this even further and tell you what you can do next. Educating your users is important, but true success in your first empty state means driving an action. Think of this empty state as a starting point and design it to encourage user activity.
While your app should be functional (it should solve a problem for your users) and usable (it should be easy to learn and easy to use), it should also be pleasurable. Empty states are an excellent opportunity to make a human connection with your users and get across the personality of your app.
Despite the fact that empty states can engage users, they’re often overlooked during design and development. This happens because we normally design for a populated interface where everything in the layout looks well arranged. However, how should we design our page when the content is pending user action? Empty state design is actually an amazing opportunity for creativity and usability.
The absolute worst thing you can do with an empty state is to drop your users into a dead-end. Dead-ends create confusion and lead to additional and unnecessary taps. Consider the difference between the following two examples from Modspot’s Posts screens. The first image is Modspot’s current screen for first-time users; a useful and smartly crafted empty state reduces friction by guiding users along to an action that will get them started.
The second image is a fake version of the same screen that I’ve created to demonstrate an ineffective empty state that provides no guidance, no examples – only a dead end.
The beauty of a great empty state design is its simplicity. You should use a minimalist design approach in order to bring the most important content to the forefront and minimize distractions. Thus, only include well-written and easily scannable copy (clear, brief descriptions or easy-to-follow instructions) and wrap it together with good visuals.
Don’t forget that empty states aren’t only about visual aesthetics. They should also help users understand the context. Even if it’s meant to be just a temporary onboarding step, you should maximize its communication value for users and provide directions on how to change an empty state to an active one.
Let’s take an empty state screen from Google Photos as an example. Visually, it looks great: a well-composed layout with beautiful graphics. However, this empty state simply doesn’t help users understand the context, and it doesn’t answer the questions a first-time user is likely to have.
A good first impression isn’t just about usability, it’s also about personality. Personality is what makes your app memorable and pleasurable to use. It may not seem like much, but if your first empty state looks a bit different from similar products, your users will notice and expect the entire product experience to be different, as well. For example, below you can see how Khaylo Workout uses its empty states to convey personality and tone.
Your primary goal is to persuade your users to do something as soon as possible so that the screen won’t be empty. To prompt action on an empty state don’t just show users the benefit they will receive when they interact with your app, but direct them to the desired action as well.
Let’s examine the install screen of Facebook Messenger. When users arrive at this screen, they are met with encouragement – the screen lets users know the benefits of the product (a user can take pictures or record video using Messenger) and tells them how many of their Facebook friends are already using the app. The ‘Install’ button guides users onto the next step necessary to clear up the empty state. Users simply have no other option than to touch install.
If Possible, Provide Content That’s Personalized Link
When you personalize your app for users, you show off the value of your product even faster. The main goal of personalization is to deliver content that matches specific user needs or interests, with no effort from the targeted users. The app profiles the user and adjusts the interface – filling empty states – according to that profile.
Consider providing starter content that will allow users to explore your app right away. For example, a book-reading app might provide all users with a few books based on information about the user.
Empty states can help you show the human side of your business or product. Positive emotional stimuli can build a sense of engagement with your users. What kind of feeling your empty state conveys depends on the purpose of your app. The example below shows the emotional side of an empty state in Google Hangouts and how it can incentivize users to get invites on Hangouts.
Of course, showing emotion in design like in the example above is risky – some people don’t get it, and some people may even hate it. But, that’s OK, since emotional response to your design is much better than indifference.
The moment a first-time user completes an important task is a great opportunity for you to create a positive emotional connection between them and your product. Let your users know that they are doing great by acknowledging their progress and celebrating success with them.
A success state is an amazing opportunity to congratulate users on a job well done and to prompt them toward new interactions. For example, clearing a task list is certainly a positive achievement for Writeupp users. It’s great that the app offers a congratulatory “Well done!” as positive reinforcement. This success state delights users and offers next steps to keep them engaged.
The following resources can help you find user onboarding and user interface inspiration:
Useronboard2624 is a great resource for exploring existing onboarding experiences and reading detailed teardowns.
Uxarchive28 is another great resource that contains breakdowns of onboarding in many popular apps.
Ui-patterns3129 has a collection of web-app user onboarding and user interface patterns.
Emptystat.es33 is a collection of empty state screenshots that has been taking user submissions since 2013. A majority of screenshots for this article were taken from this resource.
Your empty state should never feel empty. Don’t let the user face a blank screen the first time they open an app. Invest in empty states because they aren’t a temporary or minor part of the user experience. In fact, they are just as important as other design components and full of potential to drive engagement and delight users when they have just signed up.
This article is part of the UX design series sponsored by Adobe. The newly introduced Experience Design app34 is made for a fast and fluid UX design process, creating interactive navigation prototypes, as well as testing and sharing them – all in one place.
You can check out more inspiring projects created with Adobe XD on Behance35, and also visit the Adobe XD blog to stay updated and informed. Adobe XD is being updated with new features frequently, and since it’s in public Beta, you can download and test it for free36.
As JavaScript developers, we often forget that not everyone has the same knowledge as us. It’s called the curse of knowledge1: When we’re an expert on something, we cannot remember how confused we felt as newbies. We overestimate what people will find easy. Therefore, we think that requiring a bunch of JavaScript to initialize or configure the libraries we write is OK. Meanwhile, some of our users struggle to use them, frantically copying and pasting examples from the documentation, tweaking them at random until they work.
You might be wondering, “But all HTML and CSS authors know JavaScript, right?” Wrong. Take a look at the results of my poll2, which is the only data on this I’m aware of. (If you know of any proper studies on this, please mention them in the comments!)
One in two people who write HTML and CSS is not comfortable with JavaScript. One in two. Let that sink in for a moment.
As an example, look at the following code to initialize a jQuery UI autocomplete, taken from its documentation5:
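A representative version of that snippet looks roughly like this (the element ID and the list of suggestions are placeholders; the exact example in the documentation may differ slightly):

```js
// Wait until the DOM is ready, keep the suggestions in an array,
// then initialize the autocomplete on an element found by its ID.
$( function() {
  var availableTags = [ "ActionScript", "AppleScript", "Java", "JavaScript", "Ruby" ];

  $( "#tags" ).autocomplete({
    source: availableTags
  });
} );
```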
This is easy, even for people who don’t know any JavaScript, right? Wrong. A non-programmer would have all sorts of questions going through their head after seeing this example in the documentation. “Where do I put this code?” “What are these braces, colons and brackets?” “Do I need them?” “What do I do if my element does not have an ID?” And so on. Even this tiny snippet of code requires people to understand object literals, arrays, variables, strings, how to get a reference to a DOM element, events, when the DOM is ready and much more. Things that seem trivial to programmers can be an uphill battle to HTML authors with no JavaScript knowledge.
Now consider the equivalent declarative code from HTML56:
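Using an input with a list attribute pointing to a datalist, it looks roughly like this (the IDs and suggestions mirror the placeholder snippet above):

```html
<input id="tags" list="languages">
<datalist id="languages">
  <option>ActionScript</option>
  <option>AppleScript</option>
  <option>Java</option>
  <option>JavaScript</option>
  <option>Ruby</option>
</datalist>
```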
Not only is this much clearer to anyone who can write HTML, it is even easier for programmers. Everything is set in one place; there is no need to care about when to initialize, how to get a reference to the element or how to set stuff on it. No need to know which function to call to initialize or which arguments it accepts. And for more advanced use cases, there is also a JavaScript API in place that allows all of these attributes and elements to be created dynamically. It follows one of the most basic API design principles: It makes the simple easy and the complex possible.
This brings us to an important lesson about HTML APIs: They would benefit not only people with limited JavaScript skill. For common tasks, even we, programmers, are often eager to sacrifice the flexibility of programming for the convenience of declarative markup. However, we somehow forget this when writing a library of our own.
So, what is an HTML API? According to Wikipedia7, an API (or application programming interface) is “a set of subroutine definitions, protocols, and tools for building application software.” In an HTML API, the definitions and protocols are in the HTML itself, and the tools look in HTML for the configuration. HTML APIs usually consist of certain class and attribute patterns that can be used on existing HTML. With Web Components, even custom element names8 are fair game, and with the Shadow DOM9, those can even have an entire internal structure that is hidden from the rest of the page’s JavaScript or CSS. But this is not an article about Web Components; Web Components give more power and options to HTML API designers, but the principles of good (HTML) API design are the same.
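As a purely hypothetical illustration, a Web Component could expose its entire configuration as markup; the element and attribute names below are made up and don’t belong to any existing library:

```html
<!-- Hypothetical custom element: all configuration lives in the markup -->
<image-gallery autoplay interval="5">
  <img src="photo-1.jpg" alt="First photo">
  <img src="photo-2.jpg" alt="Second photo">
</image-gallery>
```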
HTML APIs improve collaboration between designers and developers, lift some work from the shoulders of the latter, and enable designers to create much higher-fidelity mockups. Including an HTML API in your library does not just make the community more inclusive, it also ultimately comes back to benefit you, the programmer.
Not every library needs an HTML API. HTML APIs are mostly useful in libraries that enable UI elements such as galleries, drag-and-drop, accordions, tabs, carousels, etc. As a rule of thumb, if a non-programmer cannot understand what your library does, then your library doesn’t need an HTML API. For example, libraries that simplify or help to organize code do not need an HTML API. What kind of HTML API would an MVC framework or a DOM helper library even have?
So far, we have discussed what an HTML API is, why it is useful and when it is needed. The rest of this article is about how to design a good one.
With a JavaScript API, initialization is strictly controlled by the library’s user: Because they have to manually call a function or create an object, they control precisely when it runs and on what. With an HTML API, we have to make that choice for them, and make sure not to get in the way of the power users who will still use JavaScript and want full control.
The common way to resolve the tension between these two use cases is to only auto-initialize elements that match a given selector, usually a specific class. Awesomplete10 follows this approach, only picking up input elements with class="awesomplete".
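In markup, opting in can then be as simple as adding that class; the data-list attribute shown here is one of the ways Awesomplete accepts inline suggestions (check its documentation for the full set of options):

```html
<input class="awesomplete" data-list="Ada, Java, JavaScript, PHP, Python, Ruby">
```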
In some cases, making auto-initialization easy is more important than making opt-in explicit. This is common when your library needs to run on a lot of elements, and when avoiding having to manually add a class to every single one is more important than making opt-in explicit. For example, Prism1711 automatically highlights any <code> element that contains a language-xxx class (which is what the HTML5 specification recommends for specifying the language of a code snippet12) or that is inside an element that does. This is because it could be included in a blog with a ton of code snippets, and having to go back and add a class to every single one of them would be a huge hassle.
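In other words, a snippet marked up according to the HTML5 convention is all Prism needs to pick it up (the CSS rule inside is just filler):

```html
<pre><code class="language-css">a { color: #4a90e2; }</code></pre>
```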
In cases where the init selector is used very liberally, a good practice is to allow customization of it or allow opting out of auto-initialization altogether. For example, Stretchy13 autosizes every <input>, <select> and <textarea> by default, but allows customization of its init selector to something more specific via a data-stretchy-filter attribute. Prism supports a data-manual attribute on its <script> element to completely disable automatic initialization. A good practice is to allow this option to be set via either HTML or JavaScript, to accommodate both types of library users.
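Disabling Prism’s automatic highlighting, for example, looks like this (the file name depends on your own setup):

```html
<script src="prism.js" data-manual></script>
```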
So, for every element the init selector matches, your library needs a wrapper around it, three buttons inside it and two adjacent divs? No problem, but generate them yourself. This kind of grunt work is better suited to machines, not humans. Do not expect that everyone using your library is also using some sort of templating system: Many people are still hand-crafting markup and find build systems too complicated. Make their lives easier.
This also minimizes error conditions: What if a user includes the class that you expect for initialization but not all of the markup you need? When there is no extra markup to add, no such errors are possible.
There is one exception to this rule: graceful degradation and progressive enhancement. For example, embedding a tweet involves a lot of markup, even though a single element with data-* attributes for all the options would suffice. This is done so that the tweet is readable even before the JavaScript loads or runs. A good rule of thumb is to ask yourself, does the extra markup offer a benefit to the end user even without JavaScript? If so, then requiring it is OK. If not, then generate it with your library.
There is also the classic tension between ease of use and customization: Generating all of the markup for the library’s user is easier for them, but leaving them to write it gives them more flexibility. Flexibility is great when you need it, but annoying when you don’t, and you still have to set everything manually. To balance these two needs, you can generate the markup you need if it doesn’t already exist. For example, suppose you wrap all .foo elements with a .foo-container element? First, check whether the parent — or, better yet, any ancestor, via element.closest(".foo-container") — of your .foo element already has the foo-container class, and if so, use that instead of creating a new element.
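A minimal sketch of that logic, sticking with the hypothetical .foo / .foo-container names from above:

```js
function getContainer(element) {
  // Reuse a container the author has already written…
  var container = element.closest(".foo-container");

  if (!container) {
    // …and only generate one if none exists yet.
    container = document.createElement("div");
    container.className = "foo-container";
    element.parentNode.insertBefore(container, element);
    container.appendChild(element);
  }

  return container;
}
```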
Typically, settings should be provided via data-* attributes on the relevant element. If your library adds a ton of attributes, then you might want to namespace them to prevent collisions with other libraries, like data-foo-* (where foo is a one-to-three letter prefix based on your library’s name). If that’s too long, you could use foo-*, but bear in mind that this will break HTML validation and might put some of the more diligent HTML authors off your library because of it. Ideally, you should support both, if it won’t bloat your code too much. None of the options here are ideal, so there is an ongoing discussion14 in the WHATWG about whether to legalize such prefixes for custom attributes.
Follow the conventions of HTML as much as possible. For example, if you use an attribute for a boolean setting, its presence means true regardless of the value, and its absence means false. Do not expect things like data-foo="true" or data-foo="false" instead. Sure, ARIA does that, but if ARIA jumped off a cliff, would you do it, too?
When the setting is a boolean, you could also use classes. Typically, their semantics are similar to boolean attributes: The presence of the class means true, and its absence means false. If you want the opposite, you can use a no- prefix (for example, no-line-numbers). Keep in mind that class names are used more than data-* attributes, so there is a greater chance of collision with the user's existing class names. You could prefix your classes with something like foo- to prevent that. Another danger with class names is that a future maintainer might notice that they are not used in the CSS and remove them.
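As an illustration, reading such a boolean setting, with an attribute, a no- prefixed class and a plain class all supported, might look something like this (the line-numbers name is made up for the example):

// Sketch: resolving a boolean setting from HTML.
function getLineNumbersSetting(element) {
  // Attribute: presence means true, regardless of its value.
  if (element.hasAttribute("data-line-numbers")) {
    return true;
  }

  // A no- prefixed class explicitly turns the feature off...
  if (element.classList.contains("no-line-numbers")) {
    return false;
  }

  // ...and the plain class turns it on.
  return element.classList.contains("line-numbers");
}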
When you have a group of related boolean settings, using one space-separated attribute might be better than using many separate attributes or classes. For example, <div data-permissions="read add edit delete save logout"> is better than <div data-read data-add data-edit data-delete data-save data-logout>, and a class-based equivalent like <div class="read add edit delete save logout"> would likely cause a ton of collisions. You can then target individual values via the ~= attribute selector. For example, element.matches("[data-permissions~=read]") checks whether an element has the read permission.
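If you also need the individual values in JavaScript, a small sketch like the following works, using the data-permissions attribute from the example above:

let element = document.querySelector("[data-permissions]");

// Does this element have the "read" permission?
let canRead = element.matches("[data-permissions~=read]");

// All permissions as an array, in case you need to iterate over them.
let permissions = (element.getAttribute("data-permissions") || "")
  .trim()
  .split(/\s+/)
  .filter(Boolean);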
If the type of a setting is an array or object, then you can use a data-* attribute that links to another element. For example, look at how HTML5 handles autocomplete: Because autocomplete requires a list of suggestions, the list attribute links to a <datalist> element containing those suggestions via its ID.
This is a point where following HTML conventions becomes painful: In HTML, linking to another element from an attribute is always done by referencing its ID (think of <label for="…">). However, this is rather limiting: It is often much more convenient to allow selectors, or even nesting, where that makes sense. What you go with will largely depend on your use case. Just keep in mind that, while consistency is important, usability is our goal here.
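A sketch of a helper that accepts either an ID (the HTML convention) or a full selector might look like this; the function and the way it falls back are illustrative, not taken from any particular library:

// Sketch: resolve an element reference stored in an attribute.
function getLinkedElement(element, attribute) {
  let value = element.getAttribute(attribute);
  if (!value) {
    return null;
  }

  // Try it as an ID first, like <label for="…"> does...
  let target = document.getElementById(value);
  if (target) {
    return target;
  }

  // ...then fall back to treating it as a selector.
  try {
    return document.querySelector(value);
  } catch (e) {
    return null; // neither an existing ID nor a valid selector
  }
}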
It’s OK if not every single setting is available via HTML. Settings whose values are functions can stay in JavaScript and be considered “advanced customization.” Consider Awesomplete15: All numerical, boolean, string and object settings are available as data-* attributes (list, minChars, maxItems, autoFirst). All function settings are only available in JavaScript (filter, sort, item, replace, data). If someone is able to write a JavaScript function to configure your library, then they can use the JavaScript API.
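One way a library might implement this split is sketched below, with made-up names (this is not Awesomplete's actual code): simple settings are read from data-* attributes, while function settings can only arrive through the JavaScript API.

// Sketch: a widget that reads simple settings from HTML and accepts
// function settings only via JavaScript.
function Foo(input, options) {
  options = options || {};
  this.input = input;

  // Simple settings can come from the markup, e.g.
  // <input class="foo" data-max-items="10" data-auto-first>
  this.maxItems = Number(input.getAttribute("data-max-items")) || 5;
  this.autoFirst = input.hasAttribute("data-auto-first");

  // Function settings only come from the JavaScript API.
  this.filter = options.filter || function (item, query) {
    return item.toLowerCase().indexOf(query.toLowerCase()) > -1;
  };
}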
Regular expressions (regex) are a bit of a gray area: Typically, only programmers know regular expressions (and even programmers have trouble with them!); so, at first glance there doesn’t seem to be any point in including settings with regex values in your HTML API. However, HTML5 did include such a setting (<input pattern="regex">), and I believe it was quite successful, because non-programmers can look up their use case in a regex directory16 and copy and paste.
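Reading such a setting is straightforward; here is a brief sketch in which the data-filter attribute name is made up for the example:

// Sketch: turn an attribute value into a RegExp and use it.
let element = document.querySelector("[data-filter]");
let pattern = element && element.getAttribute("data-filter");

if (pattern) {
  let regex = new RegExp(pattern); // an invalid pattern throws, which is useful feedback
  if (regex.test(element.textContent)) {
    // ... act on the element ...
  }
}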
If your UI library is going to be used once or twice on each page, then inheritance won’t matter much. However, if it could be applied to multiple elements, then configuring the same settings on each one of them via classes or attributes would be painful. Remember that not everyone uses a build system, especially non-developers. In these cases, it might be useful to define that settings can be inherited from ancestor elements, so that multiple instances can be mass-configured.
Take Prism17, a popular syntax-highlighting library, used here on Smashing Magazine as well. The highlighting language is configured via a class of the form language-xxx. Yes, this goes against the guidelines we discussed in the previous section, but it was a conscious decision, because the HTML5 specification recommends this18 for specifying the language of a code snippet. On a page with multiple code snippets (think of how often a blog post about code uses inline <code> elements!), specifying the language on each <code> element would become extremely tedious. To mitigate this pain, Prism supports inheritance of these classes: If a <code> element does not have a language-xxx class of its own, then the class of its closest ancestor that has one is used. This enables users to set the language globally (by putting the class on the <body> or <html> element) or per section, and override it only on elements or sections in a different language.
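A simplified sketch of how this kind of inheritance can be implemented follows; it is our own illustration, not Prism's actual source, and the selector is intentionally loose.

// Sketch: find the effective language-xxx class for an element,
// falling back to the closest ancestor that has one.
function getLanguage(element) {
  // closest() checks the element itself first, then its ancestors.
  let source = element.closest('[class*="language-"]');

  if (!source) {
    return null; // no language specified anywhere up the tree
  }

  let match = source.className.match(/(?:^|\s)language-([\w-]+)/);
  return match ? match[1] : null;
}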
Now that CSS variables19 are supported by every browser20, they are a good candidate for such settings: They are inherited by default and can be set inline via the style attribute, via CSS or via JavaScript. In your code, you get them via getComputedStyle(element).getPropertyValue("--variablename"). Besides support in older browsers, their main downside is that developers are not yet used to them, but that is changing. Also, you cannot monitor changes to them via MutationObserver, the way you can for elements and attributes.
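For example, reading an inherited numeric setting from a CSS variable might look like this (the --foo-max-items name is made up for the example):

// Sketch: the user can set this anywhere up the tree, e.g.
// <body style="--foo-max-items: 10"> or in a style sheet.
function getMaxItems(element) {
  let value = getComputedStyle(element)
    .getPropertyValue("--foo-max-items")
    .trim();

  return value ? Number(value) : 5; // fall back to a sensible default
}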
Most UI libraries have two groups of settings: settings that customize how each instance of the widget behaves, and global settings that customize how the library behaves. So far, we have mainly discussed the former, so you might be wondering what a good place for these global settings would be.
One candidate is the <script> element that includes your library. You can get this via document.currentScript21, and it has very good browser support22. The advantage of this is that it’s unambiguous what these settings are for, so their names can be shorter (for example, data-filter, instead of data-stretchy-filter).
However, the <script> element should not be the only place you pick up these settings from, because some users may be using your library in a CMS that does not allow them to customize <script> elements. You could also look for the setting on the <html> and <body> elements or even anywhere, as long as you have a clearly stated policy about which value wins when there are duplicates. (The first one? The last one? Something else?)
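A sketch of such a lookup, with the precedence spelled out, might look like this; the attribute naming and the chosen order are illustrative, and whatever order you pick should be stated in your documentation.

// Capture the <script> element while the library is loading;
// document.currentScript is null later (e.g. inside event handlers).
let libraryScript = document.currentScript;

function getGlobalSetting(name) {
  // Precedence here: the <script> element wins, then <html>, then <body>.
  let candidates = [libraryScript, document.documentElement, document.body];

  for (let element of candidates) {
    if (element && element.hasAttribute("data-" + name)) {
      return element.getAttribute("data-" + name);
    }
  }

  return null;
}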
So, you’ve taken care to design a nice declarative API for your library. Well done! However, if all of your documentation is written as if the user understands JavaScript, few will be able to use it. I remember seeing a cool library for toggling the display of elements based on the URL, via HTML attributes on the elements to be toggled. However, its nice HTML API could not be used by the people it targeted because the entire documentation was littered with JavaScript references. The very first example started with, “This is equivalent to location.href.match(/foo/).” What chance does a non-programmer have to understand this?
Also, remember that many of these people do not speak any programming language, not just JavaScript. Do not talk about models, views, controllers or other software engineering concepts in text that you expect them to read and understand. All you will achieve is confusing them and turning them away.
Of course, you should document the JavaScript parts of your API as well. You could do that in an “Advanced usage” section. However, if you start your documentation with references to JavaScript objects and functions or software engineering concepts, then you’re essentially telling non-programmers that this library is not for them, thereby excluding a large portion of your potential users. Sadly, most documentation for libraries with HTML APIs suffers from these issues, because HTML APIs are often seen as a shortcut for programmers, not as a way for non-programmers to use these libraries. Hopefully, this will change in the future.
In the near future, the Web Components quartet of specifications will revolutionize HTML APIs. The <template> element will enable authors to provide scripts with partial inert markup. Custom elements will enable much more elegant init markup that resembles native HTML. HTML imports will enable authors to include just one file, instead of three style sheets, five scripts and ten templates (if Mozilla gets its act together and stops thinking that ES6 modules are a competing technology23). The Shadow DOM will enable your library to have complex DOM structures that are properly encapsulated and that do not interfere with the user’s own markup.
However, <template> aside, browser support for the other three is currently limited24. So, they require large polyfills, which makes them less attractive for library use. However, it’s something to keep on your radar for the near future.
If you’ve followed the advice in this article, then congratulations on making the web a better, more inclusive space to be creative in! I try to maintain a list of all libraries that have HTML APIs on MarkApp25. Send a pull request and add yours, too!
The virtual realm is uncharted territory for many designers. In the last few years, we’ve witnessed an explosion in virtual reality (VR) hardware and applications. VR experiences range from the mundane to the wondrous, their complexity and utility varying greatly.
Taking your first steps into VR as a UX or UI designer can be daunting. We know because we’ve been there. But fear not! In this article, we’ll share a process for designing VR apps that we hope you’ll use to start designing for VR yourself. You don’t need to be an expert in VR; you just need to be willing to apply your skills to a new domain. Ultimately, as a community working together, we can accelerate VR to reach its full potential faster.
Generally speaking, from a designer's perspective, VR applications are made up of two types of components: environments and interfaces.
You can think of an environment as the world that you enter when you put on a VR headset — the virtual planet you find yourself on, or the view from the rollercoaster5 that you’re riding.
An interface is the set of elements that users interact with to navigate an environment and control their experience. All VR apps can be positioned along two axes according to the complexity of these two components.
In the top-left quadrant are things like simulators, such as the rollercoaster experience linked to above. These have a fully formed environment but no interface at all. You’re simply locked in for the ride.
In the opposite quadrant are apps that have a developed interface but little or no environment. Samsung’s Gear VR home screen is a good example.
Designing virtual environments such as places and landscapes requires proficiency with 3D modelling tools, putting these elements out of reach for many designers. However, there’s a huge opportunity for UX and UI designers to apply their skills to designing user interfaces for virtual reality (or VR UIs, for short).
The first full VR UI design we did was an app for The Economist, created in collaboration with VR production studio Visualise12. We did the design, while Visualise created the content and developed the app.
We’ll use this as a working example throughout the next section, in which we’ll lay out an approach to designing VR apps, before getting into the nitty-gritty of designing interfaces for VR. You can download the Economist app for Gear VR15 from the Oculus website.
Whereas most designers have figured out their workflow for designing mobile apps, processes for designing VR interfaces are yet to be defined. When the first VR app design project came through our door, the logical first step was for us to devise a process.
When we first played with Gear VR by Samsung, we noticed similarities to traditional mobile apps. Interface-based VR apps work according to the same basic dynamic as traditional apps: Users interact with an interface that helps them navigate pages. We’re simplifying here, but just keep this in mind for now.
Given the similarity to traditional apps, the tried-and-tested mobile app workflows that designers have spent years refining won’t go to waste and can be used to craft VR UIs. You’re closer to designing VR apps than you think!
Before describing how to design VR interfaces, let’s step back and run through the process for designing a traditional mobile app.
At this stage of the process, the wireframes are done and the features and interactions have been approved. Brand guidelines are now applied to the wireframes, and a beautiful interface is crafted.
Here, we’ll organize screens into flows, drawing links between screens and describing the interactions for each screen. We call this the app’s blueprint, and it will be used as the main reference for developers working on the project.
Now, how can we apply this workflow to virtual reality?
The simplest problems can be the most challenging. Faced with a 360-degree canvas, one might find it difficult to know where to begin. It turns out that UX and UI designers only need to focus on a certain portion of the total space.
We spent weeks trying to figure out what canvas size would make sense for VR. When you work on a mobile app, the canvas size is determined by the device’s size: 1334 × 750 pixels for the iPhone 6 and roughly 1280 × 720 pixels for Android.
To apply this mobile app workflow to VR UIs, you first have to figure out a canvas size that makes sense.
Below is what a 360-degree environment looks like when flattened. This representation is called an equirectangular projection. In a 3D virtual environment, these projections are wrapped around a sphere to mimic the real world.
The full width of the projection represents 360 degrees horizontally, and the full height represents 180 degrees vertically. We can use this to define the pixel size of the canvas: 3600 × 1800, or 10 pixels per degree.
Working with such a big size can be a challenge. But because we’re primarily interested in the interface aspect of VR apps, we can concentrate on a segment of this canvas.
Building on Mike Alger’s early research26 on comfortable viewing areas, we can isolate a portion where it makes sense to present the interface.
The area of interest represents one ninth of the 360-degree environment. It's positioned right at the centre of the equirectangular image and is 1200 × 600 pixels in size: a third of the canvas's width and a third of its height, which at 10 pixels per degree corresponds to roughly a 120 × 60-degree field of view.
We ended up using two canvases for each screen: the full "360 View" and a cropped "UI View." The reason for using two canvases for a single screen is testing. The "UI View" canvas helps to keep our focus on the interface we're crafting and makes it easier to design flows.
Meanwhile, the “360 View” is used to preview the interface in a VR environment. To get a real sense of proportions, testing the interface with a VR headset is necessary.
Before we get started with the walkthrough, here are the tools we’ll need:
Sketch32: We'll use Sketch to design our interfaces and user flows. If you don't have it, you can download a trial version. Sketch is our preferred interface design software, but if you're more comfortable using Photoshop or anything else, that would work, too.
GoPro VR Player33: GoPro VR Player is a 360-degree content viewer. It's provided by GoPro and is free. We'll use it to preview our designs and test them in context.
Oculus Rift34: Hooking an Oculus Rift into the GoPro VR Player will enable us to test the design in context.
In this section, we’ll run through a short tutorial on how to design a VR interface. We’ll design a simple one together, which should take five minutes tops.
Download the assets pack36, which contains presized UI elements and the background image. If you want to use your own assets, go for it; it won’t be a problem.
First things first. Let’s create the canvas that will represent the 360-degree view. Open a new document in Sketch, and create an artboard: 3600 × 1800 pixels.
Import the file named background.jpg, and place it in the middle of the canvas. If you’re using your own equirectangular background, make sure its proportions are 2:1, and resize it to 3600 × 1800 pixels.
As mentioned above, the “UI View” is a cropped version of the “360 View” and focuses on the VR interface only.
Create a new artboard next to the previous one: 1200 × 600 pixels. Then, copy the background that we just added to our “360 View,” and place it in the middle of our new artboard. Don’t resize it! We want to keep a cropped version of the background here.
We’re going to design our interface on the “UI View” canvas. We’ll keep things simple for the sake of this exercise and add a row of tiles. If you’re feeling lazy, just grab the file named tile.png in the assets pack and drag it into the middle of the UI view.
Duplicate it, and create a row of three tiles.
Grab kickpush-logo.png from the assets pack, and place it above the tiles.
Copy your finished interface from the "UI View" into the centre of the "360 View" artboard, and export the "360 View" as a PNG. Then open the GoPro VR Player and drag the exported PNG into the window. Drag the image with your mouse to preview your 360-degree environment.
We’re done! Pretty simple when you know how, right?
If you have an Oculus Rift set up on your machine, then the GoPro VR Player should detect it and allow you to preview the image using your VR device. Depending on your configuration, you might have to mess around with the display settings in macOS.
The resolution of the VR headset is pretty bad. Well, that’s not entirely true: It’s equivalent to your phone’s resolution. However, considering the device is 5 centimeters from your eyes, the display doesn’t look crisp.
To get a crisp VR experience, we would need an 8K display per eye. That’s a 15,360 × 7680-pixel display. We’re pretty far off from that, but we’ll get there eventually.
Because of the display’s resolution, all of your beautifully crisp UI elements will look pixelated. This means, first, that text will be difficult to read and, secondly, that there will be a high level of aliasing on straight lines. Try to avoid using big text blocks and highly detailed UI elements.
Remember the blueprint from our mobile app design process? We’ve adapted this practice to VR interfaces. Using our UI views, we map and organize our flows into a comprehensible blueprint, ideal for developers to understand the overall architecture of the app we’ve designed.
Designing a beautiful UI is one thing, but showing how it’s supposed to animate is a different story. Once again, we’ve decided to approach it with a two-dimensional perspective.
Using our Sketch designs, we animate the interface with Adobe After Effects49 and Principle50. While the outcome is not a 3D experience, it’s used as a guideline for the development team and to help our clients understand our vision at an early stage of the process.
We know what you’re thinking, though: “That’s cool, but VR apps can get way more complicated.” Yes, they can. The question is, to what extent can we apply our current UX and UI practices to this new medium?
Some VR experiences rely so heavily on the virtual environment that a traditional interface that sits on top might not be the optimal way for the user to control the app. In this case, you might want users to interact directly with the environment itself.
Imagine that you’re making an app for a luxury travel agent. You’d want to transport the user to potential holiday destinations in the most vivid way possible. So, you invite the user to put on the headset and begin the experience in your swanky Chelsea office.
To transition from the office to some far away place, the user needs to choose where they want to go. They could pick up a travel magazine and flick through it until they land on an appealing page. Or there could be a collection of interesting objects on your desk that whisk the user to different locations depending on which one they pick up.
This is definitely cool, but there are some drawbacks. To get the full effect, you’d need a more advanced VR headset with handheld controllers. Plus, an app like this takes quite a bit more effort to develop than a set of well-presented options organized like in a traditional app interface.
The reality is that these immersive experiences are not commercially viable for most companies. Unless you’ve got virtually unlimited resources, like Valve and Google, creating an experience like the one described above is probably too costly, too risky and too time-consuming.
This kind of experience is brilliant for showing off that you’re at the cutting edge of media and technology, but not so great for taking your product to market through a new medium. Accessibility is important.
Usually, when a new format emerges, it’s pushed to the limit by early adopters: the creators and innovators of this world. In time, and with enough learning and investment, it becomes accessible to a wider range of potential users.
As VR headsets become more commonplace, companies will start to spot opportunities to integrate VR into the ways that they engage with customers.
From our perspective, VR apps with intuitive UIs — that is, UIs closer to what people are already accustomed to with their wearables, phones, tablets and computers — are what will make VR an affordable and worthwhile investment for the majority of companies that pursue it.
We hope we’ve made the VR space a bit less scary with this article and inspired you to start designing for VR yourself.
They say that if you want to travel fast, go alone. But if you want to travel far, travel together. We want to travel far. At Kickpush, we think that every company will have a VR app someday, just like every company now has a mobile website (or should have — it’s 2017, dang it!).
So, we're building a rocketship, a joint effort by designers around the globe to boldly go where no designer has gone before. The sooner producing VR apps makes sense for companies, the sooner the whole ecosystem will blow up.
Our next challenges as digital product designers are more complex applications and handling other types of input through controllers. To begin to tackle this, we'll need robust prototyping tools that let us create and test designs quickly and easily. We'll be writing a follow-up article that looks at some of the early attempts to do this, and at some of the new tools in development.