HTML & CSS is hard
An awesome guide to web development with great visualizations, not just for beginners. Made by Oliver James.
For luxury companies and upscale lifestyle service providers, excellence in experience is an essential component of the value delivered. Conceptually different from the mass market, the luxury domain relies not only on offering highly differentiated products and services, but also on delivering experiential value.
Adopting technology and embracing a digital presence through platforms and initiatives, the luxury industry today is tackling the challenge of designing an unparalleled user experience (UX) online. In this article, we’ll present a case study and share observations on the peculiarities of the UX design of a luxury lifestyle service platform and its mobile apps.
Some time ago, 415Agency1 teamed up with VERITAMO2 to design a digital UX for its platform, which enables luxury service providers and lifestyle-management companies to deliver personalized services to their clients. Among the services offered are travel arrangements and bookings, luxury shopping experiences, gastronomy and more. A typical client of the platform is either a provider of high-end services or a lifestyle-management company that serves affluent clients. The company offers a back-office solution, together with integrated white-label mobile apps for use by its clientele.
The goal was to enable service providers to deliver a unique UX journey to a very particular type of consumer. We were extremely curious to solve the challenge of creating an upscale mobile experience in a time when digital personalization and customization are available to anyone.
According to a recent study by McKinsey & Company3, modern luxury consumers have become “highly digital, social and mobile,” with 75% already owning several digital devices. They are known for putting less value on owning physical high-end items, focusing instead on the authentic and special experiences that luxury companies offer. Moreover, they want their experience to be smooth, omnichannel and available 24/7, but at the same time only when and where they want. Based on research by Bain & Company4, the profiles of luxury consumers today are very diverse, both demographically and geographically, covering many different segments of people. With the changes and expansion in the luxury customer’s profile, mindset and habits, luxury companies and service providers have to experiment at the intersection of technology, culture and commerce to keep their devotees interested, informed and entertained.
For our project, our primary understanding of the luxury service’s end users (their demographics and psychographics) was based on insights from VERITAMO’s customers. Based on their observations, we were able to frame the initial end user’s profile and make it the baseline for our further work. The insights highlighted the following important areas:
These initial findings covered the core user research questions: who, what, when and where. We used this data to set early hypotheses on the peculiarities of the digital luxury experience and on ways to address it in our design solution. We expected end users of luxury lifestyle services to be highly detail oriented; to be willing to learn and participate in all stages of service requests, search and the booking process; and to anticipate the highest and the most transparent level of customer service, with an exclusive and extra personal touch.
We focused on investigating the missing why and how: understanding customer incentives and the steps they needed to take within the app to reach their goals. Additional surveys and user interviews were conducted iteratively during the design process. Some of our initial assumptions and the methods we selected turned out to be incorrect or inappropriate, so we had to adjust them along the way.
Our initial assumptions about the digital luxury experience revolved around highly personalized service delivery. At the beginning, we believed that swift customer service and elevated human-to-human interactions were key to offering efficient mobile tools to connect luxury consumers with their service providers. We believed that these aspects alone were enough for service providers to lure discerning, busy consumers to the mobile apps created by VERITAMO.
As it turns out, we were wrong. App usage statistics across the spectrum of service providers showed that there was no significant difference in the rate of orders from clients who downloaded the app and those who didn’t. Furthermore, mobile user retention suffered dramatically when service providers did not make any effort to market their apps.
We formed several hypotheses to explain this. With so many apps out there competing for real estate on users’ phones, an incremental improvement to interaction with service providers was not enough. After all, consumers already had communication channels established with their service providers — even if they were brittle and inefficient.
Based on feedback from digital product managers and client services managers at the biggest concierge companies (including AmEx Centurion, John Paul, Quintessentially, Ten Group, LesConcierges, Aspire Lifestyles and several others), we learned that luxury service providers were seeking better management of and greater transparency with client requests. We decided to make this our key design motivation.
Initially, when working on the service discovery process, we offered mobile users a vast variety of search options, including multiple search criteria, filters and instant browsing. The initial design contained a flyout search menu (via the famous hamburger icon), which confused users about the navigation. They browsed only the current category selected, without understanding that other search options were available.
So, we changed the design to the variant below:
Using a combination of screen recording and concierge testing, we observed that users were still struggling with the discovery process. Customers expected immediate results with minimal data input. Yet they also expected several options to choose from. Some users reported being overwhelmed by choices that may or may not have been of interest to them. Additionally, the absence of expected results (such as “The restaurant that I know is hot right now”) created a negative impression of the service provided by their lifestyle manager (“They don’t even know the best restaurants in my city.”).
Mobile users relied more on the suggested offerings preselected by their service provider. Rather than desiring freedom of choice, they valued interaction with a dedicated advisor who would promptly respond to their requests with just a few relevant options. Discovery of services became a secondary feature of the mobile platform.
This observation can be further explained by the affluent clientele’s high degree of sophistication and their limited motivation to research available services themselves. With limited time available, these customers have a precise reason for turning to their service providers for advice. After all, such needs are their very reason for retaining the services of a lifestyle manager in the first place.
Based on the user test results, we limited the number of service options to several categories, including “recommended / featured services,” “popular in your area,” etc. At the same time, in order to offer the richness of experience that one would encounter in close one-on-one communication, we improved the app’s navigation to enable easy access to the concierge chat feature and to allow delivery of options for review directly in the communication thread.
Using our evolutionary approach to UX design, we combined the client CRM, the content management system and interactive messaging functionality to create something quite powerful for the luxury industry. Service providers are now able to serve multiple clients simultaneously, without sacrificing personalization and exclusivity. The next step in our roadmap is to test targeted suggestions for each client in order to automate predictable and mundane tasks, freeing each service provider to concentrate on their main value proposition, which is to humanize the personal, bespoke approach to serving their clients’ needs.
We expected a transparent booking process to involve several stages for both the client and the concierge, with all steps tracked in the app — for example, order status options (booked, processed, confirmed, rejected), payment status (requested, pending, confirmed), etc. — and the possibility of sending request status notifications to clients. As mentioned before, we initially believed this information was crucial for picky luxury consumers. The reality was that we were all wrong: customers were not interested in participating in and following the multi-step process. They considered it extremely important to know that someone was working on their request and what the outcome was, but they did not want many more details, which they found bothersome.
They also expected a “one-button approach” and instant order confirmations. They wanted the process to be as short as possible, with immediate results, no extra information and always five-star customer service, which implied that a concierge would handle all transitional steps, including changes, issues and updates.
Our initial assumptions about the psychology of the perception of luxury17 helped us to create a rather sophisticated onboarding process. It included, first, limiting initial access to the mobile service in order to create artificial demand and, second, personalizing “invitations” for each user. We used the term “nomination” (rather than “invitation”) and implemented an approval workflow to accept new users to the service. Prospective clients had to “apply” for membership and await approval.
This approach was met with positive feedback from service providers because it enabled them to control their membership base and to weed out time-wasters. As for mobile users, our tests told us that our approach was not ideal and had to be improved.
We measured, first, the time it took for a person to respond to an onboarding questionnaire and, second, the retention of approved users. We assumed that the demand-creation approach would outweigh the negative effect of limiting immediate access to the app.
The incentive to complete the questionnaire would be to get access to the mobile service. However, because service providers took some time approving accounts, the incentive quickly disappeared. Users who had to wait for approval were two to three times less likely to come back once the approval notice was sent to them.
We simplified the onboarding process from over 2 minutes down to an average of around 40 seconds, by asking for only the most basic information before an approval was made and then asking for the rest upon first successful entry into the app. We also introduced a pre-approval process to eliminate the wait time and to allow access to the app right away, while still properly communicating the privilege of access.
Further testing is required to assess the effect of the term “nomination” (as opposed to “invitation”) on the likelihood of referrals because existing users can “nominate” their friends to get exclusive membership with their service provider.
The visual part of UX design in the luxury industry is essential to communicating excellence and exclusivity. In order to create a holistic app experience, we strived to reflect these qualities both in the UI design and the functionality. With maximum attention to detail in the typography, color palette and iconography design, we aimed to establish solid visual cues that would determine how users experience the app. Using colors associated with luxury — gold, jet black, dark blue — we emphasized the timelessness of the experience. Thin classic typography and a minimalist design of icons added to the feel of modernity and elegance.
Luxury companies need to prove their value to customers more extensively than other brands do, offering an experience that justifies the price and the loyalty. With utmost customer care and exclusiveness of selection, they ensure that the transition to a sale appears effortless.
While from the user’s perspective the process may look absolutely effortless and refined, the system behind it is truly sophisticated. In the case of VERITAMO’s platform, the interface for the service advisor and concierge had to have a highly detailed structure and yet be as simple as possible to use. It needed to contain all information about the user: preferences, recent choices, a summary of their previous experience, current requests, current order status, history of requested changes and other details. It was absolutely necessary to provide a highly personalized level of customer service and to address user inquiries, concerns and frustrations with class, swiftness, and simplicity.
The customer experience in each industry is perceived differently. Very often, when we take up a new project, our initial assumptions put us in a rigid framework of predetermined creative responses that misalign UX design solutions with real user needs. Filing our observations as “Things we wish we knew when starting the project,” we see that it is essential to do a reality check on one’s expectations of user behavior and to keep in mind that a compelling UX design goes beyond the confines of a particular industry and the user’s social standing.
These are our key observations and takeaways on UX design for the luxury domain:
Despite the fact that modern luxury lifestyle consumers are becoming highly sophisticated and tech-savvy, many of the key observations our team made during this project do not seem to be exclusive to the luxury field. Sound UX principles apply to all user groups, regardless of their social status or preferences.
Today, users anticipate a superior experience and have a strong understanding of the value delivered. They are focused on results and a one-button approach, expecting their orders to be addressed efficiently, at the highest level of service and with maximum transparency. In the luxury field especially, human interaction within the digital experience is not optional; it is an undeniably powerful tool that improves communication and increases loyalty.
At the end of the day, a white-glove UX is all about delivering the right information, in the right amount, in the right place and at the right time, while maintaining a refined and confident appearance.
(cc, yk, al, il)
I’ve been thinking a lot about speech for the last few years. In fact, it’s been a major focus in several of my talks of late, including my well-received Smashing Conference talk “Designing the Conversation1.” As such, I’ve been keenly interested in the development of the Web Speech API2.
If you’re unfamiliar, this API gives you (the developer) the ability to voice-enable your website in two directions: listening to your users via the SpeechRecognition interface3 and talking back to them via the SpeechSynthesis interface4. All of this is done via a JavaScript API, making it easy to test for support. This testability makes it an excellent candidate for progressive enhancement, but more on that in a moment.
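Because both halves of the API hang off of global objects, a quick feature test tells you what you can safely use. Here’s a minimal detection sketch; note that SpeechRecognition still ships vendor-prefixed in some browsers, so the fallback below is an assumption you’d verify against current support tables:

```js
// SpeechRecognition is still vendor-prefixed in some browsers
var SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;

if ( 'speechSynthesis' in window ) {
  // safe to talk back to the user via speechSynthesis.speak()
}

if ( SpeechRecognition ) {
  // safe to listen via new SpeechRecognition()
}
```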
A lot of my interest stems from my own personal desire to experiment with new ways of interacting with the web. I’m also a big fan of podcasts and love listening to great content while I’m driving and in other situations where my eyes are required elsewhere or are simply too tired to read. The Web Speech API opens up a whole range of opportunities to create incredibly useful and natural user interactions by being able to listen for and respond with natural language:
– Hey Instapaper, start reading from my queue!
– Sure thing, Aaron…
The possibilities created by this relatively simple API set are truly staggering. There are applications in accessibility, Internet of Things, automotive, government, the list goes on and on. Taking it a step further, imagine combining this tech with real-time translation APIs (which also recently began to appear). All of a sudden, we can open up the web to millions of people who struggle with literacy or find themselves in need of services in a country where they don’t read or speak the language. This. Changes. Everything.
But back to the Web Speech API. As I said, I’d been keeping tabs on the specification for a while, checked out several of the demos and such, but hadn’t made the time to play yet. Then Dave Rupert finally spurred me to action with a single tweet:
Within an hour or so, I’d gotten a basic implementation together for my blog6 that would enable users to listen to a blog post7 rather than read it. A few hours later, I had added more features, but it wasn’t all wine and roses, and I ended up having to back some functionality out of the widget to improve its stability. But I’m getting ahead of myself.
I’ve decided to hit the pause button for a few days to write up what I’ve learned and what I still don’t fully understand in the hope that we can begin to hash out some best practices for using this awesome feature. Maybe we can even come up with some ways to improve it.
So far, my explorations into the Web Speech API have been wholly in the realm of speech synthesis. Getting to “Hello world” is relatively straightforward and merely involves creating a new SpeechSynthesisUtterance (which is what you want to say) and then passing that to the speechSynthesis object’s speak() method:
```js
var to_speak = new SpeechSynthesisUtterance('Hello world!');
window.speechSynthesis.speak(to_speak);
```
Not all browsers support this API, although most modern ones do8. That being said, to avoid throwing errors, we should wrap the whole thing in a simple conditional that tests for the feature’s existence before using it:
```js
if ( 'speechSynthesis' in window ) {
  var to_speak = new SpeechSynthesisUtterance('Hello world!');
  window.speechSynthesis.speak(to_speak);
}
```
See the Pen Experimenting with `speechSynthesis`, example 1 by Aaron Gustafson (@aarongustafson) on CodePen.
Once you’ve got a basic example working, there’s quite a bit of tuning you can do. For instance, you can tweak the reading speed by adjusting the SpeechSynthesisUtterance object’s rate property. It accepts values from 0.1 to 10. I find 1.4 to be a pretty comfortable speed; anything over 3 just sounds like noise to me.
See the Pen Experimenting with `speechSynthesis`, example 1 by Aaron Gustafson (@aarongustafson) on CodePen.
You can also tune things such as the pitch15, the volume16 of the voice, even the language being spoken17 and the voice itself18. I’m a big fan of defaults in most things, so I’ll let you explore those options on your own time. For the purpose of my experiment, I opted to change the default rate to 1.4, and that was about it.
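For reference, here’s a small sketch of those tunables in one place; the values are purely illustrative, not recommendations:

```js
var utterance = new SpeechSynthesisUtterance('Hello, world!');

utterance.rate = 1.4;     // 0.1–10; 1 is the synthesizer's default speed
utterance.pitch = 1;      // 0–2
utterance.volume = 1;     // 0–1
utterance.lang = 'en-US'; // the language to speak in

window.speechSynthesis.speak( utterance );
```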
When I began working with this code on my own website, I was keen to provide four controls for my readers: playing the audio version of a post, pausing it, and increasing or decreasing the reading speed.
The first two were relatively easy. The latter two caused problems, which I’ll discuss shortly.
To kick things off, I parroted the code Dave had tweeted:
```js
var to_speak = new SpeechSynthesisUtterance(
  document.querySelector('main').textContent
);
window.speechSynthesis.speak(to_speak);
```
This code grabs the text content (textContent) of the main element and converts it into a SpeechSynthesisUtterance. It then triggers the synthesizer to speak that content. Simple enough.
Of course, I didn’t want the content to begin reading immediately, so I set about building a user interface to control it. I did so in JavaScript, within the feature-detection conditional, rather than in HTML, because I did not want the interface to appear if the feature was not available (or if JavaScript failed for some reason). That would be frustrating for users.
I created the buttons and assigned some event handlers to wire up the functionality. My first pass looked something like this:
```js
var $buttons = document.createElement('p'),
    $button = document.createElement('button'),
    $play = $button.cloneNode(),
    $pause = $button.cloneNode(),
    paused = false,
    to_speak;

if ( 'speechSynthesis' in window ) {

  // content to speak
  to_speak = new SpeechSynthesisUtterance(
    document.querySelector('main').textContent
  );

  // set the rate a little faster than 1x
  to_speak.rate = 1.4;

  // event handlers
  to_speak.onpause = function(){
    paused = true;
  };

  // button events
  function play() {
    if ( paused ) {
      paused = false;
      window.speechSynthesis.resume();
    } else {
      window.speechSynthesis.speak( to_speak );
    }
  }
  function pause() {
    window.speechSynthesis.pause();
  }

  // play button
  $play.innerText = 'Play';
  $play.addEventListener( 'click', play, false );
  $buttons.appendChild( $play );

  // pause button
  $pause.innerText = 'Pause';
  $pause.addEventListener( 'click', pause, false );
  $buttons.appendChild( $pause );

} else {

  // sad panda
  $buttons.innerText = 'Unfortunately your browser doesn’t support this feature.';

}

document.body.appendChild( $buttons );
```
This code creates a play button and a pause button and appends them to the document. It also assigns the corresponding event handlers. As you’d expect, the play button calls speechSynthesis.speak(), as we saw earlier, but because pause is also in play, I set it up to either speak the selected text or resume speaking — using speechSynthesis.resume() — if the speech is paused. The pause button controls that by triggering speechSynthesis.pause(). I tracked the state of the speech engine using the boolean variable paused. You can kick the tires of this code over on CodePen19.
I want to (ahem) pause for a moment to tuck into the speak() command, because it’s easy to misunderstand. At first blush, you might think it causes the supplied SpeechSynthesisUtterance to be read aloud from the beginning, which is why I’d want to resume() after pausing. That is true, but it’s only part of it. The speech synthesis interface actually maintains a queue for content to be spoken. Calling speak() pushes a new SpeechSynthesisUtterance to that queue and causes the synthesizer to start speaking that content if it’s not already speaking. If it’s in the process of reading something already, the new content takes its spot at the back of the queue and patiently waits its turn. If you want to see this in action, check out my fork of the reading speed demo20.

If you want to clear the queue entirely at any time, you can call speechSynthesis.cancel(). When testing speech synthesis with long-form content, having this at the ready in the browser’s console is handy.
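A quick sketch of that queueing behavior, if you want to try it in the console:

```js
var first = new SpeechSynthesisUtterance('First in the queue.');
var second = new SpeechSynthesisUtterance('I patiently wait my turn.');

window.speechSynthesis.speak( first );
window.speechSynthesis.speak( second ); // queued behind the first, not interrupting it

// and to flush the queue entirely:
// window.speechSynthesis.cancel();
```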
As I mentioned, I also wanted to give users control over the reading speed used by the speech synthesizer. We can tune this using the rate property on a SpeechSynthesisUtterance object. That’s fantastic, but you can’t (currently, at least) adjust the rate of a SpeechSynthesisUtterance once the synthesizer starts playing it — not even while it’s paused. I don’t know enough about the inner workings of speech synthesizers to know whether this is simply an oversight in the interface or a hard limitation of the synthesizers themselves, but it did force me to find a creative way around this limitation.
I experimented with a bunch of different approaches to this and eventually settled on one that works reasonably well, despite the fact that it feels like overkill. But I’m getting ahead of myself again.
Every SpeechSynthesisUtterance object offers a handful of events you can plug in to do various things. As you’d expect, onpause21 fires when the speech is paused, onend22 fires when the synthesizer has finished reading it, and so on. The SpeechSynthesisEvent23 object passed to each of these includes information about what’s going on with the synthesizer, such as the position of the virtual cursor (charIndex24), the length of time after the current SpeechSynthesisUtterance started being read (elapsedTime25), and a reference to the SpeechSynthesisUtterance itself (utterance26).
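As a rough sketch of wiring those up (the logging here is purely illustrative):

```js
to_speak.onstart = function( e ) {
  console.log( 'Started speaking' );
};

to_speak.onboundary = function( e ) {
  // e is a SpeechSynthesisEvent
  console.log( 'Virtual cursor at character ' + e.charIndex );
};

to_speak.onend = function( e ) {
  console.log( 'Finished; elapsedTime was ' + e.elapsedTime );
};
```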
Originally, my plan to allow for real-time reading-speed adjustment was to capture the virtual cursor position via a pause event so that I could stop and start a new recording at the new speed. When the user adjusted the reading speed, I would pause the synthesizer, grab the charIndex, backtrack in the text to the previous space, slice from there to the end of the string to collect the remainder of what should be read, clear the queue, and start the synthesizer again with the remainder of the content. That would have worked, and it should have been reliable, but Chrome kept giving me a charIndex of 0, and in Edge it was always undefined. Firefox tracked charIndex perfectly. I’ve filed a bug for Chromium27 and one for Edge28, too.
Thankfully, another event, onboundary29, fires whenever a word or sentence boundary is reached. It’s a little noisier, programmatically speaking, than onpause because the event fires so often, but it reliably tracked the position of the virtual cursor in every browser that supports speech synthesis, which is what I needed.
Here’s the tracking code:
```js
var progress_index = 0;

to_speak.onboundary = function( e ) {
  if ( e.name == 'word' ) {
    progress_index = e.charIndex;
  }
};
```
Once I was set up to track the cursor, I added a numeric input to the UI to allow users to change the speed:
```js
var $speed = document.createElement('p'),
    $speed_label = document.createElement('label'),
    $speed_value = document.createElement('input');

// label the field
$speed_label.innerText = 'Speed';
$speed_label.htmlFor = 'speed_value';
$speed.appendChild( $speed_label );

// insert the form control
$speed_value.type = 'number';
$speed_value.id = 'speed_value';
$speed_value.min = '0.1';
$speed_value.max = '10';
$speed_value.step = '0.1';
$speed_value.value = Math.round( to_speak.rate * 10 ) / 10;
$speed.appendChild( $speed_value );

document.body.appendChild($speed);
```
Then, I added an event listener to track when it changes and to update the speech synthesizer:
```js
function adjustSpeed() {

  // cancel the original utterance
  window.speechSynthesis.cancel();

  // find the previous space
  var previous_space = to_speak.text.lastIndexOf( ' ', progress_index );

  // get the remains of the original string
  to_speak.text = to_speak.text.slice( previous_space );

  // math to 1 decimal place
  var speed = Math.round( $speed_value.value * 10 ) / 10;

  // adjust the rate, clamped to the allowed range
  if ( speed > 10 ) {
    speed = 10;
  } else if ( speed < 0.1 ) {
    speed = 0.1;
  }
  to_speak.rate = speed;

  // return to speaking
  window.speechSynthesis.speak( to_speak );

}
$speed_value.addEventListener( 'change', adjustSpeed, false );
```
This works reasonably well, but ultimately I decided that I was not a huge fan of the experience, nor was I convinced it was really necessary, so this functionality remains commented out in my website’s source code30. You can make up your mind after seeing it in action over on CodePen31.
At the top of every blog post, just after the title, I include quite a bit of meta data about the post, including things like the publication date, tags for the post, comment and webmention counts, and so on. I wanted to selectively control which content from that collection is read because only some of it is really relevant in that context. To keep the configuration out of the JavaScript and in the declarative markup where it belongs, I opted to have the JavaScript look for a specific class name, “dont-read”, and exclude those elements from the content that would be read. To make it work, however, I needed to revisit how I was collecting the content to be read in the first place.
You may recall that I’m using the textContent property to extract the content:
```js
var to_speak = new SpeechSynthesisUtterance(
  document.querySelector('main').textContent
);
```
That’s all well and good when you want to grab everything, but if you want to be more selective, you’re better off moving the content into memory so that you can manipulate it without causing repaints and such.
```js
var $content = document.querySelector('main').cloneNode(true);
```
With a clone of main in memory, I can begin the process of winnowing it down to only the stuff I want:
```js
var to_speak = new SpeechSynthesisUtterance(),
    $content = document.querySelector('main').cloneNode(true),
    $skip = $content.querySelectorAll('.dont-read');

// don’t read
Array.prototype.forEach.call( $skip, function( $el ){
  $el.innerHTML = '';
});

to_speak.text = $content.textContent;
```
Here, I’ve separated the creation of the SpeechSynthesisUtterance to make the code a little clearer. Then, I’ve cloned the main element ($content) and built a nodeList of elements that I want to be ignored ($skip). I’ve then looped over the nodeList — borrowing Array’s handy forEach method — and set the contents of each to an empty string, effectively removing them from the content. At the end, I’ve set the text property to the cloned main element’s textContent. Because all of this is done to the cloned main, the page remains unaffected.
Done and done.
Sadly, the value of a SpeechSynthesisUtterance can only be text. If you pipe in HTML, it will read the tag names and slashes. That’s why most of the demos use an input to collect what you want read or rely on textContent to extract text from the page. The reason this saddens me is that it means you lose complete control over the pacing of the content.
But not all is lost. Speech synthesizers are pretty awesome at recognizing the effect that punctuation should have on intonation and pacing. To go back to the first example I shared, consider the difference when you drop a comma between “hello” and “world”:
```js
if ( 'speechSynthesis' in window ) {
  var to_speak = new SpeechSynthesisUtterance('Hello, world!');
  window.speechSynthesis.speak(to_speak);
}
```
See the Pen Experimenting with `speechSynthesis`, example 2 by Aaron Gustafson (@aarongustafson) on CodePen.
Here’s the original again, just so you can compare:
See the Pen Experimenting with `speechSynthesis`, example 1 by Aaron Gustafson (@aarongustafson) on CodePen.
With this in mind, I decided to tweak the pacing of the spoken prose by artificially inserting commas into the specific elements that follow the pattern I just showed for hiding content:
```js
var $pause_before = $content.querySelectorAll(
  'h2, h3, h4, h5, h6, p, li, dt, blockquote, pre, figure, footer'
);

// synthetic pauses
Array.prototype.forEach.call( $pause_before, function( $el ){
  $el.innerHTML = ' , ' + $el.innerHTML;
});
```
While I was doing this, I also noticed some issues with certain elements running into the content around them. Most notably, this was happening with pre elements. To mitigate that, I used the same approach to swap carriage returns, line breaks and such for spaces:
```js
var $space = $content.querySelectorAll('pre');

// spacing out content
Array.prototype.forEach.call( $space, function( $el ){
  $el.innerHTML = ' ' + $el.innerHTML.replace(/[\r\n\t]/g, ' ') + ' ';
});
```
With those tweaks in place, I’ve been incredibly happy with the listening experience. If you’d like to see all of this code in context, head over to my GitHub repository38. The code you use to drop the UI into the page will likely need to be different from what I did, but the rest of the code should be plug-and-play.
Is speechSynthesis Ready For Production?

As it stands right now, the Web Speech API has not become a standard and isn’t even on a standards track39. It’s an experimental API, and some of the details of the specification remain in flux. For instance, the elapsedTime property of a SpeechSynthesisEvent originally tracked milliseconds and then switched to seconds. If you were doing math that relied on that number to do something else in the interface, you might get widely different experiences in Chrome (which still uses milliseconds) and Edge (which uses seconds).
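Until implementations converge, one defensive option is to normalize the value yourself. The helper below is hypothetical and rests on a heuristic assumption: that no utterance runs longer than about 1,000 seconds, so any larger reading must be milliseconds:

```js
// Hypothetical helper: normalize elapsedTime to seconds.
// Heuristic assumption: readings over 1000 are milliseconds.
function elapsedSeconds( e ) {
  return e.elapsedTime > 1000 ? e.elapsedTime / 1000 : e.elapsedTime;
}
```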
If I were granted one wish for this specification (apart from standardization), it would be for real-time speed, pitch and volume adjustment. I can understand the need to restart things to get the text read in another voice, but the others feel like they should be manipulable in real time. But again, I don’t know anything about the inner workings of speech synthesizers, so that might not be technically possible.
In terms of actual browser implementations, basic speech synthesis like I’ve covered here is pretty solid in browsers that support the API40. As I mentioned, Chrome and Edge currently fail to accurately report the virtual cursor position when speech synthesis is paused, but I don’t think that’s a deal-breaker. What is problematic is how unstable things get when you start to combine features such as real-time reading-speed adjustments, pausing and such. Often, the synthesizer just stops working and refuses to start up again. If you’d like to see that happen, take a look at a demo I set up41. Chances are that this issue would go away if the API allowed for real-time manipulation of properties such as rate, because you wouldn’t have to cancel() and restart the synthesizer with each adjustment.
Long story short, if you’re looking at this as a progressive enhancement for a content-heavy website and only want the most basic features, you should be good to go. If you want to get fancy, you might be disappointed or have to come up with more clever coding acrobatics than I’ve mustered.
As with most things on the web, I learned a ton by viewing other people’s source, demos and such — and the documentation, naturally. Here are some of my favorites (some of which I linked to in context):
(rb, yk, il, al)
The sharing spirit in the design community is remarkable. Designers spend countless hours on side projects, and without asking for anything in return, they share their creations freely with the community, just to give something back, to inspire, and to support fellow folks in their work.
When working on a project yourself, freebies like these can come to the rescue when you have to get along on a tight budget but, more often than not, they are simply the missing piece that’ll make your design complete.
In this post, we hand-picked 30 fonts that are bound to give your project the finishing touch, and maybe even inspire you to something entirely new. The fonts can all be downloaded for free. However, please note that some of them are free for personal use only and are clearly marked as such in the description. Also, please be sure to check the license agreements before using a font in your project as they may change from time to time.
For more free font goodness, also check out the following posts:
Luis Calzadilla’s font L-7 Stencil5 is a good match for all those occasions when you want to make a bold statement while keeping the typeface itself rather sleek and slim. Characteristic of the sans-serif font are the stencil-style, fragmented letters and the rounded terminals. The font supports capital letters and numbers and can be used for free in personal projects. If you want to use it in a commercial project, please be sure to credit the designer.
It’s not only the name of the brush sans Westfalia8 that evokes the famous campervan. With its hand-drawn feel, messy edges and varied line thickness, the font also conveys a warm feeling of authenticity and adventure. Westfalia comes in one weight, with capital letters, numbers and punctuation marks, and works especially well in bold headings or on posters. It’s free to use for both personal and commercial projects.
If you’re looking for something to add a personal touch to your projects, the modern calligraphy typeface Setta Script11 might be for you. It comes with 244 glyphs and 69 alternate characters with OpenType features. Ligatures are also supported. A perfect match for greeting cards and invitations.
Inspired by the old growth forests of the West Coast, Old Growth14 is a rough sans-serif font with edges as uneven as the treetops in the woods. This one works especially well for branding, quotes, and headlines. You’re free to use the font to your liking in personal as well as commercial projects.
Inspired by the typography of the 1920s, Marius Kempken designed Moderne Sans17. The typeface is based on uppercase letters, but lowercase letters and numbers are included in the font, too. You may use Moderne Sans freely in both personal and commercial work.
The font family Octanis20 beautifully merges the new and the old. It comes in eight styles ranging from modern, even a bit futuristic sans-serif versions to a rather vintage-inspired slab serif. A nice choice for headlines and logos, but also paragraphs of text look great with it. You may use the typeface for free in both personal and commercial projects.
A balanced upright script with style and moxie. That’s Escafina23. Escafina is a modern interpretation of the letters you usually find in mid-century advertising and signage. It comes in three styles (high, medium, and low) and supports over 100 languages. Personal licenses are pay-as-you-want.
You know those little boxes that appear when a computer can’t render a character? Because of their shape, they are often referred to as “tofu”. Google’s answer to these little boxes is a font family that aims to support all existing languages and, thus, put an end to “tofu”. And what name could be better suited for such an undertaking as “Noto26”, which is assembled from “no more tofu”? The Noto typeface comes in multiple styles and weights and is freely available. Perfect for when your project needs to support languages that other fonts usually fail to display.
To give your project an authentic, handmade touch, Bonfire29 might be just what you were looking for. The hand-drawn brush font shines with its unique swashes. The free version includes upper and lowercase letters in one style that you may use for personal projects.
If you’re looking for a typeface with a seamless flow that still makes a bold statement, Etna32 may be one for you. Characteristic for Etna are the pointy edges of the capital letters that majestically stand out like the tip of a mountain. While the full version covers Latin as well as Cyrillic alphabets, the free version comes with Latin characters only. Free for personal use.
Vintii35 is certainly a friendly and playful typeface that doesn’t take itself too seriously. With its cut-out looks, it’s a good catch for headlines and short descriptions, but it’s readable in larger blocks of text as well. The font contains all basic glyphs and characters and can be used to your liking.
To create his typeface Plume38, Krišjānis Mežulis chose a quite extraordinary approach: He used a thick brush to paint the individual letters, numbers, and punctuation marks on a plastic surface. The result: a crisp typeface with a unique splashed look.
Simple rounded shapes and a sleek overall look are the determining elements of the font Coves41. It comes in two weights (light and bold) and offers full glyph support. You’re free to use Coves in personal projects. If you’re interested in a commercial license, please be sure to contact the designer.
Zefani44 is a typeface with a strong character and an elegant, sophisticated look. The stencil version comes with uppercase letters and can be used for free in private projects.
If you’re looking for a font with personality that is humble enough not to steal the show from your content, check out Kano47. With its geometric structure and sharp edge points, it makes a statement that is ideal for logos, posters, and other typographic work. Kano is free to use in personal and commercial projects.
Ailerons50 translates from French as “little wing,” and that’s exactly where the typeface sought its inspiration: in aircraft models of the 1940s. The typeface is clean and stylish and works especially well for titles. You may use it freely as long as it’s for personal use only. If you’re interested in using Ailerons in a commercial project, please contact the designer.
Do you have a soft spot for hand lettering? Then take a look at Noelan Script53. The modern calligraphy typeface comes with OpenType features that automatically connect initial and terminal swashes. And to improve the handwritten look even further, you can mix and match alternate characters for more variety. Noelan is free for personal and commercial use.
Inspired by vintage print catalogs from the early 1900s, Mark Richardson set out to create a typeface that captures the aesthetics of the era. What came out of it is the free font Phalanx56, and, well, rustic and honest are probably the words that best describe its look. Phalanx comes with a full uppercase alphabet and numbers. You’re free to use it as you wish.
How about some 90s vibes for a change? Shkoder 198959 seeks inspiration in the good things of the decade: sports, tech, and everything else that inspired a kid of the time. The typeface consists of caps, numbers, and a lot of glyphs that make it a good fit for non-English projects, too. Two weights – one light, one black – are available. You may use Shkoder 1989 for any kind of project. If you decide to use it commercially, shoot the designers an email – they’d love to hear about it.
A font that beautifully captures the aesthetic found in popular handwriting pieces is Wayward63. The uppercase alphabet pairs well with script lettering and gives branding projects a personal touch. Free to use, also commercially.
Aqua Grotesque66 is a grotesque typeface with a retro, 1940s touch. Its crisp, geometric shapes make for a fresh and unique look. Feel free to use it as you like.
“A funny font for funny people.” That’s how the font Daddy69 describes itself. Originally created for a children’s book, Daddy is bound to bring a fresh and playful twist to any kind of project. It’s free to use, even commercially.
A sharp and precise design that enables clear communication with the reader – that’s Santral72. Santral was designed with a focus on keeping the balance between visual perfection and optical impression. The complete font family includes twelve weights and italic versions; two of them (Light and Light Italic) can be downloaded for free for personal projects.
The hand-painted brush script typeface Hensa75 is a nice choice for logos, packaging, greeting cards and the like. It supports standard Latin characters (upper- and lowercase), numerals, punctuation, ligatures, and – for the extra handmade touch – a set of swashes. Free for private and commercial use.
Its high x-height and long descenders make Affogato78 an unusually expressive, yet friendly, typeface. It comes in five weights and a vast variety of glyphs, which makes it a good fit for diacritic-heavy languages, too. Affogato looks especially good as display type or in logos, but body copy works well, too. You may use it for free (also commercially) or can pay what you want for a license to show the designer your appreciation.
How about something experimental for a change? Inspired by Kandinsky and Gestalt optical research, Alfonso Armenteros Parras designed Stijla81, a typeface that wants to push the boundaries of legibility. The free version comes with a standard Latin alphabet and numbers.
Another rather experimental font is Accent84. The combination of fine lines and bold geometric shapes works best for short titles and short words. You may use Accent for free in both personal and commercial projects.
Art nouveau and the modern Didot typeface were the source of inspiration for Soria87. Soria comes with a good selection of glyphs and beautiful ligatures. A timely piece with a unique, vintage touch.
A unique yet functional font is Orkney90. With its geometric look and a high level of readability even in small font sizes, it works well in both print and web projects. The Orkney family includes four weights with more than 400 characters and wide language support. Released under the SIL Open Font License, you may use it commercially.
Technically speaking, Multicolore94 isn’t a font: it’s multicolored, and you cannot write with it in your favorite program either. Instead, you’ll need a vector editing application to create text with it. But that’s nothing to worry about, as the bold and playful fellow is best suited for text that includes only a few words anyhow. Multicolore comes in EPS, AI and PDF formats and is free even for commercial use.
Did you stumble across a free font recently that caught your attention? We’d love to hear about it in the comments!
(aa, il)
Many criticize gestural controls as being unintuitive and unnecessary. Despite this, widespread adoption is underway already, and the UI design world is burning the candle at both ends to develop solutions that are instinctively tactile. The challenges here are those of novelty.
Even though gestural controls have been around since the early 1980s1 and have enjoyed a level of ubiquity since the early 2000s, designers are still in the beta-testing phase of making gestural controls intuitive for everyday use.
This article will explore the benefits and drawbacks of gestural controls for mobile UIs, as well as offer advice on effective implementation that avoids the gap in user familiarity.
Gestures come in all shapes and sizes. The most common are listed in the graphic below. These are the conventional controls to which most active mobile device users are accustomed. These are the most used across platforms and, in that regard, the most intuitive. At least that’s the case with people who have significant experience using gestural controls.
This level of intuition can’t be applied, however, to the diminishing population who are flying blind when confronted with a mobile interface. According to an oft-cited study8 by Dan Mauney, there is a great deal of similarity in the way people expect a mobile interface to work. This study asked participants from nine countries to create a set of 28 actions using a gestural interface.
The results were stunningly similar. There wasn’t a ton of variability between actions. Most people expected certain actions to work the same. Deleting, for example, was most often accomplished by dragging an element off of the screen. Menus were constantly consulted — despite warnings not to do this. People often drew a question mark to indicate help functionality.
Oddly enough, the standard set of controls used across most apps were these:
These didn’t always account for the intuitive gestures most people in the study created when left to their own devices. This presents a big question: How intuitive are gestural interfaces? Not only that, but what are the pros and cons of implementing a gestural interface?
Regardless of the drawbacks, one thing is clear: Gestural interfaces aren’t going anywhere. That’s why it’s vital for today’s designers to firmly grasp the underlying concepts that make gestural controls effective. Otherwise, the chance that the usability of their work will suffer increases dramatically.
Gestural controls are popular because of two major factors: the futuristic appeal of interfaces like the one in Minority Report, and the ubiquity of touchscreen mobile devices.
Kidding. It’s just about the mobile devices. The Minority Report HUD display is such a fantastic example, however, that it’s become somewhat of a trope to discuss it in conversations about touch interfaces, but we’re still a ways off from interacting with holographic projections.
Even so, this foreboding Tom Cruise vehicle did a great job of showing what will eventually be possible with UI design. And the important part of that is getting something that’s usable and intuitive. Let’s examine how that’s possible in our first tangible benefit of gestural control.
Touch UIs only feel intuitive when they approximate interaction with a physical object. This means that tactile feedback and the ability to manipulate UI elements have to work as an abstraction of a real object in order to be truly intuitive.
Even poorly designed interfaces only take a little experimentation to figure out, at least for power users. Think about how often you’ve skipped a tutorial to just interact with an app’s interface. You might miss some fine details, but it’s fairly easy to discover the primary controls for most interfaces within a few minutes of unguided interaction. Still, there’s a serious limiter on user delight if there’s no subtle guidance from the designer. So, how do you teach your users without distracting them from the application?
The best approach to creating intuitive touch-based interaction is through a process called progressive disclosure. This is a process by which a user is introduced to controls and gestures as they proceed through an interface’s flow. Start by showing users only the most important options for interaction. You can do this with visual cues, or through a tutorial-like “get started” process. I favor the former, because many users (myself included) will usually skip a tutorial12 to start interacting with an app right away.
Slight visual cues and animations that give instant feedback in response to touch are the perfect delivery method for progressive disclosure. A fantastic example of this is visible in Apple products’ “slide to unlock” commands, although the feature has since been removed.
The interface guides you with text, indicates the direction with an arrow and offers immediate feedback action with animation. You can take this same concept and draw it out further with more multifaceted applications.
In his 2013 article about gestural interfaces16, Thomas Joos, contributor to Smashing Magazine, covers this process thoroughly, pointing to YouTube’s Capture application as an example.
Both progressive disclosure and the tutorial techniques offer guidance should a user require it. The disclosure method, however, has the added benefit of respecting the user enough to expect they can figure out a process.
Because they’re completing a task with minimal guidance (achieving goals, as it were), they feel a sense of accomplishment. The design is adding to their delight in interacting with the app. This can help to create habits and obviously makes it much easier to learn related or increasingly complex operations within the application. You’ve established a pattern of minimal guidance; all you have to do is repeat it as the functions layer on in complexity.
The important thing to remember when teaching users how to use your interface is the three-part process of habit formation: trigger, action and feedback.
The trigger is the inciting action, such as a push notification reminding a user to interact with the app. The action is where you leave your subtle clue as to how the user should gesticulate in order to complete the goal. Then comes the feedback, which works as a sort of reward for a job well done.
This habit-formation process is a form of user onboarding, or a way of ensuring that new users are successful when they start using your application, and then converting casual visitors into enthusiastic fans. A great example of this process (specifically, the third step20) can be seen in the Lumosity app.
The Lumosity app is a game-based brain-training application. It allows users to set up their own triggers, which manifest as push notifications.
It then progresses to the actions, the games themselves. These games are gesture-based, and each is introduced by a quick, easy, simple tutorial.
Note the succinct instructions. A quick read and the performance of the instructions provide instant feedback on user actions.
Finally, after the user has finished each exercise, the feedback is offered — then again, when they’ve finished a set number of exercises in a given day.
Providing these stimuli to the user reinforces their satisfaction from performing their tasks, as well as their memory of how to perform them. Gestural controls are a skill, like any other. If you can make learning them fun, the curve to retaining them will flatten significantly.
Of course, easy learning is only one benefit of a gestural UI. Another big one is the fact that it promotes a minimalist aesthetic.
Screen real estate on a mobile device is a big deal. Your space is limited, and you have to use it wisely — especially if you have an abundance of features. That’s why so many interfaces are resorting to the hamburger menu icon to hide navigation controls.
Using gestures for navigation of a website might be a bit of a tradeoff in usability, but it makes an app look pretty slick. Just take a look at the Solar app, which is highly minimalist and offers those subtle cues we talked about earlier.
Though the clarity of the actions a user is meant to take is decreased slightly, the look and feel of the app are boosted in a tangible way. Plus, delight is increased because the user is given more autonomy to figure out what to do on their own. Speaking of delight…
Something that’s easy to use and easy on the eyes is also easy to enjoy. Gestural controls enable a tactile experience for users, and that’s downright enjoyable. Using haptic feedback to indicate a successful interaction, for example, can give users a subtle sense of accomplishment. This could be as simple as a confirmative vibration upon muting the phone (as in the case of both Apple and Android products).
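On the web, a minimal sketch of that confirmative buzz might use the Vibration API, where supported; the mute-toggle handler here is a hypothetical stand-in:

```js
// A sketch of confirmative haptic feedback; confirmMuteToggle is hypothetical
function confirmMuteToggle() {
  if ( 'vibrate' in navigator ) {
    navigator.vibrate( 50 ); // one short pulse, in milliseconds
  }
}
```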
Basically, in addition to the visual and audio appeal of a product, designers can now begin incorporating touch sensations as a way to engage users. The folks over at Disney are exploring this concept31 with no lack of zeal.
That brings us to our final point. This is unexplored territory — a whole new world of interaction for designers to bring to life in living color! While usability and industry best practices should always be considered and consulted, this is a chance to break creatively from convention. While it might not always work out to be revolutionary, experimentation in this field can’t help but be exciting.
Oddly enough, with all of the futuristic appeal and hype paid to gestural controls, the trend isn’t universally beloved. In fact, there’s a sizeable camp in the design world that considers gestural controls to be a step back in usability.
At least part of this pushback is due to the rush to implement. Many designers working on gestural interfaces are ignoring the standard UX caveats that have been shown to measurably improve a product’s usability. Moreover, the inclination towards conformity in design is always pretty high. You’re reading what is essentially a “best practices” article, after all. And it’s one of thousands.
This means that people are using the same techniques and design patterns across any number of applications and interfaces, even when they don’t make sense, due to “conventional wisdom.”
Designers sometimes duplicate the same usability problems in their work that you find in other popular gestural interfaces employed by industry big boys, such as Google and Facebook — for example, the preference for icons over text-based links. In an effort to save space, designers use pictures rather than text. This, in itself, isn’t exactly a cardinal sin, and it can be very helpful in moderation.
The problem is that it isn’t exactly super-intuitive. Pictures are subjective. They can mean different things to different people, and assuming that your users will know what an obscure icon is supposed to do is quite the gamble.
Check out the interface of the music app Bloom.fm.
There’s a lot going on here. What’s the raindrop supposed to be? Is that a warning for a snowstorm in the bottom left? A musical note overlaying a hamburger menu in the top right, right across the screen from a regular hamburger menu? What am I looking at?
Granted, some users can hit the ground running with these interfaces and learn a lot as they go. But the point is that nothing about this interface gives you a sense of immediate apprehension. It’s not intuitive.
To address this, Bloom.fm might be better served by removing these dissonant symbols from the main screen entirely. Put these functions (whatever they are) in the hidden menu. After all, if you’re on a music player screen, what more do you really need than play, pause, fast forward and rewind?
This brings us to my next point, which is the overarching problem with gestural interfaces: All of the controls and gestural functions are always hidden. You’re depending on a user’s prior familiarity with basic gestural concepts to get along.
This means that any departure from convention will be seen as unfamiliar. Even more problematic is that there’s no industry standard for gestural controls. It’s like the Wild West, but with more tapping and less shooting. Double-tapping might mean one thing in one app and something completely different in another.
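Part of the reason conventions drift is that double-tap isn’t even a native browser event; every app rolls its own detection. A minimal sketch, where the target element and the 300ms window are assumptions:

```js
// Minimal double-tap detection; `target` and the 300ms window are assumptions
var lastTap = 0;
target.addEventListener( 'touchend', function() {
  var now = Date.now();
  if ( now - lastTap < 300 ) {
    // treat as a double-tap: zoom, upvote, whatever this app's convention is
  }
  lastTap = now;
}, false );
```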
Gestural controls can even change between iterations of an application. For instance, the image-sharing application Imgur used the double-tap gesture for zooming in an old version, but an update to the interface changed the gesture to something different entirely. Now it’s used to “upvote” a post (i.e. increasing its popularity and giving the poster fake Internet points).
Which leads to another problem: The learning curve, depending on your attention to usability detail, can be quite steep. While picking up gestural skills is usually pretty easy, as discussed above, the greater room to explore and implement new design patterns means that touch UIs can be highly variable. Variability is roughly equivalent to unpredictability, and that’s the opposite of intuitive.
To combat this, well-designed touch UIs stay in their lane for the most part, relying on visual cues (particularly animations) and text-based explanations in some cases, to establish a connection between a gesture and a function in the user’s mind.
As stated at the beginning of this article, despite any deficiencies that may or may not be innate in the basic concepts of gestural interfaces, the touch UI is here to stay. Its flexibility and mild learning curve (for the basics anyway) practically ensure it.
The bottom line is that, regardless of the benefits and disadvantages, touch is the dominant interface of the future. In other words, you’ll have to find a way to make it work. Proceed with caution, and stick with the familiar whenever possible. The best advice I can give is to keep it simple and to test with users above and beyond what’s required. It’s in your best interest to figure out how and when to introduce new controls, to make sure you’re not an example in someone else’s article about UI usability.
If you’d like to learn more about the implementation of touch gestures, check out these helpful resources:
(da, vf, al, il)
CSSRooster takes your HTML code as input, including CSS styles, and then writes class names for your HTML tags by analyzing the patterns hidden in your code.
Everyone here can have a big impact on a project, on someone else. I get very excited about this when I read stories like the one about an intern at Google who ran an experiment that saves tons of traffic, or when I get an email from one of my readers who has now published an awesome complete beginner’s guide to front-end development.
We need to recognize that our industry depends on people who share their open-source code, and we should support them and their projects1 that we heavily rely on2. Finally, we also need to understand that these people perhaps don’t want a job as an employee at some big company but would rather remain independent. So if you make money with a project that uses open-source libraries or other resources, Valentine’s Day might be an occasion to show your appreciation and give the author a nice present.
And with that, I’ll close for this week. If you like what I write each week, please support me with a donation20 or share this resource with other people. You can learn more about the costs of the project here21. It’s available via email, RSS and online.
— Anselm
Creating a clock in Sketch might not sound exciting at first, but we’ll discover how easy it is to recreate real-world objects in a very accurate way. You’ll learn how to apply multiple layers of borders and shadows, you’ll take a deeper look at gradients, and you’ll see how objects can be rotated and duplicated in special ways. To help you along the way, you can also download the Sketch editable file1 (139 KB).
This is a rather advanced tutorial, so if you are not that savvy with Sketch yet and need some help, I would recommend first reading “Design a Responsive Music Player in Sketch” (Part One2 | Part Two3), which covers a few key aspects of working with Sketch in detail. You can also have a look at my personal project sketchtips.info4, where I regularly provide tips and tricks about Sketch.
The first step is to create a new document, named “clock.” Now, set up a new artboard with the same name, 600 pixels in both dimensions and positioned at “0” (X) and “0” (Y). For the background color, choose a toned-down blue; I picked the one from the Global Colors in the color dialog (#4A90E2). Center the artboard to the canvas with Cmd + 3, and let’s get started.
The base of the clock is a simple white circle with a diameter of “480px.” Drag it to this size after you have pressed O on the keyboard. Align it to the center of the artboard, and name it “Face.” For the bezel, add a first Inside border with a Thickness of “16.” With just a single solid color, it would look quite dull; to give it a metallic appearance, we will add an angular gradient instead (Fig. 2). After you have picked this fill type for the border in the color dialog (last icon), click on the first color stop on the left of this dialog. Move it a bit to the right with the arrow key (press it about four times). Jump to the other color stop with Tab, and use the arrow key again to slightly move its position, but this time to the left (about six times). Change the color to “#BFBEAC”; I’ve mixed in a small amount of yellow to give it a more natural look, which also applies to some of the other light colors in the gradient. Now go back to the first stop again and change this one to a color of “#484848”.
After that, add six more color stops with a double-click each, their colors being (from left to right): “#BDBDBD”, “#A1A091”, “#C9C9C9”, “#575757”, “#C9C8B5”, “#555555”. For the positions, please refer to Fig. 2. It looks way better now, but it is still not the result I had in mind. I also want the frame to have a 3D feeling, which is achieved with two additional borders: one below (add it with a click on the “+” button), with an Inside position and a thickness of “21.” Because it is placed below, it will be covered partly, but due to its increased size, it can still be seen a little. Keep this in mind when you stack borders.
Assign the second border a linear gradient (the second icon in the color dialog), going from the top-left of the clock face to the bottom-right. For the start color at the top, choose “#929292”; for the one at the bottom, “#D6D6D6” (Fig. 3). This alone gives the clock much more depth, but another border should give us the final look. This time, add one with an Outside border, stacked between the other two and with a thickness of 5 pixels. This one also needs a linear gradient in the same direction, but from light to dark, with a color of “#BDBDBD” at the beginning and “#676767” at the end.
Now that we have taken care of the frame itself, we also want it to look slightly raised from the clock face. This is accomplished with a light Inner Shadow. Because the borders already cover a certain part of the clock, the shadow needs to be quite big so that it can be seen. To counteract this, increase the Spread of the shadow to a relatively large value of “26,” which will pull the shadow in. Setting the Blur to “10” now gives us a nice centered shadow; however, that doesn’t respect the lighting of the scene. The light is supposed to come from the top-left, so we need to correct both the X and Y positions to “3.” To echo the theme of the artboard’s background, I have chosen a darker shade of the blue, with “#162A40” at “23%” alpha. Save this color to the Document Colors for later reference.
This is not the only shadow we will use. Another one on the outside will make sure that the clock contrasts with the background and looks as if it would hang on a wall. The shadow should be black, with an alpha value of “23%” and the remaining properties “6/6/14.” This time, we don’t need to increase the Spread because we’ve only set a slight outside border for the circle. The raised effect is even reinforced with a slight gradient on the background itself. Because we have set it directly on the artboard, we need to overlay a rectangle (press R) for this purpose.
Add one that covers the whole artboard (name it “Background shadow”) but that is behind the clock face, and change the fill to a radial gradient. Move its center to the bottom-right third of the artboard (Fig. 4, 1); to change the size, drag the indicator on the circle line to the top-left third of the artboard (Fig. 4, 2). Be sure to use the point that is connected to the center with a line (the other point would change the gradient’s shape to an ellipse). Set both color stops to black in the inspector: the one at the center should have full opacity (100%), and the one on the outside none at all (0%). The shadow would be way too strong like this, so decrease the general opacity of this layer to 24% (close the color dialog and press 2, rapidly followed by 4).
With the last step, we finished the casing of the clock, so let’s take care of the clock face itself now. To make the alignment of all of the elements easier, let’s add some custom guides first: Show the rulers with Ctrl + R, and make sure that the circle is selected. Now, add a vertical guide at its center with a click on the upper ruler. As a guide, hover over the ruler until the red line is directly above the middle handles of the shape on the canvas. Do the same for the horizontal guide on the left ruler. For the correct placement, you could also have a look at the positions of the guides when you hover over the rulers: With an artboard size of 600 pixels, this would be 300 pixels for both.
To break the ground, we’ll add the scale for the hours. Create a rectangle at the top of the clock face, above the circle, for the mark of the twelfth hour. The easiest way is to add a rectangle with a random size first and then change it in the inspector. It should have the dimensions “6” (width) and “18” (height), with a black fill. Move it “31px” away from the outer edge of the circle: Hold Alt to show the smart guides, including the distance; point to the circle with the mouse; leave it there; and use the arrow keys to reposition the shape until the spacing is correct (while still holding Alt). Also, center it to the clock face horizontally after selecting both layers, making a right-click and selecting Align Horizontally. But what about the remaining hour marks? It would be quite tedious to create and rotate them by hand.
Luckily, Sketch offers a handy feature that can do both at the same time: Rotate Copies. Select it from Layer → Paths in the menu bar. The following dialog lets you define how many additional copies of the selected element to make. With a total of twelve hours, we require eleven more marks, which Sketch will space evenly around the circle (360 ÷ 12 = 30 degrees between marks). After you have entered this value and confirmed the dialog, you will be presented with all of the lines and a circular indicator in the middle. You can drag this circle around at will; based on the position, it gives you a wealth of different patterns. Try to move it around! Also, give some other shapes (instead of a rectangle) a shot as a starting point to see what can be done with this option.
However, for the correct placement of the hour marks, move the indicator down until it is at the intersection of the guides that we added earlier (Fig. 5). That was easy! Please note that you won’t be able to alter this position anymore as soon as you click anywhere else on the canvas. But you will still be able to change the individual elements after accessing the related Boolean group with a double-click on the canvas. Rename it to “Hour marks.”
For the minutes, we can take a similar approach, but instead of lines, we will create circles for these marks. To make that easier, set the hours to “20%” opacity first with 2. Now, draw a circle with a diameter of “8px” at the same position as the current mark on the twelfth hour, which you should move “40px” from the top edge of the clock. Also, set its color to black.
The Rotate Copies option comes into play again. This time we need “59” additional copies (60 marks in total, spaced 6 degrees apart). Like before, align the circular indicator to the intersection of the guides. At once, we’ve added all of the marks for the minutes. Rename the new Boolean group to “Minute marks,” and access it with a double-click. However, we don’t need the marks at the same positions as the hours, so we will delete them now: Click on the mark at “12” on the canvas, hold Shift, click on the other round marks that overlap, and delete all twelve of them. You can now set the hours to full opacity again.
This brings us a huge step closer to the final clock face. However, we still have some work to do. First, the digits. To give the clock a modern appearance, I have chosen the futuristic Exo 2 family from Google15. Unfortunately, you can’t use Rotate Copies to distribute text layers, but we would need to align them manually anyway due to the different shapes of the numbers, so let’s go for it.
To make the alignment easier, create a circle with a diameter of “360” at the center of the clock, and assign it a thin gray border (no fill). Add the “12” at the top, with a font size of “52,” a “Bold” weight and a black fill: Align it with the arrow keys so that its top side touches the helper circle (Fig. 6). The number should also be centered to the corresponding hour mark. Continue in the same manner for the remaining hours. Always make sure that they touch the circle on the inside. The easiest way is to drag the preceding number out while holding Alt, move it to the new place, change the content, and set the final position with the arrow keys. When you are finished, delete this helper shape. Also, create a “Digits” group for all of the numbers.
The remaining elements to take care of are the watch hands. Zoom in a bit to start with the second hand. It’s made of a simple red (#DF321E) rectangle with dimensions of “4” (width) and “200” (height), whose lower two vector points are moved in “1px” each to form a slight trapezoid. To achieve this, press Enter to go into vector point mode, hit Tab two times to go to the lower-right point, and press the left arrow key on the keyboard to move it 1 pixel to the left. Hit Tab again to continue to the lower-left point, which you’ll move in with the right arrow key. Leave this mode again by pressing Esc two times, zoom back to 100% with Cmd + 0, and center the hand to the artboard horizontally. On the Y axis, it should be “192px” away from the top of the watch. Because it is supposed to point to the “6,” we don’t need to rotate it, but make sure that it is above the “Digits” group in the layers list. Finally, name it “Second,” but hide it for now.
You can create the minute hand in the same fashion: Add a black rectangle with the dimensions “10” (width) and “210” (height), and zoom into it with Cmd + 2. For this shape, we’ll add some points at the top and bottom. Like before, enter vector point mode, and move the lower points in “2px” each. Now hold Cmd and click on the top segment to add a point in the exact middle. Push this point up by 3 pixels. Do the same for the lower segment, but move it down by 4 pixels (Fig. 7).
Finally, give the pointer a three-dimensional appearance with a crest (Fig. 8). One way to achieve this is to add a gradient with a hard stop in the middle, consisting of two stops at the same position. Add a gradient fill on top of the existing fill, assigned black with “100%” alpha for the first color stop and white with “0%” for the last stop. Bring the gradient to a horizontal position with the left-pointing arrow in the color dialog.
Now add another point with a double-click on the gradient axis in the color dialog, moved to the exact middle with 5 on the keyboard. Give it 100% alpha, and make sure it is black. Add another one to the right, and also move it to the center with 5, but then press the right arrow key once to offset it slightly to the right. After you have changed it to white with “30%” alpha, you’ll see that this has resulted in a hard edge, thanks to the same position of the color stops. To conclude, leave the color dialog by clicking anywhere on the canvas, and name this shape “Minute.” Place it 188 pixels away from the top of the clock, centered horizontally on the artboard.
It’s quite an easy task to get to the hour hand from here. Duplicate its minute counterpart, but hide the original layer, name the new one “Hour,” and change the dimensions to “12” (width) and “162” (height). That already gives us the final shape. However, we need to mirror it horizontally to bring the gradient to the opposite side: Right-click on the shape, and select Flip Horizontal from the Transform menu. After that, position it “202px” from the top of the clock face, and center it. Be sure that the order of the hands is second, hour, minute in the layers list, and combine all of them into a new group, “Hands.” It should be above the “Digits” group.
Time to set the clock. The second hand, which you can show again now, already points in the right direction, but the other two hands should read 10:07. Rotating the hour pointer in the default way doesn’t give us the correct result because it alters the position we’ve already set. You may remember that it’s possible to adjust the point around which an element rotates. For this to work, we need to use the Rotate icon in the toolbar (Fig. 9, 1), which gives us a little indicator at the center of the object (Fig. 9, 2).
Drag it to the intersection of the custom guides defined earlier, and try to perform the rotation now: The hand will move like on a real clock. Take this opportunity to set the hour hand to a little after 10:00, at about “233” degrees. Show the minute hand again, and proceed in the same manner, but rotate it until it is at the seventh minute of the hour (“–137” degrees). Please note that you need to perform the rotation on the canvas; the input field in the inspector won’t respect the altered rotation point.
For the final touch and to further strengthen the 3D effect of the watch, add some shadows to the hands. Start with the second hand: To respect that the light comes from the top-left, we need to set the properties to “2/5/4/0” with the dark blue that we saved to the Document Colors (#162A40), but at “30%” opacity. The same blur and color can be used for the shadow of the hour hand, but the X and Y positions need to be changed to “–3” and “–2.” The same goes for the minute hand, but with values of “–4” and “–2.”
To top everything off, we will add one last element: a small red circle with a diameter of 12 pixels at the center of the clock that will hold all of the hands in position; name it “Cover” (Fig. 10). Pick up the color from the second hand with the color picker, and add a second fill on top of it: a radial gradient that has the same size and position as the circle, starting with 0% black at the center and going to 20% black on the outline. Also, add a shadow to raise it slightly from the hands. Give it the properties “0/0/5/0” with 50% black.
The result is a realistic wall clock. You’ve learned not only how to stack multiple borders, but also how to apply gradients to create distinctive effects. You’ve also learned more about rotations and how to use the Rotate Copies function to add multiple copies of the same object in a very special way.
Did you find it useful? It’s just a small glimpse into The Sketch Handbook2826, written by Christian, and published by Smashing Magazine. The full book (which features many more topics27) should help you become a proficient user of Sketch in (almost) no time. No guarantees though! 😉 Happy reading!
(mb, il)
This article is an excerpt from Christian’s The Sketch Handbook2826, available in print and as an eBook, published by yours truly. The book contains twelve jam-packed chapters within 376 pages. Among other things, it will teach you how to design a multi-screen mobile app, a responsive article layout, as well as icons and interfaces. You’ll also learn about the most recommended plugins for Sketch and a few useful tips, tricks and best practices.
The world map that shows you all current job posts aggregated from Hacker News, Stack Overflow, GitHub and more.
The first set of screens users interact with sets the expectations for the app. To make sure your users don’t delete your app after the first use, you should teach them how to complete key tasks and make them want to come back for more. In other words, you need to successfully onboard and engage your users during those first interactions.
The onboarding process is a critical step in setting up your users for success with your product. You only get one chance to make a first impression. In this article, we’ll provide some tips on how to approach onboarding using a simple pattern called “empty states.” If you’d like to bring your app or website to life with little effort, you can download and test Adobe XD1 for free.
Content is what provides value for most apps. Whether it’s a news feed, a to-do app or a system dashboard, the content is why people use apps. This is why it’s critical to consider how we design empty states: those moments in a user’s journey when an app might not have any content for the user yet.
An app screen whose default state is empty and requires users to go through one or more steps to populate it with data is perfectly suited to onboarding. Besides informing the user about what content to expect on the page, empty states also teach people how to use your app. Even if the onboarding process consists of just one step, the guidance will reassure users that they are doing the right thing.
Consider a “first-use” empty state as part of a cohesive onboarding experience. You should utilize the empty state screen to educate and engage your users. Use this screen as an opportunity to turn a moment of nothing into something.
First and foremost, the empty state screen should help users understand the context. Setting expectations for what will happen helps users get comfortable. The best way to deliver this information is a show-or-tell format: Show the user what the screen will look like when it’s filled with content, or tell them with clear instructions.
Most empty states will tell you what they are for and why you’re seeing them. But effective empty states take this even further and tell you what you can do next. Educating your users is important, but true success in your first empty state means driving an action. Think of this empty state as a starting point, and design it to encourage user activity.
While your app should be functional (it should solve a problem for your users) and usable (it should be easy to learn and easy to use), it should also be pleasurable. Empty states are an excellent opportunity to make a human connection with your users and get across the personality of your app.
Despite the fact that empty states can engage users, they’re often overlooked during design and development. This happens because we normally design for a populated interface where everything in the layout looks well arranged. However, how should we design our page when the content is pending user action? Empty state design is actually an amazing opportunity for creativity and usability.
The absolute worst thing you can do with an empty state is to drop your users into a dead end. Dead ends create confusion and lead to additional and unnecessary taps. Consider the difference between the following two examples from Modspot’s Posts screens. The first image is Modspot’s current screen for first-time users; a useful and smartly crafted empty state reduces friction by guiding users along to an action that will get them started.
The second image is a fake version of the same screen that I’ve created to demonstrate an ineffective empty state that provides no guidance, no examples – only a dead end.
The beauty of a great empty state design is its simplicity. You should use a minimalist design approach in order to bring the most important content to the forefront and minimize distractions. Thus, only include well-written and easily scannable copy (clear, brief descriptions or easy-to-follow instructions) and wrap it together with good visuals.
Don’t forget that empty states aren’t only about visual aesthetics. They should also help users understand the context. Even if it’s meant to be just a temporary onboarding step, you should maximize its communication value for users and provide directions on how to change an empty state to an active one.
Let’s take an empty state screen from Google Photos as an example. Visually, it looks great: a well-composed layout with beautiful graphics. However, this empty state simply doesn’t help users understand the context, and it doesn’t answer the following questions:
A good first impression isn’t just about usability, it’s also about personality. Personality is what makes your app memorable and pleasurable to use. It may not seem like much, but if your first empty state looks a bit different from similar products, your users will notice and expect the entire product experience to be different, as well. For example, below you can see how Khaylo Workout uses its empty states to convey personality and tone.
Your primary goal is to persuade your users to do something as soon as possible so that the screen won’t be empty. To prompt action on an empty state, don’t just show users the benefit they will receive when they interact with your app; direct them to the desired action as well.
Let’s examine the install screen of Facebook Messenger. When users arrive at this screen, they are met with encouragement: The screen lets users know the benefits of the product (a user can take pictures or record video using Messenger) and tells them how many of their Facebook friends are already using the app. The “Install” button guides users to the next step necessary to clear up the empty state. Users simply have no other option than to tap “Install.”
When you personalize your app for users, you show off the value of your product even faster. The main goal of personalization is to deliver content that matches specific user needs or interests, with no effort from the targeted users. The app profiles the user and adjusts the interface – filling empty states – according to that profile.
Consider providing starter content that will allow users to explore your app right away. For example, a book-reading app might provide all users with a few books chosen based on information about the user.
Empty states can help you show the human side of your business or product. Positive emotional stimuli can build a sense of engagement with your users. What kind of feeling your empty state conveys depends on the purpose of your app. The example below shows the emotional side of an empty state in Google Hangouts and how it can incentivize users to get invites on Hangouts.
Of course, showing emotion in design like in the example above is risky – some people don’t get it, and some people may even hate it. But, that’s OK, since emotional response to your design is much better than indifference.
The moment a first-time user completes an important task is a great opportunity for you to create a positive emotional connection between them and your product. Let your users know that they are doing great by acknowledging their progress and celebrating success with them.
A success state is an amazing opportunity to congratulate users on a job well done and prompt them toward new interactions. For example, clearing a task list is certainly a positive achievement for Writeupp users. It’s great that the app offers a congratulatory “Well done!” as positive reinforcement. This success state delights users and offers next steps to keep them engaged.
The following resources can help you find user onboarding and user interface inspiration:
Your empty state should never feel empty. Don’t let the user face a blank screen the first time they open an app. Invest in empty states because they aren’t a temporary or minor part of the user experience. In fact, they are just as important as other design components and full of potential to drive engagement and delight users when they have just signed up.
This article is part of the UX design series sponsored by Adobe. The newly introduced Experience Design app34 is made for a fast and fluid UX design process, creating interactive navigation prototypes, as well as testing and sharing them – all in one place.
You can check out more inspiring projects created with Adobe XD on Behance35, and also visit the Adobe XD blog to stay updated and informed. Adobe XD is being updated with new features frequently, and since it’s in public Beta, you can download and test it for free36.
(ms, vf, yk, aa, il)
Spacegrid * AR.js * Boundless * Flatris * WhaleSynth * Dwitter * Purser * Box Alignment Cheatsheet * Webpack Intro…
As JavaScript developers, we often forget that not everyone has the same knowledge as us. It’s called the curse of knowledge1: When we’re an expert on something, we cannot remember how confused we felt as newbies. We overestimate what people will find easy. Therefore, we think that requiring a bunch of JavaScript to initialize or configure the libraries we write is OK. Meanwhile, some of our users struggle to use them, frantically copying and pasting examples from the documentation, tweaking them at random until they work.
You might be wondering, “But all HTML and CSS authors know JavaScript, right?” Wrong. Take a look at the results of my poll2, which is the only data on this I’m aware of. (If you know of any proper studies on this, please mention them in the comments!)
One in two people who write HTML and CSS is not comfortable with JavaScript. One in two. Let that sink in for a moment.
As an example, look at the following code to initialize a jQuery UI autocomplete, taken from its documentation5:
<div>
  <label for="tags">Tags: </label>
  <input id="tags">
</div>
$( function() {
  var availableTags = [
    "ActionScript",
    "AppleScript",
    "Asp",
    "BASIC",
    "C"
  ];
  $( "#tags" ).autocomplete({
    source: availableTags
  });
} );
This is easy, even for people who don’t know any JavaScript, right? Wrong. A non-programmer would have all sorts of questions going through their head after seeing this example in the documentation. “Where do I put this code?” “What are these braces, colons and brackets?” “Do I need them?” “What do I do if my element does not have an ID?” And so on. Even this tiny snippet of code requires people to understand object literals, arrays, variables, strings, how to get a reference to a DOM element, events, when the DOM is ready and much more. Things that seem trivial to programmers can be an uphill battle to HTML authors with no JavaScript knowledge.
Now consider the equivalent declarative code from HTML56:
<div>
  <label for="tags">Tags: </label>
  <input id="tags" list="languages">
  <datalist id="languages">
    <option>ActionScript</option>
    <option>AppleScript</option>
    <option>Asp</option>
    <option>BASIC</option>
    <option>C</option>
  </datalist>
</div>
Not only is this much clearer to anyone who can write HTML, it is even easier for programmers. Everything is set in one place: no need to care about when to initialize, how to get a reference to the element or how to set stuff on it. No need to know which function to call to initialize or which arguments it accepts. And for more advanced use cases, there is also a JavaScript API in place that allows all of these attributes and elements to be created dynamically. It follows one of the most basic API design principles: It makes the simple easy and the complex possible.
This brings us to an important lesson about HTML APIs: They benefit not only people with limited JavaScript skills. For common tasks, even we programmers are often eager to sacrifice the flexibility of programming for the convenience of declarative markup. However, we somehow forget this when writing a library of our own.
So, what is an HTML API? According to Wikipedia7, an API (or application programming interface) is “a set of subroutine definitions, protocols, and tools for building application software.” In an HTML API, the definitions and protocols are in the HTML itself, and the tools look in HTML for the configuration. HTML APIs usually consist of certain class and attribute patterns that can be used on existing HTML. With Web Components, even custom element names8 are fair game, and with the Shadow DOM9, those can even have an entire internal structure that is hidden from the rest of the page’s JavaScript or CSS. But this is not an article about Web Components; Web Components give more power and options to HTML API designers, but the principles of good (HTML) API design are the same.
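To make this concrete, here is what such a pattern might look like. The following snippet is purely hypothetical: the gallery library and its class and attribute names are invented for illustration.

<!-- A hypothetical HTML API: the library finds every .gallery element
     and reads its settings from data-* attributes. -->
<div class="gallery" data-gallery-autoplay data-gallery-interval="5">
  <img src="sunrise.jpg" alt="Sunrise">
  <img src="sunset.jpg" alt="Sunset">
</div>

The page author writes no JavaScript at all; the markup is the configuration.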
HTML APIs improve collaboration between designers and developers, lift some work from the shoulders of the latter, and enable designers to create much higher-fidelity mockups. Including an HTML API in your library does not just make the community more inclusive, it also ultimately comes back to benefit you, the programmer.
Not every library needs an HTML API. HTML APIs are mostly useful in libraries that enable UI elements such as galleries, drag-and-drop, accordions, tabs, carousels, etc. As a rule of thumb, if a non-programmer cannot understand what your library does, then your library doesn’t need an HTML API. For example, libraries that simplify or help to organize code do not need an HTML API. What kind of HTML API would an MVC framework or a DOM helper library even have?
So far, we have discussed what an HTML API is, why it is useful and when it is needed. The rest of this article is about how to design a good one.
With a JavaScript API, initialization is strictly controlled by the library’s user: Because they have to manually call a function or create an object, they control precisely when it runs and on what. With an HTML API, we have to make that choice for them, and make sure not to get in the way of the power users who will still use JavaScript and want full control.
The common way to resolve the tension between these two use cases is to only auto-initialize elements that match a given selector, usually a specific class. Awesomplete10 follows this approach, only picking up input elements with class="awesomplete".
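Under the hood, auto-initialization of this kind can be very little code. Here is a minimal sketch (the Awesomplete constructor is real; the wiring around it is our assumption, and it presumes the script runs after the DOM is ready):

// Pick up every input that opted in via the "awesomplete" class
// and create an instance for it. Per-element settings then come
// from that element's own data-* attributes.
document.querySelectorAll("input.awesomplete").forEach(function (input) {
  new Awesomplete(input);
});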
In some cases, making auto-initialization easy is more important than making opt-in explicit. This is common when your library needs to run on a lot of elements, and avoiding the need to manually add a class to every single one outweighs the benefit of an explicit opt-in. For example, Prism1711 automatically highlights any <code> element that contains a language-xxx class (which is what the HTML5 specification recommends for specifying the language of a code snippet12) or that is inside an element that does. This is because it could be included in a blog with a ton of code snippets, and having to go back and add a class to every single one of them would be a huge hassle.
In cases where the init selector is used very liberally, a good practice is to allow customization of it or to allow opting out of auto-initialization altogether. For example, Stretchy13 autosizes every <input>, <select> and <textarea> by default, but allows customization of its init selector to something more specific via a data-stretchy-filter attribute. Prism supports a data-manual attribute on its <script> element to completely disable automatic initialization. A good practice is to allow this option to be set via either HTML or JavaScript, to accommodate both types of library users.
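What might that dual lookup look like? A sketch follows; the MyLib global and the data-mylib-filter attribute are invented names. Note that document.currentScript is captured at load time, because it is null in callbacks that run later:

// Resolve the init selector: a JavaScript-set option wins, then an
// HTML attribute on the including <script> element, then a default.
var ownScript = document.currentScript;

function getInitSelector() {
  if (window.MyLib && window.MyLib.filter) {
    return window.MyLib.filter; // JavaScript API
  }
  if (ownScript && ownScript.hasAttribute("data-mylib-filter")) {
    return ownScript.getAttribute("data-mylib-filter"); // HTML API
  }
  return "input, select, textarea"; // default
}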
So, for every element the init selector matches, your library needs a wrapper around it, three buttons inside it and two adjacent divs? No problem, but generate them yourself. This kind of grunt work is better suited to machines, not humans. Do not expect that everyone using your library is also using some sort of templating system: Many people are still hand-crafting markup and find build systems too complicated. Make their lives easier.
This also minimizes error conditions: What if a user includes the class that you expect for initialization but not all of the markup you need? When there is no extra markup to add, no such errors are possible.
There is one exception to this rule: graceful degradation and progressive enhancement. For example, embedding a tweet involves a lot of markup, even though a single element with data-* attributes for all the options would suffice. This is done so that the tweet is readable even before the JavaScript loads or runs. A good rule of thumb is to ask yourself, does the extra markup offer a benefit to the end user even without JavaScript? If so, then requiring it is OK. If not, then generate it with your library.
There is also the classic tension between ease of use and customization: Generating all of the markup for the library’s user is easier for them, but leaving them to write it gives them more flexibility. Flexibility is great when you need it, but annoying when you don’t and still have to set everything up manually. To balance these two needs, you can generate the markup you need only if it doesn’t already exist. For example, suppose you wrap all .foo elements with a .foo-container element. First, check whether the parent — or, better yet, any ancestor, via element.closest(".foo-container") — of your .foo element already has the foo-container class, and if so, use that instead of creating a new element.
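As a sketch, using the class names from the example above (the structure is ours, not taken from any particular library):

// Wrap each .foo element in a .foo-container, unless the author
// already provided one among its ancestors.
document.querySelectorAll(".foo").forEach(function (foo) {
  var container = foo.closest(".foo-container");
  if (!container) {
    container = document.createElement("div");
    container.className = "foo-container";
    foo.parentNode.insertBefore(container, foo);
    container.appendChild(foo);
  }
});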
Typically, settings should be provided via data-* attributes on the relevant element. If your library adds a ton of attributes, then you might want to namespace them to prevent collisions with other libraries, like data-foo-* (where foo is a one-to-three-letter prefix based on your library’s name). If that’s too long, you could use foo-*, but bear in mind that this will break HTML validation and might put some of the more diligent HTML authors off your library because of it. Ideally, you should support both, if it won’t bloat your code too much. None of the options here are ideal, so there is an ongoing discussion14 in the WHATWG about whether to legalize such prefixes for custom attributes.
Follow the conventions of HTML as much as possible. For example, if you use an attribute for a boolean setting, its presence means true regardless of the value, and its absence means false. Do not expect things like data-foo="true" or data-foo="false" instead. Sure, ARIA does that, but if ARIA jumped off a cliff, would you do it, too?
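Reading a setting that follows this convention is a one-liner. A sketch, continuing the data-foo example:

// HTML-convention boolean: presence means true, absence means false.
// hasAttribute() ignores the value, so even data-foo="false" is true.
function getBooleanSetting(element, name) {
  return element.hasAttribute("data-" + name);
}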
When the setting is a boolean, you could also use classes. Typically, their semantics are similar to boolean attributes: The presence of the class means true, and the absence means false. If you want the opposite, you can use a no- prefix (for example, no-line-numbers). Keep in mind that class names are used more than data-* attributes, so there is a greater possibility of collision with the user’s existing class names. You could consider prefixing your classes with something like foo- to prevent that. Another danger with class names is that a future maintainer might notice that they are not used in the CSS and remove them.
When you have a group of related boolean settings, using one space-separated attribute might be better than using many separate attributes or classes. For example, <div data-permissions="read add edit delete save logout"> is better than <div data-read data-add data-edit data-delete data-save data-logout>, and using class="read add edit delete save logout" would likely cause a ton of collisions. You can then target individual ones via the ~= attribute selector. For example, element.matches("[data-permissions~=read]") checks whether an element has the read permission.
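If you also need those tokens in JavaScript, splitting the attribute is trivial. A sketch, continuing the data-permissions example:

// Read a space-separated token attribute into an array of tokens.
function getPermissions(element) {
  var value = element.getAttribute("data-permissions") || "";
  return value.trim().split(/\s+/).filter(Boolean); // drop empty strings
}

A check like getPermissions(el).indexOf("read") > -1 then mirrors the ~= selector test above.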
If the type of a setting is an array or object, then you can use a data-* attribute that links to another element. For example, look at how HTML5 does autocomplete: Because autocomplete requires a list of suggestions, you use an attribute to link to a <datalist> element containing these suggestions via its ID.
This is a point where following HTML conventions becomes painful: In HTML, linking to another element in an attribute is always done by referencing its ID (think of <label for="…">). However, this is rather limiting: It’s so much more convenient to allow selectors or even nesting if it makes sense. What you go with will largely depend on your use case. Just keep in mind that, while consistency is important, usability is our goal here.
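One accommodating scheme, purely illustrative, is to try the value as an ID first and fall back to treating it as a selector:

// Resolve an attribute value that references another element:
// first as an ID (the HTML convention), then as a CSS selector.
function resolveReference(value) {
  if (!value) {
    return null;
  }
  return document.getElementById(value) || document.querySelector(value);
}

With this scheme, both data-list="languages" and data-list="#menu .languages" would work.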
It’s OK if not every single setting is available via HTML. Settings whose values are functions can stay in JavaScript and be considered “advanced customization.” Consider Awesomplete15: All numerical, boolean, string and object settings are available as data-* attributes (list, minChars, maxItems, autoFirst). All function settings are only available in JavaScript (filter, sort, item, replace, data). If someone is able to write a JavaScript function to configure your library, then they can use the JavaScript API.
Regular expressions (regexes) are a bit of a gray area: Typically, only programmers know regular expressions (and even programmers have trouble with them!); so, at first glance, there doesn’t seem to be any point in including settings with regex values in your HTML API. However, HTML5 did include such a setting (<input pattern="regex">), and I believe it was quite successful, because non-programmers can look up their use case in a regex directory16 and copy and paste.
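For example, a non-programmer who wants to validate a five-digit US ZIP code can copy a pattern like the one below; the pattern itself is a typical directory snippet, not something they need to be able to write:

<!-- The browser validates the field against the pattern on submit;
     the page author writes no JavaScript. -->
<input pattern="[0-9]{5}" title="Five-digit ZIP code">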
If your UI library is going to be used once or twice on each page, then inheritance won’t matter much. However, if it could be applied to multiple elements, then configuring the same settings on each one of them via classes or attributes would be painful. Remember that not everyone uses a build system, especially non-developers. In these cases, it might be useful to define that settings can be inherited from ancestor elements, so that multiple instances can be mass-configured.
Take Prism1711, a popular syntax-highlighting library, used here on Smashing Magazine as well. The highlighting language is configured via a class of the form language-xxx. Yes, this goes against the guidelines we discussed in the previous section, but it was a conscious decision, because the HTML5 specification recommends this18 for specifying the language of a code snippet. On a page with multiple code snippets (think of how often a blog post about code uses inline <code> elements!), specifying the coding language on each <code> element would become extremely tedious. To mitigate this pain, Prism supports inheritance of these classes: If a <code> element does not have a language-xxx class of its own, then the class of its closest ancestor that has one is used. This enables users to set the coding language globally (by putting the class on the <body> or <html> element) or by section, and to override it only on elements or sections with a different language.
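The lookup itself is simple. Here is a sketch of the idea (not Prism’s actual source code):

// Find the effective language of a <code> element: its own
// language-xxx class, or that of its closest ancestor with one.
// closest() checks the element itself before walking up the tree.
function getLanguage(code) {
  var el = code.closest('[class*="language-"]');
  var match = el && /\blanguage-([\w-]+)\b/.exec(el.className);
  return match ? match[1] : undefined;
}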
Now that CSS variables19 are supported by every browser20, they are a good candidate for such settings: They are inherited by default and can be set inline via the style attribute, via CSS or via JavaScript. In your code, you get them via getComputedStyle(element).getPropertyValue("--variablename"). Besides browser support, their main downside is that developers are not yet used to them, but that is changing. Also, you cannot monitor changes to them via MutationObserver, like you can for elements and attributes.
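As a sketch, here is a numeric library setting exposed as an inherited custom property; the property name --mylib-delay is invented for illustration:

// Read a numeric setting from an inherited CSS custom property.
// getPropertyValue() returns a string (possibly empty or padded).
function getDelay(element) {
  var raw = getComputedStyle(element).getPropertyValue("--mylib-delay");
  return parseFloat(raw) || 0; // fall back to 0 when unset
}

Authors could then configure globally with html { --mylib-delay: 2; } or per element with style="--mylib-delay: 2".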
Most UI libraries have two groups of settings: settings that customize how each instance of the widget behaves, and global settings that customize how the library behaves. So far, we have mainly discussed the former, so you might be wondering what is a good place for these global settings.
One candidate is the <script> element that includes your library. You can get this via document.currentScript21, and it has very good browser support22. The advantage of this is that it’s unambiguous what these settings are for, so their names can be shorter (for example, data-filter, instead of data-stretchy-filter).
However, the <script> element should not be the only place you pick up these settings from, because some users may be using your library in a CMS that does not allow them to customize <script> elements. You could also look for the setting on the <html> and <body> elements, or even anywhere, as long as you have a clearly stated policy about which value wins when there are duplicates. (The first one? The last one? Something else?)
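Here is a sketch of such a lookup with a stated first-one-wins policy. The data-mylib-* prefix is invented, and currentScript is again captured at load time:

// Look for a global setting on the including <script> element,
// then on <html>, then on <body>. The first match wins.
var ownScript = document.currentScript;

function getGlobalSetting(name) {
  var attr = "data-mylib-" + name;
  var candidates = [ownScript, document.documentElement, document.body];
  for (var i = 0; i < candidates.length; i++) {
    if (candidates[i] && candidates[i].hasAttribute(attr)) {
      return candidates[i].getAttribute(attr);
    }
  }
  return null;
}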
So, you’ve taken care to design a nice declarative API for your library. Well done! However, if all of your documentation is written as if the user understands JavaScript, few will be able to use it. I remember seeing a cool library for toggling the display of elements based on the URL, via HTML attributes on the elements to be toggled. However, its nice HTML API could not be used by the people it targeted, because the entire documentation was littered with JavaScript references. The very first example started with, “This is equivalent to location.href.match(/foo/).” What chance does a non-programmer have of understanding this?
Also, remember that many of these people do not speak any programming language, not just JavaScript. Do not talk about models, views, controllers or other software engineering concepts in text that you expect them to read and understand. All you will achieve is confusing them and turning them away.
Of course, you should document the JavaScript parts of your API as well. You could do that in an “Advanced usage” section. However, if you start your documentation with references to JavaScript objects and functions or software engineering concepts, then you’re essentially telling non-programmers that this library is not for them, thereby excluding a large portion of your potential users. Sadly, most documentation for libraries with HTML APIs suffers from these issues, because HTML APIs are often seen as a shortcut for programmers, not as a way for non-programmers to use these libraries. Hopefully, this will change in the future.
In the near future, the Web Components quartet of specifications will revolutionize HTML APIs. The <template> element will enable authors to provide scripts with partial inert markup. Custom elements will enable much more elegant init markup that resembles native HTML. HTML imports will enable authors to include just one file, instead of three style sheets, five scripts and ten templates (if Mozilla gets its act together and stops thinking that ES6 modules are a competing technology23). The Shadow DOM will enable your library to have complex DOM structures that are properly encapsulated and that do not interfere with the user’s own markup.
However, <template> aside, browser support for the other three is currently limited24, so they require large polyfills, which makes them less attractive for library use. Still, it’s something to keep on your radar for the near future.
If you’ve followed the advice in this article, then congratulations on making the web a better, more inclusive space to be creative in! I try to maintain a list of all libraries that have HTML APIs on MarkApp25. Send a pull request and add yours, too!
(vf, il, al)
The virtual realm is uncharted territory for many designers. In the last few years, we’ve witnessed an explosion in virtual reality (VR) hardware and applications. VR experiences range from the mundane to the wondrous, their complexity and utility varying greatly.
Taking your first steps into VR as a UX or UI designer can be daunting. We know because we’ve been there. But fear not! In this article, we’ll share a process for designing VR apps that we hope you’ll use to start designing for VR yourself. You don’t need to be an expert in VR; you just need to be willing to apply your skills to a new domain. Ultimately, as a community working together, we can accelerate VR to reach its full potential faster.
Generally speaking, from a designer’s perspective, VR applications are made up of two types of components: environments and interfaces.
You can think of an environment as the world that you enter when you put on a VR headset — the virtual planet you find yourself on, or the view from the rollercoaster5 that you’re riding.
An interface is the set of elements that users interact with to navigate an environment and control their experience. All VR apps can be positioned along two axes according to the complexity of these two components.
In the top-left quadrant are things like simulators, such as the rollercoaster experience linked to above. These have a fully formed environment but no interface at all. You’re simply locked in for the ride.
In the opposite quadrant are apps that have a developed interface but little or no environment. Samsung’s Gear VR home screen is a good example.
Designing virtual environments such as places and landscapes requires proficiency with 3D modelling tools, putting these elements out of reach for many designers. However, there’s a huge opportunity for UX and UI designers to apply their skills to designing user interfaces for virtual reality (or VR UIs, for short).
The first full VR UI design we did was an app for The Economist, created in collaboration with VR production studio Visualise12. We did the design, while Visualise created the content and developed the app.
We’ll use this as a working example throughout the next section, in which we’ll lay out an approach to designing VR apps, before getting into the nitty-gritty of designing interfaces for VR. You can download the Economist app for Gear VR15 from the Oculus website.
Whereas most designers have figured out their workflow for designing mobile apps, processes for designing VR interfaces are yet to be defined. When the first VR app design project came through our door, the logical first step was for us to devise a process.
When we first played with Gear VR by Samsung, we noticed similarities to traditional mobile apps. Interface-based VR apps work according to the same basic dynamic as traditional apps: Users interact with an interface that helps them navigate pages. We’re simplifying here, but just keep this in mind for now.
Given the similarity to traditional apps, the tried-and-tested mobile app workflows that designers have spent years refining won’t go to waste and can be used to craft VR UIs. You’re closer to designing VR apps than you think!
Before describing how to design VR interfaces, let’s step back and run through the process for designing a traditional mobile app.
First, we’ll go through rapid iterations, defining the interactions and general layout.
At this stage, the features and interactions have been approved. Brand guidelines are now applied to the wireframes, and a beautiful interface is crafted.
Here, we’ll organize screens into flows, drawing links between screens and describing the interactions for each screen. We call this the app’s blueprint, and it will be used as the main reference for developers working on the project.
Now, how can we apply this workflow to virtual reality?
The simplest problems can be the most challenging. Faced with a 360-degree canvas, one might find it difficult to know where to begin. It turns out that UX and UI designers only need to focus on a certain portion of the total space.
We spent weeks trying to figure out what canvas size would make sense for VR. When you work on a mobile app, the canvas size is determined by the device’s size: 1334 × 750 pixels for the iPhone 6 and roughly 1280 × 720 pixels for Android.
To apply this mobile app workflow to VR UIs, you first have to figure out a canvas size that makes sense.
Below is what a 360-degree environment looks like when flattened. This representation is called an equirectangular projection. In a 3D virtual environment, these projections are wrapped around a sphere to mimic the real world.
The full width of the projection represents 360 degrees horizontally, and the full height 180 degrees vertically. We can use this to define the pixel size of the canvas: at 10 pixels per degree, that’s 3600 × 1800.
Working with such a big size can be a challenge. But because we’re primarily interested in the interface aspect of VR apps, we can concentrate on a segment of this canvas.
Building on Mike Alger’s early research26 on comfortable viewing areas, we can isolate a portion where it makes sense to present the interface.
The area of interest represents one ninth of the 360-degree environment (the 3600 × 1800-pixel projection divided into a 3 × 3 grid). It’s positioned right at the centre of the equirectangular image and is 1200 × 600 pixels in size.
Let’s sum up:
The reason for using two canvases for a single screen is testing. The “UI View” canvas helps to keep our focus on the interface we’re crafting and makes it easier to design flows.
Meanwhile, the “360 View” is used to preview the interface in a VR environment. To get a real sense of proportions, testing the interface with a VR headset is necessary.
Before we get started with the walkthrough, here are the tools we’ll need:
In this section, we’ll run through a short tutorial on how to design a VR interface. We’ll design a simple one together, which should take five minutes tops.
Download the assets pack36, which contains presized UI elements and the background image. If you want to use your own assets, go for it; it won’t be a problem.
First things first. Let’s create the canvas that will represent the 360-degree view. Open a new document in Sketch, and create an artboard: 3600 × 1800 pixels.
Import the file named background.jpg, and place it in the middle of the canvas. If you’re using your own equirectangular background, make sure its proportions are 2:1, and resize it to 3600 × 1800 pixels.
As mentioned above, the “UI View” is a cropped version of the “360 View” and focuses on the VR interface only.
Create a new artboard next to the previous one: 1200 × 600 pixels. Then, copy the background that we just added to our “360 View,” and place it in the middle of our new artboard. Don’t resize it! We want to keep a cropped version of the background here.
We’re going to design our interface on the “UI View” canvas. We’ll keep things simple for the sake of this exercise and add a row of tiles. If you’re feeling lazy, just grab the file named tile.png in the assets pack and drag it into the middle of the UI view.
Duplicate it, and create a row of three tiles.
Grab kickpush-logo.png from the assets pack, and place it above the tiles.
Looking pretty good, eh?
Now for the fun stuff. Make sure the “UI View” artboard is above the “360 View” artboard in the layers list on the left.
Drag the “UI View” artboard to the middle of the “360 View” artboard. Export the “360 View” artboard as a PNG; the “UI View” will be on top of it.
Open the GoPro VR Player and drag the “360 View” PNG that you just exported into the window. Drag the image with your mouse to preview your 360-degree environment.
We’re done! Pretty simple when you know how, right?
If you have an Oculus Rift set up on your machine, then the GoPro VR Player should detect it and allow you to preview the image using your VR device. Depending on your configuration, you might have to mess around with the display settings in MacOS.
The resolution of the VR headset is pretty bad. Well, that’s not entirely true: It’s equivalent to your phone’s resolution. However, considering the device is 5 centimeters from your eyes, the display doesn’t look crisp.
To get a crisp VR experience, we would need an 8K display per eye. That’s a 15,360 × 7680-pixel display. We’re pretty far off from that, but we’ll get there eventually.
Because of the display’s resolution, all of your beautifully crisp UI elements will look pixelated. This means, first, that text will be difficult to read and, secondly, that there will be a high level of aliasing on straight lines. Try to avoid using big text blocks and highly detailed UI elements.
Remember the blueprint from our mobile app design process? We’ve adapted this practice to VR interfaces. Using our UI views, we map and organize our flows into a comprehensible blueprint, ideal for developers to understand the overall architecture of the app we’ve designed.
Designing a beautiful UI is one thing, but showing how it’s supposed to animate is a different story. Once again, we’ve decided to approach it with a two-dimensional perspective.
Using our Sketch designs, we animate the interface with Adobe After Effects49 and Principle50. While the outcome is not a 3D experience, it’s used as a guideline for the development team and to help our clients understand our vision at an early stage of the process.
We know what you’re thinking, though: “That’s cool, but VR apps can get way more complicated.” Yes, they can. The question is, to what extent can we apply our current UX and UI practices to this new medium?
Some VR experiences rely so heavily on the virtual environment that a traditional interface that sits on top might not be the optimal way for the user to control the app. In this case, you might want users to interact directly with the environment itself.
Imagine that you’re making an app for a luxury travel agent. You’d want to transport the user to potential holiday destinations in the most vivid way possible. So, you invite the user to put on the headset and begin the experience in your swanky Chelsea office.
To transition from the office to some far away place, the user needs to choose where they want to go. They could pick up a travel magazine and flick through it until they land on an appealing page. Or there could be a collection of interesting objects on your desk that whisk the user to different locations depending on which one they pick up.
This is definitely cool, but there are some drawbacks. To get the full effect, you’d need a more advanced VR headset with handheld controllers. Plus, an app like this takes quite a bit more effort to develop than a set of well-presented options organized as in a traditional app interface.
The reality is that these immersive experiences are not commercially viable for most companies. Unless you’ve got virtually unlimited resources, like Valve and Google, creating an experience like the one described above is probably too costly, too risky and too time-consuming.
This kind of experience is brilliant for showing off that you’re at the cutting edge of media and technology, but not so great for taking your product to market through a new medium. Accessibility is important.
Usually, when a new format emerges, it’s pushed to the limit by early adopters: the creators and innovators of this world. In time, and with enough learning and investment, it becomes accessible to a wider range of potential users.
As VR headsets become more commonplace, companies will start to spot opportunities to integrate VR into the ways that they engage with customers.
From our perspective, VR apps with intuitive UIs — that is, UIs closer to what people are already accustomed to with their wearables, phones, tablets and computers — are what will make VR an affordable and worthwhile investment for the majority of companies that pursue it.
We hope we’ve made the VR space a bit less scary with this article and inspired you to start designing for VR yourself.
They say that if you want to travel fast, travel alone, but if you want to travel far, travel together. We want to travel far. At Kickpush, we think that every company will have a VR app someday, just like every company now has a mobile website (or should have — it’s 2017, dang it!).
So, we’re building a rocketship, a joint effort by designers around the globe to boldly go where no designer has gone before. The sooner that producing VR apps makes sense for companies, the sooner the whole ecosystem will blow up.
Our next challenges as digital product designers are more complex applications and handling other types of input through controllers. To begin to tackle this, we’ll need robust prototyping tools that let us create and test designs quickly and easily. We’ll be writing a follow-up article that looks at some of the early attempts to do this, and at some of the new tools in development.
Stay tuned!
(km, il, al)
Lottie is an iOS, Android, and React Native library that renders After Effects animations in real time, allowing apps to use animations as easily as they use static images.
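To give a sense of how little glue code that takes, here is a minimal sketch using the web renderer (lottie-web, the browser sibling of the native libraries); the container ID and the JSON path are placeholder values, not part of the library:

    import lottie from 'lottie-web';

    // Load an After Effects animation exported to JSON with the Bodymovin plugin.
    // 'hero-animation' and the path are hypothetical; point them at your own markup and export.
    const animation = lottie.loadAnimation({
      container: document.getElementById('hero-animation')!, // DOM node to render into
      renderer: 'svg',   // 'svg', 'canvas' or 'html'
      loop: true,
      autoplay: true,
      path: '/animations/hero.json', // Bodymovin export
    });

    // The returned instance behaves like a media object.
    animation.setSpeed(1.5);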
With great power comes great responsibility. This week I found some resources that got me thinking: Service Workers that download 16MB of data on the user’s first visit? A Bluetooth API in the browser? Private browser windows that aren’t so private after all?
We have a lot of methods and strategies to fix these kinds of things. We can give the browser smarter hints, put security headers and HTTPS in place, serve web fonts locally, and build safer network protocols. The responsibility is in our hands.
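For instance, here is a minimal sketch of the “security headers” part in an Express.js app; the header values are illustrative starting points rather than a complete policy, and in production a vetted middleware such as helmet is the safer route:

    import express from 'express';

    const app = express();

    // Hand-set a few common security headers on every response.
    app.use((_req, res, next) => {
      res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
      res.setHeader('X-Content-Type-Options', 'nosniff');
      res.setHeader('X-Frame-Options', 'DENY');
      res.setHeader('Content-Security-Policy', "default-src 'self'");
      next();
    });

    app.get('/', (_req, res) => res.send('Hello, secure world!'));
    app.listen(3000);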
On the browser front: <keygen> will be removed, for example, as well as the prefix in some webkit-prefixed APIs. position: sticky is now supported almost everywhere (the only browser still lacking support for it now is MS Edge). Among the features landing in browsers, fetch(), Custom Elements, CSS Grid Layout, the Reduced Motion Media Query, and ES6 native modules are notable ones. And new tooling can add <link rel='preload'> (and prefetch) automatically.
And with that, I’ll close for this week. If you like what I write each week, please support me with a donation21 or share this resource with other people. You can learn more about the costs of the project here22. It’s available via email, RSS and online.
— Anselm
If it’s still snowy where you live, then you’re probably tired of the cold weather by now. Winter may be in full swing, but that shouldn’t stop us from hunting for inspiration. While the gray days always seem to find a way to make us more and more anxious for springtime to finally arrive, it’s also a time we can use to reflect on our work and perhaps better decide what it is that we hope to improve or change in the coming months.
Believe it or not, some of these photographs and illustrations are the starting point of a design that I create. They are the spark that sets the process of creation in motion. It doesn’t take much; it can be any part of an element that catches my eye, be it a particular color, style, texture, or anything really. You’ll find a bit of everything in today’s selection: Architecture, colors, some of the best photographs from 2016, and more. I hope you’ll like my playground! 😉
I’m admiring the textures and colors here. Wonderful to get some new ideas for backgrounds.
I really like the style of Bodil Jane. This piece of beautiful artwork looks almost like a collage of separate items glued onto a canvas.
Some good advice that I can totally get behind. Cleverly translated to something else you do every day.
The view angle is so well done in this illustration, as well as the shadow and light effects. A few other gems in here, such as the transparency of the bag on the desk and the wooden floor under the desk.
Beautiful book cover illustration. How deep she is in her thoughts is just so inspiring.
A wonderful advertisement for planet Earth. Look at that fire in the sky! Purdy.
One of my own pictures shot during a morning bicycle ride. The best kind! Those colors are just wow!
Creating hair is among the most difficult things to achieve. It takes a long time to get it right. That’s why I always study the ones who master it. The hair is simply gorgeous in this illustration, especially those braids.
Some fantastic photographs in this Strava collection of 2016. Hard to pick just one, but after much deliberation I chose this one. Isn’t it marvelous?
The first thing I noticed is the wonderful color palette. I’m also admiring all the different buildings created with very few elements. Imagine how it would look if it were brought to life. Well, look here19!
Perfect scenery for some daydreaming. The texture used in this illustration is awe-inspiring.
Not your typical color palette, but the colors really work quite well together. Lovely shapes, too!
Using negative space in illustration is one of my favorite things. I’ve personally never done it, but would love to one day. This is one of those nice examples to look up to.
I love the style in which the illustrator doesn’t draw perfect characters. It’s an illustration style that embraces the awkward. Hard to pull off right.
First thing I checked out was the pattern on the guy’s shirt. I also love the use of sharp angles for the arms, legs and other elements in this scene.
Just like the illustration above, this one also features the sharp angles, but also some interesting textures. Look at how the shadows and highlights are applied. Truly amazing!
The choice of colors and shapes is well considered here. Simple, yet effective.
Such an incredible spot! Beautifully captured with some gorgeous light.
Speaking of the use of sharp angles and shapes, Tokyo-based illustrator Jun Takahashi uses this technique to create geometric sports characters. All this with a muted, contemporary palette of colors. His series is called Square Modern.38
Some fine details in this cover, such as the dotted stripes on the pants of the male character. Another is the bast structure of the tree.
Very appropriate since we are still in the middle of winter. This snow landscape is just beautifully executed. Works so well with this pinkish mood/sky.
Splendid shot! Stunning light, colors and lovely composition.
Beautiful smooth water and great sunlight colors.
What makes this interesting is the way this illustration is compiled: the mix of lines and fills, in combination with a limited color palette. Clever.
So delicate and beautiful! Colors, subtle use of gradients, everything is inspiring.
I like the atmosphere in this fall-like scenery. Many great details, such as the way the collar and sleeve patterns are created. They create a lovely accent.
This is gorgeous! Loving the colors for this first set of Data Visualization Guidelines from IBM. Great composition and geometry.
If time travel were possible, it would look like this. Jurassic Age is part of the Time Travel Destinations Posters. There are a few more, and some are animated too. Go have a look57.
Really diggin’ the stylized perspective. It creates a nice composition.
Great character in this clever logo illustration. That type is great – it really fits the tone of the logo.
So many details in this colorful illustration.
Admiring the simplicity in this illustration.
If you love minimalistic architecture and colors like I do, you’ll appreciate this work by Paolo Pettigiani. The new series is called “SHAPEGUARD”.
A second one from the new series called “SHAPEGUARD” by Paolo Pettigiani.
Wonderful duotones at work, especially to create the feeling of the movement of the water. Those swimsuits are not too shabby either.
For the sci-fi fans among us. A hi-tech village on a transparent hill, enjoying a dramatic red sunset of a class M red dwarf sun. Those gradients and the glowy sun are so perfect!
Talk about being in the right place at the right time. Sunset curving up a wave!
One more for the sci-fi fans. A special color palette and a great illustration style.
Shepard Fairey, whose iconic posters supporting Barack Obama’s 2008 election campaign won him Design of the Year, has a new offering. The American graphic designer has applied the same posterized style and palette of red, beige and blue of the Hope imagery to three new designs, created for a nonprofit organization called the Amplifier Foundation.
Loving this muted color palette and the organic style.
If you love your classics you’ll recognize the Lindy Hop in this wonderful illustration.
Creative Director and photographer Dylan Schwartz’s point of view is high above the cities he photographs, capturing the bridges, sports complexes, and tips of high rises from the cockpit of a helicopter.
I’ve featured an image of San Francisco’s fog here before. The waviness is almost surreal.
When Charles Manson and The Beach Boys‘ Dennis Wilson meet. So beautifully stylized!
Such a great concept to have panoramic scenery on the sweaters.
What does one look like after a rough day at work? I think this illustration pretty much nails it.
Great view of the hustle of Hong Kong. The flow of the water is beautifully rendered. Lovely color palette, perfectly executed.
Riccardo is a regular guest here. Love how he works with flat colors and sharp-angled shapes.
One more of Riccardo’s recent works. Brilliant as always! I also love the retro touch in all of them.
The Swedish photographer Jeanette Hägglund seems to have found a nice playground in the city of La Manzanera, near Alicante. She plays with the architecture, colors, and light and shadows. Be sure to see the rest of the series.
(yk, il)
In a recent sales meeting for a prospective healthcare client, our team at Mad*Pow found ourselves answering an all-too-familiar question. We had covered the fundamental approach of user-centered design, agreed on leading with research and strategy, and everything was going smoothly. Just as we were wrapping up, the head of their team suddenly asked, “Oh, you guys design mobile-first, right?”
Well, that’s a difficult question to answer.
While the concept of mobile-first began as a philosophy to help prioritize content and ensure positive, device-agnostic experiences, budgetary and scheduling constraints often result in mobile-first meaning mobile-only.
But according to the analytics data of our healthcare clients, the majority of their users are still on desktop. We want to provide a positive experience for those users, for users of mobile and tablet apps, for those using mobile browsers — and even for users having an in-person experience! It is not accurate to assume that mobile is the primary experience.
We’ve come to the conclusion that mobile-first is not specific enough to user needs. Truly user-centered design needs to start with the journeys our users are taking and the flows they follow to complete their objectives. In other words, journey-driven design. Journey-driven design naturally emerges from a user-centered approach that factors in the who, the when and the how to reveal the truly complex set of user needs. Good design doesn’t force users to pick up the device that we designers want them to pick up; good design gives users the best of what a company has to offer on the device that the user wants to use at that point in their journey.
Early in the world of mobile, we (designers in general) essentially designed for desktops… desktops with small screens. UX designers were used to thinking about things like how their users would approach a website and what visual, linguistic and contextual clues they would need to complete their tasks, but we didn’t think about the screen’s size changing. In 1999, we merely worked with 800 pixels. Then, we expanded to 1024, then 1200. Designers rejoiced!
As mobile design improved, battery life lengthened and Wi-Fi became ubiquitous, we learned that users were likely to approach a website from their mobile screen. But all of the design considerations for desktop didn’t translate well.
The idea of mobile-first was a triumph for the user. In 2009, Luke Wroblewski introduced6 this best practice. Karen McGrane added to the conversation7 with her Content Strategy for Mobile in 2012. They found that designing with the constraints of small screens helps us to prioritize content, which leads to a better experience for the end user. In addition, the capabilities of mobile devices left more opportunities for engaging experiences.
Still, it focused on only one great experience. One variation focused on the concept of graceful degradation, which suggested that we design a perfect experience (typically for desktop) and then account for older browsers and less common devices by ensuring functionality, even if the design suffered. Similarly, we tried progressive enhancement, which suggested starting small (mobile) and then enhancing the design as the device or browser gets bigger. Neither accounted for a great design across the full range of devices.
And, more importantly, no one intended mobile-first to mean mobile-only.
Now it’s 2017, and we assume that every project needs to be mobile-friendly; so, when budgets decrease, mobile-first does become mobile-only. After all, 34% of people8 use the Internet predominantly from their mobile phones, and as of April 2015, Google penalizes websites that aren’t usable on mobile. But the choice isn’t as simple as mobile or desktop. Many users switch devices mid-task9, making it even more vital that we focus our content and create consistency across the experiences. In healthcare, 50% of smartphone users download health apps10 — which also means that 50% do not yet.
In a recent report on mobile marketing statistics, Smart Insights founder Dave Chaffey analyzed the reports11 and concluded:
The reality is that while smartphone use is overwhelmingly popular for some activities such as social media, messaging and catching up with news and gossip, the majority of consumers in western markets also have desktop (and tablet) devices which they tend to use for more detailed review and purchasing.
We have to ask: Did a patient get their diagnosis via a phone call at home and then turn to their desktop, or were they at the doctor’s office searching on their mobile phone? Did that shoe shopper peek at their phone for a cheaper online deal, or did they go home and make the purchase on their tablet?
The divide between online and offline has dissolved as the Internet of Things15 expands, and our experiences are likely to cross between phone, laptop, tablet, TV, watch, refrigerator, car and even toilet16! Pixels and dimensions mean far less than responsive design, adaptive content and journey mapping. There is no obvious starting point for a journey, and no clear template to follow. In order to prioritize content as well as to design for screens beyond mobile, we need to focus on the journey as a holistic piece.
What makes journey-first design more effective than mobile-first is that we are looking at the process holistically. This means that even small budgets have the time and money to take into account design thinking for more screen sizes. In addition, journey-first provides context, a necessary element of today’s design, which was not a focus when mobile-first design began.
The first step in journey-driven design is to map the journey itself, with a focus on how someone accomplishes their goals, and the best person to ask is the end user. Most user-centered design projects already begin with a research phase. It’s an opportunity to hear from users or prospective users and to learn what their expectations are, where they have pain points, and what device or devices they will use as they navigate through the experience.
Based on the research, we create personas17, and then map out each persona’s ideal journey, including making note of every touchpoint along the way to the final objective. These are the times when the business and the persona communicate, whether via email, website, social media, store visit, phone call, mailing or other method. Those touchpoints will be what we can actually design, in order to shape the complete experience. In other words, we can’t necessarily control what a user does when they’re out for a walk, with their phone on silent, but we can control what they see when they receive an email from us, and that will affect the portions of their experience when we’re not connected.
For one of our healthcare clients, this became particularly clear when we considered how end users interact with insurance companies. Though the patient’s goal is to get their lab results, on the way to the doctor’s office, the patient might wonder what their copayment is. If they check the mobile website to log in, that’s a touchpoint for us. If they look for signs in the doctor’s office that list typical copayments, that’s another potential touchpoint. After the appointment, when the claim is filed, we have an opportunity to email the patient an update — another touchpoint.
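To make the mapping concrete, here is a rough sketch of those touchpoints expressed as data; the type and field names are our own illustration for this article, not a tool from the project:

    type Device = 'mobile' | 'tablet' | 'desktop' | 'in-person';

    interface Touchpoint {
      step: string;     // what the persona is trying to accomplish
      channel: string;  // email, website, signage, phone call, etc.
      device: Device;   // where we expect the interaction to happen
    }

    // The copayment journey from the example above, as data.
    const copaymentJourney: Touchpoint[] = [
      { step: 'Check the copayment on the way in', channel: 'mobile website login', device: 'mobile' },
      { step: 'Confirm the copayment at the office', channel: 'signage', device: 'in-person' },
      { step: 'Receive a claim update after the visit', channel: 'email', device: 'desktop' },
    ];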
All of these touchpoints come together in an ecosystem web21, which is one of our primary tools when we engage in journey-driven design. The ecosystem web will help us make connections between the areas of our website, the various other technologies involved in the Internet of Things, and the actions the user is taking. On the ecosystem web, we can also identify which device the persona is using at each digital touchpoint. We might know this information from user research, or we can gather it from analytics, which can get so granular as to tell us which pages on a website are most visited by which devices.
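As a hedged sketch of that last step, page views can be rolled up by device category; the record shape below is hypothetical, since every analytics package exposes this data differently:

    interface PageView {
      page: string;
      device: 'mobile' | 'tablet' | 'desktop';
    }

    // Count visits per page per device category to see which screens
    // each step of the journey is actually experienced on.
    function visitsByDevice(views: PageView[]): Map<string, Record<string, number>> {
      const totals = new Map<string, Record<string, number>>();
      for (const { page, device } of views) {
        const row = totals.get(page) ?? { mobile: 0, tablet: 0, desktop: 0 };
        row[device] += 1;
        totals.set(page, row);
      }
      return totals;
    }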
Knowing the touchpoints and designing an ecosystem map is all well and good, but there’s still the matter of designing an appropriate user interface and interactions to help the end user accomplish their goal(s). We still need to build out wireframes or mockups that can ultimately be developed into an app or website.
For some designers, creating an ecosystem map or shifting that into a journey map (whether current or idealized) is as far as journey-driven design goes, before shifting back to mobile-first. The consistency of viewing all designs in mobile-sized mockups is attractive, and stakeholders love the cohesive feel. But our job as designers is to sell them on the need for a variety of screens. We need to consider each set of interactions independently. If there is an ideal user journey that begins on mobile but after four screens switches to desktop, then we need to design four steps for mobile and then change to desktop. If another prospective user journey (maybe the power user’s journey) within the same app begins on a tablet, then we should begin with a tablet design. If still another begins on the watch, then that’s where we begin. We’re still prioritizing content, but we’re also accounting for the visual frame.
After designing each interaction, ask these key questions:
Although the designs won’t have the cohesive feel of a stack of mobile screens, they will have the context of the user’s scenario and their rationale for particular actions. As an added benefit, this type of non-linear thinking will encourage new design approaches. Plus, front-end developers love getting a few screens for each screen size early on; it helps them to structure their development work.
For our healthcare client, we sketched out an ideal journey map on sticky notes, using one sticky note for each touchpoint and different colors for each likely device. Once we understood the touchpoints along the journey, we were able to think in terms of interactions on screens. We designed every screen for tablet, as well as variations for the six screens most likely to be used on mobile and the four most likely to be used on a larger desktop monitor. We used mobile-first and responsive design best practices, resulting in screens that engage users across all devices.
But best of all, we improve our work as storytellers by linking interactions back to the journey. Even the best, most user-focused designers can forget about the narrative when they begin to focus on individual interactions. The shift from one device to another will help recenter you, reminding you again and again of who the user is, where they are, why they’re doing what they’re doing and what their main focus is — particularly when their focus is not on your UI!
For designers working with content, there’s still another benefit to journey-driven design. Content strategists tend to plan out screen content by identifying what the goal of a screen is, and then providing copy to accomplish that goal and to get across any related messages. In traditional UX design, different content strategists have different methods for determining the goal of any given screen. But with journey-driven design, the designer and strategist will identify the goal or action happening at each step of the way early on.
The designer and content strategist should work together to consider the goals of each touchpoint through the journey. Rather than waiting to ultimately write copy in wireframes, the content strategist can help ascertain the goals during the mapping of the ecosystem, which gives the designer a leg up on designing something that will work hand in hand with the content.
In addition, content strategists need to consider what content will be static and what will be adaptive26. When the designer designs different screens for different steps of the journey, they provide insight into how content will be perceived, whether the user will be distracted and what other clues to context the content strategist should consider.
In short, journey-driven design provides us with a big-picture approach that ensures fewer surprises and easier work the farther you go into the project. It keeps the whole team working together, all engaged in the creation of the ecosystem, so that decisions are shared early on, before designs are begun.
At its heart, journey-driven design is still just another flavor of user-centered design. Once the ecosystem is mapped and the journey identified, journey-driven design proceeds like any other UX project. Designs need to be created for other screen sizes and then checked for consistency across devices to account for edge cases. Usability testing, revising and iterating is as much a part of journey-driven design as any other.
Looking to get started with your own journey maps? Try out these steps:
We live in an Internet of Things. There’s no use pretending that mobile, desktop or tablets will be the way of the future. There are still more devices to come our way, more technologies to infuse our homes, workplaces and commutes. Journey-driven design allows design to expand and evolve as technologies evolve. We’re starting to practice this more and more, and we hope to share a case study or two in the coming months.
(cc, vf, al, il)
Chat bots, virtual reality and conversational design are just a few of the hot topics that are about to change the way we craft digital experiences. So, it’s a good time to rethink current practices and prepare for what’s about to come, don’t you think?
To give us all an early head start into the future of designing meaningful experiences, we are happy to live stream the backstage interviews which we’ll hold at the Awwwards Digital Design Conf in London1 today, kindly organized by our good friends at Adobe.
We’ll sit down with some of the creative minds out there to discuss their workflows, techniques, failures and lessons learned from crafting digital experiences today. We’ll talk about ongoing and upcoming developments and the challenges they bring along for you as a designer. Light-bulb moments are guaranteed.
Sounds good? Well then, prepare yourself a nice cup of coffee, or better yet, find a comfy spot not too far away from the coffee machine: the 5-hour-long live stream will start right here on Thursday, February 2nd at 12 PM GMT.
Designing for the web today isn’t easy. We have to deal with so many unknowns – ranging from screen sizes to network conditions to capabilities of devices. To manage this unpredictability, we tend to rely on predictable patterns – predictable solutions which bring us to predictable results — faster. However, as a result, we tend to create same-looking experiences as well.
The live stream today will explore how we can break out of the box and create unpredictable, delightful experiences based on the new possibilities web technologies offer today. We’ll discuss all those fancy new technologies such as AR/VR, conversational interfaces and new hardware capabilities with established designers and developers from the industry to find out how we can apply them in our work, too.
We’ll talk about how we all can use these technologies in actual projects and how other designers and developers out there make use of them already. More importantly, you’ll get insights into lessons learned from their work – things that worked well, failures and successes, and the rationale behind all of the design decisions in-between.
You’ll find the schedule below, so you can jump in and out whenever you are interested, or even just while having lunch! There will be 20–25-minute interview sessions with short breaks in-between, in addition to two panels, one in the middle and one at the end of the day.
Send your questions! This is going to be quite a day, and we’d love you to be a part of it! We highly encourage you to take part in the live stream and send in your questions in the live chat window on YouTube3. We’ll moderate all questions and ask our speakers live during the Q&A. Big thanks to Adobe and Awwwards for making it all possible!
Not enough? Well, to help the learning continue even after the live stream has ended, we did some digging in our own archives as well, to bring you eleven timeless SmashingConf talks that’ll take your UX skills to the next level. After all, a good foundation is the best way to be equipped for the new challenges that the future might bring, right? Happy learning!
After a decade and a half as a user experience professional, Jesse James Garrett has had more than his fair share of scrapes and bruises. In this presentation, he reflects on what he’s learned about what it really takes to deliver great UX work, from working with teams and managing stakeholders to breaking a creative rut and finding innovative solutions to design problems.
UX design is all the rage at the moment, but how usable is it as a process? When the top industry experts can’t even agree on its definition (or its existence), how are you supposed to bake it into your practice, let alone sell it to your clients? In fact, should you or your clients even care? In this session, Andy Budd will try to demystify some of the rhetoric and dogma floating around about user experience design and explain what should and shouldn’t matter to your business, your clients, and your day-to-day work as a web designer.
Often when solving design problems we take into account the demands of stakeholders, or our desire to create beautiful work. These are both valuable for informing the choices we make, but there’s another factor that’s imperative to our design’s success: the needs of real people. Meagan Fisher will share why human need matters in design, why we resist getting close to our users, and easy ways to put users front and center in our process.
When you design a website, an app, software, or a product, you expect a human to interact with your design with their perceptual and sensory systems. If you want to design a product that is easy to use, engaging, and that meets your goals and objectives for user experience, you need to know about the psychology of perception. In this talk, Susan Weinschenk will share her top ten most important research studies on perception – concentrating on vision and hearing.
Joe Leech will take you on a journey to find the holy grail we are all looking for: the “perfect” design. To get you a step closer to finding it, he’ll share a practical strategy that uses psychology to produce the ideal design for those tricky user experience design problems we face every day.
As experience designers, we spend our days, and often nights, working hard to solve problems for people. Often, the focus of our work is solely put on how a user will interact with any given digital experience. We’re told to work quickly, fail fast, and to not overthink it. What we tend to miss are the many touch points that can help shape the larger user experience and enable customers to have an emotional connection to a product, a service or a company. In this talk, Jon Setzen will explain how embracing a “design for service” thinking can help digital design teams shift from transaction-driven thinking to a relationship-centric approach.
Can good design turn people into better citizens? In this talk Adrian Zumbrunnen will discuss how design can drive behavior, the responsibilities of our craft, and explore a few rules we can use to nudge people in our desired direction.
Our current focus on components, design systems, pattern libraries, and frameworks has helped make design and development easier for us. It’s made it easier for us to make things consistent. But it has also provided fertile ground for design sameness and boring websites. One part of art direction is focusing on the details. But another part, the part that should define the details, the part too often quietly kicked under the bed, so people don’t see it, is the big picture. In this talk, Stephen Hay will talk about stepping back and looking at how all the small parts of your design can add up to a meaningful experience. That also involves looking at how meaningful experience can lead to all the small parts of your design.
It has never been easier to make a website, and our digital toolbox has never been greater. At the same time, we seem more concerned with automating our process and systemising design than with creative thinking and generating ideas. Is web design purely about utility? Is it all about convention? Is it a science? Or, is there room for beauty, expression and art? In this talk, Espen Brunborg will take a tongue-in-cheek look at the state of web design, explore different creative mindsets, and show how adding a pinch of comedy can make a real difference to the bottom line.
Based on his experience at Stack Overflow and Discourse, Jeff Atwood talks about building a habit-forming community based on fun, tangible progress, and respect. It’s about how to gently guide your community members down the path toward mutual cooperation. You’ll gain a closer look at how Q&A and discussion are different, at how they can learn from each other, and at building habits that lead to positive community behaviors.
What if this thing was magic? The web is touching everyday objects now, and designing for the internet of things means blessing everyday objects, places, even people with extraordinary abilities—requiring designers, too, to break with the ordinary. Designing for this new medium is less a challenge of technology than imagination. Sharing a rich trove of examples, Josh Clark explores the new experiences that are possible when anything can be an interface.
Well then, why not watch a couple more SmashingConf videos? Our Vimeo channel26 has got you covered.
(il, aa)
A great article that explains what secure headers are and how to implement them in Rails, Django, Express.js, Go, Nginx, and Apache.
Time flies by! February is already here and artists and designers from across the globe have once again diligently created a potpourri of unique wallpaper calendars to freshen up your desktop. This monthly wallpapers mission has been going on for eight years1 already and we are very thankful to all the creative minds who challenge their skills and contribute to it each month anew.
This post features their desktop artwork for February 2017. The wallpapers all come in versions with and without a calendar and can be downloaded for free. Now there’s only one question left to answer: Which one will make it to your desktop this month?
Please note that:
Designed by Xenia Latii6 from Germany.
“I was doodling pictures of my cat one day and decided I could turn it into a fun wallpaper – because a cold, winter night in February is the perfect time for staying in and cuddling with your cat, your significant other, or both!” — Designed by Angelia DiAntonio49 from Ohio, USA.
“Amantine-Lucile-Aurore Dupin, best known by her pseudonym George Sand, was a French novelist and memoirist.” — Designed by Tazi Design84 from Australia.
“Valentine’s day is coming and there’s always this saying that ‘opposites attract’ and while it is true, I think that even if you are the same, if love is strong, you can always make it work. No matter if the odds are against you.” — Designed by Maria Keller109 from Mexico.
“Valentine’s Day is probably celebrated almost everywhere in the world today. These celebrations and traditions on how a particular form of society celebrates love vary based on the place or the country. Mostly Valentine’s day is recognized as the day that celebrates love. But in many cultures, it can also be recognized as many other things such as spring, happiness and so on. There are so many folk traditions based on Valentine’s day and the belief it holds among various cultures also differs with respect to the history of the day.” — Designed by Dipanjan Karmakar162 from India.
“This minimalistic love logo is designed from geometric shapes, where the heart represents the letter ‘O’ and love. The anchor represents the letter ‘V’ very delicately and stylishly, and it symbolizes your wanderlust. The anchor is a symbol of adventure and travels.” — Designed by Antun Hirsman181 from Croatia.
“I got my inspiration from a children’s song here in Belgium. It’s a song by K3, they were and still are my favorite Belgian girl band from my childhood. ‘Love boat baby’ is a recent song from the new girls of K3, and this is my favorite one. I thought it would be a really nice wallpaper with an actual love boat on the sea to represent the month of love.” — Designed by Melissa Bogemans202 from Belgium.
“Hearts, kisses, chocolates, cards and flowers… Love is everywhere — it’s Valentine‘s Day!” — Designed by Hemangi Rane245 from Gainesville, Florida.
“February is a month of love and friendship! Valentine’s Day – a time for special presents, going out drinking with mates and most importantly, sharing delicious meals with loved ones. Food Crush!” — Designed by foodpanda Hong Kong254 from Hong Kong.
“The best and most beautiful things in the world can’t be seen or even touched — they must be felt with the heart.” — Designed by Colorgraphicz291 from India.
“Love is what makes you live and gives you the hope to live and look for a better future.” — Designed by Hatim M. M. Yousif Al-Sharif300 from the United Arab Emirates.
“Love is like a cloud… love is like a dream… love is one word and everything in between… love is a fairytale come true… I found love when I found you.” — Designed by Suman Sil345 from India.
“Flowers, chocolates, teddy, gifts are so mainstream. This year touch her heart and soul and rather than saying those obvious ‘Three Magical Words’, say ‘Be my queen’.” — Designed by Damn Perfect376 from Jaipur, India.
“Love is something eternal – the aspect may change, but not the essence. A Valentine’s Day wallpaper with traditional motifs especially for you!” — Designed by Roxi Nastase427 from Romania.
“February is National Bird Feeding Month. I used to feed the birds in our backyard as a kid so this is what inspired the imagery.” — Designed by Karen Frolo470 from the United States.
“February means Valentine’s Day – and love is in the air!” — Designed by James Mitchell501 from the United Kingdom.
“Happy Valentine’s day! We want everyone to find somebody to love.” — Designed by Anto Fernández | Destaca Imagen522 from Spain.
“February is a short month with a whirlwind of romance smack dab in the middle. Whether there is a special someone in your life or not, a gorgeous view at sunset stirs the heart and refreshes the soul. This particular sunset did that to me the moment I walked up, and I knew it needed to be captured. Taken at Lyall Bay Rocks in Wellington, New Zealand.” — Designed by Jenni Adamitis543 from Houston, Texas.
“The world needs some love, now more than ever. In this wallpaper, you can see a girl giving some of her love to a tree, the tree of love, which spreads the love it gets to the world.” — Designed by Mira Van der Jeugt586 from Belgium.
“Danube is Europe’s second largest river, connecting 10 different countries. In these cold days, when ice paralyzes rivers and closes waterways, a small but brave icebreaker called Greben (Serbian word for ‘reef’) seems stronger than winter. It cuts through the ice on Đerdap gorge (Iron Gate) – the longest and biggest gorge in Europe – thus helping the production of electricity in the power plant. This is our way to give thanks to Greben!” — Designed by PopArt Studio601 from Serbia.
“February is the month of love par excellence, but also a different month. Perhaps because it is shorter than the rest or because it is the one that makes way for spring, but we consider it a special month. It is a perfect month to make plans because we have already finished the post-Christmas crunch and we notice that spring and summer are coming closer. That is why I like to imagine that maybe in another place someone is also making plans to travel to unknown lands.” — Designed by Verónica Valenzuela646 from Spain.
Designed by Nathalie Ouederni667 from France.
“In February, nature shows its creativity. Our artwork occurs when it is being drawn.” — Designed by Ana Masnikosa684 from Belgrade, Serbia.
“Charles Dickens is most famous for writing Oliver Twist and A Christmas Carol. The 7th of February is Charles Dickens birthday, so to honour his birthday I created this wallpaper with a quote from my favourite book, Oliver Twist!” — Designed by Safia Begum725 from the United Kingdom.
Designed by Nathalie Croze744 from France.
Designed by Gregor Haslinger755 from Germany.
“This design is dedicated to the rooster, the one who starts with his cock-a-doodle-do at the crack of dawn to wake everyone up from their deep slumber. He’s not just nature’s alarm clock but the protector of the coop as well, watchful as he walks through the farm with his pointed saddle feathers and red comb.” — Designed by Acodez IT Solutions772 from India.
Designed by Elise Vanoorbeek – Doud815 from Belgium.
“See the beautiful colors, precision, and the nature of Japan in one picture.” — Designed by Fatih Yilmaz846 from the Netherlands.
“2017 is the Chinese Year of the Rooster, the golden eggs on behalf of auspicious meaning.” — Designed by Sunny Hong869 from Taiwan.
Please note that we respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t in any way influenced by us, but rather designed from scratch by the artists themselves.
A big thank you to all designers for their participation. Join in next month888!
What’s your favorite theme or wallpaper for this month? Please let us know in the comment section below.
(cm)