Collective #304
AcrossTabs * Sonnet * ReactXP * Colormind * Flint OS * Color Tool * The Art of Naming * Vue.js 2.2 Cheat Sheet * Moon…
Editor’s Note: In the world of web design, we tend to become preoccupied with the here and now. In “Resilient Web Design”, Jeremy Keith emphasizes the importance of learning from the past in order to better prepare ourselves for the future. So perhaps we should pause and think beyond our present moment. The following is an excerpt from Jeremy’s web book.
Design adds clarity. Using colour, typography, hierarchy, contrast, and all the other tools at their disposal, designers can take an unordered jumble of information and turn it into something that’s easy to use and pleasurable to behold. Like life itself, design can win a small victory against the entropy of the universe, creating pockets of order from the raw materials of chaos.
The Book of Kells is a beautifully illustrated manuscript created over 1200 years ago. It’s tempting to call it a work of art, but it is a work of design. The purpose of the book is to communicate a message; the gospels of the Christian religion. Through the use of illustration and calligraphy, that message is conveyed in an inviting context, making it pleasing to behold.
Design works within constraints. The Columban monks who crafted the Book of Kells worked with four inks on vellum, a material made of calfskin. The materials were simple but clearly defined. The cenobitic designers knew the hues of the inks, the weight of the vellum, and crucially, they knew the dimensions of each page.
Materials and processes have changed and evolved over the past millennium or so. Gutenberg’s invention of movable type was a revolution in production. Whereas it would have taken just as long to create a second copy of the Book of Kells as it took to create the first, multiple copies of the Gutenberg bible could be produced with much less labour. Even so, many of the design patterns such as drop caps and columns were carried over from illuminated manuscripts. The fundamental design process remained the same: knowing the width and height of the page, designers created a pleasing arrangement of elements.
The techniques of the print designer reached their zenith in the 20th century with the rise of the Swiss Style. Its structured layout and clear typography is exemplified in the work of designers like Josef Müller‐Brockmann and Jan Tschichold. They formulated grid systems and typographic scales based on the preceding centuries of design.
Knowing the ratio of the dimensions of a page, designers could position elements with maximum effect. The page is a constraint and the grid system is a way of imposing order on it.
When the web began to conquer the world in the 1990s, designers started migrating from paper to pixels. David Siegel’s Creating Killer Websites came along at just the right time. Its clever TABLE and GIF hacks allowed designers to replicate the same kind of layouts that they had previously created for the printed page.
Those TABLE layouts later became CSS layouts, but the fundamental thinking remained the same: the browser window — like the page before it — was treated as a known constraint upon which designers imposed order.
There’s a problem with this approach. Whereas a piece of paper or vellum has a fixed ratio, a browser window could be any size. There’s no way for a web designer to know in advance what size any particular person’s browser window will be.
Designers had grown accustomed to knowing the dimensions of the rectangles they were designing within. The web removed that constraint.
There’s nothing quite as frightening as the unknown. These words of former US Secretary of Defense Donald Rumsfeld should be truly terrifying (although the general consensus at the time was that they sounded like nonsense):
There are known knowns. There are things we know we know. We also know there are known unknowns, that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know.
The ratio of the browser window is just one example of a known unknown on the web. The simplest way to deal with this situation is to use flexible units for layout: percentages rather than pixels. Instead, designers chose to pretend that the browser dimensions were a known known. They created fixed‐width layouts for one specific window size.
In the early days of the web, most monitors were 640 pixels wide. Web designers created layouts that were 640 pixels wide. As more and more people began using monitors that were 800 pixels wide, more and more designers began creating 800 pixel wide layouts. A few years later, that became 1024 pixels. At some point web designers settled on the magic number of 960 pixels as the ideal width.
It was as though the web design community were participating in a shared consensual hallucination. Rather than acknowledge the flexible nature of the browser window, they chose to settle on one set width as the ideal …even if that meant changing the ideal every few years.
Not everyone went along with this web‐wide memo.
In the year 2000 the online magazine A List Apart published an article entitled A Dao of Web Design. It has stood the test of time remarkably well.
In the article, John Allsopp points out that new mediums often start out by taking on the tropes of a previous medium. Scott McCloud makes the same point in his book Understanding Comics:
Each new medium begins its life by imitating its predecessors. Many early movies were like filmed stage plays; much early television was like radio with pictures or reduced movies.
With that in mind, it’s hardly surprising that web design began with attempts to recreate the kinds of layouts that designers were familiar with from the print world. As John put it:
“Killer Web Sites” are usually those which tame the wildness of the web, constraining pages as if they were made of paper — Desktop Publishing for the Web.
Web design can benefit from the centuries of learning that have informed print design. Massimo Vignelli, whose work epitomises the Swiss Style, begins his famous Canon with a list of The Intangibles including discipline, appropriateness, timelessness, responsibility, and more. Everything in that list can be applied to designing for the web. Vignelli’s Canon also includes a list of The Tangibles. That list begins with paper sizes.
The web is not print. The known constraints of paper — its width and height — simply don’t exist. The web isn’t bound by pre‐set dimensions. John Allsopp’s A Dao Of Web Design called on practitioners to acknowledge this:
The control which designers know in the print medium, and often desire in the web medium, is simply a function of the limitation of the printed page. We should embrace the fact that the web doesn’t have the same constraints, and design for this flexibility.
This call to arms went unheeded. Designers remained in their Matrix-like consensual hallucination where everyone’s browser was the same width. That’s understandable. There’s a great comfort to be had in believing a reassuring fiction, especially when it confers the illusion of control.
There is another reason why web designers clung to the comfort of their fixed‐width layouts. The tools of the trade encouraged a paper‐like approach to designing for the web.
It’s a poor craftsperson who always blames their tools. And yet every craftsperson is influenced by their choice of tools. As Marshall McLuhan’s colleague John Culkin put it, “we shape our tools and thereafter our tools shape us.”
When the discipline of web design was emerging, there was no software created specifically for visualising layouts on the web. Instead designers co‐opted existing tools.
Adobe Photoshop was originally intended for image manipulation; touching up photos, applying filters, compositing layers, and so on. By the mid nineties it had become an indispensable tool for graphic designers. When those same designers began designing for the web, they continued using the software they were already familiar with.
If you’ve ever used Photoshop then you’ll know what happens when you select “New” from the “File” menu: you will be asked to enter fixed dimensions for the canvas you are about to work within. Before adding a single pixel, a fundamental design decision has been made that reinforces the consensual hallucination of an inflexible web.
Photoshop alone can’t take the blame for fixed‐width thinking. After all, it was never intended for designing web pages. Eventually, software was released with the specific goal of creating web pages. Macromedia’s Dreamweaver was an early example of a web design tool. Unfortunately it operated according to the idea of WYSIWYG: What You See Is What You Get.
While it’s true that when designing with Dreamweaver, what you see is what you get, on the web there is no guarantee that what you see is what everyone else will get. Once again, web designers were encouraged to embrace the illusion of control rather than face the inherent uncertainty of their medium.
It’s possible to overcome the built‐in biases of tools like Photoshop and Dreamweaver, but it isn’t easy. We might like to think that we are in control of our tools, that we bend them to our will, but the truth is that all software is opinionated software. As futurist Jamais Cascio put it, “software, like all technologies, is inherently political”:
Code inevitably reflects the choices, biases and desires of its creators.
Small wonder then that designers working with the grain of their tools produced websites that mirrored the assumptions baked into those tools — assumptions around the ability to control and tame the known unknowns of the World Wide Web.
By the middle of the first decade of the twenty‐first century, the field of web design was propped up by multiple assumptions: that browsing happened on desktop machines, and that those machines had a known range of screen sizes, bandwidth, and browser capabilities.
A minority of web designers were still pleading for fluid layouts. I counted myself amongst their number. We were tolerated in much the same manner as a prophet of doom on the street corner wearing a sandwich board reading “The End Is Nigh” — an inconvenient but harmless distraction.
There were even designers suggesting that Photoshop might not be the best tool for the web, and that we could consider designing directly in the browser using CSS and HTML. That approach was criticised as being too constraining. As we’ve seen, Photoshop has its own constraints but those had been internalised by designers so comfortable in using the tool that they no longer recognised its shortcomings.
This debate around the merits of designing Photoshop comps and designing in the browser would have remained largely academic if it weren’t for an event that would shake up the world of web design forever.
An iPod. A phone. And an internet communicator. An iPod. A phone …are you getting it? These are not three separate devices. This is one device. And we are calling it: iPhone.
With those words in 2007, Steve Jobs unveiled a mobile device that could be used to browse the World Wide Web.
Web‐capable mobile devices existed before the iPhone, but they were mostly limited to displaying a specialised mobile‐friendly file format called WML. Very few devices could render HTML. With the introduction of the iPhone and its competitors, handheld devices were shipping with modern web browsers capable of being first‐class citizens on the web. This threw the field of web design into turmoil.
Assumptions that had formed the basis for an entire industry were now being called into question.
The rise of mobile devices was confronting web designers with the true nature of the web as a flexible medium filled with unknowns.
The initial reaction to this newly‐exposed reality involved segmentation. Rather than rethink the existing desktop‐optimised website, what if mobile devices could be shunted off to a separate silo? This mobile ghetto was often at a separate subdomain to the “real” site: m.example.com or mobile.example.com.
This segmented approach was bolstered by the use of the term “the mobile web” instead of the more accurate term “the web as experienced on mobile.” In the tradition of their earlier consensual hallucinations, web designers were thinking of mobile and desktop not just as separate classes of device, but as entirely separate websites.
Determining which devices were sent to which subdomain required checking the browser’s user‐agent string against an ever‐expanding list of known browsers. It was a Red Queen’s race just to stay up to date. As well as being error‐prone, it was also fairly arbitrary. While it might have once been easy to classify, say, an iPhone as a mobile device, that distinction grew more difficult over time. With the introduction of tablets such as the iPad, it was no longer clear which devices should be redirected to the mobile URL. Perhaps a new subdomain was called for — t.example.com or tablet.example.com — along with a new term like “the tablet web”. But what about the “TV web” or the “internet‐enabled fridge web?”
The practice of creating different sites for different devices just didn’t scale. It also ran counter to a long‐held ideal called One Web:
One Web means making, as far as is reasonable, the same information and services available to users irrespective of the device they are using.
But this doesn’t mean that small‐screen devices should be served page layouts that were designed for larger dimensions:
However, it does not mean that exactly the same information is available in exactly the same representation across all devices.
If web designers wished to remain true to the spirit of One Web, they needed to provide the same core content at the same URL to everyone regardless of their device. At the same time, they needed to be able to create different layouts depending on the screen real‐estate available.
The shared illusion of a one‐size‐fits‐all approach to web design began to evaporate. It was gradually replaced by an acceptance of the ever‐changing fluid nature of the web.
In April of 2010 Ethan Marcotte stood on stage at An Event Apart in Seattle, a gathering for people who make websites. He spoke about an interesting school of thought in the world of architecture: responsive design, the idea that buildings could change and adapt according to the needs of the people using the building. This, he explained, could be a way to approach making websites.
One month later he expanded on this idea in an article called Responsive Web Design. It was published on A List Apart, the same website that had published John Allsopp’s A Dao Of Web Design ten years earlier. Ethan’s article shared the same spirit as John’s earlier rallying cry. In fact, Ethan begins his article by referencing A Dao Of Web Design.
Both articles called on web designers to embrace the idea of One Web. But whereas A Dao Of Web Design was largely rejected by designers comfortable with their WYSIWYG tools, Responsive Web Design found an audience of designers desperate to resolve the mobile conundrum.
Writer Steven Johnson has documented the history of invention and innovation. In his book Where Good Ideas Come From, he explores an idea called “the adjacent possible”:
At every moment in the timeline of an expanding biosphere, there are doors that cannot be unlocked yet. In human culture, we like to think of breakthrough ideas as sudden accelerations on the timeline, where a genius jumps ahead fifty years and invents something that normal minds, trapped in the present moment, couldn’t possibly have come up with. But the truth is that technological (and scientific) advances rarely break out of the adjacent possible; the history of cultural progress is, almost without exception, a story of one door leading to another door, exploring the palace one room at a time.
This is why the microwave oven could not have been invented in medieval France; there are too many preceding steps required — manufacturing, energy, theory — to make that kind of leap. Facebook could not exist without the World Wide Web, which could not exist without the internet, which could not exist without computers, and so on. Each step depends upon the accumulated layers below.
By the time Ethan coined the term Responsive Web Design a number of technological advances had fallen into place. As I wrote in the foreword to Ethan’s subsequent book on the topic:
The technologies existed already: fluid grids, flexible images, and media queries. But Ethan united these techniques under a single banner, and in so doing changed the way we think about web design.
The layers were in place. A desire for change — driven by the relentless rise of mobile — was also in place. What was needed was a slogan under which these could be united. That’s what Ethan gave us with Responsive Web Design.
The first experiments in responsive design involved retrofitting existing desktop‐centric websites: converting pixels to percentages, and adding media queries to remove the grid layout on smaller screens. But this reactive approach didn’t provide a firm foundation to build upon. Fortunately another slogan was able to encapsulate a more resilient approach.
Luke Wroblewski coined the term Mobile First in response to the ascendency of mobile devices:
Losing 80% of your screen space forces you to focus. You need to make sure that what stays on the screen is the most important set of features for your customers and your business. There simply isn’t room for any interface debris or content of questionable value. You need to know what matters most.
If you can prioritise your content and make it work within the confined space of a small screen, then you will have created a robust, resilient design that you can build upon for larger screen sizes.
Stephanie and Bryan Rieger encapsulated the mobile‐first responsive design approach:
The lack of a media query is your first media query.
In this context, Mobile First is less about mobile devices per se, and instead focuses on prioritising content and tasks regardless of the device. It discourages assumptions. In the past, web designers had fallen foul of unfounded assumptions about desktop devices. Now it was equally important to avoid making assumptions about mobile devices.
Web designers could no longer make assumptions about screen sizes, bandwidth, or browser capabilities. They were left with the one aspect of the website that was genuinely under their control: the content.
Echoing A Dao Of Web Design, designer Mark Boulton put this new approach into a historical context:
Embrace the fluidity of the web. Design layouts and systems that can cope to whatever environment they may find themselves in. But the only way we can do any of this is to shed ways of thinking that have been shackles around our necks. They’re holding us back.
Start designing from the content out, rather than the canvas in.
This content‐out way of thinking is fundamentally different to the canvas‐in approach that dates all the way back to the Book of Kells. It asks web designers to give up the illusion of control and create a materially‐honest discipline for the World Wide Web.
Relinquishing control does not mean relinquishing quality. Quite the opposite. In acknowledging the many unknowns involved in designing for the web, designers can craft in a resilient flexible way that is true to the medium.
Texan web designer Trent Walton was initially wary of responsive design, but soon realised that it was a more honest, authentic approach than creating fixed‐width Photoshop mock‐ups:
My love for responsive centers around the idea that my website will meet you wherever you are — from mobile to full‐blown desktop and anywhere in between.
For years, web design was dictated by the designer. The user had no choice but to accommodate the site’s demand for a screen of a certain size or a network connection of a certain speed. Now, web design can be a conversation between the designer and the user. Now, web design can reflect the underlying principles of the web itself.
On the twentieth anniversary of the World Wide Web, Tim Berners‐Lee wrote an article for Scientific American in which he reiterated those underlying principles:
The primary design principle underlying the Web’s usefulness and growth is universality. The Web should be usable by people with disabilities. It must work with any form of information, be it a document or a point of data, and information of any quality — from a silly tweet to a scholarly paper. And it should be accessible from any kind of hardware that can connect to the Internet: stationary or mobile, small screen or large.
(yk, il)
Part one of a four-part series on how to get started with front-end code testing, by Gil Tayar.
The checkout page is the last page a user visits before finally deciding to complete a purchase on your website. It’s where window shoppers turn into paying customers. If you want to leave a good impression, you should provide optimal usability in the billing form and improve it wherever possible. In less than one day, you can add some simple and useful features to your project to make your billing form user-friendly and easy to fill in.
Credit-card details are among the most commonly corrected fields in forms. Fortunately, nowadays almost every popular browser has an autofill feature, allowing the users to store their card data in the browser and to fill out form fields more quickly. Also, since iOS 8, mobile Safari users can scan their card’s information with the iPhone’s camera and fill in their card’s number, expiration date and name fields automatically. Autocomplete is simple, clear and built into HTML5, so we’ll add it to our form first.
Both autofill and card-scanning work only with forms that have special attributes: autocomplete for modern browsers (listed in the HTML5 standard) and name for browsers without HTML5 support.
Note: A demo with all the functions covered below is available. You can find its code in the GitHub repository.
Credit cards have specific autofill attributes. For autocomplete:

cc-name
cc-number
cc-csc
cc-exp-month
cc-exp-year
cc-exp
cc-type
For name:

ccname
cardnumber
cvc
ccmonth
ccyear
expdate
card-type
To use autofill, you should add the relevant autocomplete and name attributes to the input elements in your index.html file:
<input type="text" placeholder="XXXX XXXX XXXX XXXX" pattern="[0-9\s]{14,23}" required autofocus autocomplete="cc-number" name="cardnumber">
<input type="text" placeholder="MM" pattern="[0-9]{1,2}" required autocomplete="cc-exp-month" name="ccmonth">
<input type="text" placeholder="YYYY" pattern="[0-9]{2,4}" required autocomplete="cc-exp-year" name="ccyear">
<input type="text" placeholder="CARDHOLDER NAME" required autocomplete="cc-name" name="ccname">
Don’t forget to use placeholder in input fields to help users understand the required data formats. We can provide input validation with the HTML5 attributes pattern (based on JavaScript regular expressions) and required. For example, with the pattern="[0-9\s]{14,23}" required attributes on a field, the user won’t be able to submit the form if the field is empty, contains a non-numeric, non-space symbol, or is shorter than 14 or longer than 23 symbols.
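As a rough model of what the browser does with these two attributes (the real Constraint Validation API is more involved), the check amounts to rejecting an empty value and anchoring the pattern to the whole value. A sketch, with a hypothetical helper name:

```javascript
// Sketch of the check a browser performs for pattern="[0-9\s]{14,23}" required.
// The browser treats the pattern as if it were anchored: ^(?:...)$.
function isCardFieldValid(value) {
  if (value.length === 0) return false;       // `required` fails on empty input
  return /^(?:[0-9\s]{14,23})$/.test(value);  // `pattern` must match the entire value
}

console.log(isCardFieldValid('4111 1111 1111 1111')); // → true (19 chars, digits and spaces)
console.log(isCardFieldValid('4111-1111'));           // → false (contains "-" and too short)
```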
Once the user has saved their card data in the browser, we can see how it works:
Notice that using one field for the expiration date (MM/YYYY) is not recommended, because Safari requires separate month and year fields for autofill.
Of course, autocomplete and autofill attributes are widely used not only for billing forms but also for names, email and postal addresses and passwords. You can save the user time and make them even happier by correctly using these attributes in your forms.
Even though we now have autocomplete, Google Payments and Apple Wallet, many users still prefer to enter their credit-card details manually, and no one is safe from making a typo with a 16-digit number. Long numbers are hard to read, even more painful to write and almost impossible to verify.
To help users feel comfortable with their long card number, we can divide it into four-digit groups by adding the simple VanillaMasker library by BankFacil to our project. Inputted data will be transformed into a masked string. So we can add a custom pattern with a space after every fourth digit of the card number, a two-digit pattern for the expiration month, and a four-digit pattern for the expiration year. VanillaMasker also verifies data formats: because our patterns contain only “9” (the masker’s digit placeholder), all non-numeric characters will be deleted on input.
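The core masking idea can be sketched in a few lines of plain JavaScript. This is a hypothetical helper for illustration only, not VanillaMasker’s actual implementation:

```javascript
// Each "9" in the pattern consumes one digit from the input; any other
// pattern character (here, a space) is inserted literally; everything
// non-numeric is stripped first, and leftover input is dropped.
function maskPattern(value, pattern) {
  const digits = value.replace(/\D/g, '');
  let out = '';
  let i = 0;
  for (const ch of pattern) {
    if (i >= digits.length) break;
    if (ch === '9') {
      out += digits[i++];
    } else {
      out += ch; // literal separator from the pattern
    }
  }
  return out;
}

console.log(maskPattern('4111111111111111', '9999 9999 9999 9999'));
// → "4111 1111 1111 1111"
```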
npm install vanilla-masker --save
In our index.js file, let’s import the library and use it with one string for every field:
import masker from 'vanilla-masker';

const cardNumber = document.getElementById('card__input_number');
const cardMonth = document.getElementById('card__input_month');
const cardYear = document.getElementById('card__input_year');

masker(cardNumber).maskPattern('9999 9999 9999 9999 99');
masker(cardMonth).maskPattern('99');
masker(cardYear).maskPattern('9999');
Thus, the digits of the card number in our form will be separated, like on a real card:
The masker will erase characters of an incorrect type or beyond the correct length, although our HTML validation will notify the user about invalid data only after the form has been submitted. But we can also check the card number’s correctness as it is being filled in. Did you know that all plastic credit-card numbers are generated according to the simple and effective Luhn algorithm? It was created in 1954 by Hans Peter Luhn and subsequently adopted as an international standard. We can use the Luhn algorithm to pre-validate the card number’s input field and warn the user about a typo.
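The check itself is compact: walk the digits from right to left, double every second digit (subtracting 9 whenever the result exceeds 9), and require the total to be divisible by 10. A plain-JavaScript sketch for illustration (the fast-luhn package is an optimised equivalent):

```javascript
// Straightforward Luhn check on a string of digits.
function luhnCheck(numberString) {
  let sum = 0;
  let double = false;
  // Walk the digits right to left, doubling every second one.
  for (let i = numberString.length - 1; i >= 0; i--) {
    let digit = numberString.charCodeAt(i) - 48;
    if (double) {
      digit *= 2;
      if (digit > 9) digit -= 9; // same as summing the two digits of the product
    }
    sum += digit;
    double = !double;
  }
  return sum % 10 === 0;
}

console.log(luhnCheck('4111111111111111')); // → true (a well-known test number)
console.log(luhnCheck('4111111111111112')); // → false (last digit off by one)
```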
To do this, we can use the tiny fast-luhn npm package, adapted from Shirtless Kirk’s gist. We need to add it to our project’s dependencies:
npm install fast-luhn --save
To use fast-luhn, we’ll import it in a module and just call luhn(number) on the input event to check whether the number is correct. For example, let’s add the card__input_invalid class to change the outline and the field’s text color when the user has made an accidental error and the check has not passed. Note that VanillaMasker adds a space after every four-digit group, so we need to convert the inputted value to a plain number without spaces, using the split and join methods, before calling luhn.
The result is code that looks like this:
import luhn from 'fast-luhn';

const cardNumber = document.getElementById('card-number');

cardNumber.addEventListener('input', (event) => {
  const number = event.target.value;
  if (number.length >= 14) {
    const isLuhnCheckPassed = luhn(number.split(' ').join(''));
    cardNumber.classList.toggle('card__input_invalid', !isLuhnCheckPassed);
    cardNumber.classList.toggle('card__input_valid', isLuhnCheckPassed);
  } else {
    cardNumber.classList.remove('card__input_invalid', 'card__input_valid');
  }
});
To prevent luhn from being called while the user is still typing, we call it only when the inputted number is at least as long as the minimum length with spaces (14 characters, including 12 digits); otherwise, we remove the card__input_invalid and card__input_valid classes.
Here are the validation examples in action:
The Luhn algorithm is also used for some discount card numbers, IMEI numbers, National Provider Identifier numbers in the US, and Social Insurance Numbers in Canada. So, this package isn’t limited to credit cards.
Many users want to check their card details with their own eyes, even if they know the form is being validated. But human beings perceive things in a way that makes comparison of differently styled numbers a little confusing. As we want the interface to be simple and intuitive, we can help users by showing a font that looks similar to the one they would find on a real card. Also, the font will make our card-like input form look more realistic and appropriate.
Several free credit-card fonts are available:
We’ll use Halter. First, download the font, place it in the project’s folder, and create a CSS3 @font-face rule in style.css:
@font-face {
  font-family: Halter;
  src: url(font/HALTER__.ttf);
}
Then, simply add it to the font-family rule for the .card-input class:
.card-input {
  color: #777;
  font-family: Halter, monospace;
}
Don’t forget that if you import the CSS from a JavaScript file with the webpack bundle, you’ll need to add file-loader:
npm install file-loader --save
And add file-loader for the font file types in webpack.config.js:
module: {
  loaders: [
    {
      test: /\.(ttf|eot|svg|woff(2)?)(\?[a-z0-9=&.]+)?$/,
      loader: 'file',
    },
  ],
},
The result looks pretty good:
You can make it even fancier, if you like, with an embossed effect, using a double text-shadow and semi-transparency on the text’s color:
.card-input {
  color: rgba(84, 110, 122, 0.5);
  text-shadow: -0.75px -0.75px white, 0.75px 0.75px 0 black;
  font-family: Halter, monospace;
}
Last but not least, you can pleasantly surprise customers by adding a coloring feature to the form. Every bank has its own brand color, which usually dominates that bank’s card. To make a billing form even more user-friendly, we can use this color and print the bank’s name above the form fields (corresponding to where it appears on a real card). This will also help the user to avoid making a typo in the number and to ensure they have picked the right card.
We can identify the bank of every user’s card by the first six digits, which contain the Issuer Identification Number (IIN), also known as the Bank Identification Number (BIN). Banks DB by Ramoona is a database that provides a bank’s name and brand color for this prefix. The author has set up a demo of Banks DB.
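Conceptually, the lookup is just a prefix match on the first six digits. The sketch below uses a made-up one-entry table and a hypothetical lookupBank helper; Banks DB’s real data set and API differ:

```javascript
// Hypothetical BIN table for illustration only (not Banks DB's real data).
const SAMPLE_BINS = {
  '411111': { code: 'some-bank', engTitle: 'Some Bank', color: '#1a4f8b' },
};

function lookupBank(cardNumber) {
  const bin = cardNumber.replace(/\D/g, '').slice(0, 6); // first six digits = IIN/BIN
  return SAMPLE_BINS[bin] || null; // null → fall back to the default styling
}

console.log(lookupBank('4111 1111 1111 1111').engTitle); // → "Some Bank"
console.log(lookupBank('9999 9999'));                    // → null (unknown issuer)
```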
This database is community-driven, so it doesn’t contain all of the world’s banks. If a user’s bank isn’t represented, the space for the bank’s name will stay empty and the background will show the default color (#fafafa).
Banks DB assumes one of two ways of using it: with PostCSS or with CSS in JavaScript. We are using it with PostCSS. If you are new to PostCSS, this is a good reason to start using it. You can learn more about PostCSS in the official documentation or in Drew Minns’ article “An Introduction to PostCSS”.
We need to install the PostCSS Banks DB plugin to set the CSS template for Banks DB, and the PostCSS Contrast plugin to improve the readability of the bank’s name:
npm install banks-db postcss-banks-db postcss-contrast --save
After that, we’ll add these new plugins to our PostCSS process, in accordance with the module bundler and load configuration used in our project. For example, with webpack and postcss-load-config, simply add the new plugins to the .postcssrc file.
Then, in our style.css file, we need to add a new class rule template for Banks DB, using the postcss-contrast plugin:
@banks-db-template {
  .card_bank-%code% {
    background-color: %color%;
    color: contrast(%color%);
  }
}
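The contrast() helper resolves at build time to either dark or light text, depending on the background. A JavaScript sketch of the underlying idea (postcss-contrast’s exact formula and thresholds may differ):

```javascript
// Pick black or white text based on the background's perceived luminance.
function contrastColor(hex) {
  const n = parseInt(hex.slice(1), 16);
  const r = (n >> 16) & 0xff;
  const g = (n >> 8) & 0xff;
  const b = n & 0xff;
  // Perceived luminance (ITU-R BT.601 weights), on a 0-255 scale.
  const luma = 0.299 * r + 0.587 * g + 0.114 * b;
  return luma > 128 ? '#000' : '#fff';
}

console.log(contrastColor('#fafafa')); // → "#000" (dark text on a light card)
console.log(contrastColor('#1a4f8b')); // → "#fff" (light text on a dark card)
```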
We could also set a long transition on the whole .card class to smoothly fade the background and text color in and out, so as not to startle users with an abrupt change:
.card {
  …
  transition: background 0.6s, color 0.6s;
}
Now, import Banks DB in index.js and use it in the input event listener. If the BIN is represented in the database, we’ll add a class containing the bank’s code to the form, insert the bank’s name and change the form’s background.
import banksDB from 'banks-db';

const billingForm = document.querySelector('.card');
const bankName = document.querySelector('.card__bank-name');
const cardNumber = document.getElementById('card__input_number');

cardNumber.addEventListener('input', (event) => {
  const number = event.target.value;
  const bank = banksDB(number);
  if (bank.code) {
    billingForm.classList.add(`card_bank-${(bank.code || 'other')}`);
    bankName.innerText = bank.country === 'ru' ? bank.localTitle : bank.engTitle;
  } else {
    billingForm.className = 'card';
    bankName.innerText = '';
  }
});
If you use webpack, add json-loader for the .json file extension to webpack’s configuration, so that the database is included in the bundle correctly.
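As a minimal sketch, the relevant webpack configuration might look like this (the entry and output paths are placeholders, and the exact syntax depends on your webpack version; webpack 2+ handles JSON out of the box):

```javascript
// webpack.config.js (sketch; entry/output are placeholders)
module.exports = {
  entry: './index.js',
  output: { filename: 'bundle.js' },
  module: {
    rules: [
      // Run .json files through json-loader so the Banks DB
      // database file is embedded in the bundle
      { test: /\.json$/, use: 'json-loader' }
    ]
  }
};
```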
Here is a working example of Banks DB:
In case you see no effect with your bank card, you can open an issue or add your bank to the database.
Improving your billing form can make the user experience much more intuitive and, as a result, ensure user convenience and increase confidence in your product. It’s an important part of web applications. We can improve it quickly and easily using these simple features:

- the autocomplete and name attributes for autofilling;
- the placeholder attribute to inform the user of the input format;
- the pattern and required attributes to prevent incorrect submission of the form;
- Banks DB to show the bank’s name and brand color.

Note that only Banks DB requires a module bundler; you can use the others with a simple script. Adding all of this functionality to your checkout page would most likely take less than a day.
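As a rough sketch of the pattern idea: a card-number field could accept 16 to 19 digits with optional space grouping. The exact expression below is an assumption for illustration, not taken from the article:

```javascript
// Hypothetical value for a card-number input's pattern attribute:
// 16-19 digits, optionally grouped with spaces (16-23 characters total)
const cardNumberPattern = /^[\d ]{16,23}$/;

console.log(cardNumberPattern.test('4242 4242 4242 4242')); // true
console.log(cardNumberPattern.test('42x2'));                // false
```

In HTML you would put the same expression (without the anchors and slashes) into the input’s pattern attribute, and the browser would block submission of non-matching values.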
(rb, al, il)
I started out as a web developer, and that’s now one part of what I do as a full-stack developer, but never had I imagined I’d create things for the desktop. I love the web. I love how altruistic our community is, how it embraces open-source, testing and pushing the envelope. I love discovering beautiful websites and powerful apps. When I was first tasked with creating a desktop app, I was apprehensive and intimidated. It seemed like it would be difficult, or at least… different.
It’s not an attractive prospect, right? Would you have to learn a new language or three? Imagine an archaic, alien workflow, with ancient tooling, and none of those things you love about the web. How would your career be affected?
OK, take a breath. The reality is that, as a web developer, not only do you already possess all of the skills to make great modern desktop apps, but thanks to powerful new APIs at your disposal, the desktop is actually where your skills can be leveraged the most.
In this article, we’ll look at the development of desktop applications using NW.js1 and Electron2, the ups and downs of building one and living with one, using one code base for the desktop and the web, and more.
First of all, why would anyone create a desktop app? Any existing web app (as opposed to a website, if you believe in the distinction) is probably suited to becoming a desktop app. You could build a desktop app around any web app that would benefit from integration in the user’s system; think native notifications, launching on startup, interacting with files, etc. Some users simply prefer having certain apps there permanently on their machine, accessible whether they have a connection or not.
Maybe you’ve an idea that would only work as a desktop app; some things simply aren’t possible with a web app (at least yet, but more about that in a little bit). You could create a self-contained utility app for internal company use, without requiring anyone to install anything other than your app (because Node.js is built in). Maybe you’ve an idea for the Mac App Store. Maybe it would simply be a fun side project.
It’s hard to sum up why you should consider creating a desktop app because there are so many kinds of apps you could create. It really depends on what you’d like to achieve, how advantageous you find the additional APIs, and how much offline usage would enhance the experience for your users. For my team, it was a no-brainer because we were building a chat application7. On the other hand, a connection-dependent desktop app that doesn’t really have any desktop integration should be a web app and a web app alone. It wouldn’t be fair to expect a user to download your app (which includes a browser of its own and Node.js) when they wouldn’t get any more value from it than from visiting a URL of yours in their favorite browser.
Instead of describing the desktop app you personally should build and why, I’m hoping to spark an idea or at least spark your interest in this article. Read on to see just how easy it is to create powerful desktop apps using web technology and what that can afford you over (or alongside of) creating a web app.
Desktop applications have been around a long time, but you don’t have all day, so let’s skip some history and begin in Shanghai, 2011. Roger Wang, of Intel’s Open Source Technology Center, created node-webkit: a proof-of-concept Node.js module that allowed the user to spawn a WebKit browser window and use Node.js modules within <script> tags.
After some progress and a switch from WebKit to Chromium (the open-source project Google Chrome is based on), an intern named Cheng Zhao joined the project. It was soon realized that an app runtime based on Node.js and Chromium would make a nice framework for building desktop apps, and the project went on to be quite popular.
Note: node-webkit was later renamed NW.js to make it a bit more generic because it no longer used Node.js or WebKit. Instead of Node.js, it was based on io.js (the Node.js fork) at the time, and Chromium had moved on from WebKit to its own fork, Blink.
So, if you were to download an NW.js app, you would actually be downloading Chromium, plus Node.js, plus the actual app code. Not only does this mean a desktop app can be created using HTML, CSS and JavaScript, but the app would also have access to all of the Node.js APIs (to read and write to disk, for example), and the end user wouldn’t know any better. That’s pretty powerful, but how does it work? Well, first let’s take a look at Chromium.
There is a main background process, and each tab gets its own process. You might have seen that Google Chrome always has at least two processes in Windows’ task manager or macOS’ activity monitor. I haven’t even attempted to arrange the contents of the main process here, but it contains the Blink rendering engine, the V8 JavaScript engine (which is what Node.js is built on, too, by the way) and some platform APIs that abstract native APIs. Each isolated tab or renderer process has access to the JavaScript engine, CSS parser and so on, but it is completely separate from the main process for fault tolerance. Renderer processes interact with the main process through interprocess communication (IPC).
This is roughly what an NW.js app looks like. It’s basically the same, except that each window has access to Node.js now as well. So, you have access to the DOM and you can require other scripts, node modules you’ve installed from npm, or built-in modules provided by NW.js. By default, your app has one window, and from there you can spawn other windows.
Creating an app is really easy. All you need is an HTML file and a package.json, like you would have when working with Node.js. You can create a default one by running npm init --yes. Typically, a package.json would point to a JavaScript file as the “main” file for the module (i.e. using the main property), but with NW.js you need to edit the main property to point to your HTML file.
```json
{
  "name": "example-app",
  "version": "1.0.0",
  "description": "",
  "main": "index.html",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}
```
```html
<!-- index.html -->
<!DOCTYPE html>
<html>
  <head>
    <title>Example app</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
  </head>
  <body>
    <h1>Hello, world!</h1>
  </body>
</html>
```
Once you install the official nw package from npm (by running npm install -g nw), you can run nw . within the project directory to launch your app.
It’s as easy as that. So, what happened here was that NW.js opened the initial window, loading your HTML file. I know this doesn’t look like much, but it’s up to you to add some markup and styles, just like you would in a web app.
You could drop the window bar and chrome if you like, or create your own custom frame. You could have semi to fully transparent windows, hidden windows and more. I took this a bit further recently and resurrected Clippy14 using NW.js. There’s something weirdly satisfying about seeing Clippy on macOS or Windows 10.
So, you get to write HTML, CSS and JavaScript. You can use Node.js to read and write to disk, execute system commands, spawn other executables and more. Hypothetically, you could build a multiplayer roulette game over WebRTC that deletes some of the users’ files randomly, if you wanted.
You get access not only to Node.js’ APIs but to all of npm, which has over 350,000 modules now. For example, auto-launch20 is an open-source module we created at Teamwork.com21 to launch an NW.js or Electron app on startup.
Node.js also has what’s known as “native modules,” which, if you really need to do something a bit lower level, allows you to create modules in C or C++.
To top it all off, NW.js exposes APIs that effectively wrap native APIs, allowing you to integrate closely with the desktop environment. You can have a tray icon, open a file or URL in the default system application, and a lot more. All you need to do to trigger a notification is use the HTML5 Notification API:
new Notification('Hello', { body: 'world' });
You might recognize GitHub’s text editor, Atom, below. Whether you use it or not, Atom was a game-changer for desktop apps. GitHub started development of Atom in 2013, soon recruited Cheng Zhao, and forked node-webkit as its base, which it later open-sourced under the name atom-shell.
Note: It’s disputed whether Electron is a fork of node-webkit or whether everything was rewritten from scratch. Either way, it’s effectively a fork for the end user because the APIs were almost identical.
In making Atom, GitHub improved on the formula and ironed out a lot of the bugs. In 2015, atom-shell was renamed Electron. Since then it has hit version 1.0, and with GitHub pushing it, it has really taken off.
As well as Atom, other notable projects built with Electron include Slack, Visual Studio Code, Brave, HyperTerm and Nylas, which is really doing some cutting-edge stuff with it. Mozilla Tofino is an interesting one, too. It was an internal project at Mozilla (the company behind Firefox), with the aim of radically improving web browsers. Yeah, a team within Mozilla chose Electron (which is based on Chromium) for this experiment.
But how is it different from NW.js? First of all, Electron is less browser-oriented than NW.js. The entry point for an Electron app is a script that runs in the main process.
The Electron team patched Chromium to allow for the embedding of multiple JavaScript engines that could run at the same time. So, when Chromium releases a new version, they don’t have to do anything.
Note: NW.js hooks into Chromium a little differently, and this was often blamed on the fact NW.js wasn’t quite as good at keeping up with Chromium as Electron was. However, throughout 2016, NW.js has released a new version within 24 hours of each major Chromium release, which the team attributes to an organizational shift.
Back to the main process. Your app hasn’t any window by default, but you can open as many windows as you’d like from the main process, each having its own renderer process, just like NW.js.
So, yeah, the minimum you need for an Electron app is a main JavaScript file (which we’ll leave empty for now) and a package.json that points to it. Then, all you need to do is run npm install --save-dev electron and then electron . to launch your app.

```json
{
  "name": "example-app",
  "version": "1.0.0",
  "description": "",
  "main": "main.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}
```
// main.js, which is empty
Not much will happen, though, because your app hasn’t any window by default. You can open as many windows as you’d like from the main process, each having its own renderer process, just like they’d have in an NW.js app.
```js
// main.js
const {app, BrowserWindow} = require('electron');

let mainWindow;

app.on('ready', () => {
  mainWindow = new BrowserWindow({
    width: 500,
    height: 400
  });
  mainWindow.loadURL('file://' + __dirname + '/index.html');
});
```
```html
<!-- index.html -->
<!DOCTYPE html>
<html>
  <head>
    <title>Example app</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
  </head>
  <body>
    <h1>Hello, world!</h1>
  </body>
</html>
```
You could load a remote URL in this window, but typically you’d create a local HTML file and load that. Ta-da!
Of the built-in modules Electron provides, like the app or BrowserWindow modules used in the previous example, most can only be used in either the main or a renderer process. For example, the main process is where, and only where, you can manage your windows, automatic updates and more. You might want a click of a button to trigger something in your main process, though, so Electron comes with built-in methods for IPC. You can basically emit arbitrary events and listen for them on the other side. In this case, you’d catch the click event in the renderer process, emit an event over IPC to the main process, catch it in the main process and finally perform the action.
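A minimal sketch of that round trip might look like this (the channel name is made up for illustration, and this code only runs inside Electron, not plain Node.js):

```js
// In the renderer process, e.g. attached to a button's click handler:
const {ipcRenderer} = require('electron');

document.querySelector('#quit-button').addEventListener('click', () => {
  ipcRenderer.send('app-quit-requested'); // hypothetical channel name
});

// In the main process:
const {ipcMain, app} = require('electron');

ipcMain.on('app-quit-requested', () => {
  app.quit(); // an action only the main process can perform
});
```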
OK, so Electron has distinct processes, and you have to organize your app slightly differently, but that’s not a big deal. Why are people using Electron instead of NW.js? Well, there’s mindshare. So many related tools and modules are out there as a result of its popularity. The documentation is better. Most importantly, it has fewer bugs and superior APIs.
Electron’s documentation really is amazing, though — that’s worth emphasizing. Take the Electron API Demos app30. It’s an Electron app that interactively demonstrates what you can do with Electron’s APIs. Not only is the API described and sample code provided for creating a new window, for example, but clicking a button will actually execute the code and a new window will open.
If you submit an issue via Electron’s bug tracker, you’ll get a response within a couple of days. I’ve seen three-year-old NW.js bugs, although I don’t hold it against them. It’s tough when an open-source project is written in languages drastically different from the languages known by its users. NW.js and Electron are written mostly in C++ (and a tiny bit of Objective C++) but used by people who write JavaScript. I’m extremely grateful for what NW.js has given us.
Electron ironed out a few of the flaws in the NW.js APIs. For example, you can bind global keyboard shortcuts, which are caught even if your app isn’t focused. An example API flaw I ran into was that binding to Control + Shift + A in an NW.js app did what you would expect on Windows, but actually bound to Command + Shift + A on a Mac. This was intentional but really weird. There was no way to bind to the Control key. Also, binding to the Command key did bind to the Command key, but to the Windows key on Windows and Linux as well. The Electron team spotted these problems (when adding shortcuts to Atom, I assume) and quickly updated their globalShortcut API so that both of these cases work as you’d expect. To be fair, NW.js has since fixed the former but not the latter.
There are a few other differences. For instance, in recent NW.js versions, notifications that were previously native are now Chrome-style ones. These don’t go into the notification centre on Mac OS X or Windows 10, but there are modules on npm that you could use as a workaround if you’d like. If you want to do something interesting with audio or video, use Electron, because some codecs don’t work out of the box with NW.js.
Electron has added a few new APIs as well, more desktop integration, and it has built-in support for automatic updates, but I’ll cover that later.
It feels fine. Sure, it’s not native. Most desktop apps these days don’t look like Windows Explorer or Finder anyway, so users won’t mind or realize that HTML is behind your user interface. You can make it feel more native if you’d like, but I’m not convinced it will make the experience any better. For example, you could prevent the cursor from turning to a hand when the user hovers over a button. That’s how a native desktop app would act, but is that better? There are also projects out there like Photon Kit33, which is basically a CSS framework like Bootstrap, but for macOS-style components.
What about performance? Is it slow or laggy? Well, your app is essentially a web app. It’ll perform pretty much like a web app in Google Chrome. You can create a performant app or a sluggish one, but that’s fine because you already have the skills to analyze and improve performance. One of the best things about your app being based on Chromium is that you get its DevTools. You can debug within the app or remotely, and the Electron team has even created a DevTools extension named Devtron36 to monitor some Electron-specific stuff.
Your desktop app can be more performant than a web app, though. One thing you could do is create a worker window, a hidden window that you use to perform any expensive work. Because it’s an isolated process, any computation or processing going on in that window won’t affect rendering, scrolling or anything else in your visible window(s).
Keep in mind that you can always spawn system commands, spawn executables or drop down to native code if you really need to (you won’t).
Both NW.js and Electron support a wide array of platforms, including Windows, Mac and Linux. Electron doesn’t support Windows XP or Vista; NW.js does. Getting an NW.js app into the Mac App Store is a bit tricky; you’ll have to jump through a few hoops. Electron, on the other hand, comes with Mac App Store-compatible builds, which are just like the normal builds except that you don’t have access to some modules, such as the auto-updater module (which is fine because your app will update via the Mac App Store anyway).
Electron even supports ARM builds, so your app can run on a Chromebook or Raspberry Pi. Finally, Google may be phasing out Chrome Packaged Apps37, but NW.js allows you to port such an app over to an NW.js app and still have access to the same Chromium APIs.
Even though 32-bit and 64-bit builds are supported, you’ll get away with 64-bit Mac and Windows apps. You will need 32-bit and 64-bit Linux apps, though, for compatibility.
So, let’s say that Electron has won you over and you want to ship an Electron app. There’s a nice Node.js module named electron-packager38 that helps with packaging your app up into an .app or .exe file. A few similar projects exist, including interactive ones that prompt you step by step. You should use electron-builder39, though, which builds on top of electron-packager, plus a few other related modules. It generates .dmgs and Windows installers and takes care of the code-signing of your app for you. This is really important. Without it, your app would be labelled as untrusted by operating systems, it could trigger antivirus software, and Microsoft SmartScreen might try to block the user from launching it.
The annoying thing about code-signing is that you have to sign your app on a Mac for Mac and on Windows for Windows. So, if you’re serious about shipping desktop apps, then you’ll need to build on multiple machines for each release.
This can feel a bit too manual or tedious, especially if you’re used to creating for the web. Thankfully, electron-builder was created with automation in mind. I’m talking here about continuous integration tools and services such as Jenkins40, CodeShip41, Travis-CI42, AppVeyor43 (for Windows) and so on. These could run your desktop app build at the press of a button or at every push to GitHub, for example.
NW.js doesn’t have automatic update support, but you’ll have access to all of Node.js, so you can do whatever you want. Open-source modules are out there for it, such as node-webkit-updater44, which handles downloading and replacing your app with a newer version. You could also roll your own custom system if you wanted.
Electron has built-in support for automatic updates via its autoUpdater45 API. It doesn’t support Linux, first of all; instead, publishing your app to Linux package managers is recommended. This is common on Linux, so don’t worry. The autoUpdater API is really simple; once you give it a URL, you can call the checkForUpdates method. It’s event-driven, so you can subscribe to the update-downloaded event, for example, and once it’s fired, call the quitAndInstall method to install the new version and restart the app. You can listen for a few other events, which you can use to tie the auto-updating functionality into your user interface nicely.
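Wired together, the flow might look like this (the feed URL is a placeholder, and this sketch only runs inside Electron’s main process; in a real app you’d usually prompt the user before installing):

```js
const {autoUpdater} = require('electron');

// Placeholder URL; point this at your own update server
autoUpdater.setFeedURL('https://updates.example.com/' + process.platform);

autoUpdater.on('update-downloaded', () => {
  // Install the downloaded update and restart the app
  autoUpdater.quitAndInstall();
});

autoUpdater.checkForUpdates();
```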
Note: You can have multiple update channels if you want, such as Google Chrome and Google Chrome Canary.
It’s not quite as simple behind the API. It’s based on the Squirrel update framework, which differs drastically between Mac and Windows, which use the Squirrel.Mac46 and Squirrel.Windows47 projects, respectively.
The update code within your Mac Electron app is simple, but you’ll need a server (albeit a simple one). When you call the autoUpdater module’s checkForUpdates method, it will hit your server. Your server needs to return a 204 (“No Content”) if there isn’t an update; if there is one, it needs to return a 200 with JSON containing a URL pointing to a .zip file. Back under the hood of your app (the client), Squirrel.Mac will know what to do: it’ll go and get that .zip, unzip it and fire the appropriate events.
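The server-side decision can be as small as the sketch below; the helper function, version comparison and URL scheme are all hypothetical, just to show the 204-versus-200 contract:

```javascript
// Hypothetical helper for a Squirrel.Mac-style update endpoint.
// Return null        -> respond 204 No Content (client is up to date).
// Return an object   -> respond 200 with this object as the JSON body.
function updateResponse(clientVersion, latestVersion, baseUrl) {
  if (clientVersion === latestVersion) {
    return null;
  }
  return { url: `${baseUrl}/MyApp-${latestVersion}-mac.zip` };
}

console.log(updateResponse('1.0.0', '1.0.0', 'https://example.com'));
// null
console.log(updateResponse('1.0.0', '1.1.0', 'https://example.com'));
// { url: 'https://example.com/MyApp-1.1.0-mac.zip' }
```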
There’s a bit more magic going on in your Windows app when it comes to automatic updates. You won’t need a server, but you can have one if you’d like. You could host the static (update) files somewhere, such as AWS S3, or even have them locally on your machine, which is really handy for testing. Despite the differences between Squirrel.Mac and Squirrel.Windows, a happy medium can be found; for example, having a server for both, and storing the updates on S3 or somewhere similar.
Squirrel.Windows has a couple of nice features over Squirrel.Mac as well. It applies updates in the background; so, when you call quitAndInstall, it’ll be a bit quicker because the update is ready and waiting. It also supports delta updates. Let’s say your app checks for updates and there is one newer version. A binary diff (between the currently installed app and the update) will be downloaded and applied as a patch to the current executable, instead of replacing it with a whole new app. It can even do that incrementally if you’re, say, three versions behind, but it will only do that if it’s worth it. Otherwise, if you’re, say, 15 versions behind, it will just download the latest version in its entirety instead. The great thing is that all of this is done under the hood for you. The API remains really simple: you check for updates, it figures out the optimal method to apply the update, and it lets you know when it’s ready to go.
Note: You will have to generate those binary diffs, though, and host them alongside your standard updates. Thankfully, electron-builder generates these for you, too.
Thanks to the Electron community, you don’t have to build your own server if you don’t want to. There are open-source projects you can use. Some allow you to store updates on S348 or use GitHub releases49, and some even go as far as providing administrative dashboards50 to manage the updates.
So, how does making a desktop app differ from making a web app? Let’s look at a few unexpected problems or gains you might come across along the way, some unexpected side effects of APIs you’re used to using on the web, workflow pain points, maintenance woes and more.
Well, the first thing that comes to mind is browser lock-in. It’s like a guilty pleasure. If you’re making a desktop app exclusively, you’ll know exactly which Chromium version all of your users are on. Let your imagination run wild; you can use flexbox, ES6, pure WebSockets, WebRTC, anything you want. You can even enable experimental features in Chromium for your app (i.e. features coming down the line) or tweak settings such as your localStorage allowance. You’ll never have to deal with any cross-browser incompatibilities. This is on top of Node.js’ APIs and all of npm. You can do anything.
Note: You’ll still have to consider which operating system the user is running sometimes, though, but OS-sniffing is a lot more reliable and less frowned upon than browser sniffing.
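In an NW.js or Electron app you can check the operating system through Node.js itself rather than the user agent; for example:

```javascript
// process.platform is reported by Node.js itself, so it's reliable
// (unlike user-agent sniffing in the browser)
const isMac = process.platform === 'darwin';
const isWindows = process.platform === 'win32';
const isLinux = process.platform === 'linux';

console.log(process.platform); // e.g. 'darwin', 'win32' or 'linux'
```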
Another interesting thing is that your app is essentially offline-first. Keep that in mind when creating your app; a user can launch your app without a network connection and your app will run; it will still load the local files. You’ll need to pay more attention to how your app behaves if the network connection is lost while it’s running. You may need to adjust your mindset.
Note: You can load remote URLs if you really want, but I wouldn’t.
One tip I can give you here is not to trust navigator.onLine completely. This property returns a Boolean indicating whether or not there’s a connection, but watch out for false positives. It’ll return true if there’s any local connection, without validating that connection; the Internet might not actually be accessible (it could be fooled by a dummy connection to a Vagrant virtual machine on your machine, for example). Instead, use Sindre Sorhus’ is-online module to double-check; it will ping the Internet’s root servers and/or the favicon of a few popular websites. For example:

```js
const isOnline = require('is-online');

if (navigator.onLine) {
  // Hmm, there's a connection, but is the Internet accessible?
  isOnline().then(online => {
    console.log(online); // true or false
  });
} else {
  // We can trust navigator.onLine when it says there is no connection
  console.log(false);
}
```
Speaking of local files, there are a few things to be aware of when using the file:// protocol. Protocol-relative URLs, for one; you can’t use them anymore. I mean URLs that start with // instead of http:// or https://. Typically, if a web app requests //example.com/hello.json, then your browser would expand this to http://example.com/hello.json, or to https://example.com/hello.json if the current page is loaded over HTTPS. In our app, the current page is loaded using the file:// protocol; so, if we requested the same URL, it would expand to file://example.com/hello.json and fail. The real worry here is third-party modules you might be using; authors aren’t thinking of desktop apps when they make a library.
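The expansion can be illustrated with a tiny helper (purely illustrative, not part of any library; real browsers do this internally):

```javascript
// Expand a protocol-relative URL the way a browser would,
// using the protocol of the current page
function expandUrl(url, pageProtocol) {
  return url.startsWith('//') ? pageProtocol + url : url;
}

console.log(expandUrl('//example.com/hello.json', 'https:'));
// 'https://example.com/hello.json' (what you get on the web)
console.log(expandUrl('//example.com/hello.json', 'file:'));
// 'file://example.com/hello.json' (what you get in a desktop app; this fails)
```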
You’d never use a CDN. Loading local files is basically instantaneous. There’s also no limit on the number of concurrent requests (per domain), like there is on the web (with HTTP/1.1 at least). You can load as many as you want in parallel.
A lot of asset generation is involved in creating a solid desktop app. You’ll need to generate executables and installers and decide on an auto-update system. Then, for each update, you’ll have to build the executables again, more installers (because if someone goes to your website to download it, they should get the latest version) and binary diffs for delta updates.
Weight is still a concern. A “Hello, World!” Electron app is 40 MB zipped. Besides the typical advice you follow when creating a web app (write less code, minify it, have fewer dependencies, etc.), there isn’t much I can offer you. The “Hello, World!” app is literally an app containing one HTML file; most of the weight comes from the fact that Chromium and Node.js are baked into your app. At least delta updates will reduce how much is downloaded when a user performs an update (on Windows only, I’m afraid). However, your users won’t be downloading your app on a 2G connection (hopefully!).
You will discover unexpected behavior now and again. Some of it is more obvious than the rest, but a little annoying nonetheless. For example, let’s say you’ve made a music player app that supports a mini-player mode, in which the window is really small and always in front of any other apps. If a user were to click or tap a dropdown (<select/>), then it would open to reveal its options, overflowing past the bottom edge of the app. If you were to use a non-native select library (such as select2 or chosen), though, you’re in trouble. When open, your dropdown will be cut off by the edge of your app. So, the user would see a few items and then nothing, which is really frustrating. This would happen in a web browser, too, but it’s not often that the user would resize the window down to a small enough size.
You may or may not know it, but on a Mac, every window has a header and a body. When a window isn’t focused, if you hover over an icon or button in the header, its appearance will reflect the fact that it’s being hovered over. For example, the close button on macOS is gray when the window is blurred but red when you hover over it. However, if you move your mouse over something in the body of the window, there is no visible change. This is intentional. Think about your desktop app, though; it’s Chromium missing the header, and your app is the web page, which is the body of the window. You could drop the native frame and create your own custom HTML buttons instead for minimize, maximize and close. If your window isn’t focused, though, they won’t react if you were to hover over them. Hover styles won’t be applied, and that feels really wrong. To make it worse, if you were to click the close button, for example, it would focus the window and that’s it. A second click would be required to actually click the button and close the app.
To add insult to injury, Chromium has a bug that can mask the problem, making you think it works as you might have originally expected. If you move your mouse fast enough (nothing too unreasonable) from outside the window to an element inside the window, hover styles will be applied to that element. It’s a confirmed bug; applying the hover styles on a blurred window body “doesn’t meet platform expectations,” so it will be fixed. Hopefully, I’m saving you some heartbreak here. You could have a situation in which you’ve created beautiful custom window controls, yet in reality a lot of your users will be frustrated with your app (and will guess it’s not native).
So, you must use native buttons on a Mac. There’s no way around that. For an NW.js app, you must enable the native frame, which is the default anyway (you can disable it by setting the window object’s frame property to false in your package.json).
You could do the same with an Electron app. This is controlled by the frame property when creating a window; for example, new BrowserWindow({width: 800, height: 600, frame: true}). The Electron team spotted this issue, too, and added another option as a nice compromise: titleBarStyle. Setting this to hidden will hide the native title bar but keep the native window controls overlaid over the top-left corner of your app. This gets you around the problem of having non-native buttons on a Mac, while you can still style the top of the app (and the area behind the buttons) however you like.
```js
// main.js
const {app, BrowserWindow} = require('electron');

let mainWindow;

app.on('ready', () => {
  mainWindow = new BrowserWindow({
    width: 500,
    height: 400,
    titleBarStyle: 'hidden'
  });
  mainWindow.loadURL('file://' + __dirname + '/index.html');
});
```
Here’s an app in which I’ve disabled the title bar and given the html element a background image:
See “Frameless Window57” from Electron’s documentation for more.
Well, you can pretty much use all of the tooling you’d use to create a web app. Your app is just HTML, CSS and JavaScript, right? Plenty of plugins and modules are out there specifically for desktop apps, too, such as Gulp plugins for signing your app, for example (if you didn’t want to use electron-builder). Electron-connect58 watches your files for changes, and when they occur, it’ll inject those changes into your open window(s) or relaunch the app if it was your main script that was modified. It is Node.js, after all; you can pretty much do anything you’d like. You could run webpack inside your app if you wanted to — I’ve no idea why you would, but the options are endless. Make sure to check out awesome-electron59 for more resources.
What’s it like to maintain and live with a desktop app? First of all, the release flow is completely different, and a significant mindset adjustment is required. When you’re working on a web app and you deploy a change that breaks something, it’s not really a huge deal (of course, that depends on your app and the bug). You can just roll out a fix. Users who reload or change the page, and new users who trickle in, will get the latest code. Developers under pressure might rush out a feature for a deadline and fix bugs as they’re reported or noticed. You can’t do that with desktop apps. You can’t take back updates you push out there. It’s more like a mobile app flow: you build the app, put it out there, and you can’t take it back. Some users might not even update from a buggy version to the fixed version. This will make you worry about all of the bugs out there in old versions.
Because a host of different versions of your app are in use, your code will exist in multiple forms and states. Multiple variants of your client (desktop app) could be hitting your API in ten slightly different ways. So, you’ll need to strongly consider versioning your API, really locking it down and testing it well. When an API change is to be introduced, you might not be sure whether it’s a breaking change for some release or another; a version shipped a month ago could implode because its slightly different code makes a slightly different request.
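One low-tech way to enforce this is to have the API refuse clients below a minimum supported version, so stale builds fail predictably instead of misbehaving. A minimal sketch (the header name, status code and version format here are assumptions, not anything prescribed by the article):

```javascript
// Hypothetical sketch: refuse requests from desktop-app builds older than a
// minimum supported version. Assumes simple "major.minor.patch" versions.
function isSupportedVersion(clientVersion, minimumVersion) {
  var a = clientVersion.split('.').map(Number);
  var b = minimumVersion.split('.').map(Number);
  for (var i = 0; i < 3; i++) {
    if (a[i] > b[i]) return true;
    if (a[i] < b[i]) return false;
  }
  return true; // equal versions are supported
}

// e.g. inside hypothetical API middleware, reading a header the app sends:
// if (!isSupportedVersion(req.headers['x-app-version'], '1.4.0')) {
//   res.status(426).json({error: 'Please update your app'});
// }
```

A rejected request can then surface a friendly “please update” screen in the app, rather than a confusing failure.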
You might receive a few strange bug reports — ones that involve bizarre user account arrangements, specific antivirus software or worse. I had a case in which a user had installed something (or had done something themselves) that messed with their system’s environment variables. This broke our app because a dependency we used for something critical failed to execute a system command because the command could no longer be found. This is a good example because there will be occasions when you’ll have to draw a line. This was something critical to our app, so we couldn’t ignore the error, and we couldn’t fix their machine. For users like this, a lot of their desktop apps would be somewhat broken at best. In the end, we decided to show a tailored error screen to the user if this unlikely error were ever to pop up again. It links to a document explaining why it has occurred and has a step-by-step guide to fix it.
Sure, a few web-specific concerns are no longer applicable when you’re working on a desktop app, such as legacy browsers. You will have a few new ones to take into consideration, though. There’s a limit of around 260 characters (MAX_PATH) on file paths in Windows, for example.
Old versions of npm store dependencies in a recursive file structure. Your dependencies would each get stored in their own directory within a `node_modules` directory in your project (for example, `node_modules/a`). If any of your dependencies have dependencies of their own, those grandchild dependencies would be stored in a `node_modules` within that directory (for example, `node_modules/a/node_modules/b`). Because Node.js and npm encourage small single-purpose modules, you could easily end up with a really long path, like `path/to/your/project/node_modules/a/node_modules/b/node_modules/c/.../n/index.js`.
Note: Since version 3, npm flattens out the dependency tree as much as possible. However, there are other causes for long paths.
We had a case in which our app wouldn’t launch at all (or would crash soon after launching) on certain versions of Windows due to an exceedingly long path. This was a major headache. With Electron, you can put all of your app’s code into an asar archive60, which protects against path length issues but has exceptions and can’t always be used.
We created a little Gulp plugin named gulp-path-length61, which lets you know whether any dangerously long file paths are in your app. Where your app is stored on the end user’s machine will determine the true length of the path, though. In our case, our installer will install it to `C:\Users\<username>\AppData\Roaming`. So, when our app is built (locally by us or by a continuous integration service), gulp-path-length is instructed to audit our files as if they’re stored there (on the user’s machine with a long username, to be safe).
```js
var gulp = require('gulp');
var pathLength = require('gulp-path-length');

gulp.task('default', function(){
    gulp.src('./example/**/*', {read: false})
        .pipe(pathLength({
            rewrite: {
                match: './example',
                replacement: 'C:\\Users\\this-is-a-long-username\\AppData\\Roaming\\Teamwork Chat\\'
            }
        }));
});
```
Because all of the automatic updates handling is done within the app, you could have an uncaught exception that crashes the app before it even gets to check for an update. Let’s say you discover the bug and release a new version containing a fix. If the user launches the app, an update would start downloading, and then the app would die. If they were to relaunch the app, the update would start downloading again and… crash. So, you’d have to reach out to all of your users and let them know they’ll need to reinstall the app. Trust me, I know. It’s horrible.
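One defensive measure, sketched here with plain Node.js APIs (what you actually do inside the handler is up to you and your error tracker), is to register a last-resort exception handler before any other startup code runs, so a bug elsewhere can’t kill the process before the updater gets its chance:

```javascript
// Sketch: register this as the very first thing in your main script, before
// requiring anything that might throw. The handler body is a placeholder --
// report the error and show a fallback screen, but keep the process alive
// long enough for the auto-updater to fetch the fixed version.
process.on('uncaughtException', function (err) {
  console.error('Unexpected error during startup:', err);
  // e.g. log to disk / your error tracker, then decide whether to quit.
});
```

This doesn’t excuse shipping the bug, but it can turn a fatal crash loop into a recoverable state.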
You’ll probably want to track usage of the app and any errors that occur. First of all, Google Analytics won’t work (out of the box, at least); you’ll have to find something that doesn’t mind an app that runs on `file://` URLs. If you’re using a tool to track errors, make sure to lock down errors by app version if the tool supports release-tracking. For example, if you’re using Sentry62 to track errors, make sure to set the `release` property when setting up your client63, so that errors will be split up by app version. Otherwise, if you receive a report about an error and roll out a fix, you’ll keep on receiving reports about the error from people using older versions, filling up your reports or logs with false positives.
Electron has a `crashReporter`64 module, which will send you a report any time the app completely crashes (i.e. the entire app dies, not just any old thrown error). You can also listen for events indicating that your renderer process has become unresponsive.
Be extra-careful when accepting user input, or even when trusting third-party scripts, because a malicious individual could have a lot of fun with access to Node.js. Also, never pass user input to a native API or shell command without proper sanitization.
Don’t trust code from vendors either. We had a problem recently with a third-party snippet we had included in our app for analytics, provided by company X. The team behind it rolled out an update with some dodgy code, thereby introducing a fatal error in our app. When a user launched our app, the snippet grabbed the newest JavaScript from their CDN and ran it. The error thrown prevented anything further from executing. Anyone with the app already running was unaffected, but if they were to quit it and launch it again, they’d have the problem, too. We contacted X’s support team and they promptly rolled out a fix. Our app was fine again once our users restarted it, but it was scary there for a while. We wouldn’t have been able to patch the problem ourselves without forcing affected users to manually download a new version of the app (with the snippet removed).
How can you mitigate this risk? You could try to catch errors, but you’ve no idea what company X might do in its JavaScript, so you’re better off with something more solid. You could add a level of abstraction. Instead of pointing directly to X’s URL from your `<script>` tag, you could use Google Tag Manager65 or your own API to return either HTML containing the `<script>` tags or a single JavaScript file containing all of your third-party dependencies. This would enable you to change which snippets get loaded (by tweaking Google Tag Manager or your API endpoint) without having to roll out a new update.
However, if the API no longer returned the analytics snippet, the global variable created by the snippet would still be there in your code, trying to call undefined functions. So, we haven’t solved the problem entirely. Also, this API call would fail if a user launches the app without a connection. You don’t want to restrict your app when offline. Sure, you could use a cached result from the last time the request succeeded, but what if there was a bug in that version? You’re back to the same problem.
Another solution would be to create a hidden window and load a (local) HTML file there that contains all of your third-party snippets. Any global variables that the snippets create would be scoped to that window, and any errors thrown would be thrown in that window, so your main window(s) would be unaffected. If you needed to use those APIs or global variables in your main window(s), you’d do it via IPC: you’d send an event over IPC to your main process, which would then send it on to the hidden window; if that window was still healthy, it would listen for the event and call the third-party function. That would work.
This brings us back to security. What if someone malicious at company X were to include some dangerous Node.js code in their JavaScript? We’d be rightly screwed. Luckily, Electron has a nice option to disable Node.js for a given window, so it simply wouldn’t run:
```js
// main.js
const {app, BrowserWindow} = require('electron');

let thirdPartyWindow;

app.on('ready', () => {
  thirdPartyWindow = new BrowserWindow({
    width: 500,
    height: 400,
    webPreferences: {
      nodeIntegration: false
    }
  });
  thirdPartyWindow.loadURL('file://' + __dirname + '/third-party-snippets.html');
});
```
NW.js doesn’t have any built-in support for testing. But, again, you have access to Node.js, so it’s technically possible. There is a way to test stuff such as button-clicking within the app using Chrome Remote Interface66, but it’s tricky. Even then, you can’t trigger a click on a native window control and test what happens, for example.
The Electron team has created Spectron67 for automated testing, and it supports testing native controls, managing windows and simulating Electron events. It can even be run in continuous integration builds.
```js
var Application = require('spectron').Application
var assert = require('assert')

describe('application launch', function () {
  this.timeout(10000)

  beforeEach(function () {
    this.app = new Application({
      path: '/Applications/MyApp.app/Contents/MacOS/MyApp'
    })
    return this.app.start()
  })

  afterEach(function () {
    if (this.app && this.app.isRunning()) {
      return this.app.stop()
    }
  })

  it('shows an initial window', function () {
    return this.app.client.getWindowCount().then(function (count) {
      assert.equal(count, 1)
    })
  })
})
```
Because your app is HTML, you could easily use any tool to test web apps, just by pointing the tool at your static files. However, in this case, you’d need to make sure the app can run in a web browser without Node.js.
It’s not necessarily a choice between desktop and web. As a web developer, you have all of the tools required to make an app for either environment. Why not both? It takes a bit more effort, but it’s worth it. I’ll mention a few related topics and tools; they’re complicated in their own right, so I’ll just touch on them.
First of all, you can no longer rely on “browser lock-in”: features you could take for granted in the bundled Chromium, such as native WebSockets, may need fallbacks on the web. The same goes for ES6. You can either revert to writing plain old ES5 JavaScript or use something like Babel68 to transpile your ES6 into ES5 for web use.
You also have `require` calls throughout your code (for importing other scripts or modules), which a browser won’t understand. Use a module bundler that supports CommonJS (i.e. Node.js-style `require`s), such as Rollup69, webpack70 or Browserify71. When making a build for the web, a module bundler will run over your code, traverse all of the `require`s and bundle them up into one script for you.
Any code using Node.js or Electron APIs (i.e. to write to disk or integrate with the desktop environment) should not be called when the app is running on the web. You can detect this by checking whether `process.version.nwjs` or `process.versions.electron` exists; if it does, then your app is currently running in the desktop environment.
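That check could be wrapped in a tiny helper, sketched below. The property names mirror the ones just described; in a plain browser (where `process` doesn’t exist) or in plain Node.js, neither is set, so it returns false:

```javascript
// Returns true when running inside NW.js or Electron, false on the web.
// The typeof guard matters: in a browser, `process` itself is undefined.
function isDesktopApp() {
  if (typeof process === 'undefined') return false;
  return Boolean(
    (process.version && process.version.nwjs) ||
    (process.versions && process.versions.electron)
  );
}
```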
Even then, you’ll be loading a lot of redundant code in the web app. Let’s say you have a `require` guarded behind a check like `if (app.isInDesktop)`, along with a big chunk of desktop-specific code. Instead of detecting the environment at runtime and setting `app.isInDesktop`, you could pass `true` or `false` into your app as a flag at build time (for example, using the envify72 transform for Browserify). This will aid your module bundler of choice when it’s doing its static analysis and tree-shaking (i.e. dead-code elimination). It will now know whether `app.isInDesktop` is `true`. So, if you’re running your web build, it won’t bother going inside that `if` statement or traversing the `require` in question.
There’s that release mindset again; it’s challenging. When you’re working on the web, you want to be able to roll out changes frequently. I believe in continually delivering small incremental changes that can be rolled back quickly. Ideally, with enough testing, an intern can push a little tweak to your master branch, resulting in your web app being automatically tested and deployed.
As we covered earlier, you can’t really do this with a desktop app. OK, technically you could if you’re using Electron, because electron-builder can be automated and so can Spectron tests. But I don’t know anyone doing this, and I wouldn’t have enough faith to do it myself. Remember, broken code can’t be taken back, and you could break the update flow. Besides, you don’t want to deliver desktop updates too often anyway. Updates aren’t silent, as they are on the web, so it’s not very nice for the user. Plus, for users on macOS, delta updates aren’t supported, so users would be downloading a full new app for each release, no matter how small the tweak.
You’ll have to find a balance. A happy medium might be to release all fixes to the web as soon as possible and release a desktop app weekly or monthly — unless you’re releasing a feature, that is. You don’t want to punish a user because they chose to install your desktop app. Nothing’s worse than seeing a press release for a really cool feature in an app you use, only to realize that you’ll have to wait a while longer than everyone else. You could employ a feature-flags API to roll out features on both platforms at the same time, but that’s a whole separate topic. I first learned of feature flags from “Continuous Delivery: The Dirty Details73,” a talk by Etsy’s VP of Engineering, Mike Brittain.
So, there you have it. With minimal effort, you can add “desktop app developer” to your resumé. We’ve looked at creating your first modern desktop app, packaging, distribution, after-sales service and a lot more. Hopefully, despite the pitfalls and horror stories I’ve shared, you’ll agree that it’s not as scary as it seems. You already have what it takes. All you need to do is look over some API documentation. Thanks to a few new powerful APIs at your disposal, you can get the most value from your skills as a web developer. I hope to see you around (in the NW.js or Electron community) soon.
(rb, al, il)
In 2017, the question is not whether we should use a responsive design framework. Increasingly, we are using them. The question is which framework should we be using, and why, and whether we should use the whole framework or just parts of it.
With dozens of responsive design frameworks available to download, many web developers appear to be unaware of any except for Bootstrap. Like most of web development, responsive design frameworks are not one-size-fits-all. Let’s compare the latest versions of Bootstrap, Foundation and UIkit for their similarities and differences.
Three years ago, I wrote an article for Smashing Magazine, “Responsive Design Frameworks: Just Because You Can, Should You?5” In the article, I argued that responsive design frameworks should be used by professionals under the right circumstances and for appropriate projects. Custom design still has its place, of course. However, a fully customized approach is not appropriate to every website, under every timeline, every budget and every set of browser support guidelines.
Since that time, the industry has evolved at its typical breakneck pace. Sass, the CSS preprocessor, has become standard to most workflows. We’re more accustomed to adjusting Sass variables to customize existing code. We confidently work with mixins, building our own styles. We’ve even got Sass formulas for generating color schemes6 that we can incorporate in our work.
Workflows themselves have become standard, making use of technologies such as Node.js, Bower, Grunt, Gulp, Git and more. Old browsers continue to fall away, allowing front-end developers to confidently use more features of HTML5 and CSS3, with fewer worries about significant cross-browser compatibility issues.
Revisiting the 131 comments on my 2014 article, I saw readers suggesting a number of approaches to making use of responsive design frameworks:
All of these are legitimate approaches. Hundreds of frameworks are available. Many who have been in the business for some time have a favorite, whether it’s an established framework or one they’ve created.
However, increasingly, I see students, clients and web development firms defaulting to Bootstrap as their starting point, often with little critical evaluation of whether it’s an appropriate solution to the task at hand. Clients hire me for training in Bootstrap, for example, and then ask how they can add functionality that’s native to Foundation. When I ask, “Why not use Foundation?,” they tell me they’ve never heard of it and didn’t know there are options beyond Bootstrap for responsive design.
There is no way I could review the dozens, if not hundreds, of responsive design frameworks. However, given that Bootstrap is the 500-pound gorilla of responsive design, I’ve chosen two other frameworks to evaluate against Bootstrap7: Foundation8 and UIkit9. I’ve chosen these three frameworks based on the characteristics they share: they are full-service responsive design frameworks. They offer grid systems, SCSS with piles of variables and mixins for customizations, basic styling of nearly all HTML5 tags, prestyled components such as badges, labels and cards, and piles of JavaScript-based features such as dropdown menus, accordions, tabs, image galleries, image carousels and so much more. All three frameworks offer ways of reducing file sizes to just those styles and functionalities needed on a given website. They are all in active development.
Again, I am not suggesting that these are the only responsive design frameworks, nor am I implying that they are the best frameworks. These are, however, popular frameworks with piles of features out of the box, making them attractive to many development firms wanting to work with “Bootstrap or a close equivalent.”
To start the discussion, let’s examine some of the basic background features of each framework.
| | Bootstrap 4 | Foundation for Sites 6 | UIkit 3 |
|---|---|---|---|
| Current version, release date | 4.0.0-alpha 6, released January 2017 | 6.3.0, released January 2017 | 3.0.0 beta 9, released February 2017 |
| Owned by | Originally developed at Twitter by Mark Otto and Jacob Thornton, now an open-source project | ZURB, a web development company in Campbell, California | YOOtheme, a WordPress and Joomla theme development company in Hamburg, Germany |
| Website | Bootstrap10 | Foundation11 | |
| Standard distribution package includes | Required CSS (1 minified, 1 standard, both with .map files); required JavaScript (1 minified, 1 standard). The former theme is incorporated in the Sass files, activated by variables. (Glyphicons are not distributed with Bootstrap 4.) Also contains two additional sets of CSS files (1 minified, 1 standard, both with .map files): bootstrap-grid, which appears to be a flexbox-based grid system, possibly redundant at this point; and bootstrap-reboot, based on normalize.css. A CDN is available for standard distribution files. | Four distributions12: Complete, Essential, Custom, Sass. The Complete version includes compiled CSS (minified and not), plus compiled JavaScript, including individual vendor files. Essential includes typography, the grid, Reveal and Interchange only. Custom can be configured on the website for download, with the developer choosing which elements to include and changing a few variables. The Sass version can only be downloaded using the command line, the Foundation CLI or Yeti Launch. A CDN is available for standard distribution files. | Distribution includes a single minified CSS file, a single minified JavaScript file and 26 SVG graphics. LESS, CSS and JavaScript source files are available via Bower or npm. A CDN is available for standard distribution files. |
| Additional JavaScript libraries required? | Yes, you must download or link to a CDN separately: jQuery 3.1.1 Slim, Tether 1.4.0. Files are linked to just before the end of the body element. | All dependencies are bundled in the distribution. jQuery is required, but it is part of the distribution. Files are linked to just before the end of the body element. | All dependencies are bundled in the distribution. jQuery is required, but it is part of the distribution. Linking to files in the head is recommended. |
| Browser support | Latest versions of: Chrome (macOS, Windows, iOS and Android), Safari (macOS and iOS), Firefox (macOS, Windows, Android, iOS; latest version plus Extended Support Release), Edge (Windows, Windows 10 Mobile), Opera (macOS, Windows); Android Browser and WebView 5.0+; Windows 10 Mobile; Internet Explorer 10+ | Last two versions of Chrome, Firefox, Safari, Opera, mobile Safari and Internet Explorer mobile, as well as Internet Explorer 9+ and Android browser 2.3+ | Latest versions of Chrome, Firefox, Opera and Edge; Safari 7.1+; Internet Explorer 10+. No mention of mobile-specific browsers |
| Internet Explorer support | Internet Explorer 10 and higher (Bootstrap 3 recommended for Internet Explorer 8 and 9) | Internet Explorer 9 and higher | Internet Explorer 10 and higher |
| Other browser support notes | “Unofficially, Bootstrap should look and behave well enough in Chromium and Chrome for Linux, Firefox for Linux, and Internet Explorer 9, though they are not officially supported.” (source13) | “JavaScript: Our plugins use a number of handy ECMAScript 5 features that aren’t supported in IE8.” (source14) | |
| License and copyright | Code and documentation copyright (2011 to 2017) of the Bootstrap authors and Twitter, Inc. Code released under the MIT License. Docs released under Creative Commons. | MIT license, no mention of copyright | MIT license, copyright of YOOtheme GmbH |
| Build tools | “Bootstrap uses Grunt for its CSS and JavaScript build system and Jekyll for the written documentation. Our Gruntfile includes convenient methods for working with the framework, including compiling code, running tests, and more.” (source15) | To access Sass files16, you must install using the command line, the Node.js-powered Foundation CLI, or its Yeti Launch application (Mac only). “Foundation is available on npm, Bower, Meteor, and Composer. The package includes all of the source SCSS and JavaScript files, as well as compiled CSS and JavaScript, in uncompressed and compressed flavors.” Other versions of Foundation are available as simple downloads without using these tools, but without SCSS files. | Bower and npm. UIkit also offers a SublimeText plugin and an Atom plugin for working with UIkit 3. |
In general, all three frameworks are in active development (all were updated in January or February 2017) and are mobile-first. All have some type of CSS preprocessor. UIkit 3 beta currently offers only LESS; an SCSS port, as offered in previous versions of UIkit, will come in future releases.
@jen4web2317 Yes, we will have a SASS port too. (like we have in UIkit 2)
— UIkit (@getuikit) February 9, 201718
Bootstrap transitioned from LESS to SCSS as part of its version 4 update. Foundation has always been written in SCSS, because ZURB is a Ruby on Rails shop, and Sass hails from the Ruby on Rails world.
There are notable differences in the build tools. Foundation is extremely picky about its workflow: if you wish to work with the SCSS files, you must use its tools, and unfortunately it offers few choices. Bootstrap is much less picky about workflow, offering several options, including no workflow at all, even with its SCSS files. All frameworks offer at least one distribution ZIP file containing all elements of the framework. Foundation offers a way to choose only certain elements in a download. Bootstrap 4 alpha and UIkit 3 beta do not offer this functionality currently, but both have offered it in older versions, so we can assume it will return once these frameworks reach their stable releases.
Each framework comes with a grid system, as one might expect. Bootstrap and Foundation’s grid systems include multiple breakpoints, nested grids, offsets and source ordering (i.e. changing the order of content on collapse). Exact breakpoint values vary, but the breakpoints are generally customizable with SCSS. UIkit’s grid is radically different and is described below.
With the most recent alpha 6 release, Bootstrap features a flexbox-based grid system by default. (This is partly the reason for its Internet Explorer 10+ browser support.) By default, Foundation features a floated grid system (which helps, in part, with its Internet Explorer 9+ support). Optionally in Foundation, you can compile a flexbox-based grid system by toggling an appropriate Sass variable and recompiling to CSS. Foundation notes that flexbox is not supported in Internet Explorer 9, so keep this in mind if this is a target browser for you.
Foundation offers a few more grid features than Bootstrap, including centered columns, incomplete rows, responsive gutters, semantic grid options and fluid rows.
Equal-height columns are a common problem when working with float-based grid systems. Foundation includes a JavaScript component named Equalizer. Because UIkit and Bootstrap are based on flexbox and not floats, equal-height columns are built into the grid system and are never an issue.
UIkit has a very different grid than Foundation’s and Bootstrap’s. Rather than a standard 12-column system, UIkit breaks its layouts into three components: grid, flex and width. Starting with the grid component19, you can create as many columns as you wish:

To create the grid container, add the `uk-grid` attribute to a `<div>` element. There’s no need to add a class. Add child `<div>` elements to create the cells. By default, all grid cells are stacked. To place them side by side, add one of the classes from the Width component. Using `uk-child-width-expand` will automatically apply equal width to items, regardless of how many there are.
As implied in the documentation, grid sets up the boxes that might be next to each other in some layouts, while width determines the width of those boxes, and flex determines the layout within each box, using flexbox properties. Finally, unlike other grid systems, UIkit offers a border between grid columns.
All three frameworks come with some level of CSS styling. The styling can be changed through an overriding style sheet or by modifying the provided SCSS or LESS preprocessor files. Generally speaking, basic styling is provided for all, or nearly all, HTML5 elements. All frameworks provide some quantity of utility classes, generally used for printing, responsiveness and the visibility of elements. Foundation and UIkit offer right-to-left support for languages written in that direction. UIkit states that it will feature a RTL version and the option to compile UIkit with a custom prefix.
@jen4web2317 Also a RTL version and the option to compile UIkit with a custom prefix (like ba- instead of uk-)
— UIkit (@getuikit) February 9, 201724
It is unclear from its documentation whether Bootstrap 4 is offering RTL support, but it has been available in previous versions. It seems like this might be part of a future stable release25. There is much interest in including this feature in the Bootstrap community.
The framework with the least out-of-the-box styling is Foundation. This framework has long assumed that its users are more advanced developers who will want to write their own styling for their projects. Therefore, ZURB has always provided less styling out of the box by design, meaning there will be fewer styles to override later.
UIkit offers two color options, accessible via the Inverse component. They include the standard scheme (light backgrounds and dark text) and a contrast scheme (dark backgrounds and light text). UIkit also offers some unique styled components, such as Articles and Comments, as well as Placeholder, which provides an empty space for drag-and-drop file-uploading interfaces. If you recall YOOtheme’s WordPress and Joomla theming roots, these features make perfect sense. Neither Bootstrap nor Foundation includes this styling specific to content management systems, so this is a distinguishing characteristic of the framework.
Bootstrap offers much more than Foundation. Indeed, Bootstrap’s distinctive look has permeated websites for several years. Bootstrap 4 has a similar look to Bootstrap 3. Bootstrap still offers several colors, such as “warning,” “danger,” “primary,” “info” and “success,” and it typically offers a few variations on styling certain HTML elements, such as tables and forms.
It’s also worth comparing the CSS units in each framework. Bootstrap 4 uses rem as its primary CSS unit throughout. However, it states, “pixels are still used for media queries and grid behavior as viewports are not affected by type size.” Bootstrap also increased its base font size from 14 pixels in version 3 to 16 pixels in version 4.
Foundation says29, “We use the rem unit nearly everywhere in Foundation, and even wrote a Sass function to make it a little easier. The `rem-calc()` function can take one or more pixel values and convert them to proper rem values.”
Finally, UIkit seems to use pixels as its primary size unit, although it occasionally uses percentages and rems in a few places in its LESS files. Its CSS sizing philosophy is not addressed in its documentation.
All three frameworks ship with responsive navigation elements. All include some fairly common navigation components, like formatted breadcrumbs, tabs, accordions and pagination. There are minor variations between these, but they all typically work as expected.
All three frameworks ship with dropdown menus. In general, these dropdowns may be used in several scenarios: with standard responsive navbars, with tabs, with buttons, etc. All three include vertical dropdowns. Foundation and UIkit also include horizontal dropdowns, in which the dropdown is displayed as a row rather than a column.
Navigation bars are present in all three frameworks. Each navbar can accommodate website branding and search and can be made sticky, and elements can be aligned left or right in the bar. All navigation bars collapse, but they may present collapsed content differently (some via a hamburger button plus a dropdown, some via an off-canvas menu). The navigation bars also have a few differences among themselves.
Bootstrap offers some basic horizontal and vertical navigation bars with different styling options (tabs, pills, stacked pills, justified or plain styling). It offers a separate horizontal responsive navigation bar, which creates a hamburger button on collapse. Two color schemes are available, with colors and breakpoints customizable through SCSS.
Foundation had a responsive navigation bar in previous versions similar to Bootstrap’s. In version 6, it has turned several navigation bar treatments into individual elements that can be combined. For example, on collapse, the navigation may be hidden behind a hamburger button, behind an off-canvas treatment or behind a drilldown menu, in which the user loads screens of menu items. These types of navigation treatment may be changed at specific screen sizes. For example, the full navigation bar may be available at desktop dimensions but switch to a drilldown for mobile.
UIkit’s bar offers functionality similar to Bootstrap’s. By default, it does not have a built-in toggle, but this is easily added with the appropriate CSS classes and JavaScript. Collapsed content is typically behind a hamburger button, and it may be coupled with off-canvas functionality. UIkit also specifically offers an icon-based navigation bar (iconbar). It ships with 26 SVG icons, which can be integrated in this bar, with the promise of more icons in the future.
All three frameworks depend on jQuery for their JavaScript-based components. Foundation and UIkit ship with a copy of jQuery, while Bootstrap relies on connecting to jQuery via a CDN or a download.
All three frameworks contain similar types of functionality: tooltips, modal windows, accordions, tabs, dropdown menus, image carousels, etc.
Foundation has two handy components that set it apart. One is Abide, a full-featured HTML5 form-validation library. It can handle client-side error-checking of forms. The other is Interchange, a JavaScript-based responsive image loader. It’s compatible with images and with bits of HTML and text. It loads the appropriate content on the page based on media queries. Despite the fact that one of the defining characteristics of responsive design is images that resize, neither Bootstrap nor UIkit offers similar functionality.
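The idea behind such a responsive content loader can be sketched in a few lines of plain JavaScript. This is a hypothetical illustration of the concept, not Interchange's actual API: given a set of named breakpoints and candidate sources, pick the source for the widest breakpoint that still fits the viewport.

```javascript
// Minimal sketch of a media-query-driven content picker
// (the breakpoint names and widths are assumptions).
const breakpoints = { small: 0, medium: 640, large: 1024 };

function pickSource(sources, viewportWidth) {
  // Choose the widest breakpoint that still fits the viewport
  // and actually has a source defined for it.
  let chosen = null;
  let chosenMin = -1;
  for (const [name, minWidth] of Object.entries(breakpoints)) {
    if (viewportWidth >= minWidth && minWidth > chosenMin && sources[name]) {
      chosen = sources[name];
      chosenMin = minWidth;
    }
  }
  return chosen;
}
```

In a browser, `viewportWidth` would come from `window.innerWidth` or a `matchMedia` listener, and the chosen source would be assigned to an element's `src`.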
UIkit combines its styles and JavaScript components in its documentation, calling it all “components.” Some of the unique JavaScript-based components currently available in its beta release include Scroll, which allows smooth scrolling to other parts of the web page, and Sortable, which enables drag-and-drop grids on the page.
In future versions of UIkit33, we are promised many components from UIkit 2, including “Slideshow, Slider, Slideset, Parallax, Nestable, Lightbox, Dynamic Grid, HTML editor, Date- and Timepicker components.” It’s worth highlighting HTML Editor, and the Date- and Timepicker components, because these are typically used in administrative interfaces and in applications. Remember that YOOtheme creates Joomla and WordPress themes as part of its business, so these are important components for it. Neither Foundation nor Bootstrap includes these admin-friendly widgets, so this is a distinguishing characteristic of this framework.
One of UIkit’s most distinguishing JavaScript features34, however, is this:
UIkit is listening for DOM manipulations and will automatically initialize, connect and disconnect components as they are inserted or removed from the DOM. That way it can easily be used with JavaScript frameworks like Vue.js and React.
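The mechanism the docs describe can be approximated with a small sketch (hypothetical code, not UIkit's internals): when nodes are added to the DOM, scan them for component markers and collect the components that need initializing. In a real page, a `MutationObserver` would feed the added nodes into this scan.

```javascript
// Collect component names from a tree of added nodes, where a
// component is marked by an attribute such as "uk-tooltip"
// (plain-object nodes stand in for DOM elements here).
function componentsToInit(addedNodes) {
  const found = [];
  const visit = (node) => {
    if (node.attributes) {
      for (const name of Object.keys(node.attributes)) {
        if (name.startsWith('uk-')) found.push(name.slice(3));
      }
    }
    (node.children || []).forEach(visit);
  };
  addedNodes.forEach(visit);
  return found;
}
```

This is what lets a library cooperate with Vue.js or React: when those frameworks insert markup, the observer-driven scan initializes the matching components without any manual wiring.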
It depends! All three projects are actively maintained and have devoted followers. In the end, the right framework for you will depend on your project’s requirements, where you need help in programming (making it pretty? coding functionality?) and your personal coding philosophy (rems versus pixels? LESS versus Sass?). As with all technology, there is no perfect choice, and you will have to make compromises on features, functionality and code styles.
With this in mind, here are some points to weigh in making your decision.
(da, al, il)
We’re all designers. Whether we do a layout, a product design or write code to design a product technically doesn’t matter here. What does matter though, is that we always take the context of a project into consideration. Because as someone shaping a project so that it is appealing to the clients and works in the best way possible for the target audience, we have a pretty big responsibility.
Imagine architects building a wall out of recycled material that also looks nice — sounds pretty great, right? But seen in the context that this will be a wall that divides people and encourages racism and even more inequality in our society, our first impression of the undertaking suddenly shifts into the opposite direction. We have to make new decisions every time we start a new project, and seeing things in context is crucial to live up to our responsibility — both in our work and our lives.
object-fit and object-position for Microsoft Edge4. A great step towards better web compatibility, and I really hope that it'll land in an official build soon.

setTimeout() and setInterval()24. Please read this update post and check if your code still works as expected.

And with that, I'll close for this week. If you like what I write each week, please support me with a donation25 or share this resource with other people. You can learn more about the costs of the project here26. It's available via email, RSS and online.
— Anselm
No, this is not a broken page or a sneaky landing page. Today marks an important milestone in Smashing Magazine’s life, and this very page is an early preview of what’s coming up next: many experiments, new challenges, but still a good ol’ obsession with quality content. A complete overhaul, both visually and technically, a fine new printed magazine, and a shiny new Smashing Membership, with nifty features and goodies for you, our lovely community. Curious? Well, fasten your seatbelt and browse around — it’s going to be quite a journey!
Today, we are happy to announce the public open beta of the next Smashing Magazine, next.smashingmagazine.com, with a new design, but one with a few bugs and issues that have to be… you know, smashed first. (Sorry about that!) The big-bang release is scheduled for later this year (probably early May to June). Some things now might be broken or just plain weird — we still have a bit of work to do, so do please let us know if you encounter any issues (and you definitely will!).
Over the last 18 months, we’ve been working with Dan Mall, Sara Soueidan, Marko Dugonjić, Ilya Pukhalski, Matt Biilman, Ricardo Gimenes, Sara Drasner, Andrew Clarke and the Netlify team on a complete overhaul of this little website, both technical and visual.
What’s different? Well, everything. The website won’t be running on WordPress anymore; in fact, it won’t have a back end at all. We are moving to a JAMstack: articles published directly to Netlify CDNs, with a custom shop based on GoCommerce, an open-source headless e-commerce API, and a job board that’s all just static HTML; content editing with Netlify’s new open-source, Git-based CMS; real-time search powered by Algolia; full HTTP/2 support; and the whole website running as a progressive web app with a service worker in the background (thanks to the awesome Service Worker Toolbox library). Booo-yah!
We don’t really have a back-end any more. Instead, static HTML, advanced JavaScript APIs, running as a progressive web app with a service worker in the background and blazingly fast performance — served from a CDN near you.
How does it work? Quite simple, actually. Content is stored in Markdown files. HTML is pre-baked using the static site generator Hugo, combined with a modern asset pipeline built with Gulp and webpack, all based on the Victor Hugo boilerplate.
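The "pre-baked HTML" idea is simple enough to sketch in a few lines (a toy illustration, not Hugo itself): every piece of content is rendered through a template at build time, so the CDN only ever serves finished static strings.

```javascript
// Render one page by filling a template's placeholders.
function renderPage(template, page) {
  return template
    .replace('{{title}}', page.title)
    .replace('{{body}}', page.body);
}

// "Build" the whole site: one static HTML string per content entry,
// keyed by its output path.
function buildSite(template, pages) {
  const out = {};
  for (const page of pages) {
    out[page.slug + '/index.html'] = renderPage(template, page);
  }
  return out;
}
```

A real generator adds Markdown parsing, partials and an asset pipeline on top, but the core trade-off is the same: all rendering work happens once at build time instead of on every request.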
We’ve spiced it all up with a handful of fancy APIs, including ones by Stripe for payments, Algolia for search, Cloudinary for responsive images, and Netlify’s open-source APIs GoCommerce (a headless e-commerce API), GoTrue for authentication, and GoTell for our more than 150,000 comments.
Every time content changes, it’s all pushed to Netlify’s CDN nodes close to you. We do our best to ensure that content is accessible and enhanced progressively, with performance in mind. If JavaScript isn’t available or if the network is slow, then we deliver content via static fallbacks (for example, by linking directly to Google search), as well as a service worker that persistently stores CSS, JavaScripts, SVGs, font files and other assets in its cache. The dynamic components in JavaScript are all based on Preact, and most pages you’ll see on the website will have been fully pre-rendered, prebuilt and deployed, ready to be served from a CDN near you.
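The routing such a service worker performs can be sketched as a pure decision function (a hypothetical simplification, not Smashing's actual worker): static assets are answered from the cache when available, everything else goes to the network, and a static fallback is returned when both fail.

```javascript
// Decide how to answer a request, given the cache contents and
// what the network can currently serve (both as plain maps here).
function respond(request, cache, network, fallback) {
  const isAsset = /\.(css|js|svg|woff2?)$/.test(request.url);
  if (isAsset && cache[request.url]) return cache[request.url];
  if (network[request.url]) return network[request.url];
  return fallback; // offline / error: serve the static fallback
}
```

In an actual worker this logic lives in a `fetch` event handler, with the Cache Storage API in place of the plain maps.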
That’s a good question to ask. There are a couple of reasons. In the past, we were using WordPress as a CMS, our job board was running on Ruby, and at one point we switched to Shopify from Magento for our online shop. Not only was maintenance of four separate platforms incredibly complicated, but designing a consistent, smashing experience the way we envisioned it proved to be nearly impossible due to technical restrictions or requirements imposed by these platforms.
Indeed, developing for every platform separately was quite expensive and slow, and in many ways, creating a cohesive experience where a user would literally “flow” from one area to another was remarkably difficult. As a result, because some areas were more important from the business perspective, they were growing and evolving fast, while others were decaying in the dark. This led to an inconsistent, incohesive and, frankly, quite annoying and frustrating experience. And even once we committed to making some major changes to unify the experience, it turned out to be a big, challenging undertaking, since we were using different platforms, stacks and, at some point, even different designs.
Performance was another reason. With a proper CDN, full HTTP/2 support and service workers in place, last year we managed to achieve the best performance results we’d ever had. However, even compared to that fancy nginx setup, the performance we could get with a pre-built page enhanced with JavaScript was nothing short of breathtaking. At this point we aren’t quite where we want to be, and we are looking forward to optimizing HTTP/2 delivery, adding some minor and major improvements, and measuring the results. But initial tests showed the Start Render time hitting around 500–600ms, and Time To Interactive staying steadily below 1s. We weren’t able to reach that level of performance with WordPress and a LAMP stack in place.
For the first time in 10 years, we were able to define a smashing experience from scratch and implement it from the very start to the finish. It’s not a big revelation: when it comes to interaction design, asking all the right questions is not enough; it also matters how and when those questions are asked. With the redesign, we had the freedom to explore slightly unconventional ways of asking these questions. If you look closely, you will find some of these (unusual) decisions pretty much everywhere on the site.
As you might have noticed, visually we’ve introduced some major changes as well. In the previous design, we always struggled to find good spots to prominently display our products — books and eBooks and job board postings and conferences; a major goal of the redesign was to change that. We looked for a way to bring our products to the forefront, but without making them feel like blatant, boring, cheap advertising; they had to fit the overall visual language and layout we were developing.
The truth is that you’re probably using an ad blocker (in fact, more than 60% of our readers do), so if most users don’t see the ads, what’s the point of having them in the first place? Instead of pushing advertising over the edge, we’ve taken the drastic decision of removing advertising almost entirely, focusing instead on featuring our lovely products.
As part of the relaunch, we took our time to study and rediscover our signature and personality, and we’ve highlighted it prominently in every component of every page. Have you noticed yet something consistently shining through in the new design? Yep, that’s right. It’s cats. We designed 56 different cats (yes, cats) that will be appearing throughout the website in various places, and if you take the time to find them all, then you’d probably deserve a free ticket to one of our conferences. We’ve also leaned even more heavily towards the signature defined by our logo: Most elements on the page are tilted at just the right angle. We’re also using transitions and animations (work in progress!) to keep interactions smooth and clear, which we didn’t do before.
Now, the redesign is just a part of the story. We’ve got something new cookin’, too: the Smashing Membership, with webinars, workshops and whopping discounts and Smashing Magazine Print, our new, physical, printed magazine — and we wanted them to become an integral part of the site. After years of refining the ideas, now it’s finally happening. However, they deserve a standalone article — just like a detailed overview of the design process and what we’ve learned in the last 18 months.
So, here we go! Please feel free to browse around — and be prepared to discover all kinds of cats… oh, things. As is always the case with an open beta, there are still a good number of bugs to be smashed, and we are on it. Still, please do report issues and bugs on GitHub — the more, the merrier! We can’t wait to read your feedback in good ol’ social media and in the comments. Meow! — and look out for the next articles in the series! 😉
(ms, il, al)
Sometimes we tend to think of our designs as if they are pieces of art. But if we think of them this way, it means they won’t be ready to face the uncertain conditions of the “real world.” However, there is also beauty in designing an interface that is ready for changes — and, let’s admit it, interfaces do change, all the time.
One of the things I like most about designing a mobile app is that it’s a process with many steps, from the initial concepts all the way to fine-tuning and polishing all of the interface details. In this process, I’m involved as a designer, together with several other members of the team (from researchers to illustrators to developers). But this also means that a lot of decisions have to be made at each and every stage — and some of them don’t always seem to be as fun to make as others.
As UX specialists, we have varied and diverse backgrounds, but visual interfaces are what we spend most of our time on (and are what is most often attributed to us). We are visual thinkers with a highly trained eye. That’s why it’s tempting sometimes to jump straight to the visual UI design stage when starting a new project, and one of the reasons why we may be bored by some other tasks.
This also means that we often postpone (or, worse yet, neglect) other important parts of our process and workflow: defining user needs and goals, sketching task flows, working on all the details of the information and interaction design, and so on. These are critically important, too, yet they are more abstract, and many people find it difficult to visualize how they will become a tangible part of the final product.
When we’re working on a visual design, the so-called pixel-perfect philosophy could be a trap that makes us spend more time than necessary crafting the little details until even the smallest of them is in the “perfect” place in the interface. This leads to a generation of designers who use Dribbble and Behance9 mainly to show polished screens of apps and websites and who are more concerned with looks than with how a design actually works. And in the real world, things tend not to go as well as we expect.
Or, as Paul Adams10 puts it:
I see designer after designer focus on the fourth layer without really considering the others. Working from the bottom up rather than the top down. The grid, font, colour, and aesthetic style are irrelevant if the other three layers haven’t been resolved first. Many designers say they do this, but don’t walk the walk, because sometimes it’s just more fun to draw nice pictures and bury oneself in pixels than deal with complicated business decisions and people with different opinions. That’s fine, stay in the fourth layer, but that’s art not design. You’re a digital artist, not a designer.
Personally, I think the best designs (when speaking about user interface design) are the ones that not only look and feel good, but also respond elegantly to variable conditions and even unpredictable situations.
In the long road of building a product, there are phases when designers need to be more collaborative and less focused on the visual design. And this is precisely what I’m going to focus on, for the sake of this article’s length (I don’t want you to fall asleep at the keyboard!). In the next few paragraphs, I’ll give you some hints and tips on how to put that app design you are working on to the test, and to see whether it’s ready to be released into the wild.
When I was studying graphic design in college, they taught us about the beauty of balance, alignment, proportion and tension and how to position elements in space in such a way that they are harmonious and pleasing to the eye. With this knowledge, my life changed and I started to look at the world with different eyes. Later on, I started designing interfaces and I tried to put those same principles into action — all of the information on the screen should form a visual composition that’s highly satisfying to look at.
If we apply these principles to mobile app design, we’d find that we must display just the right amount of information. For example, if a screen has to list people’s names, the designer will usually select a few short and common ones and arrange them together perfectly — leaving no room for an unexpectedly long name that could break the design or make it fall apart later.
This approach is based on the assumption that there is no beauty in chaos and imperfection — even though these two aspects appear frequently in the real world. But visual interfaces are not static pieces of art to be admired; they are dynamic, functional spaces that change and adapt for each person using them. We should not succumb to the temptation to design purely for aesthetics, because we can never control everything an interface must present to (and also do for) people.
Instead, we must design for change! This is what the Japanese call wabi-sabi11, a “worldview centered on the acceptance of transience and imperfection.”
Because of this, it’s important to think and design differently:
When you try to present data in a few ways, including some unpredictable ones, you will be able to test whether the interface is ready to handle these situations that are beyond the design’s “comfort zone.” Also, be prepared for extreme cases (when none or a lot of information is present, for example), and try to avoid the “centers” (when everything looks good and balanced).
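Those extreme cases can even be generated mechanically. Here is a hypothetical helper for stress-testing a single text field with unpredictable content: the empty case, the comfortable case, and deliberately awkward extremes.

```javascript
// Produce edge-case strings for a text field (a sketch; the
// specific cases worth testing depend on your design).
function stressCases(typical, maxLength) {
  return [
    '',                                     // no data at all
    typical,                                // the comfortable case
    'W'.repeat(maxLength),                  // widest glyph, right at the limit
    typical + ' ' + 'x'.repeat(maxLength),  // overflow: must truncate or wrap
  ];
}
```

Dropping each of these into a mockup or prototype quickly reveals whether the layout survives outside its "comfort zone".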
If you have already launched the product, this will be easier because you can pay attention to the real data12 and use it in your ongoing design process as a reference. But if you are working on something new, then you will have to dig a bit deeper, do some research and try to understand how (and what kind of) information will be presented later on. You can also talk about this with a developer from your back-end team, who will be able to better explain to you what kinds of data will be stored and presented.
I’ll give you one last, more graphic example with something that a developer friend16 of mine calls “the pretty friend syndrome.” When we are designing a screen that will contain pictures of people, like user profiles, we tend to use nearly-stock photos of people who look good and fit well within the design. Yet when he sees such designs, my friend says, “I wish I had friends so handsome.”
So, an alternative to “perfect” imagery could be to use more random photos17 of people with varying colors. That way, you will be able to test how overlaid elements look with different kinds of backgrounds, allowing you to see whether contrast and legibility are still intact.
We are optimists by nature about how an app is going to work. We assume that everything will go quickly and smoothly and without interruption because… why not? That’s why we sometimes forget to design for, and handle, some of the potentially not-so-good situations that the user might face later on.
Just to name a few, what would happen if suddenly the Internet connection drops? Or what if there’s an error while the browser is trying to connect to the API when executing a task? And if the connection is too slow, will there be a loading indicator (such as a spinner or progress bar), or will there be some placeholders to (temporarily) fill the display blocks while the actual data is being loaded? And what about the possibility of refreshing certain screens of the app? When (and in which cases) would this be possible?
As you can see, I’m not talking about errors made by the user (for example, making a mistake when filling a form), but about errors that are out of their control but that happen nevertheless. In this case, it’s even more important to talk to developers to know and understand what could go wrong on different screens, and then to devise an approach that could get the user out of trouble easily, giving them the option to try again later or to perform a different action.
In any case, it’s always a good idea to identify the specific conditions that trigger each error and to design a helpful error message for each individual case — one that helps the user respond appropriately and know what to do next to fix the problem. Even if it’s somewhat tempting, avoid a generic error message at all costs.
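In code, this often boils down to a simple mapping from detectable failure conditions to specific copy (the conditions and messages below are hypothetical examples):

```javascript
// Each failure the developers can detect gets its own actionable
// message; the generic message is a last resort only.
const errorMessages = {
  offline: 'You appear to be offline. Check your connection and pull to retry.',
  timeout: 'The server is taking too long. Tap to try again.',
  'server-error': 'We had a problem on our side. Please try again in a moment.',
};

function messageFor(condition) {
  return errorMessages[condition] ||
    'Something went wrong. Please try again.'; // generic fallback
}
```

The mapping doubles as a checklist: every condition the developers can distinguish deserves a row, and every row deserves copy written by the design team rather than a default string.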
An interface comprises many elements that together form the whole layout of the application. However, when we focus on the user interface as a whole, we often forget that some elements also have smaller tasks to perform that contribute to the general goal.
Speaking of goals, this is like football (or soccer, if you happen to live in the US). You see, I’m a big fan of this sport, like most people in Argentina, where I’m from. In order for the team as a whole to win, the coach needs to know what to expect from each player and what task they will perform at different moments — even when the unpredictability of some of them (I’m thinking about the magic of Messi23) can make this harder.
Moving forward and (hopefully) forgetting about my sports analogies, let’s translate this to our design cases. If there is a button or item that triggers some kind of interaction, then look ahead and think about the next step: Will a loading state be displayed while the action is being performed? Could it be disabled for some reason? What if the user is holding down the button for a while; will there be any feedback? Just as there are different states for whole screens26, the same should apply to individual elements as well.
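One lightweight way to pin down those element-level states is a tiny state machine. The states and events below are hypothetical, but the pattern is the point: every (state, event) pair has a defined outcome, which keeps feedback predictable for both the user and the developer implementing it.

```javascript
// A submit button's states and the events that move between them.
const transitions = {
  idle:     { press: 'loading' },
  loading:  { success: 'idle', failure: 'error' },
  error:    { press: 'loading' },   // retry
  disabled: {},                     // ignores every event
};

// Unknown events leave the state unchanged rather than crashing.
function next(state, event) {
  return (transitions[state] || {})[event] || state;
}
```

Writing the table out forces the questions from the paragraph above to be answered explicitly: what a held-down or disabled button does is no longer left to chance.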
In addition, remember to consider how the logic of the product matches the user’s mental model, helping them to accurately and efficiently achieve their goal and to complete their tasks in a meaningful and predictable way.
What I do to address all of these points is to just stop what I’m doing, pause, step back and look at the bigger picture of the entire flow of multiple screens and states over a sequence of steps and actions. I’ll look for the multiple paths that lead to that point, and the multiple paths that lead away from it.
You can do the same while using a prototype, performing the actions slowly, conscientiously and carefully. If this is too challenging for you — because you have probably done this several times before and it’s become kind of an automated task now — borrow some fresh eyes (not literally, of course!) and simply ask a colleague, friend or active user to look at the design or prototype. Seeing someone else use and interact with your design can be illuminating, because we are often too close to and too familiar with it and, so, can overlook things.
When I’m designing, I usually have my phone next to me, so that I can preview my work and make adjustments in real time. To do this, I use Mira27, Crystal28 or Sketch Mirror29, depending on whether I’m designing for Android or iOS.
I think this is a good practice, but this way it’s also easy to forget about all of the other phones different from yours that people may be using. A lot of different screen sizes are out there (especially on the Android platform); try to consider all possible variations.
One way of knowing where to start is to check what kinds of devices your actual users have.
And when preparing your design for those various screen sizes and orientations, it’s not just about stretching boxes and repositioning elements. Carefully consider how to make the most of each situation and, furthermore, how to make the necessary adjustments even when it means deviating a bit from the original design.
In these cases, the same principles that we’ve discussed before still apply: unpredictable situations, different kinds of content, variable amounts of information, missing data and so on — you have to design for all kinds of possible scenarios. Don’t fall into the trap of designing screens as separate, individual parts of the product — they are all connected.
This will be helpful not only to you, but also to your developer friend, who will need to know many of the possible scenarios in order to write the code and prepare the interface to tackle these situations.
You may have noticed that the goal of many of the points in this article is to reduce the unexpected. Even so, there will be many situations in which you won’t have a clear answer. Developers will often ask, “So, what would happen if I do this instead of that?” — pointing to a potential outcome that you hadn’t considered before.
If this happens, then you will have to solve that particular issue for only one case and only one screen. But always try to think globally, and consider how the answer to that particular problem could be designed to work in a flexible way so that you could potentially reuse it later on.
After all, this is what we, UX designers, do — we design and define flexible systems that adapt to unanticipated states, conditions and flows. Think of your interface as a living ecosystem of moving, changing smart parts, instead of a collection of individual blocks of pixels.
During this part of the process, you’ll need to work very closely with the developers on your team, mostly to define a set of behavior rules for many different situations. But try not to over-design things, and set your own limits with a dash of common sense. You need to strike a good balance between functionality and consistency. Remember that a good design system is flexible and is prepared for some exceptions to the rules on certain occasions.
On the other hand, think of how elements you have already designed could be tweaked to fit new situations. You will see this better if you make a library of design components, so that, with just a quick overview of the library, you will know whether you need to design something from scratch or you can use something readymade.
If you want to prepare your designs to face the unpredictable and the unknown — and if you want to also, hopefully, get along better with your lead developer by providing them with everything they will need beforehand — here are some final tips.
One of my best experiences with a developer so far (hi, Pier35!) came about simply from sitting right next to him. This is very important, and it will make a huge difference because it will improve communication. Talk often in order to better understand the product and how it works and what they will need to learn from you. Ask, and ask over and over again, in order to see the bigger picture of all possible outcomes.
Get involved in your own design in a conscious, critical way, paying attention to details and small interactions. Get involved and commit yourself. At every step, think of the purpose of each element, which interactions define it and what would happen if something either goes well or goes wrong.
See how things behave in circumstances different from the ones you have while working on the design, beyond the design’s comfort zone. Leave your desk and speak with the actual people who are using (or will be using) the app. And, if possible, bring along others from your team with you. This is also very important — everybody has to be in touch with the real world, so that you all understand better the situations of real users.
The key to good communication is to understand each other well. We sometimes use fancy words to sound “smarter” or to justify our work, but more important is for everybody on the team to be on the same page. Mutual understanding is key here.
Developers have jargon of their own, but most of the time you will be talking about the same things, only with different words. For example, what you call a “screen” is a “view” to them, and what you call a “button” is a “control,” and so on. So, try to align and agree on the terminology that you will be using, to make the exchange of information easier.
This is just as important when you’re talking to product managers and business partners. Designers need to be “multilingual” and understand everyone.
Did I mention that I am a big advocate of design systems? Using a component library, I can design a new screen in literally five minutes (don’t tell my boss!), because I already have what I need to make it. Sometimes you will need to define those components the first time they arise, but they can be reused later for similar cases, not just to save the day in an emergency. Even though it might look like a waste of time in the beginning, it really will pay off in the long run.
Note: This article is not focused on components or pattern libraries, but here’s an excellent (and very detailed) read that I recommend in case you would like to learn more: “Taking Pattern Libraries to the Next Level4636.”
Last but not least, don’t reinvent the wheel unless you have to. By taking advantage of common patterns and elements of the operating system and the app itself, you will design faster, the developers will build the screens more easily and, finally, the user’s learning curve will be less steep.
Do some design and prototyping even before you know whether the idea is feasible. It’s always better to show what you have in mind with a (hopefully) working live prototype as a means of communication.
Responding to a tangible proposal is always easier for people than imagining something theoretical. Don’t fall in love with your idea too early because it could be easily dismissed, but at least the prototype will help everyone to see what you’re talking about. (I wrote a little something about prototyping and prototyping tools not long ago: “Choosing the Right Prototyping Tool41”.)
Having a clearly defined problem with an elegant solution based on a design system will make the visual design part of our work even more fun, because we can focus on the refinements, polish and delight of the interface, without having to iterate endlessly. When we jump to the visuals too soon, we have to solve the problem and craft the interface at the same time, which often leads to frustration and burnout.
Changing your workflow might be challenging in the beginning, but after a while you will enjoy working within the constraints. This will also transform the way you think, and hopefully help you to move away from focusing on the visual details. You will become a more complete and capable UX designer, using the appropriate deliverables, and not just churning out an endless stream of visual mockups and compositions.
Good luck, dear reader! Do tell me what your thoughts are in the comments below, or ping me on Twitter42, I’d like to hear your feedback!
(mb, al)
Pull-to-refresh is one of the most popular gestures in mobile applications right now. It’s easy to use, natural and so intuitive that it is hard to imagine refreshing a page without it. In 2010, Loren Brichter created Tweetie, one of numerous Twitter applications. Diving into the pool of similar applications, you won’t see much difference among them; but Loren’s Tweetie stood out then.
It was one simple animation that changed the game — pull-to-refresh, an absolute innovation for the time. No wonder Twitter didn’t hesitate to buy Tweetie and hire Loren Brichter. Wise choice! As time went on, more and more developers integrated this gesture into their applications, and finally, Apple itself brought pull-to-refresh to its system application Mail, to the joy of people who value usability.
Today, most clients wish to see this gesture in their apps, and most designers want to create prototypes with integrated pull-to-refresh animation, preferably a custom one. This tutorial explains how to build a prototype in Flinto3, a tool that makes swipe-gesture animation possible, and obviously you cannot create a pull-to-refresh animation without a pull. However, it would be fair to say that Flinto is not the only tool that gives us the swipe gesture — Facebook Origami and POP are worth mentioning. After we create a prototype, we will code it into our design of an Android application.
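Before diving into the tools, it helps to see how little logic the gesture itself involves. The sketch below is a platform-agnostic JavaScript illustration (not Flinto output and not the Android code from this tutorial): track how far the user has pulled, and derive the state from the distance and whether the finger has been released. The 120px threshold is an assumption for illustration.

```javascript
const THRESHOLD = 120; // pull distance (px) that triggers a refresh (assumed)

// Map a pull gesture to one of four states:
// pulling, release-to-refresh, refreshing, idle.
function pullState(pullDistance, released) {
  if (!released) {
    return pullDistance >= THRESHOLD ? 'release-to-refresh' : 'pulling';
  }
  return pullDistance >= THRESHOLD ? 'refreshing' : 'idle';
}
```

Every pull-to-refresh implementation, whatever the platform, is essentially this decision plus an animation layered on top of each state.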
This tutorial will help you master Flinto, understand the logic of creating prototypes of this kind, and learn the process of coding these prototypes in your application. To follow the steps, you will need macOS, Sketch for Mac4, Flinto for Mac5 to create the prototype, and Android Studio6 and JDK 7+7 to write the code.
For the prototype, I am using screens of ChatBoard12, an Android chat application by Erminesoft13. The list of user chat rooms would be a perfect place to integrate a refresh animation to check new messages. Let’s begin!
We’ll make all of the designs in Sketch14. For the first step, we’ll need to create one screen with any list of items the user will be able to refresh. Now we need to export the screen to Flinto. We have two options here:
Let’s move to Flinto for Mac, which you can buy for $99, or you can download a free trial on the website16. To make a simple pull-to-refresh animation, we need five screens. At this point, we can add a custom image or use standard Flinto forms (a rectangle or circle) to create the animated element. For this project, I am using three standard circles. Stop right there: Don’t search for a circle form. Use a rectangle (R), make a square out of it, and set a maximum corner radius. There you go — you’ve got a circle!
The first animation frame requires a separate layer with the list of content. Behind it, we’ll place the animated element in the starting position; in our case, there will be three circles placed on the same X and Y coordinates. That’s screen 1.
On screen 2, we need to move the content down the Y-axis, revealing the animated element hidden behind the list of content.
Additionally at this step (and all following steps), the transition timer (“Timer Link”) should be turned on and set to 0 milliseconds, to eliminate any lag in transition to the next animation screen. Just click on an artboard title to see the timer transition settings.
The previous screen (screen 2) shows only one circle, but remember that three circles are placed at the same X and Y coordinates. At this point (screen 3), our task is to move one of the circles 30 pixels left along the X-axis, and another circle 30 pixels right along the X-axis. Don’t forget to set the transition timer to 0 milliseconds.
Let’s move on to screen 4. Repeat step 5, but move the circles by the same 30 pixels along the Y-axis instead of the X-axis. The X coordinates of all of the elements should be the same and center-aligned. Don’t forget about the transition timer.
Copy screen 2 for the new screen 5. At this step, all we need to do is change the timer link’s target not to screen 3 but to our home screen.
All of the preparations are done, and we can now move to the animations. Create a new transition: select the layer of content on the home screen, press F, and link it to screen 2. (By the way, the key F refers to the name of the program itself, “Flinto.” It is its signature key.)
Apply the following transition settings:
Now we get to the custom transition animation section. The first thing to do here is to lay one screen above the other. This creates the impression of a single animated screen, even though technically there are two.
At this point, we need to set the connections between elements throughout the screens in order for the program to associate them. For example, the element named “Circle-1” on the home screen is the same object on all of the screens. We just need to select two identical elements and click “Connect Layers.”
We have to connect all identical elements in this way for our “New Transition.” You can try out various kinds of animations in the “Effects” section, but in this particular case, I advise you to use “Spring,” to make our circles bounce.
Click “Save & Exit.” Now we need to select this transition type for all of the transitions in our project, including our timers.
(An interesting fact: In Principle, the prototyping tool, layers are connected automatically when the program finds two elements with identical names. I find the automatic connection more convenient for those who keep Sketch’s layers’ names in order. Flinto is a better choice for the lazy ones who prefer to connect all animated elements while creating a prototype.)
Additionally, to achieve a more realistic effect, you can make the refreshed screen show an update or an additional item.
In case things don’t go as expected when you follow this tutorial, simply download the related Flinto template35.
Despite the simplicity of this animation, it delivers surprising dynamics and responsiveness to the prototype. It also gives a feeling of product completeness, which is necessary for making a prototype feel as product-like as possible.
Prototyping is a crucial stage in application development, not only impressing the client and verifying the design concept, but also helping to establish a hand-off process between the designers (who create the animations) and the developers (who implement them). Prototypes can become a valuable asset of communication between team members because they ensure that coders understand the project’s specifications and can implement the designer’s custom animations.
Now, let’s proceed to code our prototype in a Java application for Android mobile devices.
The whole process of creating a custom PullToRefreshListView for Android involves only three steps: creating the animated element, creating the list item, and binding everything together in a custom ListView. First, go to the drawable folder in our project, and create a file named point.xml with the following content:
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:dither="true"
    android:shape="oval">
    <gradient
        android:endColor="#ffff6600"
        android:gradientRadius="10dp"
        android:startColor="#ffffcc00"
        android:type="radial"
        android:useLevel="false" />
    <size
        android:height="10dp"
        android:width="10dp" />
</shape>
The element has now been formed. The next step is to build the animated movement of these elements. Let’s jump to the anim folder (or create it if it’s absent), and add two files, named left_step_anim.xml and right_step_anim.xml.

The following code listing is for left_step_anim.xml:
<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android"
    android:duration="800"
    android:repeatMode="restart">
    <translate
        android:fromXDelta="0%"
        android:toXDelta="-50" />
    <translate
        android:startOffset="800"
        android:toYDelta="50"
        android:toXDelta="50" />
    <translate
        android:startOffset="1600"
        android:toYDelta="-50" />
</set>
The following code should be placed in right_step_anim.xml:
<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android"
    android:duration="800">
    <translate
        android:fromXDelta="0%p"
        android:toXDelta="50" />
    <translate
        android:startOffset="800"
        android:toYDelta="-50"
        android:toXDelta="-50" />
    <translate
        android:startOffset="1600"
        android:toYDelta="50" />
</set>
Now we need to add our animation to the markup. Browse to the layout folder, and create a file named ptr_header.xml, with the following code:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:gravity="center_horizontal">

    <RelativeLayout
        android:id="@+id/ptr_id_header"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:gravity="center_vertical"
        android:padding="5dp">

        <ImageView
            android:id="@+id/point"
            android:paddingTop="30dp"
            android:paddingBottom="30dp"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_centerHorizontal="true"
            android:contentDescription="point"
            android:scaleType="fitCenter"
            android:src="@drawable/point" />

        <ImageView
            android:id="@+id/point2"
            android:paddingTop="30dp"
            android:paddingBottom="30dp"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_centerHorizontal="true"
            android:contentDescription="point"
            android:scaleType="fitCenter"
            android:src="@drawable/point" />

        <ImageView
            android:id="@+id/point3"
            android:paddingTop="30dp"
            android:paddingBottom="30dp"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:contentDescription="point"
            android:scaleType="fitCenter"
            android:src="@drawable/point"
            android:layout_alignTop="@+id/point"
            android:layout_alignLeft="@+id/point"
            android:layout_alignStart="@+id/point" />
    </RelativeLayout>
</LinearLayout>
The file we’ve created will serve as the animated element. Let’s proceed to the second step.
We need to create an item to use in our custom ListView. To do this, navigate to the layout folder and create a file named list_item.xml, containing one TextView element. This is what it should look like:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content">

    <TextView
        android:id="@+id/textView1"
        android:layout_width="wrap_content"
        android:layout_height="fill_parent"
        android:padding="5dp"
        android:singleLine="true"
        android:textColor="@android:color/black"
        android:textSize="10pt" />
</LinearLayout>
Now, let’s add a file with our ListView to the layout folder. In our case, it is a file named main.xml in a folder named layout. It should read as follows:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:orientation="vertical">

    <erminesoft.com.listreload.PullToRefreshListView
        android:id="@+id/pull_to_refresh_listview"
        android:layout_height="fill_parent"
        android:layout_width="fill_parent"
        android:background="@android:color/white"
        android:cacheColorHint="@android:color/white" />
</LinearLayout>
The custom ListView has now been created.
The last step of this process involves binding together all of the elements above. We need to create two classes. The first class is named PullToRefreshListViewSampleActivity and is used when launching the application. The second class, PullToRefreshListView, will contain our element.
In the PullToRefreshListViewSampleActivity class, our attention is on the onRefresh() method, which is wired up inside the onCreate() method. This method is exactly where all of the ListView refreshing magic will happen. Because this is an example, we’ve added our own test data with the loadData() method of the internal class PullToRfreshListSampleAdapter. The remaining code of the PullToRefreshListViewSampleActivity class is relatively simple.
Let’s move on to the PullToRefreshListView class. Because the main functionality is built on the standard ListView, we’ll add extends ListView to its declaration. The class is quite simple, yet the animation involves a few constants that were arrived at by experimentation. Besides that, the class declares an OnRefreshListener interface with a single onRefresh() method.
Here is the interface declaration:
public interface OnRefreshListener {
    void onRefresh();
}
This method will be used to refresh the ListView. Our class also contains several constructors to create the View element.
public PullToRefreshListView(Context context) {
    super(context);
    init(context);
}

public PullToRefreshListView(Context context, AttributeSet attrs) {
    super(context, attrs);
    init(context);
}

public PullToRefreshListView(Context context, AttributeSet attrs, int defStyle) {
    super(context, attrs, defStyle);
    init(context);
}
The class also includes the onTouchEvent and onScrollChanged event handlers. These are standard solutions that have to be implemented. You will also need the private class HeaderAnimationListener, which handles animation in the ListView.
private class HeaderAnimationListener implements AnimationListener {
    ...
}
There you go! The code is available on GitHub36.
This tutorial is intended to encourage designers and developers to work together to integrate a custom pull-to-refresh animation and to make it a small yet nice surprise for users. It adds a certain uniqueness to an application and shows that the developers are dedicated to creating an engaging experience for the user above all else. It is also the foundation for more complex animations, limited only by your imagination. We believe it’s important to experiment with custom animations, adding a touch of creativity to every project you build!
(da, al, il)
Over the last five years, Node.js1 has helped to bring uniformity to software development. You can do anything in Node.js, whether it be front-end development, server-side scripting, cross-platform desktop applications, cross-platform mobile applications, Internet of Things, you name it. Writing command line tools has also become easier than ever before because of Node.js — not just any command line tools, but tools that are interactive, useful and less time-consuming to develop.
If you are a front-end developer, then you must have heard of or worked on Gulp2, Angular CLI3, Cordova4, Yeoman5 and others. Have you ever wondered how they work? For example, in the case of Angular CLI, by running a command like ng new <project-name>, you end up creating an Angular project with basic configuration. Tools such as Yeoman ask for runtime inputs that eventually help you to customize a project’s configuration as well. Some generators in Yeoman help you to deploy a project in your production environment. That is exactly what we are going to learn today.
In this tutorial, we will develop a command line application that accepts a CSV file of customer information, and using the SendGrid API10, we will send emails to them. Here are the contents of this tutorial:
This tutorial assumes you have installed Node.js on your system. In case you have not, please install it18. Node.js also comes with a package manager named npm19. Using npm, you can install many open-source packages. You can get the complete list on npm’s official website20. For this project, we will be using many open-source modules (more on that later). Now, let’s create a Node.js project using npm.
$ npm init
name: broadcast
version: 0.0.1
description: CLI utility to broadcast emails
entry point: broadcast.js
I have created a directory named broadcast, inside of which I have run the npm init command. As you can see, I have provided basic information about the project, such as name, description, version and entry point. The entry point is the main JavaScript file from where the execution of the script will start. By default, Node.js assigns index.js as the entry point; however, in this case, we are changing it to broadcast.js. When you run the npm init command, you will get a few more options, such as the Git repository, license and author. You can either provide values or leave them blank.
Upon successful execution of npm init, you will find that a package.json file has been created in the same directory. This is our configuration file. At the moment, it holds the information that we provided while creating the project. You can explore more about package.json in npm’s documentation21.
Now that our project is set up, let’s create a “Hello world” program. To start, create a broadcast.js file in your project, which will be your main file, with the following snippet:
console.log('hello world');
Now, let’s run this code.
$ node broadcast
hello world
As you can see, “hello world” is printed to the console. You can run the script with either node broadcast.js or node broadcast; Node.js is smart enough to understand the difference.
According to package.json’s documentation, there is an option named dependencies22 in which you can mention all of the third-party modules that you plan to use in the project, along with their version numbers. As mentioned, we will be using many third-party open-source modules to develop this tool. In our case, package.json looks like this:
{ "name": "broadcast", "version": "0.0.1", "description": "CLI utility to broadcast emails", "main": "broadcast.js", "license": "MIT", "dependencies": { "async": "^2.1.4", "chalk": "^1.1.3", "commander": "^2.9.0", "csv": "^1.1.0", "inquirer": "^2.0.0", "sendgrid": "^4.7.1" } }
As you must have noticed, we will be using Async3923, Chalk4524, Commander3025, CSV3126, Inquirer.js27 and SendGrid3628. As we progress ahead with the tutorial, usage of these modules will be explained in detail.
Reading command line arguments is not difficult. You can simply use process.argv29 to read them. However, parsing their values and options is a cumbersome task. So, instead of reinventing the wheel, we will use the Commander3025 module. Commander is an open-source Node.js module that helps you write interactive command line tools. It comes with very interesting features for parsing command line options, and it has Git-like subcommands, but the thing I like best about Commander is the automatic generation of help screens. You don’t have to write extra lines of code; just parse the --help or -h option. As you start defining various command line options, the --help screen will get populated automatically. Let’s dive in:
$ npm install commander --save
This will install the Commander module in your Node.js project. Running npm install with the --save option will automatically include Commander in the project’s dependencies, defined in package.json. In our case, all of the dependencies have already been mentioned; hence, there is no need to run this command.
var program = require('commander');

program
  .version('0.0.1')
  .option('-l, --list [list]', 'list of customers in CSV file')
  .parse(process.argv);

console.log(program.list);
As you can see, handling command line arguments is straightforward. We have defined a --list option. Now, whatever value we provide after the --list option will get stored in the variable named in brackets, in this case list. You can access it from the program variable, which is an instance of Commander. At the moment, this program only accepts a file path for the --list option and prints it in the console.
$ node broadcast --list input/employees.csv
input/employees.csv
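To appreciate what Commander is doing for us, here is a minimal sketch of parsing the same --list option by hand with process.argv alone. The parseList helper is hypothetical, written purely for illustration; it has none of the validation, short aliases or help screens that Commander provides:

```javascript
// Bare-bones lookup of a long option's value in an argv array.
// process.argv looks like ['node', 'broadcast', '--list', 'file.csv'].
function parseList(argv) {
  const i = argv.indexOf('--list');
  // The option's value is the argument right after the flag, if any.
  return i !== -1 ? argv[i + 1] : undefined;
}

console.log(parseList(['node', 'broadcast', '--list', 'input/employees.csv']));
```

Multiply this by every flag, alias and error message a real tool needs, and the value of a parsing library becomes obvious.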
You must also have noticed a chained method that we have invoked, named version. Whenever we run the command with --version or -V as the option, whatever value is passed into this method will get printed.
$ node broadcast --version
0.0.1
Similarly, when you run the command with the --help option, it will print all of the options and subcommands defined by you. In this case, it will look like this:
$ node broadcast --help

  Usage: broadcast [options]

  Options:

    -h, --help         output usage information
    -V, --version      output the version number
    -l, --list <list>  list of customers in CSV file
Now that we are accepting file paths from command line arguments, we can start reading the CSV file using the CSV3126 module. The CSV module is an all-in-one solution for handling CSV files: from creating a CSV file to parsing it, you can achieve almost anything with this module.
Because we plan to send emails using the SendGrid API, we are using the following document as a sample CSV file. Using the CSV module, we will read the data and display the name and email address provided in the respective rows.
First name | Last name | Email
---|---|---
Dwight | Schrute | dwight.schrute@dundermifflin.com
Jim | Halpert | jim.halpert@dundermifflin.com
Pam | Beesly | pam.beesly@dundermifflin.com
Ryan | Howard | ryan.howard@dundermifflin.com
Stanley | Hudson | stanley.hudson@dundermifflin.com
Now, let’s write a program to read this CSV file and print the data to the console.
const program = require('commander');
const csv = require('csv');
const fs = require('fs');

program
  .version('0.0.1')
  .option('-l, --list [list]', 'List of customers in CSV')
  .parse(process.argv);

let parse = csv.parse;
let stream = fs.createReadStream(program.list)
  .pipe(parse({ delimiter: ',' }));

stream
  .on('data', function (data) {
    let firstname = data[0];
    let lastname = data[1];
    let email = data[2];
    console.log(firstname, lastname, email);
  });
Using the native File System32 module, we are reading the file provided via command line arguments. The File System module comes with predefined events, one of which is data, which is fired when a chunk of data is being read. The parse method from the CSV module splits the CSV file into individual rows and fires multiple data events. Every data event sends an array of column data. Thus, in this case, it prints the data in the following format:
$ node broadcast --list input/employees.csv
Dwight Schrute dwight.schrute@dundermifflin.com
Jim Halpert jim.halpert@dundermifflin.com
Pam Beesly pam.beesly@dundermifflin.com
Ryan Howard ryan.howard@dundermifflin.com
Stanley Hudson stanley.hudson@dundermifflin.com
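Conceptually, the parser turns each line of the file into the array that the data event carries. The following simplified sketch illustrates that idea only; it is not the csv module, and a real parser must also handle quoted fields and escaped delimiters:

```javascript
// Naive split of one CSV line on the delimiter; the resulting array has
// the same shape as the `data` argument in the event handler above.
function parseLine(line, delimiter) {
  return line.split(delimiter);
}

const row = parseLine('Dwight,Schrute,dwight.schrute@dundermifflin.com', ',');
const [firstname, lastname, email] = row;
console.log(firstname, lastname, email);
```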
Now we know how to accept command line arguments and how to parse them. But what if we want to accept input during runtime? A module named Inquirer.js33 enables us to accept various types of input, from plain text to passwords to a multi-selection checklist.
For this demo, we will accept the sender’s email address and name via runtime inputs.
…
let questions = [
  {
    type: "input",
    name: "sender.email",
    message: "Sender's email address - "
  },
  {
    type: "input",
    name: "sender.name",
    message: "Sender's name - "
  },
  {
    type: "input",
    name: "subject",
    message: "Subject - "
  }
];

let contactList = [];

let parse = csv.parse;
let stream = fs.createReadStream(program.list)
  .pipe(parse({ delimiter: "," }));

stream
  .on("error", function (err) {
    return console.error(err.message);
  })
  .on("data", function (data) {
    let name = data[0] + " " + data[1];
    let email = data[2];
    contactList.push({ name: name, email: email });
  })
  .on("end", function () {
    inquirer.prompt(questions).then(function (answers) {
      console.log(answers);
    });
  });
First, you’ll notice in the example above that we’ve created an array named contactList, which we’re using to store the data from the CSV file.

Inquirer.js comes with a method named prompt34, which accepts an array of questions that we want to ask during runtime. In this case, we want to know the sender’s name and email address and the subject of their email. We have created an array named questions in which we are storing all of these questions. This array accepts objects with properties such as type, which could be anything from an input to a password to a raw list. You can see the list of all available types in the official documentation35. Here, name holds the name of the key against which user input will be stored. The prompt method returns a promise object that eventually invokes a chain of success and failure callbacks, which are executed when the user has answered all of the questions. The user’s response can be accessed via the answers variable, which is sent as a parameter to the then callback. Here is what happens when you execute the code:
$ node broadcast -l input/employees.csv
? Sender's email address - michael.scott@dundermifflin.com
? Sender's name - Michael Scott
? Subject - Greetings from Dunder Mifflin
{ sender:
   { email: 'michael.scott@dundermifflin.com',
     name: 'Michael Scott' },
  subject: 'Greetings from Dunder Mifflin' }
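Notice how the dotted names sender.email and sender.name ended up as a nested sender object in the output. Here is a small sketch of that mapping, using a hypothetical setPath helper written only for illustration (Inquirer does this internally):

```javascript
// Assign a value into an object along a dotted path, creating
// intermediate objects as needed: 'sender.email' -> { sender: { email } }.
function setPath(obj, path, value) {
  const keys = path.split('.');
  const last = keys.pop();
  let node = obj;
  for (const k of keys) {
    node = node[k] = node[k] || {};
  }
  node[last] = value;
  return obj;
}

const answers = {};
setPath(answers, 'sender.email', 'michael.scott@dundermifflin.com');
setPath(answers, 'sender.name', 'Michael Scott');
setPath(answers, 'subject', 'Greetings from Dunder Mifflin');
console.log(JSON.stringify(answers));
```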
Now that we can read the recipient’s data from the CSV file and accept the sender’s details via the command line prompt, it is time to send the emails. We will be using SendGrid’s API to send email.
…
let __sendEmail = function (to, from, subject, callback) {
  let template = "Wishing you a Merry Christmas and a " +
    "prosperous year ahead. P.S. Toby, I hate you.";
  let helper = require('sendgrid').mail;
  let fromEmail = new helper.Email(from.email, from.name);
  let toEmail = new helper.Email(to.email, to.name);
  let body = new helper.Content("text/plain", template);
  let mail = new helper.Mail(fromEmail, subject, toEmail, body);

  let sg = require('sendgrid')(process.env.SENDGRID_API_KEY);
  let request = sg.emptyRequest({
    method: 'POST',
    path: '/v3/mail/send',
    body: mail.toJSON(),
  });

  sg.API(request, function (error, response) {
    if (error) {
      return callback(error);
    }
    callback();
  });
};

stream
  .on("error", function (err) {
    return console.error(err.response);
  })
  .on("data", function (data) {
    let name = data[0] + " " + data[1];
    let email = data[2];
    contactList.push({ name: name, email: email });
  })
  .on("end", function () {
    inquirer.prompt(questions).then(function (ans) {
      async.each(contactList, function (recipient, fn) {
        __sendEmail(recipient, ans.sender, ans.subject, fn);
      });
    });
  });
In order to start using the SendGrid3628 module, we need to get an API key. You can generate this API key from SendGrid’s dashboard37 (you’ll need to create an account). Once the API key is generated, we will store it in an environment variable named SENDGRID_API_KEY. You can access environment variables in Node.js using process.env38.
In the code above, we are sending asynchronous email using SendGrid’s API and the Async3923 module. The Async module is one of the most powerful Node.js modules. Handling asynchronous callbacks often leads to callback hell40: there comes a point when there are so many asynchronous calls that you end up writing callbacks within callbacks, often with no end in sight. Handling errors gets complicated even for a JavaScript ninja. The Async module helps you to overcome callback hell, providing handy methods such as each41, series42, map43 and many more. These methods help us write code that is more manageable and that, in turn, reads as if it were synchronous.
In this example, rather than sending a synchronous request to SendGrid, we are sending an asynchronous request in order to send an email. Based on the response, we’ll send subsequent requests. Using the each method of the Async module, we are iterating over the contactList array and calling a function named __sendEmail. This function accepts the recipient’s details, the sender’s details, the subject line and the callback for the asynchronous call. __sendEmail sends emails using SendGrid’s API; you can explore more about the SendGrid module in the official documentation44. Once an email is successfully sent, the asynchronous callback is invoked, which passes control on to the next object in the contactList array.
That’s it! Using Node.js, we have created a command line application that accepts CSV input and sends email.
Now that our application is ready to send emails, let’s see how we can decorate the output, such as errors and success messages. To do so, we’ll use the Chalk4524 module, which is used to style command line output.
…
stream
  .on("error", function (err) {
    return console.error(err.response);
  })
  .on("data", function (data) {
    let name = data[0] + " " + data[1];
    let email = data[2];
    contactList.push({ name: name, email: email });
  })
  .on("end", function () {
    inquirer.prompt(questions).then(function (ans) {
      async.each(contactList, function (recipient, fn) {
        __sendEmail(recipient, ans.sender, ans.subject, fn);
      }, function (err) {
        if (err) {
          return console.error(chalk.red(err.message));
        }
        console.log(chalk.green('Success'));
      });
    });
  });
In the snippet above, we have added a callback function while sending emails, and that function is called when the asynchronous each loop either completes or is broken by a runtime error. Whenever the loop does not complete, it sends an error object, which we print to the console in red. Otherwise, we print a success message in green.
If you go through Chalk’s documentation, you will find many options for styling this output, including a range of console colors (magenta, yellow, blue and so on), as well as underlined and bold text.
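Under the hood, Chalk wraps text in ANSI escape sequences. Assuming the standard ANSI color codes (31 for red, 32 for green, 39 to reset the foreground color), the two styles used above boil down to something like this:

```javascript
// Minimal equivalents of chalk.red and chalk.green, built from
// raw ANSI escape codes: open with a color, close with the reset code.
const red = (s) => '\u001b[31m' + s + '\u001b[39m';
const green = (s) => '\u001b[32m' + s + '\u001b[39m';

console.log(red('Error: something went wrong'));
console.log(green('Success'));
```

Chalk adds terminal capability detection, nesting and a far richer API on top of this, which is why it is worth the dependency.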
Now that our tool is complete, it is time to make it executable like a regular shell command. First, let’s add a shebang46 at the top of broadcast.js, which will tell the shell how to execute this script.
#!/usr/bin/env node

const program = require("commander");
const inquirer = require("inquirer");
…
Now, let’s configure the package.json to make it executable.
… "description": "CLI utility to broadcast emails", "main": "broadcast.js", "bin" : { "broadcast" : "./broadcast.js" } …
We have added a new property named bin47, in which we have provided the name of the command from which broadcast.js will be executed.
Now for the final step. Let’s install this script at the global level so that we can start executing it like a regular shell command.
$ npm install -g
Before executing this command, make sure you are in the same project directory. Once the installation is complete, you can test the command.
$ broadcast --help
This should print all of the available options, the same as executing node broadcast --help. Now you are ready to present your utility to the world.
One thing to keep in mind: during development, any change you make in the project will not be visible if you simply execute the broadcast command with the given options. If you run which broadcast, you will see that the path of broadcast is not the same as the project path in which you are working. To fix this, simply run npm link in your project folder. This will automatically establish a symbolic link between the executable command and the project directory. Henceforth, whatever changes you make in the project directory will be reflected in the broadcast command as well.
The scope of these kinds of CLI tools goes well beyond JavaScript projects. If you have some experience with software development and IT, then Bash tools48 have probably been a part of your development process. From deployment scripts to cron jobs49 to backups, you can automate almost anything with Bash. In fact, before Docker50, Chef51 and Puppet52 became the de facto standards for infrastructure management, Bash was the savior. However, Bash scripts have always had some issues: they do not fit easily into a modern development workflow. Usually, we use anything from Python to Java to JavaScript, and Bash has rarely been a part of core development. Even writing a simple conditional statement in Bash requires going through endless documentation and debugging.
However, with JavaScript, this whole process becomes simpler and more efficient. All of the tools automatically become cross-platform. If you want to run a native shell command such as git, mongodb or heroku, you can do that easily with the Child Process53 module in Node.js. This enables you to write software tools with the simplicity of JavaScript.
I hope this tutorial has been helpful to you. If you have any questions, please drop them in the comments section below or tweet me54.
(rb, al, il)
Editor’s Note: Making big changes doesn’t necessarily require big efforts — it’s just a matter of moving in the right direction. We can’t wait for Paul’s new book on User Experience Revolution1 (free worldwide shipping starting from April 18!), and in this article, Paul shares just some of the little tricks and techniques to bring around a big UX revolution into your company — with a series of small, effective steps.
It feels like everywhere I turn somebody is saying that user experience is the next frontier in business, that we have moved beyond the age of features to creating outstanding experiences.
But for many of us who work on in-house teams, the reality feels a million miles away from this. Getting management to understand the importance of user experience2 seems so tough. Even colleagues don’t seem to see the benefit. For those of us in-house, how are we going to get to this golden age of user experience design that people keep promising us?
After all, design-led companies have outperformed the rest of the market by 228% over 10 years3, and 89% of customers say4 they’ve stopped doing business with a company after a bad experience. Why, then, aren’t companies falling over themselves to create a better experience? And why is your job so frustrating at times?
The answer is simple. Change is hard. The world has changed. Digital has changed it, and people’s expectations are higher than ever. People and not companies now have the power. Yet many managers still live in the past, in a world of mass production and mass market.
We need to provide the wake-up call our clients and management need. We need to help them realize the potential of user experience design, its potential to reshape their business and provide a competitive advantage. But how?
First, we need to realize we cannot do it alone.
Sometimes we designers are our own worst enemy. We sit around moaning that nobody gets it, that nobody understands. But that isn’t true. Designers are not the only people who see the value of a great customer experience. Others might not know the term UX, but that doesn’t mean they don’t care.
Marketers and sales people understand; they know that a satisfied customer is the best form of advertising. Customer support staff know; they know that happy customers are less likely to call them with problems. Even the finance people get it; they understand that happy customers are less likely to return products, that they are more likely to make repeat purchases.
Neither are we the only people with insights into the user. Marketers have done their market research. Customer-support staff talk to customers every day.
If we want to bring about change in our organization, we cannot continue to be the lone wolf. We need to find allies. We need to bring together all those who care about the user’s experience.
This was part of the reason why Google’s culture shifted from being engineering focused to embracing the importance of design. Designers across the company started to reach out to each other. They started to talk to one another and united around shared values. You need to do the same.
Take the time to seek out people who share your belief in customer experience and start talking to them. Create a Slack channel or mailing list. Get together over lunch. Keep in touch. Share experiences and ideas.
Of course, chatting over lunch or in Slack isn’t going to bring about change. For that, you need to start a movement. You need to unite the group around a common goal, around a common cause.
There is power in numbers. Management is more likely to listen to you if you have a clear, consistent message.
Once you have formed your group, outline some principles on how the company could work to create a better user experience.
But be careful. This shouldn’t turn into a list of everything you perceive to be wrong with the company. That would do nothing but anger and threaten. Also, managers spend their lives hearing problems from employees. If you want to get their attention, you need to be more positive.
Instead, create a positive set of values that will encourage change, the kind of values that put user needs at the centre of decision-making, values such as “Design with data” and “If in doubt, test.”
There is no shortage of principles to inspire you, from the Government Digital Service in the UK13 to Google14. In fact, Jeremy Keith has kindly compiled a list15 of them over on his website.
On day one, these principles aren’t going to have teeth. Management won’t have bought into them. They aren’t something you can enforce. All of that can come later. For now, they will unite the group and strengthen your resolve.
But most of all, they will give you a clear message to take to the rest of the company, a message that has the customer at its heart.
The next step in your campaign for change is to raise the profile of the customer. It is shocking how little most organizations talk about customer needs — or at least talk about them in any meaningful way.
Take a look at the average office wall. Companies cover them with certificates, product shots, executives shaking hands and motivational posters — all inward-looking, all focusing on the achievements of the organization. Rarely do you see the user.
Meanwhile, they shove the things that give insights into the user in a drawer somewhere: the personas, customer journey maps and user research. Why aren’t these things on the wall? Why don’t you make sure they are?
Instead of lecturing colleagues about users, make it impossible to ignore them. Turn that user research into attractive infographics. Cover the walls of your office with them. Sneak in at night and replace those inward-looking wall hangings. Put personas, data and quotes about the user in their place.
But don’t stop there. Open up those usability sessions that you run (you do run them, don’t you?) and invite anybody to come along. Bribe them with food if you have to, anything to get them to see users firsthand.
Failing that, record those sessions. Edit together the highlights, and distribute the video to colleagues.
Also, start sharing best practices on the user experience. Send out a weekly newsletter. Use it to highlight what others are doing to improve the customer experience. Quote experts and research in the field. Share testimonials from customers you have interviewed.
Consider running lunchtime sessions, presentations in which you share best practices, but also in which you share your research. Once again, lay out some food to encourage people to attend. If the budget allows, bring in the occasional outside speaker, someone who will add some credibility to the proceedings.
In short, create some buzz around the customer experience. Treat it like a product launch or marketing campaign. Get imaginative and make it impossible for your colleagues to escape the user.
But don’t target management. The biggest mistake I see people make is going to management for permission too soon. If you want to win over management, timing is everything. You need momentum and numbers behind you. You also need a clear vision of a better future.
As user experience designers, we tend to be visual people. We find it easy to imagine what could be. But not everybody is like that. Most people need to see the potential, rather than be told.
If we want management to care, we need to excite them. To excite them, we need to show them a better future, something they can see potential in. That is why, before we ever approach management, we need to have something to show.
Employees at Disney had an idea. They envisioned a MagicBand20 that visitors to their parks would wear. The band would allow visitors to pay for anything, unlock their hotel room door and more. It would track one’s position so that Mickey could walk up to a child and wish them a happy birthday by name. It would allow a maître d’ to greet a patron personally as they arrived at the restaurant.
Winning over the executives would be tough. The investment was going to be large. So, they built a prototype. They converted an empty warehouse into a cardboard park. It had rides and hotel rooms and restaurants, everything they needed to show their idea.
They invited along the executives. They strapped paper prototypes of the MagicBand around their wrists. They then guided them around the cardboard park, giving them a sense of what the experience would be like.
By showing the executives, rather than telling them, they created excitement. That is what you need to do. A document or slide presentation isn’t going to do the job.
Build a prototype showing management a better way. Visualize a better user journey. Do whatever it takes to excite them about the potential.
You might have to work evenings to get it done. You might have to squeeze it between other work. But it will be worth it when you finally approach management to get them on board.
You are going to need management’s support if you want to see the company become user-centric. No amount of grassroots change is going to get the job done without them. So, when the moment comes, you need to make sure you don’t blow it.
First, put off talking to management until you have to. The longer you leave it, the better prepared you will be, the more momentum among colleagues will be behind you. Remember that management’s job is to shoot down half-baked ideas, so we want to give them no excuse.
Your prototype or vision of the future will help, but it won’t be enough. Some managers get excited by concepts and potential. Others are more risk-averse and prefer hard numbers. That is why you will want some data to back up your proposal. If possible, test the prototype with real users.
You can further mitigate risk by referring to outside data and experts. Third-party sources add credibility to your argument. Management will also consider them more impartial. That is why many choose to bring in an external consultant at this stage.
As you sit down to talk to management, bear in mind one important thing: They don’t care about the user experience, and no amount of arguing will change that. Tim Cook once said:
Most business models have focused on self interest instead of user experience.
Although you want to change that in the long term, you are not there yet. So, focusing on improvements to the user experience will not convince them.
Instead, we need to learn from Jared Spool. In his article “Why I Can’t Convince Executives to Invest in UX (and Neither Can You)23,” he gets to the heart of the issue:
You can find out what your executives are already convinced of. If they are any good at what they do, they likely have something they want to improve. It’s likely to be related to improving revenues, reducing costs, increasing the number of new customers, increasing the sales from existing customers, or increasing shareholder value. Good UX can help with each of those things.
Whatever you want to do to improve the user experience, frame it with what management cares about. If you don’t know what that is, find out. Dig out that company strategy you never bothered to read. That will tell you exactly what they want to achieve. Now all you need to do is show how your ideas will move the company closer to those goals.
Of course, the big question is, what should you ask management for? What is it you want them to do? This is where things can go wrong.
If you ask them for wholesale change, you will overwhelm them. If you ask for a big investment, they will be hesitant. That would require you to have a very compelling case or a great track record of delivering on big-budget projects.
Instead, start small. Earn their trust and build their confidence. Avoid overwhelming them.
Start by outlining your vision of the future. Get them excited. But to avoid overwhelming them, you need to make the next step easy.
Instead of asking for wholesale change, ask them to take one small step. Ask permission to build a proof of concept, something to show that user experience can make a difference to the business.
A proof of concept is a chance to prove yourself and user experience design to management. It will show how you can make a difference to the business. So, getting the right proof of concept is critical.
There are three considerations in picking a project:
For example, suppose management wants to increase the number of leads the company gets. You might agree to run a project that encourages newsletter signups, but to run it in a way that provides value to the user, rather than tricking people into signing up. This will give you a chance to show that putting the user first provides better returns.
This project would work well because you can tie it to a management goal. It would also be inexpensive to build, and its effectiveness would be easy to measure.
The key to a successful proof of concept is to gather as much evidence of success as possible. Measure relentlessly. Even trial more than one approach for comparison. This will provide the evidence that management needs to be confident in you and in the benefits of user experience. It will give them the confidence to take the next step.
A proof of concept is still only the beginning of the journey. You will need to run many such projects that guide management step by step towards your vision of the future.
Remember that this is a marathon and not a sprint. It will take time. It will be frustrating. But what is so obvious to you isn’t to everybody. They will need to go on that journey at their own pace. We need to stand with them, encouraging folks to take that next step, keeping them focused on the end goal.
That is why we cannot hope to do it alone. We need to unite with others around this common aim and vision of the future. We need to work hard to raise the profile of the customer and to approach management with care. But most of all, we need a plan, a plan that starts with some simple pilot projects to build trust, but also one that shows the importance of user experience.
You might conclude that it is not worth the hassle, that you would prefer to work somewhere that already gets it. That is fine. This role is not for everybody. But if you do persevere and change the culture of your company, you will become invaluable. It is the kind of journey that can make a career and transform a company. From my perspective at least, that is worth it.
(al, il)
In part 11 of this article, we looked at where in the world the new entrants to the World Wide Web are, and some of the new technologies the standards community has worked on to address some of the challenges that the next 4 billion people are facing when accessing the web. In short, we’ve tried to make some supply-side improvements to web standards so that websites can be made to better serve the whole world, not just the wealthy West.
But there are other challenges to surmount, such as ways to get over creaky infrastructure in developing markets (which can be done with stopgap technological solutions, such as proxy browsers), and we’ll also look at some of the reasons why some of the offline billions remain offline, and what can be done to address this.
A common problem people encounter in emerging economies relates to networks. Networks are getting better, but they’re not there yet. In 2016, Ericsson reported2:
While cellular networks have improved… smartphone users are still facing issues as frequently as they did in 2013. Globally, 26 percent of smartphone users say they face video streaming issues daily, increasing to over one third in markets like Brazil, India and Indonesia.
This is an excellent statement of the problem. Infrastructure is expensive to upgrade, especially in countries like Indonesia, which is made up of thousands of islands, and India, which is huge and has vast mountain ranges. And as soon as infrastructure is upgraded, more people come online and want to consume video rather than boring old text, and so much more bandwidth is required, and the newly upgraded network crawls again.
In places where bandwidth is seriously constrained (in congested Asian megacities, not just rural areas; I’m in the heart of an Indian city with an international airport, and power cuts and Internet outages occur daily), a lot of people opt to use proxy browsers. Proxy browsers do a lot of the heavy lifting of rendering web pages on their servers and sending compressed versions down to the user, resulting in often significant reductions in data consumption. This is obviously a very appealing proposition for consumers in territories where bandwidth is expensive. Because the data transferred is smaller, websites render faster, too.
Scientia Mobile reported3 in 2016 that, for global market share of proxy browsers, Opera Mini is at 42%, Opera Turbo is at 9%, Chrome is at 39% and UCWeb is at 6%, and there is also Puffin, Silk and others. Opera reports more than 250 million active monthly users of Opera Mini.
Websites compressed by proxy browsers and sent as binary blobs also have a better chance of getting through congested networks. In 2013, Avendus reported (in “India’s Mobile Internet: The Revolution Has Begun,” no longer online):
In India, only 96k of the 736k cell towers are 3G enabled… only 35k of those towers have a fiber-optic connection to the backbone.
(If you’re a real networking anorak, you’ll find the report 2G Network Issues Observed While in India4 to be even more fun than a game of Werewolf at a Star Wars convention.)
So, if proxy browsers are so great, why doesn’t everyone use them? The answer is that such compression comes at a cost; websites can look very different in a proxy browser, and JavaScript can often behave unexpectedly.
In a proxy browser, everything happens on the server, everything needs user interaction, and everything needs a round-trip to the server. Opera Mini on Android and iOS has two modes; one uses proprietary compression techniques and the device’s standard web view, and thus JavaScript isn’t affected. In Opera Mini’s extreme mode, all of the rendering is done on Opera’s server farms, on which JavaScript is allowed to run for 5 seconds and then is throttled. (Disclosure: I was Deputy CTO of Opera until November 2016 and can talk authoritatively about its technology. However, I have no relationship with the company now.)
Therefore, to make websites work in Opera Mini’s extreme mode, treat JavaScript as an enhancement, and ensure that your core functionality works without it. Of course, it will probably be clunkier without scripts, but if your website works and your competitors’ don’t work for Opera Mini’s quarter of a billion users, you’ll get the business.
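As a minimal sketch of that principle (the `/search` endpoint and the markup here are hypothetical, not from any particular site), the core action lives in plain HTML, and JavaScript only layers an improvement on top:

```html
<!-- Core functionality: a plain form the server can handle with no JavaScript at all. -->
<form action="/search" method="get">
  <label>Search: <input type="search" name="q"></label>
  <button type="submit">Go</button>
</form>

<script>
  // Enhancement: only if scripts run and fetch is supported,
  // intercept the submit and load results without a full page reload.
  var form = document.querySelector('form');
  if (form && window.fetch) {
    form.addEventListener('submit', function (event) {
      event.preventDefault();
      fetch('/search?q=' + encodeURIComponent(form.elements.q.value))
        .then(function (response) { return response.text(); })
        .then(function (html) {
          // Inject the results into the page (details depend on your markup).
        });
    });
  }
</script>
```

In Opera Mini's extreme mode, the script may be throttled or never run, but the form still submits and the search still works.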
Mini’s extreme mode has design constraints, too: it doesn’t do CSS rounded corners or gradients, because it can’t rely upon every client device to draw those things successfully; so, to render it on the server, it would need to convert them into bitmaps, which would bloat the page. But that’s OK; CSS is for design, and people in highly bandwidth-constrained situations are happy to get the words.
Similarly, Mini’s extreme mode doesn’t do CSS or SVG animations, only showing the first frame. The reason for this is that animations consume CPU cycles, and CPU cycles consume battery. Yes, your animations are lovely, but if you are sitting on a bus in Lagos in a traffic jam and need to phone your sister to ask her to pick up your children from school, you need that battery life more than you need pretty animations.
Neither does Mini’s extreme mode download web fonts, which can be huge files that are primarily for aesthetics. On many very small monochrome screens, the system fonts tend to be designed for those screens and work better. If you want icons, use SVG rather than icon fonts because, well, that’s what SVG is for.
The best way to ensure that your website or web app (is there a real difference?) will work for people on proxy browsers, conventional browsers with very slow connections and everyone else is to adopt a revolutionary new development methodology.
Airbnb tried it with its new app and wrote excitedly5:
… we’ve launched our first Holy Grail app… It looks exactly the same as the app it replaced, however initial pageload feels drastically quicker…
What voodoo magic did they employ?
… we serve up real HTML instead of waiting for the client to download JavaScript before rendering. Plus, it is fully crawlable by search engines.… It feels 5x faster.
Who knew? The best way to ensure that everyone gets your content is to write real, semantic HTML, to style it with CSS and ensure sensible fallbacks for CSS gradients, to use SVG for icons, and to treat JavaScript as an enhancement, ensuring that core functionality works without scripts. Package up your website with a manifest file and associated icons, add a service worker, and you’ll have a progressive web app in conforming browsers and a normal website everywhere else.
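That packaging step can be as small as a single manifest file. As a sketch — the name, colours and icon paths below are placeholders, not recommendations — link a file like this from your pages with `<link rel="manifest" href="/manifest.json">`:

```json
{
  "name": "Example Site",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#1a73e8",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

Register a service worker alongside it (guarded with `if ('serviceWorker' in navigator)`), and conforming browsers treat the site as a progressive web app, while everywhere else it remains a normal website.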
I call this amazing new technique “progressive enhancement.”
You heard it here first, folks!
We have supply-side improvements such as HTML responsive images, progressive web apps, and renewed CSS work on vertical text. We have the methodology of progressive enhancement. We have proxy browsers that compress websites for people on low-bandwidth connections and expensive data plans. We have an explosion of ever-cheaper smartphones (and I’m not talking about the Galaxy Note 7 here). So, why aren’t the next 4 billion already on the web?
There are demand-side problems in emerging markets.
One of those is that the market for smartphones is either flat6 or declining7, depending on which statistics you read. There are many potential reasons for this: the slowdown in the Chinese economy (China, India and the US are the largest consumers of smartphones8); it could be a classic case of market saturation — everyone who wants a smartphone and can afford one already has one.
It could also have to do with devices like the one shown below, which was given to me at the Mobile World Congress SynergyFest.
It’s a feature phone, made in China; it doesn’t have Wi-Fi but is dual-SIM (very important in places like Africa and Asia); it has an FM radio; and it can connect to the web with a WAP-like browser. Its retail price in Africa and Latin America is $2.36 USD.
In a country like Cambodia, where a garment worker’s minimum wage is $14011 a month, or Liberia, where an unskilled worker’s minimum wage is $412 a day, even a $60 to $70 entry-level smartphone is practically unaffordable. But $2.36? That’s affordable.
But it is not just affordability that stops people from coming to the web. As GSMA Intelligence wrote13 in July 2016:
Despite the fact that Africa has the lowest income per capita of any region, affordability was only identified as the most important barrier in one out of 13 markets in our survey.
And it’s not just about networks either, as GSMA Intelligence reports14:
Network coverage was not perceived as an issue in most countries, reflecting the increasing availability of mobile networks. However, mobile broadband (3G or 4G) coverage remains low in most parts of Africa.
The problem is much more profound, and doesn’t have a technological solution. From the same report:
A lack of awareness and locally relevant content was considered the most important barrier to internet adoption in North Africa and the second biggest barrier in Sub-Saharan Africa.
There’s also a worrying lack of digital skills that prevents people from using the web in Africa:
A lack of digital skills was identified as the biggest barrier to internet adoption in Sub-saharan Africa and the second biggest in North Africa.
The World Bank reports15:
In Africa, seven in ten people who do not use the internet say they just don’t know how to use it, and almost four in ten say they do not know what the internet is.
This is true not only of developing economies, by the way. The World Bank continues:
In high-income Poland and the Slovak Republic, one-fifth of adults cannot use a computer.
And in 2014, Pew Research Center reported16:
41% [of American seniors] do not use the internet at all, 53% do not have broadband access at home, and 23% do not use cell phones.
The lack of digital skills is felt acutely by Asia, too. In its January 2016 Asia survey17, GSMA Intelligence wrote:
A lack of awareness and locally relevant content was the most commonly cited barrier to internet adoption: 72% of non-internet users across the six survey markets felt this was a barrier.… 50% of websites worldwide are in English, a language spoken by only 10% of speakers in the survey countries. By way of contrast, only 2% of websites worldwide are in Mandarin and less than 0.1% are in Hindi.
Below is a video made in a user-testing lab in rural Pakistan, featuring a man in his 20s. He has used a feature phone but never a smartphone. He’s given a simple-sounding task: Go to Google and search for the name of your favourite actress. Watch the video. Watch it all.
Presumably, he’s stuck because he’s been accustomed to a feature phone with physical keys, and he simply doesn’t know how to call up the virtual keyboard. Anytime you believe that doing something in your web app is “obvious” or “intuitive”, watch this video again.
Just as there is a digital divide between “the West” and developing economies, there is a digital divide in income, age, rural and urban, and women and men across the developing world. The World Bank illustrates a divide in income, gender, age and location (rural and urban).
In February 2016, 53% of urban areas20 had mobile Internet connectivity, a growth of 71% in one year. In the same period, Internet usage in rural India increased by 93%, but that means that only 9% of people in rural India have access. As I write this, there is a copy of The Hindu newspaper next to my laptop, which today reports21 that the Indian government is recommending a community model of Wi-Fi hotspots, such as in neighborhood grocery stores, in rural areas where connectivity is poor and laying cables is not feasible.
The World Bank reports that in many countries (Cuba, Cambodia, Brazil and others), computers are taxed as luxury goods.
Many countries, such as Fiji, Bangladesh and Pakistan, tax mobile phones as luxuries, too.
The World Bank recommends that:
Making the internet universally accessible and affordable should be a global priority.
There are moves afoot to make this happen, such as an initiative called FASTAfrica26. FAST stands for fast, affordable, secure and transparent. In a series of grassroots events across Africa in 2016, the organization has set out its demands.
The World Bank writes:
Online work can prove particularly beneficial for women, youth, older workers, and the disabled, who may prefer the flexibility of working from home or working flexible hours.
In India… women are 62% less likely to use the internet than men. Many of the underlying reasons for this — affordability, skills and content — are the same as for men; they are simply felt more acutely by women.
In rural India, only 2% of web users are women28.
Yet we know that the web empowers women. Across the world in non-agricultural employment, women make up 25% of the work force, but in online work, women make up 44% of the work force.
When asked why online work is advantageous, women overwhelmingly cited the ability to work flexible hours from home as the primary advantage. (32% of women, compared with 23% of men, said that the primary disadvantage to online work is that payment is not high enough, which suggests that Africa, too, has an unjustifiable difference between what women and men get paid.)
One successful government initiative is the Kudumbashree project29, here in Kerala, India, where I’m writing this. “Kudumbashree” means “prosperity of the family” in the local Malayalam language. The World Bank reports:
The government of Kerala, India, outsources information technology services to cooperatives of women from poor families … Average earnings were US$45 a month, with close to 80 percent of women earning at least US$1 a day. Nine in ten of the women had previously not worked outside the home. Samasource, RuralShores, and Digital Divide Data are three private service providers. Samasource splits jobs into microwork for almost 6,400 workers, mostly in Ghana, Haiti, India, Kenya, and Uganda, on average more than doubling their previous income.
A similar story is happening in China:
Online shop owners using Alibaba in China, on average, employ 2.6 additional workers. Four in ten shop owners are women, 19 percent were previously unemployed, 7 percent were farmers, and about 1 percent are persons with disabilities.
The World Bank says that:
Access to the internet is critical, but not sufficient. The full benefits of the information and communication transformation will not be realized unless countries continue to improve their business climate, invest in people’s education and health and promote good governance
Should you happen to be a politician in Africa, Asia or Latin America, please take note of the above. But, because you’re reading this, I expect you’re a web professional — but you can play your part! Make sure your websites are ready for the next 4 billion people, with their (potentially) slow networks and browsers and devices you may never have heard of.
Developing countries are home to 94% of the global offline population. These people may be your next potential customers. But if they can’t buy from you because your website is a Wealthy Western Website rather than a World Wide Website, you can bet that your competitors will be happy to take their money.
McKinsey Global Institute writes32:
An increase in internet maturity similar to the one experienced in mature countries over the past 5 years creates an increase in real GDP per capita of $500 on average during this period. It took the industrial revolution of the 19th century 50 years to produce the same result.
But it’s not just about money. It’s about doing the right thing and keeping the web open, democratic and global.
Millions of people in Bangladesh, Nepal and India have miles to walk to visit a doctor, so a feature phone and the free online book Where There Is No Doctor33 (often translated into their local language) becomes first-line medical care.
Millions of people in Sub-Saharan Africa can’t afford school textbooks, but Worldreader34 has tens of thousands of books available for free online, and the books work fine even on old feature phones.
And millions of people in despotic regimes have the web as their only way to contact the outside world.
Please, do your part to ensure the health of the web that has provided you with so much, and pay it forward so the next people can benefit.
Thanks to Mrunmaiy Abroal35, until recently Opera’s Head of Comms in India, and Peko Wan36, Head of PR & Communications for Opera in Asia, for their help collecting some of the information in this article. Thanks to Karin Greve-Isdahl37, VP Communications at Opera for allowing me to use charts and illustrations made while I was at Opera. Big thanks to Clara at Damcho Studio38 for helping to prepare this article.
(vf, al, il)
Andrea Giammarchi introduces hyperHTML, a fast and light virtual DOM alternative.
This week was a big week in terms of web development news. We got much broader support for CSS Grids and Web Assembly, for example, but I also stumbled across some great resources that teach us a lot of valuable things.
With this Web Development Reading List, we’ll dive deep into security and privacy issues, take a look at a lightweight virtual DOM alternative, and get insights into how we can overcome our biases (or at least how we can better deal with them). So without further ado, let’s get started!
rel="noopener" was implemented, too, just like broad support for CSS Grids7, Web Assembly, and async/await. They also disabled all plugins except for Adobe Flash.

And with that, I’ll close for this week. If you like what I write each week, please support me with a donation37 or share this resource with other people. You can learn more about the costs of the project here38. It’s available via email, RSS and online.
— Anselm
What would life be without surprises? Pretty plain, wouldn’t you agree? Today, we are happy to announce a freebie that bubbles over with its friendly optimistic spirit, bound to sprinkle some unexpected sparks of delight into your projects: Ballicons 3. If that name rings a bell, well, it’s the third iteration of the previous Ballicons icon set1 created by the folks at Pixel Buddha19182.
This icon set covers a vibrant potpourri of subjects, 30 icons ranging from nature, travel and leisure motifs to tech and office. All icons are available in five formats (AI, EPS, PSD, SVG, and PNG) so you can resize and customize them until they match your project’s visual style perfectly. No matter if you like it bright and bubbly or rather sleek and simple — the set has the makings to become a real all-rounder in your digital tool belt.
Please note that this icon set is released under a Creative Commons Attribution 3.0 Unported5 license. This means that you may modify the size, color and shape of the icons. No attribution is required; however, reselling of bundles or individual icons is not cool. If you want to spread the word in blog posts or anywhere else, feel free to do so, but please remember to credit the designers and provide a link to this article.
“Creation of icons is something special for our team, as it’s our first icon set which made us refuse from client work and concentrate on crafting content for other specialists. Ballicons and Ballicons 2 have become really popular, and today, after 2.5 years we’ve realized we’re willing to present another iteration of the icons from this series — even more striking and interesting.”
A big thank you to the folks at Pixel Buddha for designing this wonderful icon set — we sincerely appreciate your time and efforts! Keep up the brilliant work!
The past year has seen quite a rise in UI design tools. While existing applications, such as Affinity Designer, Gravit and Sketch, have improved drastically, some new players have entered the field, such as Adobe XD (short for Adobe Experience Design) and Figma.
For me, the latter is the most remarkable. Due to its similarity to Sketch, Figma was easy for me to grasp right from the start, but it also has some unique features to differentiate it from its competitor, such as easy file-sharing, vector networks, “constraints” (for responsive design) and real-time collaboration.
In this article, I’d like to compare both apps in detail and highlight where each of them shines.
The greatest weakness of Sketch has always been its lock-in to the Apple ecosystem. You need a Mac not only to design with it but also to open and inspect files. If you are on Windows or Linux, you’re out of luck. (Technically, you could get Sketch running on Windows, as Martijn Schoenmaker and Oscar Oto Mir describe, but it might not be worth the hassle for some.) That’s why, over time, a few paid services have emerged that enable you to provide teammates with specifications of Sketch files, but that’s still an unfortunate extra step.
That’s probably one of the reasons why Figma took a different route right from the start: It can be used right in your web browser, on basically any device in the world. No installation whatsoever needed: Just open Chrome (or Firefox), set up an account and start designing. While I had been hesitant about browser-based tools before, Figma blew my doubts away in a single stroke. It’s as performant as Sketch (if not more so), even with a multitude of elements on the canvas.
Standalone desktop apps of Figma are even available for Windows and Mac, but they are basically no more than wrappers around the browser version. While they give you better support for keyboard shortcuts, they don’t save you from having to be online all the time.
Winner: Figma
Why: It is platform-independent.
Basically, Figma can be best described as an in-browser version of Sketch. It offers mostly the same features, the interface looks similar, and even the keyboard shortcuts are mostly taken from its competitor. If you knew Sketch already, you could have easily found your way around Figma when it was released at the beginning of 2016.
Figma has improved on many aspects since then, and has even surpassed Sketch in parts, but some features are still missing, such as plugins, integration with third-party tools and the ability to use real data (with the help of the Craft plugin). Nevertheless, at the pace Figma is improving, it may be just a matter of time before it’s on par with Sketch.
Winner: A draw
Why: At the moment, they don’t differ enough. Still, Sketch is more mature and offers more odds and ends to make you productive.
Sketch initially started with a one-time fee of around $99 USD for each new major version. But having finally realized that this business model is not sustainable, Bohemian Coding has recently changed its pricing to a kind of subscription-based model: For $99 USD, you get one year of free updates. After that, you can keep using the app as is or invest the same amount of money to get another year of updates.
This price increase might seem inappropriate at first, but what’s $99 for your main tool as a designer? Right, not much, and still cheaper than Adobe’s Creative Cloud subscription price. (Also, note that when you stop paying for your Creative Cloud subscription, you cannot keep using any of the Adobe CC apps, whereas your existing version of Sketch will continue running with no problems at all.)
For the time being, Figma is still free to use for everybody, but that might change later this year. I’d bet that it will also be subscription-based, with a monthly fee of around $10 to $15 USD (don’t hold me to that, though). One thing’s for sure: Figma will always be free for students (similar to how Sketch offers a reasonable discount to students and teachers).
Winner: A draw
Why: No word yet on the pricing of Figma.
The main differentiator of Figma is probably the real-time collaboration feature, called “multiplayer.” It allows a user not only to edit a file at the same time as others but also to watch somebody else fiddle around in a design and communicate changes with the built-in commenting system. For one, this simultaneous editing is great for presentations and remote teams, but it also ensures that everyone involved sees the same state, and it prevents unintended overwrites.
Sketch doesn’t offer anything like that (not even with the help of plugins), and it’s probably better for it. Designers are skeptical of this feature, and rightly so, because it can lead to the dreaded “design by committee” and to a mentality of “Make this button bigger. No, wait! I’ll just do it myself!” Of course, I’m exaggerating a bit here. This feature can be somewhat valuable, but it has risks if you don’t set certain rules up front.
Winner: Figma
Why: Allows multiple people to work on the same file at the same time.
Somewhat related to real-time collaboration is the sharing of design files. Because everything in Figma happens in the browser, every file naturally has a URL that you can send to your peers and bosses (in view-only mode, too). Another huge advantage of this browser-based approach is that everybody, developers foremost, can open a design and inspect it right away, with no third-party tool needed.
That’s exactly what you need in Sketch in order to offer teammates all of the necessary information on a file, such as font sizes, colors, dimensions, assets and so on. However, as mentioned, to help you here, you’ll need paid tools such as Zeplin, Sympli, InVision’s Inspect, Avocode or Markly. To be fair, the first three are free for a single project, but that might not be enough when you’re working on more complex stuff.
If you need more or don’t want to invest money in such (admittedly basic) functionality, there are also the free plugins Marketch and Sketch Measure. Many paid apps also allow you to add comments and create style guides automatically.
Alternatively, you could even import your Sketch files into Figma and inspect them there, because Figma supports the Sketch file format.
If you just need to let your teammates have a look at a design, then the built-in Sketch Cloud does exactly that. To view a design on your iOS device, you have Sketch Mirror and Figma Mirror; for an Android device, you can use Mira and Crystal. Figma doesn’t offer an Android-based app, but opening www.figma.com/mobile-app on the device after selecting a frame or component (on your desktop computer) will give you a kind of preview.
Winner: Figma
Why: Files are easily shareable, right in the browser, without the need for a third-party tool.
Thanks to the organizational capabilities that Sketch gives with its pages, you can design an entire app or website in a single file, and switch between its pages quickly. Figma is left behind here: It doesn’t provide pages, but you can at least combine files into a project.
Another important concept of Sketch — artboards — has been taken on as well, but in an enhanced form. In Figma, you can apply fills, borders and effects to artboards, and they can also be nested and easily created from groups. Among other things, this allows for containers of elements that are bigger than their combined bounds, which can be helpful for the “constraints” functionality. Furthermore, each artboard can have its own nested layout grid, which helps with responsive design.
It’s worth mentioning that Sketch also allows you to nest artboards, but the implementation is not nearly as powerful as Figma’s.
Winner: A draw
Why: Sketch has pages, but Figma’s artboards (frames) are more powerful.
A feature that Figma introduced (and that Sketch followed months later) is responsive design, or the ability to set how elements respond to different artboard or device sizes. There have been plugins that offer such functionality in Sketch, but they’ve always felt a bit clunky. Only the native “group resizing” feature has been easy to use, competing with Figma’s “constraints” feature.
They do basically the same thing but are very different in execution. In Sketch, you just have a dropdown menu with four options, and it can be a bit hard to imagine what they do. Figma offers a more visual way to define these properties; for example, to pin an element to the edge of an artboard or to let it resize proportionally.
A huge advantage of Figma is that these constraints work in combination with layout grids. Elements can be pinned to certain columns (or rows), for example, or take on their relative width. Furthermore, you are able to change these settings independently for each axis, and they work not only for groups but also for artboards. None of that is possible in Sketch.
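To make the idea of constraints concrete, here is a minimal conceptual sketch in Python of how horizontal pinning could be resolved when a parent frame changes width. The constraint names mirror the options discussed above, but the function and its math are an illustrative assumption, not the actual resizing engine of Figma or Sketch:

```python
# Conceptual sketch of resize constraints (illustrative only, not a
# real Figma/Sketch API). Given a child's x position and width inside
# a parent of old width, compute its geometry for the new parent width.

def resolve_x(pin, old_parent_w, new_parent_w, x, w):
    """Return (new_x, new_w) for a child pinned with `pin`."""
    if pin == "left":            # keep the distance to the left edge
        return x, w
    if pin == "right":           # keep the distance to the right edge
        return new_parent_w - (old_parent_w - x), w
    if pin == "left-right":      # stretch: keep both margins
        right_margin = old_parent_w - (x + w)
        return x, new_parent_w - x - right_margin
    if pin == "center":          # keep the offset from the parent's centre
        offset = (x + w / 2) - old_parent_w / 2
        return new_parent_w / 2 + offset - w / 2, w
    if pin == "scale":           # scale position and size proportionally
        factor = new_parent_w / old_parent_w
        return x * factor, w * factor
    raise ValueError(f"unknown pin: {pin}")
```

For example, an element pinned “right” at `x=70, w=20` in a 100-unit frame keeps its 10-unit right margin when the frame grows to 160 units: `resolve_x("right", 100, 160, 70, 20)` yields `(130, 20)`. Applying the same rules per axis, as Figma does, is what makes the feature composable.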
Winner: Figma
Why: Its constraints functionality is far more advanced and offers more options.
One major feature of Figma when it was released was “vector networks.” No longer is a vector just a sequence of connected points (nodes). Now, you can easily branch off from any node with a new line. Creating an arrow, for example, is as easy as pie.
But there’s much more to it. Parts of a vector can be easily filled (or removed) with the Paint Bucket tool. You can move entire segments (or bend them) with the help of the Bend tool. And copying lines (as well as points) from one vector to another is a snap.
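The underlying difference can be illustrated with a tiny data-model sketch (an assumption for illustration, not Figma’s internal representation): a plain path is an ordered list of points, so every point has at most two neighbours, while a vector network is a graph whose nodes may have any number of incident segments:

```python
# A plain path: an ordered polyline, each point connects only to the next.
path = [(0, 0), (10, 0), (20, 0)]

# A vector network: a graph. Node "B" branches into three segments,
# which a single path cannot express (e.g. the head of an arrow).
network = {
    "nodes": {"A": (0, 0), "B": (10, 0), "C": (8, 2), "D": (8, -2)},
    "edges": [("A", "B"), ("B", "C"), ("B", "D")],
}

def degree(net, node):
    """Number of segments meeting at `node`."""
    return sum(node in edge for edge in net["edges"])
```

Here `degree(network, "B")` is 3, which is exactly the kind of branching that a point sequence cannot represent and a network handles naturally.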
This feature alone sets Figma apart from Sketch in a very special way and makes it so much more suitable for icon design. Compared to it, the tools available in Sketch feel almost archaic. While Sketch might have everything you’ll ever need to create and manipulate vector objects, as soon as more complex vector operations are involved, it can get a bit too tricky.
While most of the time you’ll need to use Boolean operations to create certain types of shapes in Sketch, much of that can be achieved more easily with vector networks in Figma. The fact that in Sketch it’s often hard to join two vector objects or even flatten a shape after a Boolean operation has been applied makes things even worse — the app certainly has a lot of ground to cover in this regard.
Winner: Figma
Why: Vector networks give a whole new dimension to vector manipulation, something still unmatched in Sketch.
Whenever a new application appears, you often face the problem of having to redo everything if you want to migrate a project. Not so in Figma: It can open Sketch files in an impressively accurate way, which gives it an immediate advantage.
Unfortunately, it doesn’t work in the other direction. Once you start designing in Figma, you are bound to it. It can’t export back to Sketch, nor is it capable of creating PDF files (or importing them). There is an option to export to SVG, but that’s not the ideal format in which to exchange complex files between design applications.
Another limiting factor is that you can’t set the quality level when exporting JPG images in Figma, so you might need a separate application to optimize images. The latest version of Sketch improves the exporting options even further. It allows you to set both a prefix and a suffix for file names now, as well as provides exporting presets, which makes exporting for Android much easier.
Let’s look at the export file formats that each application supports.
Sketch: PNG, JPG, TIFF, WebP, PDF, EPS, SVG
Figma: PNG, JPG, SVG
Winner: Sketch
Why: More file formats, and better saving and exporting options.
Though Figma shares many keyboard shortcuts with Sketch, it doesn’t have the same level of accessibility. In Sketch, almost everything can be achieved with a key press. And if you are not satisfied with a certain shortcut combination, you can set your own in the system’s preferences (this also works for commands that have no shortcut assigned by default).
Note: You can learn how to change shortcuts in Sketch in the article “Set Custom Keyboard Shortcuts,” published on SketchTips.
Because Figma lives in the browser, none of that is possible. The desktop app makes up for this to some extent, but certain functions simply can’t be assigned to the keyboard because the corresponding menu bar commands are missing.
Winner: Sketch
Why: Almost all features in the application are accessible with keyboard shortcuts, and they can be customized.
An important feature has only just been introduced to Figma: symbols, or, as they are called there, components. Components allow you to combine several elements into a single compound object that can be easily reused in the form of “instances.” As soon as the master component is changed, all instances inherit the change. (However, certain properties can still be modified individually, allowing for variations of the same component.)
Lately, it has even become possible to create a library of components in Figma that can be shared across files and with multiple users — a huge step towards a robust design system.
In Sketch, the modifications of individual instances are handled in the form of overrides in the Inspector panel, allowing you to change text values, image fills and nested symbols separately for each instance. Figma lets you update the properties of an element directly on the canvas, including the fill, border and effects (shadows, blurs).
The true strength of symbols (and components) becomes evident with responsive design. In both apps, you can define how elements within a symbol react when their parent is resized, making it possible to use symbols in different places or screen sizes.
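The inheritance-with-overrides behaviour described above can be sketched in a few lines of Python. The property names are invented for illustration (neither app exposes such an API); `ChainMap` happens to give exactly the “override falls back to the master” lookup that symbols and components rely on:

```python
# Conceptual sketch of master/instance inheritance with per-instance
# overrides, mimicking Sketch symbols and Figma components.
from collections import ChainMap

master = {"label": "Buy now", "fill": "#0af", "radius": 4}

def make_instance(master, **overrides):
    # Lookups check the instance's overrides first, then the master.
    return ChainMap(overrides, master)

btn_a = make_instance(master)                   # plain instance
btn_b = make_instance(master, label="Sign up")  # overrides the text only

master["fill"] = "#f50"  # editing the master updates every instance
```

After the last line, both buttons report the new fill, while `btn_b` keeps its overridden label — the same propagation you get when editing a master symbol or main component.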
An area where Figma is still behind is layer and text styles. You can’t save the styling of shapes or text layers and sync it with other elements. What you can do, however, is copy and paste the styling from one element to another.
Winner: A draw
Why: The implementations are quite comparable, but Figma has a slight advantage with its shareable components.
Sketch is undeniably great and capable out of the box, but what makes it really awesome are the third-party plugins, which either enhance Sketch or make up for missing features. You can use them to rename multiple layers at once, create dynamic buttons, save and load color palettes, easily insert images from Unsplash, enable Git functionality and create animations right in Sketch, just to name a few. A full list of plugins can be found in the official directory and on GitHub.
Then there’s Figma, which has none of that so far. To be fair, Sketch probably didn’t offer many of these helpers early in its development either, and I’d bet that a plugin ecosystem will be introduced to Figma in the future.
A word of caution: While plugins give Sketch a huge advantage, they can be somewhat of a pitfall, too. They tend to break with each new release, and the Sketch team needs to be very cautious to avoid alienating developers in the future.
Winner: Sketch
Why: It’s simple: Figma doesn’t have plugins at all.
Unlike Adobe XD, neither Sketch nor Figma has any native prototyping capabilities. No such plans have been announced for Figma so far, but Sketch has the Craft plugin, which brings at least basic prototyping functionality. Furthermore, Sketch’s popularity has led to broad integration with other prototyping tools: InVision, Marvel, Proto.io, Flinto, and Framer, to name a few. The idea with these is that you can easily import a design and keep the apps in sync.
The only application that works with Figma so far is Framer. It’s hard to say what support for similar tools will look like in the future, but it may well be that such interfaces are already in the works.
Winner: Sketch
Why: Though the Craft plugin’s prototyping capabilities aren’t fully persuasive, Sketch is well integrated with many prototyping services.
Among the countless plugins that Sketch offers, you will also find some so-called “content generators.” For one thing, they let you fill a design with dummy content. But what’s even more interesting is that these little helpers enable you to pull in real data from various sources, or even insert content in the form of JSON files. This allows you to make more informed decisions and account for edge cases early in the design process.
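As a rough illustration, the JSON such a plugin consumes might look like the structure below; the field names are invented here, and each plugin’s documentation defines the exact mapping it expects between keys and layers:

```python
# Hypothetical dummy data for a content-generator plugin. Each record
# would be mapped onto the text and image layers of a repeated list item.
import json

people = [
    {"name": "Ada Lovelace", "role": "Engineer", "avatar": "ada.png"},
    {"name": "Grace Hopper", "role": "Admiral", "avatar": "grace.png"},
]

# Saved as e.g. team.json, this file can then be imported by the plugin.
print(json.dumps(people, indent=2))
```

Designing against data like this, instead of “Lorem ipsum,” surfaces edge cases (long names, missing avatars) while the layout is still cheap to change.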
By far, the best in this category is the Craft plugin from InVision. As soon as you use it to populate an element with content, you can easily duplicate and vary it in a single step. Craft also allows you to save design assets and share them with your team, or to create a style guide automatically. For an in-depth look at the Craft plugin, have a look at my recent article “Designing With Real Data In Sketch Using The Craft Plugin.”
Though not as sophisticated as Craft, a few more plugins with similar functionality exist: Sketch Data Populator, Content Generator and Day Player.
Figma doesn’t offer such functionality yet, but I think it’s just a matter of time. It will be interesting to see which approach it takes: integrating it in the app, as Adobe XD teased some time ago, or relying on a plugin system.
Winner: Sketch
Why: Compensating for a lack of native functionality, various plugins allow you to use real data in Sketch easily.
One thing I constantly do is undo my changes one by one with Command + Z to see what an earlier version of a design looks like — either to have a point of reference or to duplicate as a base for a new iteration. You might suppose that Figma is at a dead end here because of the browser’s limited resources, but in reality it is in no way inferior to Sketch.
Both apps save automatically, with the ability to browse and restore old versions. Sketch’s implementation, however, is far from ideal, and I don’t even know where to start. It’s laggy; you need to restore a version before you can actually inspect it; and it can lead to wasted space on the hard drive. (If you encounter this problem, Thomas Degry’s article “How Sketch Took Over 200GB of Our MacBooks” might help.) To cap it off, things can get screwed up pretty quickly when multiple people are working on the same file in the cloud (via Dropbox, etc.).
Figma lets you restore previous versions instantaneously, which is especially useful if multiple people are editing the same file, helping you to isolate the changes of each person. You can then jump back to a previous state or duplicate the version as the starting point for a new idea.
Winner: Figma
Why: Sketch’s versioning is so imperfect that the winner is obvious.
As in other regards, the handling of text is quite similar in both applications, but some differences are worth mentioning. By default, Figma just gives you Google Fonts to choose from, but with a simple installer, you can access system fonts, too.
Something I constantly miss in Sketch is the ability to change the border position of text layers; “center” is the only option there. Figma also offers “inside” and “outside,” just as it does for shape layers.
Furthermore, Figma lets you set the vertical alignment of text, allowing for easy vertical centering inside the frame. In contrast, Sketch has more typography options and enables you to apply OpenType features such as small caps, old-style figures, stylistic alternates and ligatures (if a font supports them).
Winner: A draw
Why: Both applications have their distinct advantages. Sketch might be ahead a bit, due to its advanced typography options, but not enough to win outright.
Sketch has been on the market now for over six years, which has allowed the community to accumulate a huge number of useful resources (articles, tutorials, blog posts, etc.). And due to the similarities between the two apps, some Sketch concepts can be applied to Figma, which makes some resources suitable for both applications. I’ll mention only a few here:
Let’s quickly recap each app’s strengths.
The discontinuation of Adobe Fireworks in 2013 left a huge gap in the world of UI design tools — a gap that Sketch gladly filled almost immediately. Sketch became so successful that it closed in on its main competitor, Photoshop, over time and even topped it last year as the tool of choice among user interface designers.
Sketch would probably be even more successful if there were a Windows version, though; the lack of one is a problem for other tools, too. So Affinity Designer and Adobe XD (two new competitors that started as Mac-only apps) quickly expanded to support Windows as well. There is also Gravit Designer, which works in the browser and now also exists as a desktop app (for Windows, Mac and Linux); but like Affinity Designer and Adobe XD, it’s hard for it to compete with Sketch, which is probably the most mature tool of them all at the moment.
And now we have Figma: Not only is its feature set similar to Sketch’s, but it supports multiple platforms, because the only thing you need to run it is a modern browser. Moreover, Figma allows you to open Sketch files, which means you can basically continue in Figma where you left off in Sketch, without much effort. (Figma requires you to be signed in and online to use it, though, which is a serious limitation in some cases.)
But as different as all of these tools are, they all have one thing in common: They want a slice of the pie, and so they are challenging Sketch in every possible way. But the Sketch team isn’t resting on its laurels. It is constantly improving the application and pushing new updates often, with the goal of making it the best possible UI design tool out there. So far, they are succeeding, but the pressure is growing.
(mb, al, il)