One of the biggest fallacies of our industry is that good work speaks for itself. It is a self-delusional lie that those with a good reputation tell themselves to explain their success.
Success means many things to many people. Some think it is getting to work on projects they love; others believe it is earning a lot of money; still others think it is getting to spend more time with the family.
However you choose to define success, it will require other people’s cooperation. It will involve landing the right job, winning the right kind of work and being able to charge enough money for your services.
It would be great if that were all defined by the quality of your work, but it’s not. There is another factor at play here: your reputation.
I will let you in on a secret: I am not that amazing at my job. Don’t get me wrong; I am good. But I am not a leading mind in our industry or anything. Yet people often talk as if I am.
I would love to claim this was down to talent. But in truth, it is because I shamelessly self-promote. Hell, I am doing it right now, and I am even being paid to write this post!
I will be honest: doing so makes me feel uncomfortable. It might be because I am British. We hate talking about ourselves and despise people who are full of self-importance. But from a career perspective, it was the best thing I ever did.
It has helped me win work to the point where I can now pick and choose the projects I take on. But it also had another unexpected benefit: my projects tend to run much more smoothly. That is because my clients respect me more. They know I have a reputation, and so they listen to what I have to say.
Think of reputation as a currency: a currency that you can spend to advance your career, win new clients or ensure projects run that little bit easier. It is a currency that you can spend to achieve your version of success.
Of course, none of this is fair. People should respect ability, not reputation. But most clients don’t know what good looks like and so have to fall back on how other people talk about you. That means that reputation is about a lot more than the quality of your work.
For a start, producing great work won’t build your reputation if people aren’t aware of it. Sure, word of mouth from satisfied clients is important, but that will only take you so far.
If you don’t get out there and talk about your work, then nobody will ever know about it. That is why most of us have never heard of Duncan Haldane, despite his winning the Nobel Prize in Physics, yet we have all heard of Stephen Hawking, who hasn’t.
Then you also need to ensure people remember you. People have short memories and reputations can quickly fade if not constantly reinforced.
Finally, they have to like you. You can produce the best work in the world, but if you are obnoxious, then you will end up with entirely the wrong sort of reputation.
So if the quality of your work isn’t enough to build a solid reputation, then what is?
Look, I am no Gary Vaynerchuk. I am not a New York Times bestselling author who has built a business empire on his ‘personal brand’. But in my own little way, I have done okay.
I would like to say that what success I have had in self-promotion has been down to careful planning, but that would be a lie. A good degree of it was blind luck and timing. For example, when I started a web design podcast there was nothing else on the subject. I had a monopoly. That gives you a good head start!
You could argue that my success came from grasping the opportunity. But again that was more luck than judgement, and anyway, that is not something you can replicate in your situation.
But among all the blind luck, I have done a few things right. I could drone on about writing great posts, or why my podcast has been a success. But much of my success has boiled down to two things.
When it comes to reputation building, people give up too quickly. They start a blog and then get demoralised when nobody reads it. They submit talks to a conference but give up after a few rejections. So it goes on: a string of half-arsed attempts.
Reputation building takes time, not weeks, not months, but years. Years of nobody paying you a blind bit of attention. It took me years before the number of people subscribed to my podcast got above 600. Years more before I was asked to speak at a conference or invited to write a book.
When it comes to reputation building, you need to think long term. That means that you need a reason to do what you do beyond reputation. I blogged because I wanted to get down in writing all the things I was learning. I recorded my podcast every week because I enjoyed chatting with people. Without those extra motivations, I would have given up, too.
But stubbornness plays a big part as well. It would have been easy to skip the occasional podcast episode or conclude client work was more important than posting an article nobody would read. However, you can’t think like that. If you do, self-promotion will always be at the bottom of your task list and will never happen.
To this day, I will, without fail, post an article every Tuesday and a podcast every Thursday. My newsletter goes out every other Friday because I know how important it is to stay at the front of people’s minds. The only time I take a break is when I am on vacation. Because if I start slacking off, people will quickly forget me, like that ’80s boy band you loved so much.
But putting out regular content is not enough. It has to be the right type of content too.
While my pigheadedness proved a benefit, my ego did not, especially when it came to building a reputation. If I’m truthful, my motivation in the early days was to win the approval of the “cool kids” of the web design world. I wanted to be a star of the web design community, and that is what drove my writing and podcasting.
Not only did this mean I found myself frustrated when nothing happened, but it also led to me writing for entirely the wrong audience. Instead of writing for people who might potentially hire me, I spent my time writing articles aimed at impressing other web designers.
Don’t misunderstand me; there is nothing wrong with sharing your knowledge with your peers. In fact, it is something we should all be doing. But, we need to remember why exactly we want to build our reputation. Ultimately it is to progress our careers and win work. That means we need to think long and hard about where we invest our time. Do we speak at a jQuery conference that we have always respected and admired, or at some soulless business event full of potential clients? Personally, I would pick the latter every time.
We also need to focus on whom we target. The larger the number of people you are trying to build a reputation with, the harder it is going to be. Becoming a global superstar in web design is extremely challenging, but becoming the go-to web designer within a particular sector or niche is much easier. With limited resources at our disposal, it makes much more sense to focus on a specific area in which to build our reputation.
Although the above advice will help, it is not enough on its own. Even if you became an expert in every aspect of reputation building you may still not succeed. That is because our attitude becomes our worst enemy.
I’m sick of people making excuses about why they can never build the reputation I have been talking about in this article. They have failed even before they have begun because of their mindset.
So let’s take a moment to address some of the defeatist attitudes that may be rattling around in your brain.
Maybe you’ve been turned down as a speaker or submitted a post to Smashing Magazine that was rejected. Maybe you want to write a book but can’t find a publisher. Whatever the case, get over it. You don’t need any of those things to build a reputation.
If you get turned down as a speaker, start offering webinars. If you can’t get a book deal, self-publish. If people don’t accept your articles, start a blog. You don’t need a gatekeeper to reach an audience.
Sure, you will reach fewer people initially, but it gives you time to hone your craft. Given enough time you will become good enough to attract the attention of those with larger audiences. Eventually, they will be banging down your door.
I want to let you in on a shocking truth: we all have the same number of hours in a day. My life is just as busy as yours, but I still find time to post an article and a podcast episode every week.
You may feel that you do not have the time because you are barely making ends meet by working weekends and evenings already. Well, you have bigger problems and should read my article on pricing your time.
Many people are afraid to share their thoughts online for fear of criticism. They don’t think they know enough and don’t want to look like an idiot. Trust me, I get this, and everybody struggles with the same thing.
But please do not let that hold you back. In truth, there is always somebody out there who knows less than you. Admittedly, there will be people who know more, but that doesn’t mean you have nothing of value to add.
I recommend starting with sharing your personal experiences of projects that you have run. By talking about your own experience, you are on safe ground. Nobody can tell you you are wrong because you are only sharing what happened to you. Also, nobody else had that experience, and so there is nobody who is better placed to share it. Do that for a while, and you will find your confidence grows.
Finally, I hear a lot of people say that they find public speaking or writing incredibly hard. Believe it or not, this is something that I found hard at the start. As a child, I used to have a stutter and found it incredibly hard to speak in a group. I’ve also always struggled with spelling and grammar, which put me off writing for years.
But speaking and writing are just like any other skill: the more you do them, the better you get. That is why I don’t regret having had such a small audience for so long. It gave me time to get better and to grow in confidence.
Ultimately that is what building a reputation is all about. It is about growing in confidence and being willing to step out from the shadows. It is not easy especially if you are a more introverted character such as myself, but it is worthwhile. You do have something of value to share and those people you look up to are no different to you.
Most apps developed and released in Google’s Play store are abandoned by their developers. Over half of these apps get fewer than 5000 downloads, and most apps are considered unprofitable. This article is not going to make you the next Instagram, but it will hopefully help you get a nice base level of users that you can grow from.
To give you a better understanding of the numbers, the example app in this article received 100,000 downloads in eight weeks. This was with a marketing budget of zero and very little work since launch. We’ll cover the basic app store optimizations that will help bring people to your Google Play page. Getting them to download and stay is up to you and up to the value your app provides.
To launch an Android app, you need a plan. Without a plan, you are destined to fail. Your hopes, dreams and code will lie untouched, hidden at the bottom of Google Play for the rest of time or until Google decides to do a clean up and wipes your failure from existence. Of course, to get traction, you need to pick a topic in which enough people are interested, and then the quality of your build is what is going to help keep these users.
The app I am going to use as my example is Learn How to Draw, which I launched in December 2016 in cooperation with artist Will Sliney. We will talk about our launch goals and techniques. We will also share our results in the form of installations, usage and many more statistics, which we have directly pulled from the Google Developer Console. Hopefully, they will give you some context for what to expect.
Everyone has an idea for an app. Some possess the right skills (or have enough money) to turn that idea into reality, but very few launch successfully. The primary problem with most launches is that the developer goes for the vanity metric: “How many downloads can I get in as short a period as possible?” They want the big bang. They want to show the world they are a big success on day one. The problem with this approach is that what goes up fast and doesn’t have a good product-market fit will come down even faster. Spiking on launch followed by a loss of all your newly acquired users the same week might be worse than not spiking at all.
The goal of our launch was to find a nice steady stream of new users. From these users, you can learn what is working and, more importantly, what is not working with your product. Yes, getting articles written about you also helps with app store optimization, but putting time into this should only happen once your foundation is in place. “Thank you for stating the obvious — just tell us how you did it,” I hear you say. No problem. Here it is.
When looking into launch strategies, I divide our user-acquisition work into three categories.
Big bang
The big bang is when you get a large number of users up front, and then it dries up. An example here would be getting an article in a high-profile newspaper or getting a retweet from Mark Cuban. Yes, you will get some users long after the day of the tweet, but the majority will have come up front. There are a lot of reasons why a big-bang launch can actually have negative effects on your app. First, if you attract the wrong users, they are very likely to uninstall your app. Having a high rate of uninstallations will have a negative effect on your app store optimization. These users will also now be part of any analytics you have set up, making it really difficult to know what your actual customers of interest are doing and where they are having problems. It becomes really difficult to know whether your product has a problem retaining the right users or whether you have simply attracted the wrong users, users whom your product was never going to retain.
Long tail
The next is the long tail. The long tail is where you get consistent installations. The sum of future downloads far outweighs the initial impact. Examples of methods here include being featured consistently or getting repeat recommendations.
Chain reaction
The final category, chain reaction, is where getting one installation leads to getting one more. Most social techniques fall into this bracket.
The category we will focus on in this article is the long tail. We are looking for a consistent source of relevant users. The method we used is Google Play store visibility, and our technique is app store optimization (ASO).
Our long-tail method focuses on Google Play visibility. Apple and Google are the gatekeepers. Your app’s survival hinges on how well your app places in their stores. Just like in the supermarket, if your product is not on shelves, it is not going to sell.
There are three methods in Google Play by which your app can be found by potential users. We targeted the third: direct search.
1. Charts
2. Features
3. Direct search
Charts have huge power in influencing users to download. Unfortunately, to target charts is to put the cart before the horse. Get everything else right and you will end up at the top of a chart. Not much we can do here to help with your launch.
The second is to get featured. This is the equivalent of a shop placing your product beside its tills. Expect a lot more impulse buys, but also expect a higher level of churn. These users are not very targeted and probably only downloaded your product to nose around. Again, targeting this method from launch is difficult. Hopefully, if you have stayed close to the Android guidelines and your build quality is high, Google might give you a nice surprise and feature you prominently in the store.
The third method is direct search. This is where we found our gold. Users who search for something using a term that is relevant to what you are offering are the ones you are after. Who better to find your product than people who are actually looking for it? Imagine if you built an app that taught users how to draw, and when any user searches for “learn how to draw,” your app is the first they see. That is a good place to be. The challenge here is all of those other pesky apps that want to get in front of those users. So, let’s see what happens when you search for “learn how to draw” in the Play store.
Boom! There it is.
[Screenshots of Google Play search results: “learn how to draw” at position 1; “learn to draw” at position 6; “draw” at position 12]
The images above give a good view of the app’s position in Play at one point in time. Seeing your own progress, good or bad, is critical to understanding how your ASO is helping.
ASO is a little like throwing mud at the wall in the dark. This is simply the way it has to be: Google’s methods for ranking search results are a closely guarded secret. I’d like to think that everything in the list below helped, but to what degree we will never know. It is clear that content is key when it comes to ASO, but this is a luxury I do not have, because the content in my app is rarely updated. Below is a list of methods I did use.
This is the one I am most proud of. The name Learn How to Draw covers so many good search phrases: it covers “learn how to draw,” “learn to draw,” “how to draw” and “draw” all in one name. The only time I would ever consider using a name like “Drawful” is if I had such a strong brand and marketing team that people would actually search for that specific product. Not likely for most of us.
The thing to remember about the build name is that Google uses it to create your unique URL, among other things. Google is a big fan of using URLs in its search rankings. This was a technique used in the early days of the web and was phased out by Google because people started to take advantage of it. This will probably happen on mobile too, so use it while you can. Here, you can see that I use all of the keywords of interest:
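The original listing’s application ID isn’t reproduced here, but to illustrate the mechanism: Google builds the store URL from the application ID, so a hypothetical ID that repeats the keywords surfaces them in the URL itself.

```
com.example.learnhowtodraw
https://play.google.com/store/apps/details?id=com.example.learnhowtodraw
```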
This one is obvious, but people still tend not to consider ASO when writing it. The “Title,” “Short description” and “Full description” all play a part.
Updating the app is a good opportunity to get creative. The primary purpose of the text accompanying an app update is to let users know what is in the latest update. That shouldn’t stop you from being creative.
A high rating is critical to strong ASO. We look after every review, trying to move three stars to four and four stars to five. We now have a really solid 4.488 out of 5 rating, which puts us in a far better light than the competition. A second tip here is always to use your response to feedback as another opportunity to get creative with wording.
If you have read anything about traditional web search optimization, then you will know that getting links to your content from high-quality sources plays a key part in Google’s magical algorithms. The same goes for ASO. Anytime someone writes about our app Learn How to Draw, I always contact them and ask them to link to our page in the app store, using as many of our keywords in their anchor text as possible.
Getting users to your Play store page is half the battle. Now you need to get them to download your app. Our goal here was to give users a clear understanding of why they should download our app by using strong app screenshots and a video montage demonstrating the value of the app. Even with all of this work, we still seem to be failing in this step. Only 17% of users who we get to our Play store page go on to install the app. Definitely room for improvement here.
The first graph below shows installations. Here, you can see a nice consistent installation rate. Our rate is currently at 1000 new installations per day — again, with zero marketing.
Next, and probably most important, is usage. Having a nice flow of users installing your app, only to abandon it the same day, is not very helpful.
From these graphs, it is clear that Learn How to Draw has a nice rate of installations, and we are retaining more users than we are losing.
There you have it. 100,000 installations in eight weeks by following the techniques above. Start throwing mud, and let us know if you get any to stick.
Most travellers make last-minute decisions, even though they spend significant time researching things to do before embarking on their trip. Finding a hotel and flight is relatively easy, but when it comes to tours and activities, the problem is that late or last-minute bookings are not always available, and if they are, the process of making a purchase online is often hard. The mobile experience can also be limited because many websites are slow or their booking process is long and complex.
In this article, we’ll present a case study and share observations on the project we designed and built: GetLocal, an online travel agency and booking platform in Iceland. We will share how we created a booking platform that tackles multiple challenges faced by mobile users, by building a responsive website with super-fast search and a mobile-optimized booking experience. Although this article focuses on travel, designers and developers can apply what we’ve learned to any type of e-commerce and mobile search.
We started by writing a business plan outlining the problems we wanted to solve. A study by Google shows that 85% of travellers make a purchasing decision after arriving at their destination, and 50% use their mobile phone to book. With this in mind, we built multiple use cases catering to different personas. The hardest problem to solve was for people on the road: using a mobile device over a slow connection, they needed to find a specific activity with last-minute availability and to be able to finalize their booking online.
Imagine a group of friends travelling down the south coast of Iceland, famous for its waterfalls and black-sand beaches. It’s 10:00 am, and the group wants to use the opportunity to book a glacial hike on a nearby glacier named Sólheimajökull; they need to find a tour with a certified glacial guide who is able to fit them in an afternoon tour on the same day. They have a mobile phone connected to a wireless router that came with the rental car, but in the countryside they’ll be lucky if they find a solid 3G connection. They want to book online and secure their spot.
To us, designing for mobile goes beyond visual design. We look at everything from rendering speed to every single piece of content. From the beginning, GetLocal was designed mobile-first. From the first wireframe drawing to the high-definition designs in Sketch, we kept the mobile experience at the forefront.
Having this in mind helped us to simplify the navigation, as well as the information architecture; it also meant that all elements were designed for touch and that we used mobile design patterns such as horizontal sliding. We also had to override many native form elements and replace them with simpler menus that are easier to use.
If you compare our desktop version to the mobile version, you can see that we simply copied the mobile booking form to the desktop version. That wasn’t the plan, but once the mobile version was made, we didn’t see a need to create a different experience for the desktop.
The booking form queries real-time availability APIs and immediately informs you of the availability on each day. If a departure has fewer than 10 seats left, we tell you exactly how many can be booked. We do this by connecting to various inventory solutions. Once a product is loaded, we call the API and get the available days, along with the number of seats that are left. This allows us to disable days that have no inventory and show the customer how many seats are left.
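As a rough sketch of this pattern (the response shape and field names here are assumptions for illustration, not GetLocal’s actual API), the calendar state can be derived from a list of per-date seat counts:

```javascript
// Map an availability API response to calendar state:
// disable sold-out days, and show the exact seat count
// only when fewer than `lowSeatThreshold` seats remain.
// The response shape here is hypothetical.
function buildCalendarState(availability, lowSeatThreshold = 10) {
  const state = {};
  for (const { date, seatsLeft } of availability) {
    state[date] = {
      disabled: seatsLeft === 0,
      seatsLabel: seatsLeft > 0 && seatsLeft < lowSeatThreshold
        ? `${seatsLeft} seats left`
        : null,
    };
  }
  return state;
}

const state = buildCalendarState([
  { date: '2017-03-01', seatsLeft: 0 },   // sold out: disabled in the calendar
  { date: '2017-03-02', seatsLeft: 4 },   // scarce: show "4 seats left"
  { date: '2017-03-03', seatsLeft: 30 },  // plenty: no count shown
]);
```

The actual UI widget then only has to read this state per day, which keeps the API plumbing separate from the rendering.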
Having this direct access in real time saves the user the frustration of starting a booking process only to find out that what they want is sold out or not available on their preferred date.
We wanted our customers to get the feeling that our website is super-fast, and we wanted to crack the 1-second start rendering time. It’s not an easy task, especially when you have a very visual website loaded with beautiful images.
We challenged our lead front-end developer, Alpesh Prajapati from Gateway Technolabs, to cut requests by 40% and increase the rendering speed by 60%. We wanted to be four times faster than our main competitors. To begin with, he thought it was crazy to set such an ambitious goal, but once we started scrutinizing everything we had made so far, we kept finding ways to improve and move closer to our goals.
After pushing ourselves as far as we could, we hired Harry Roberts to audit performance for us. Harry provided a very deep analysis and pointed out a few places where we could do better. As a result, we modified our caching strategy and refactored some of our CSS, leading to even further improvements.
We switched over to HTTP/2 in order to load requests in parallel. However, because each request from the server to the client still takes time, we reduced the number of requests by merging and compressing CSS and JavaScript files, using sprites for icons, etc. The merging and compression bring some overhead to development during updates and releases, but they’re still worth it.
We also broke up the JavaScript and CSS files to load only what is necessary on pages; for example, the CSS and JavaScript required for a calendar isn’t needed on our blog. We also removed some libraries that we were using in the beginning because we found a way around them; for example, we dropped Font Awesome, as awesome as it is.
We use a lot of images, and we found that Photoshop and Sketch didn’t compress them aggressively enough. We started passing all images through ImageOptim (Mac only), and then we integrated our CMS with the ImageOptim web service, so all images that are uploaded go through proper compression before being stored and published on our CDN.
Although compression of the images reduced the sizes a lot, we knew that serving images in different sizes based on the screen size of the device would help further. This also enables us to adjust the cropping point and even to serve different images to different devices to make the best use of the space.
We use the picture element to serve the right-sized image at different breakpoints.
For each image, we create versions 480, 780, 1200 and 2560 pixels wide. We use a Sketch template to position and crop hero banners in the best way for common screen sizes, sometimes using different images if they fit better.
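The markup for this approach might look like the following sketch (file names are hypothetical; the widths match the versions described above):

```html
<picture>
  <!-- Largest screens get the 2560px crop; each breakpoint
       falls through to the next-smaller version. -->
  <source media="(min-width: 1200px)" srcset="hero-2560.jpg">
  <source media="(min-width: 780px)" srcset="hero-1200.jpg">
  <source media="(min-width: 480px)" srcset="hero-780.jpg">
  <img src="hero-480.jpg" alt="Glacial hike on Sólheimajökull">
</picture>
```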
We preload the hero banner and serve the best-fitting size inline in the header, for optimum size and design (although preload is not supported by all browsers).
We have customers from all over the world, and we want customers from Brazil, Australia and China to experience the same speed as customers from Europe. All common files except critical CSS and JavaScript files are stored on Amazon’s S3, served through the CloudFront content delivery network (CDN). For most files, we set the browser cache to be immutable, which basically means that the browser will never try to fetch that file again. In order to break that cache, we change the version numbers of the files. As a backup for browsers that don’t support immutable caching, we set an expiry date very far into the future.
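Illustrative response headers for a versioned static asset under this strategy (the values are examples, not GetLocal’s exact configuration):

```
Cache-Control: public, max-age=31536000, immutable
Expires: Fri, 31 Dec 2027 23:59:59 GMT
```

Browsers that understand `immutable` will never revalidate the file; the far-future `max-age` and `Expires` act as the fallback, and bumping the version number in the file name busts the cache.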
We use Redis for server-side caching for the majority of our pages, but we don’t cache everything because the inventory checks need to be live and are populated via AJAX.
When a page loads, we:
get the product details,
load the reviews,
call the real-time availability APIs and populate the calendar.
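The steps above can run in parallel, so the slowest request, rather than the sum of all of them, determines the wait. A minimal sketch (the endpoint paths are hypothetical, and the fetching function is injected so the page logic stays testable):

```javascript
// Fire the page's AJAX calls in parallel rather than one after
// another. `fetchJson` is any function that takes a URL and
// returns a promise of parsed JSON; paths are illustrative only.
async function loadProductPage(productId, fetchJson) {
  const [details, reviews, availability] = await Promise.all([
    fetchJson(`/api/products/${productId}`),
    fetchJson(`/api/products/${productId}/reviews`),
    fetchJson(`/api/products/${productId}/availability`),
  ]);
  return { details, reviews, availability };
}
```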
Speed has a lot to do with perception. Being able to start rendering a page within a second is important because it gives the impression that it is fast, even if the full rendering time is slower.
We do this by extracting just the part of our main CSS file needed to render the top of the page and inlining it in the head of our template, which allows the browser to start rendering before it has downloaded all of the CSS. Loading CSS files is render-blocking, so we load the full style sheets asynchronously using loadCSS.
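In the head, that combination looks roughly like this (the file name is hypothetical; the link element uses the preload pattern that loadCSS recommends, with a noscript fallback):

```html
<head>
  <style>
    /* Critical CSS: only the rules needed to paint the top of the page. */
  </style>

  <!-- Load the full style sheet without blocking the first render. -->
  <link rel="preload" href="main.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="main.css"></noscript>
</head>
```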
Optimizing is a bit like going on a diet: Once you reach your goal, you tend to start slacking off and forgetting to optimize. Speed is one of our main features, so we often check to make sure we are still in good health. We use Pingdom’s speed tester to automatically test various pages and send us weekly results; this alerts us if something goes wrong or we become forgetful.
We’ve used Lighthouse, WebPagetest and Google’s PageSpeed Insights to better understand what we can do to improve our website. We recommend finding a benchmark, setting a goal and optimizing until you reach that point.
Check out some of the performance articles on Smashing Magazine. There are also a lot of great videos and books out there on the subject. Don’t be shy about hiring experts to perform an audit or give you advice. This subject is really tricky and requires a lot of specialization.
Shortly after going live, we realized that one of our biggest challenges was to find tours with availability. In a relatively small market like Iceland’s, where tourism has been growing more than 20% per year, it’s become increasingly difficult to find available seats close to departure, especially on the more popular tours. We are not a meta-search engine; we integrate directly on top of inventory solutions and handle the full lifecycle, from search to booking. So, rather than handing off the customer to another sales platform with an affiliate key, we are able to follow the customer through the purchasing funnel, making sure that every step is optimized.
We needed to find a way to map the activities available in our market, so we built a search engine using relative data points, such as location, activity type, price, duration, departure time, vehicle type and sights. Once you start looking into the challenge and throwing booking cutoff times and availability into the mix, the size of the data set starts to grow quite fast.
A tour with three departures a day, 365 days a year, combined with multiple facets, requires over a thousand rows of complex data (3 × 365 = 1,095 departures alone). Multiply that by a thousand tours and you have over a million rows of data.
With such a large inventory of products on offer but with so many search facets in place (we use over a hundred data points for categorization purposes), we needed to find a solution that not only fulfills our need for speed but also is flexible and scalable enough to handle the load of processing up to a million records in just a few milliseconds.
We thought about building our own search engine and made a few prototypes, but then we came across Algolia and magic started to happen. However, the integration with Algolia was quite tricky, because we needed to design a data set that holds all of the information and facets within a limit of 10 KB per record (a limit set by Algolia). We also needed to build custom widgets on top of Algolia’s API because we wanted to use our own designs.
In our database, we store all of the product information that we aggregate from the inventory solutions we integrate with. This includes metadata and descriptions, photos, categorization for the search facets, as well as pricing and availability. This required us to build cron jobs that, at certain intervals throughout the day, send requests to the APIs of the inventory solutions; these requests check the updated availability for the next few weeks and take in the latest prices. Once an update has been made, we push the changes into Algolia via its API so that the search data set stays in sync and shows real-time availability and prices.
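A simplified sketch of that sync step follows. The record fields and the product shape are assumptions for illustration, and the index client is injected so the transformation can be tested without network access; the real records must stay under Algolia’s 10 KB-per-record limit.

```javascript
// Turn an internal product row into an Algolia search record.
// Field names are illustrative, not GetLocal's real schema.
function toSearchRecord(product) {
  return {
    objectID: product.id,          // Algolia's required unique key
    name: product.name,
    price: product.price,
    location: product.location,
    facets: product.facets,        // e.g. activity type, duration
    nextAvailable: product.nextAvailable,
  };
}

// Push updated products into the search index. `index` is an
// Algolia index client exposing saveObjects(records).
async function syncProducts(products, index) {
  const records = products.map(toSearchRecord);
  await index.saveObjects(records);
  return records.length;
}
```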
The integration of multiple inventory solutions and Algolia was no easy task, but the end result makes us really proud. The search engine delivers a response within 25 milliseconds — it’s almost too fast!
With the result set updating instantly, within 25 milliseconds, we realized that we needed to somehow let the user know that changes had taken place, because the results might appear below the fold.
To work around this problem, we tested using a “Submit” button to mimic a form’s submission, but the button doesn’t really do anything because once any filter is changed, the result set is updated. All the button does is act like an anchor that navigates you down to the result set, thus giving you a feeling that a change has been made in the result set below.
We also tested the yellow fade technique to highlight the change in the result set number, just to let people know that changes have taken place and that the number of results has changed.
Recently, we’ve been experimenting with a sliding panel that appears for two seconds if there is a change in the result set. On mobile, we fix the position of this preview so that people can immediately see the impact their changes have made on the results.
For us, content is a critical part of design — not only the labels and headings, but also the descriptions of the products we sell. We operate a marketplace, so our suppliers provide us with their own marketing material, including text and images. Because we work with 200 suppliers and have over 1000 tours to sell, you can imagine the vast difference from item to item.
This led us to rewrite the descriptions for the majority of products we offer. We used the opportunity to enhance the descriptions, making them shorter and more direct. We put ourselves in the shoes of a traveller, reading and trying to understand every product that we sell. What information is necessary, and what is just wasteful fluff? Are key elements missing that we could address, and can we highlight the details we find important?
This process is slow yet important; it is still in progress, but it enables us to add our personal tone and brand to all of the products we sell. We wanted our tone to be that of a friend or of a friendly local who gives you honest and direct answers, rather than glorifying tourist traps (as magnificent as they might be). This enabled us also to use slightly simpler language, because many of our customers are non-native English speakers. The Hemingway app has been a great tool to help us rephrase and form our sentences to make them easier to read and understand.
E-commerce is full of hurdles, feelings of insecurity, and complex forms and labels. Users often have to sign up or give away personal information that is not necessarily needed in order to complete their purchase. One small mistake and you’ve lost a customer forever. Our situation is even more difficult because most of our customers only shop once during their stay, so we don’t get much of an opportunity to build a relationship with them.
For this reason, we don’t require anyone to sign up in order to check out or to add a product to their list of favorites. We store their choices in a cookie but allow them to share a link that reveals the contents of their saved list.
We apply the same philosophy to the checkout, having no hidden fees and not asking for any information we don’t need. We explain why we need the information we collect; we auto-suggest country calling codes; and we make sure that the input fields offer the right keyboard format.
For us, there are two major hurdles in e-commerce. The first is getting visitors to give up their email address, and the second is getting their credit card information. We experimented with asking for their credit card first, before asking for their name, phone number and email address. The idea behind that was that if they are willing to cross the biggest hurdle of providing their credit card, then providing the rest wouldn’t be a problem. The experiment was successful, although we didn’t see a significant change in conversions. So, we reverted to the classic approach because it felt more natural.
Building a great mobile experience is really hard and time-consuming, but with enough attention to detail, you can succeed. For us, it’s been only eight months since going live, so we are nowhere near our goal, although we have made good progress.
We’ve managed to prove our case, and currently the majority of our sales (over 70%) take place on mobile and tablet.
Our goal was to go live with a minimum viable product (MVP) in six weeks, including design and development of the first version. In the first version, we had to take a few shortcuts, but we wanted to prove that we were able to convert. We made our first sale, a $2500 snowmobile trip, within two days of going live. Since then, thousands of travellers have used our service to book their dream vacation. Now we have started to focus on the next phase of our product, so stay tuned.
Being a designer at the moment is great because a wealth of modern design applications are available that let you easily bring your ideas to the screen: Sketch, Affinity Designer, Adobe XD (beta) and Figma, to name just a few (not to mention the classics, Photoshop and Illustrator). One app that is quite new, though — and perhaps a bit overlooked — is the free Gravit Designer app. Gravit gives you all of the tools needed to create functional and elegant screen designs. It can also be used to make icons, designs for print, presentations and much more.
With the recent release of version 3.1, it’s gotten even better: Now you can import your Sketch files into Gravit (similarly to Figma, which also supports the new Sketch file format), save designs locally or sync them to Gravit Cloud, and get started quickly with design templates. Gravit is free, works on all major OS platforms and is available as both a web and desktop (standalone) app.
Every new tool can feel daunting at the beginning. So, in this tutorial I will walk you through the creation of a neat weather app (designed by Claudia Driemeyer). You can download the design file to inspect it or use it as the base of your own work. (Please remember that you need to select “View” → “View Mode” → “Output View,” so that the overflowing parts of the design are clipped when you initially open the design in Gravit.)
The first thing you will see when you open Gravit Designer is the start screen, with a broad selection of presets (see figure 2). For the purpose of this tutorial, let’s start with the “Nexus 5” template, which has the common size of 360 × 640 pixels. (Click and select the template from the dropdown.) If you don’t want to limit your imagination, you can also leave the “width” and “height” fields at the top empty, which will create an infinite canvas; going for a fixed-dimension canvas later will be easy: Just enter width and height values.
This initial screen also lets you open files (either from Gravit Cloud or from your computer) and start from a selection of templates.
After having selected the “Nexus 5” preset, you will be presented with a new “page” (known as an artboard in other design applications) at the center. The page will hold all of the elements of the proposed app. Because it’s an app about weather and a potential user should see the current conditions at a glance, let’s start with a full-sized background image. I’ve prepared a rainy image, which you can either save to your computer and drag to the canvas from there, or copy and paste directly from the browser.
This image will create the first entry in the Layers panel (on the left), and it will also be placed in the center area of Gravit Designer, on the canvas (figure 3). In case the image hasn’t been centered to the page, right-click and select “Align” → “Align Center,” followed by “Align Middle.” Alternatively, you can drag the image until the pink smart guides show you this centered position.
Now, switch to the Pointer tool (press V), hold Shift and Alt both to constrain the ratio of the image and to resize it from the center, and drag out the bottom-right handle. It should have a width of about 415 pixels, visible from “Size” → “W” in the Inspector, the right-hand panel of Gravit Designer (figure 3). The image will overflow the page now — to clip it, select “View” → “View Mode” → “Output view.” Another handy option here is “Outline View,” which shows the outlines of the layers only.
Note: Usually, enlarging bitmap images is not a good idea, but as we will be blurring the image later on, this is not so important.
Now is a good time to save our file. The best way is to save it to Gravit Cloud with the related option in “File” in the menu bar, which makes it available everywhere and on all devices. Want to start the design in the Mac app and keep working in the browser or on Windows? No problem. You just need to create an account, which is only a matter of seconds (you can read more about Gravit Cloud in my overview on Medium). For the file name, use something like weather-app.
The other option, of course, is to save the file to your computer with “Save to file…” (from the desktop app version) or “Download File…” (from the web app version of Gravit).
Let’s continue with the status bar of the app, whose basic shape is a rectangle. Press R to switch to the Rectangle tool, start to draw in the top-left corner, go all the way to the right, and make it 24 pixels high.
Note: All of these tools (such as the Pointer tool mentioned a little earlier) can also be accessed from the toolbar at the top of Gravit Designer (figure 3).
In case you didn’t get the right dimensions for the rectangle, you can also enter them in the Inspector (the width should be “360.”) The reason we chose exactly 24 pixels is that we will be using an 8-point grid, which prevents an arbitrary placement and sizing of elements. You can display the grid from “View” → “Show Grid” or by pressing Alt + Command + G (or, on Windows and Linux, Alt + Control + G). The actual grid size can be set in the Inspector (“Grid” → “Width and Height”) once you click anywhere outside the page — use “8” for both fields.
Select the rectangle again with a click, and change the color to white. You can do so by clicking on the color field in “Fills” in the Inspector (figure 4, callout 1), which will bring up the color dialog, with many different options. For now, either drag the color picker to the top-left (figure 4, callout 2) or select white from the swatches in the bottom-right (figure 4, callout 3). To let the background image shine through, set the “Opacity” to 50%. Instead of doing that in the color dialog, close it, and enter this value just above the “Fills” area in the Inspector.
Note: The opacity affects the global transparency of the layer, while the “A” field (alpha) in the color dialog sets the opacity of only the current fill. This is especially important because you can add multiple fills with the “+” icon on the right. To delete a fill again, select it with a click and press the trash bin (delete) icon.
Back to the status bar: The next element will be the time, a white text layer with a size of 12 pixels. To add it, switch to the Text tool with T, click in the center of the rectangle, and write “14:00” for example (figure 5). When done, press Esc, set the “Size” to “12” and the font face to “Open Sans” with a “Bold” weight in the Inspector.
Note: Gravit Designer comes with a wide selection of fonts, but you can also bring in your own fonts with “File” → “Import” → “Add fonts…” in the web version; in the desktop app, all fonts on your system can be used out of the box.
Select the time along with the rectangle (hold Shift and click on both), and center them to each other. You can either right-click and select “Align” → “Align Center” and “Align Middle” or use the alignment icons in the top-right — in particular, the fourth and seventh icons from the left. Another option would be to drag the time until the pink smart guides show you this centered position. At the moment, the text layer is barely visible, so we will treat the background image before continuing with the rest of the status bar’s elements.
First, create another full-width rectangle, starting from the top-left, with a height of 376 pixels. Enter the color dialog and, instead of a solid fill, select the second option at the top, a linear gradient (figure 6). It will be created horizontally by default, going from the left to the right. What we want, however, is a vertical gradient, so click on the round arrow pointing to the right twice to rotate it clockwise. The starting color stop of the gradient should still be selected — change its color to #070031 in the “Hex” field. For the other gradient stop, you can use the same color but with 0% alpha.
To blend it in with the image, click anywhere on the rectangle to close the color dialog, and select “Soft Light” from “Blending” right above the “Fills” area in the Inspector.
Note: As with the opacity of a layer, there is a “general” blending, which affects the entire element and how it blends with objects behind it, and a “separate” blending for each fill, which just affects how the various fills interact with each other.
For an even stronger effect, clone this layer with Shift + Command + D (on Windows and Linux, Shift + Control + D). Rename the first rectangle to “Overlay 1” with a double-click in the Layers panel, and the second to “Overlay 2,” select both after holding Shift, and drag them below the other white layer (the background of the status bar). Finally, lock them with a click on the lock symbol so that you don’t interfere with them by accident later on.
Now to the background image itself. Gravit Designer enables you to directly tweak the colors of a bitmap without needing to fall back on another application. Select the image, click on “Color Adjust” in the Inspector, and change the “Brightness” to “-17%,” the “Contrast” to “27%” and the “Saturation” to “10%” (figure 7). Looks way better, and the time already comes to the fore. Besides this color correction, you have many other filters with which to modify bitmaps. Click on “More” at the bottom of the Inspector, which will show you a comprehensive list of effects to choose from. Be sure also to click on the dropdown at the top.
Back to the background image, where we need to fix one last detail: applying a “Blur” with a “Radius” of “17” (figure 7). To make the white layer of the status bar better blend in with the background image, you can also set its “Blending” to “Soft Light” in the Inspector.
Important: Keep in mind that effects such as blur can slow down Gravit Designer considerably. If you experience this, switch off the effects temporarily with “View → Show Effects” (on a Mac, Command + E, and on Windows and Linux, Control + E).
Before we continue with the other elements of the status bar, select its existing parts (the time text layer and background rectangle), and press Command + G (on Windows and Linux, Control + G) to group them, and name this group “Status bar” by double-clicking. This lets you select the entire group, but you can still pick individual elements by holding Command (on Windows and Linux, Control) and clicking on a layer. The same behavior will be enabled with “Click-through this element” in the Inspector.
A handy alternative to using a group is to use a so-called “layer.” (Do not confuse this with a general layer, which is an individual element on the canvas and in the Layers panel.) Add such a layer with a click on the “New Layer” icon (figure 8, callout 1) in the Layers panel. Just like with a group, such a layer (group) also allows you to combine elements but has the following advantages:
You can define a color that is shown for all related elements on the canvas (click on the colored bar on the right side of a layer — see figure 8, callout 2).
You don’t need to hold Command or Control to select an individual layer within the layer (group) on the canvas.
Elements are automatically added to the layer (group) after their creation.
You have a separate “Layer” tool (press M, also available from the “Select” tools in the toolbar) that will let you select the entire Layer group.
If you like, you can add a layer and drag the elements from the group in there, then delete the now-empty “Status bar” group with Backspace (or, on Windows and Linux, Delete).
The next element for the status bar is the icon for the receive signal (see figure 9 for the process). We’ve drawn some inspiration from iOS’ icon here with its circles — and although that’s quite a contrast from the typical Android design, it’s the perfect opportunity to show the smart duplicate feature of Gravit Designer (more on that in a minute).
Press E to switch to the Ellipse tool, hold Shift to make it a perfect circle, and add a first circle with a diameter of 6 on the left. Switch off the grid with Alt + Command + G (on Windows and Linux, Alt + Control + G). Give it a white fill.
If you’re having trouble creating such a small element, you can also add it at a bigger size and then bring it down to 6 × 6 using the Inspector. Just be sure that “Keep Ratio” (the small icon between the width and height fields) is enabled. Alternatively, press Z to switch to the Zoom tool and drag a selection in the top-left of the page to enlarge this area.
Getting to the other four circles is easy. First, press Command + D (on Windows and Linux, Control + D) to duplicate the element. Then drag it up, so that it is at the same height as the other circle and has a distance of 1 pixel (the pink smart guides help here). After that, press Command + D again to create three more instances at the same distance. This “smart duplicate” feature works for all kinds of transformations, such as rotations, and always repeats all of the steps between the first and second duplication.
The difference between duplicate (Mac: Command + D, Windows and Linux: Control + D) and clone (Mac: Shift + Command + D, Windows and Linux: Shift + Control + D) is that the former offsets the new layer by 10 pixels in each direction (X, Y) and allows for smart duplication. The latter doesn’t.
Let’s suppose the signal isn’t the best over here. So, select the last two circles and apply a border instead of a fill. Click on the eye icon on the right side of the fill to disable it, and instead add a border with the “+” icon in the “Borders” section below. Also, give it a white color.
This default centered border isn’t suitable, however, so change it to an inside type by clicking on the “Advanced stroke settings” on the right. There, change the “Position” to the first icon. For the border width, pick “0.5.”
There are several ways to change a value in an input field in Gravit:
Click inside an input field, type the desired value, and press Enter.
Use the up or down arrow key on the keyboard to increase or decrease the value by 1.
Hold Alt and use the arrow keys to go in increments of 0.1.
Hold Shift and use the arrow keys to go in increments of 10.
For our border, the easiest way is to hold Alt and press the down arrow key five times on the keyboard. (With the rise of high-resolution mobile devices, it’s probably OK to use a half-pixel value here, because it will still appear sharp when exported.)
Finally, select all circles and perform the following steps:
Create a group with Command + G (on Windows and Linux, Control + G).
Name it “Receive signal” (double-click in the Layers panel to rename).
Drag it into the “Status bar” group.
Move it so that it is 2 grid units (16 pixels) away from the left edge of the page. (You can show the grid again with Alt + Command + G, or on Windows and Linux, Alt + Control + G.) Pressing Alt to display the smart guides will assist you here.
In the vertical direction, center it to the white background (and rename this one to “BG”).
The next element we will tackle is the carrier name (figure 10). Pan a bit to the right (hold the space bar and drag), select the time text layer, hold Alt and drag it to the left to create a duplicate. Make sure that it’s 6 pixels away from the receive signal, change the text to “Gravit,” and the weight to Regular. Do the same for the battery level indicator, but drag it to the right, with “75%” as the content (figure 10). Remember that you can press Esc to leave text-editing mode. Of course, we could take a premade icon for the battery symbol, but where is the fun in that? Let’s give it a try (figure 11).
The first element is a simple rectangle (24 × 10) with a white inside border of 0.5 and rounded corners of “1.” (You can add the rounded corners in “Corners” in the “Appearance” section of the Inspector.)
Clone this rectangle with Shift + Command + D (on Windows and Linux, Shift + Control + D), move it to the left and down by 1 pixel with the arrow keys, and change the size so that it’s 2 pixels smaller in each dimension.
Please note that Gravit Designer supports mathematical operations in input fields, so you can use “+5,” for example, to add 5 to the current value, or “*3” to multiply by 3. In our case, enter “24-2” for the “Width” in the Inspector, and “10-2” for the “Height,” and press “Enter” to get to the desired dimensions.
This second shape should have a white fill, without a border. Also clone it, move it to the right so that its left edge touches the right edge of the bordered shape, and change the size to 1.5 × 4. Center it vertically to the other shapes.
The third shape has rounded corners on all sides now, but we only want them on the right side, so we need to enter the “Advanced Settings” on the right. Uncheck “Uniform Corners” so that you can enter “0” for the top-left and bottom-left corners, and 0.5 for their top-right and bottom-right counterparts. Select all three rectangles, combine them into a “Battery” group and drag it into the “Status bar.” Make sure that it is 2 grid units (16 pixels) away from the right edge of the page, vertically centered to the status bar, and that the battery level has a gap of 6 pixels to it.
There’s one last thing to manage for the status bar: the Wi-Fi signal indicator (figure 12). We can approach it in two ways. First, a white rectangle. Create one next to the “Gravit” text layer, with a size of 9 × 9 (hold Shift to create a square). Turn it by 45 degrees, either in the Inspector (“Angle” → “R”) or by grabbing the top-most handle on the canvas and holding Shift to constrain the movement to 15-degree steps.
Now we need to convert it to a path (Mac: Shift + Command + P, Windows and Linux: Shift + Control + P, or right-click → “Convert to Path”), so that we are able to manipulate its individual points. Change to the Subselect tool (press D), select the top-most point and click on the second icon at “Joint” in the Inspector. This will convert the former “Straight” point, with no curve, to an evenly rounded “Mirror” point. Increase the curve a bit by dragging out either of the handles to the sides of the point while holding Shift (to keep the movement horizontal), and move it 3 pixels down with the keyboard. Goal accomplished!
Note: The difference between the Pointer tool (V) and the Subselect tool (D) is that the former selects entire shapes or layers, while the latter is mainly used to manipulate an individual point or to show more manipulation options.
The second approach for the Wi-Fi icon would be with the Bezigon tool (figure 12). Keep the icon we just created as a reference, but tone it down to an opacity of 40% in the Inspector. Now, enable the Bezigon tool from the “Path” area in the toolbar (or press B) and make a first click at the bottom-most point of the other Wi-Fi symbol. Make another click at the left-most point, then continue to the top-most point, but hold Alt before you click.
This move is especially vital with the Bezigon tool because it creates a curve that automatically adapts to the surrounding points. Continue at the right-most point, but release Alt again, and finish the shape with a click on the first point at the bottom. You can see how a point that was created with an Alt click of the Bezigon tool adapts automatically if you select the top-most point with the Subselect tool (D) and drag it around.
Keep the version of the icon you like, center it vertically to the status bar, and move it away 6 pixels from the “Gravit” text layer. Finally, rename it to “WiFi” before you drag it into the “Status bar” group. With this last step, we’ve finished the status bar and can turn to the contents of the app.
The first element here is the current date (“Friday, Jun 15”), a white text layer with a size of 18 pixels and a “Regular” weight (figure 13). Zoom out to 100% again with Command + 0 (on Windows and Linux, Control + 0), so that the placement becomes easier. Center the text layer to the page and place it on a grid line, about 36 pixels away from the status bar. Hold Alt and drag it to the bottom to create another text layer for the location. Double-click so that you can change it to “Berlin”; for the text size, choose “38.” It should also sit on a grid line right below the date and be centered to the page.
The final text layer we need now is for the temperature (“21 °C”). Proceed as before, but for the font size, you can simply add “*2” to the current value of the input field and press Enter, which will double the size. It should be about 28 pixels away from the other layer.
This brings us to the end of the first part of the tutorial. I hope you have enjoyed it so far and that it has given you valuable insight into Gravit Designer, its features and what it can do.
In the second part, we will look at some more sophisticated techniques, create the weather icons for a version of the app with sunny weather, and learn how to export the final design.
If you have questions regarding this tutorial, please leave a comment below and I will gladly help you. And if you have ideas on how Gravit Designer could be further improved, don’t hesitate to reach out to the Gravit team on Twitter or on Facebook, or post your questions and ideas in the Gravit discussion board. Your feedback would be more than welcome!
If great design can imbue customers with trust, why are designers so removed from product management and the larger business strategy? As a VP of UX with an MBA, I strive to bring both worlds together to create a new model in which user experience and design align with overall business strategy and company vision to drive increased revenue and customer engagement.
As the Internet became commercially viable, “first to market” generally prevailed as a dominant corporate strategy. However, as technology has become more open and reusable, product differentiation is now a proven strategic blueprint. This tectonic shift has been a boon for the design discipline. Consequently, design has gotten the proverbial “seat at the table” and is now expected to be a driving, strategic function.
For designers, this is a celebrated, exciting advancement, yet it has also exposed a severe skills gap. Specifically, design leaders are being thrust into overarching, cross-functional executive positions, with little to no formal training in business or strategy skills.
Consider the context: Most BFA and MFA programs don’t cover traditional business skills, and companies certainly aren’t investing in cross-functional training for creative professionals. The resulting condition is tragic: Design teams and leaders are not set up for success and, subsequently, are unable to deliver, thus relegating the collective back to a service function.
I’ve experienced this first-hand. I unexpectedly found myself in boardrooms and strategy sessions armed with nothing more than Photoshop and front-end programming competencies. I knew the value of design work but simply couldn’t articulate it in a convincing and relatable form to a corporate audience.
Out of frustration, I formulated a hypothesis: If I were educated in core business skills, I could shift myself (and design teams) from a service to a highly strategic functional asset, driving larger organizational direction.
Very soon, I found a workable solution: a part-time MBA program where I would study and learn actionable skills of business frameworks and best practices of management. So, for just over three years, I was a senior user experience designer by day and an MBA student by night.
Though I studied many illuminating topics (economics, financial accounting, business law, supply-chain management, corporate ethics) as they relate to design, three particular areas of study were highly impactful: statistical analysis, competitive strategy and organizational management. These subjects were most impactful because, to be candid, I had never studied or considered them in depth professionally. Yet, the more I learned, the greater the impact these courses had on my professional development and approach to design.
When it came to statistical analysis, as a creative professional, I had an adverse physical reaction to the very idea of spending four months poring over Excel spreadsheets. In fact, I had done just about everything possible to avoid “math” during my academic and professional career. I was guilty, like so many, of being stuck in the construct of a left-brain and right-brain duality.
It was surprising to me that my first weeks in statistics had very little to do with mathematical calculations, but rather with theory. I learned that statistics actually rests upon a singular premise: ratios. If you boil down ratios to their core, it’s basic: X ÷ Y = Z. Simply, you take two forces, stack them against each other, and you achieve an output that is influenced by both forces.
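That premise can be made concrete with a small example; the numbers here are hypothetical, but the ratio is one every designer has met: a conversion rate.

```python
def ratio(x: float, y: float) -> float:
    """The basic statistical building block: X / Y = Z."""
    return x / y

# Hypothetical example: a conversion rate is just a ratio of orders to visits.
orders = 30
visits = 1_200
conversion_rate = ratio(orders, visits)
print(conversion_rate)  # 0.025, i.e. 2.5%
```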
Surprisingly, as a designer, the concepts and application of ratios were very familiar to me. I had comfortably worked with ratios in everyday design tasks (grids, padding, aspect ratios) throughout my career.
As I progressed through the course, week after week, I waded through increasingly complex financial models that leverage statistics. Over time, this repetition built up a muscle, and without even thinking about it, I found myself toying with basic statistical models to help guide my design decisions.
Normally, as a designer, I’d go through my design process (competitive auditing, high-level concepts, interactive wireframes, visual design, usability testing), and leave it to the business unit to figure out the impact on our company and users. Armed with this new statistical skill set, I was able to incorporate business thinking in my design process as an early step.
Let’s consider a simple redesign of a home page. For many designers, this can be very challenging because there are so many competing priorities: search functionality, email registration, illustration of the brand’s value, testimonials, etc.
Prior to using statistical analysis, I’d just try to find a design that represented a compromise for all use cases and moved on. Now, I’ve been able to step back and use statistics to help prioritize my layout. Rather than “just” designing a variant or two, I’m able to provide a thoughtful overview of the revenue impact correlated to my design decisions.
Let’s take a hypothetical example. In a first pass of a design, the layout would prioritize email registration as the primary action for a user. Consequently, this would have the immediate impact of fewer orders. Using standard Google Analytics, I was able to show the potential revenue ripple effects of that approach.
Conversely, for a second pass at the design, my layout would feature a search bar. Of course, this would lead to a decrease in email registrations but would also lift the number of immediate orders, as well as conversions (users would convert better because they found more relevant results).
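The two hypothetical passes above can be sketched as a back-of-the-envelope model. Every figure here is invented purely for illustration; the point is the shape of the comparison, not the numbers.

```javascript
// Hypothetical home-page funnel model. All inputs are made-up numbers.
function monthlyRevenue({ visitors, orderRate, avgOrderValue, signupRate, revenuePerSignup }) {
  const directRevenue = visitors * orderRate * avgOrderValue; // immediate orders
  const emailRevenue = visitors * signupRate * revenuePerSignup; // downstream email revenue
  return directRevenue + emailRevenue;
}

// First pass: layout prioritizes email registration (more signups, fewer orders)
const variantA = monthlyRevenue({
  visitors: 100000, orderRate: 0.01, avgOrderValue: 50,
  signupRate: 0.05, revenuePerSignup: 2
});

// Second pass: layout features the search bar (fewer signups, more orders)
const variantB = monthlyRevenue({
  visitors: 100000, orderRate: 0.015, avgOrderValue: 50,
  signupRate: 0.02, revenuePerSignup: 2
});

console.log(variantA, variantB); // compare projected revenue of each layout
```

With real analytics data in place of the invented inputs, this kind of comparison is what lets a designer attach a revenue estimate to each layout decision.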
This is a basic application of a statistical model, but as it scales, it becomes even more powerful for larger design initiatives (launch of an app, new product offering, new checkout page, e-commerce feature, brand campaign). What’s more, the model becomes more specific and predictive the more it’s used.
As I progressed, I was able to genuinely factor in cost reduction as a critical variable in my models. Consider a chatbot that can easily answer questions for a customer; that feature cuts down on call-center operations, thus reducing a cost driver for the business. Or consider an engaging feed, constantly populated by fresh content; this boosts organic repeat visits, reducing pricey SEO spend. Of course, raw revenue generation will always be a key driver for business, but as designers, our work also has a profound impact on cost reduction for the company. This type of analysis gave financial credibility to design-led projects that would previously have been considered "non-revenue generating."
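The chatbot case above can be sketched the same way. Again, every number is invented; the model simply shows how a design feature translates into a cost-reduction estimate.

```javascript
// Hypothetical cost-reduction model for a support chatbot (all figures invented).
function annualSupportSavings({ monthlyCalls, deflectionRate, costPerCall }) {
  // Calls the chatbot answers never reach the call center.
  const deflectedCallsPerMonth = monthlyCalls * deflectionRate;
  return deflectedCallsPerMonth * costPerCall * 12;
}

const savings = annualSupportSavings({
  monthlyCalls: 20000,  // support calls received per month
  deflectionRate: 0.3,  // share of calls the chatbot resolves
  costPerCall: 5        // fully loaded cost of a call-center interaction
});

console.log(savings); // projected annual savings attributable to the feature
```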
In all, once I was able to build this sort of analysis into my design practice, more and more of my initiatives got investment. It became formulaic. I would not only present a jaw-dropping, visually striking design prototype that solved actual customer problems, but would also offer precise measurement and financial gains (or savings) to the company.
Even armed with statistical analysis, I still struggled with the ask to be more “strategic.” But, to be honest, I had no idea what that really meant. How does one actually be strategic? Is it planning? Is it innovation?
In my study of competitive strategies, I read extensively about strategic frameworks to help guide decision-making. In particular, we spent a great deal of time understanding the work of Michael Porter, who is basically the Jony Ive of business thinking.
Porter constructed a simple yet far-reaching structure to guide corporate decision-making (The Economist has a good summary). It all rests on the premise of achieving an enduring advantage over your competitors (Warren Buffett calls it the moat).
To summarize, Porter defines two ways in which an organization can achieve competitive advantage over its rivals: cost advantage and differentiation advantage. In short, to stand apart, you can either offer something genuinely different or win on price. (Mind Tools has more on this.)
Today at Shutterstock, we certainly offer exceptional content at competitive prices, but many in the industry offer similar products. Using Porter's model, we knew that differentiation would be a stronger strategic approach than cost leadership. Aligned with traditional user experience and product strategy, we identified a pain point for an emerging customer segment: customers want to be able to quickly edit and design off our images without leaving the website. From there, our integrated editor tool was born. As it moved from beta to production, we emerged with a highly differentiated experience for a new customer segment.
Porter’s model is powerful for larger-scale strategic initiatives, but is equally important for feature development. How often have we seen decision-makers deploy a strategy to merely copy existing functionality (see Instagram and Snapchat)? To this, Porter would say, “… bad strategy simply ignores competitors; average companies copy; and winning companies lead their competitors.”
From a design standpoint, articulating the flawed logic of replication becomes simple and powerful. A series of designs benchmarking our proposed changes against company B's existing advantage showed the futility of the effort: differentiation would never be achieved. In contrast, a differentiated product (alongside statistical modeling) can clearly illustrate the optimal path, one that allows for both differentiation and long-term revenue gain.
Designers are natural optimists. Where others see disorganization, designers see the prospect of beauty. Where others wish to cut corners, designers take pride in completeness and quality. Moreover, designers are problem solvers, collaborators and, yes, a bit eccentric, too! We are the people who are wanted — who are needed — to be constant, positive creative, cultural and strategic forces within companies.
However, I would argue that while we can be epicenters of culture and innovation, our discipline hasn’t devoted much attention to successful management of our organizations. Far too often, we manage work, not the individual or the collective.
For me, a breakthrough in thinking about how to structure design departments was found in the work of W. Edwards Deming, often referred to as the father of quality.
Deming, an academic who was brought to Japan in the 1950s following World War II, is credited with being a leading figure in post-war Japan's economic rise. Deming based his entire business philosophy on an ideal of cooperation and complete employee fulfillment. Much of his experience and life's work was codified in his brilliant 14 points.
To me, these 14 points are an exact blueprint for how to build and scale a thriving design organization. I encourage every reader to share these with their teams and to ask for a grade on each point. I promise that you’ll find immediate areas of opportunity. If you commit to the changes, you will have all the guidance needed to build a connected, high-morale and thriving organization.
I would like to leave you with Deming’s final point, point 14: “Put everybody in the company to work to accomplish the transformation. The transformation is everybody’s job.”
To me, this point, and my entire MBA experience, reaffirmed that the foundation of design thinking, customer empathy and long-term vision is, in fact, the foundation of an enduring corporate strategy. Far too often in today's economy, strategy is subject to the whims of short-term gains, which, over time, come back to haunt company and consumer alike. Or, conversely, as the adage goes: solve your customers' problems, and they'll solve your business problems.
I’ll end with this: Invest in yourself. Invest in learning new skills. Invest in your design team. When you do, you’ll see great returns for yourself, the team, the customer and the business.
UX design hasn’t been the same since Sketch arrived on the scene. The app has delivered a robust design platform with a refreshing, simple user interface. A good product on its own, it achieved critical success by being extended with community plugins.
The open nature of the Sketch plugin system means that anyone can identify a need, write a plugin and share it with the community. But a major barrier stops those eager to take part: designers and front-end developers must first learn how to write a plugin, and Objective-C is difficult to learn!
What if users could write plugins using technologies they are already familiar with? This tutorial covers the usage of WebView technology to create a plugin using HTML, CSS and JavaScript.
This introduction to Sketch development teaches you by creating a sample plugin. This plugin is dubbed “Symbol Export.” It’s a simple tool to export document symbols to image files. A mockup of what you’ll create is below.
For the purpose of learning, the end product won’t be aesthetically pleasing. The demo will be simplistic, but you are encouraged to spice it up with your own HTML, CSS and JavaScript.
Be warned: While the tutorial leans heavily on front-end technologies to reduce the learning barrier, some Objective-C and Cocoa concepts must still be learned.
Sketch plugins are typically created using Objective-C and CocoaScript. These technologies can be confusing, and often downright frustrating, to learn for beginners. So, although developers must still know the basics to create a plugin, the difficulty will be mitigated by using WebViews.
Simply put, a WebView is a web browser. With WebView, instead of learning how to create layouts in Objective-C, you can use JavaScript, HTML and CSS. Much like a real browser, developers also get access to the powerful developer console to troubleshoot and prototype with.
You might already be familiar with WebViews in hybrid app development! The concept is the same: Barriers are removed through the use of technologies that developers are already familiar with. We will lean heavily on the WebView to meet all interface requirements.
Think of Sketch and the WebView as two separate entities. Sketch provides data to the interface, monitors for events, and handles any system-level logic such as reading and writing files. The WebView is responsible for rendering the interface and communicating events back to Sketch.
For this tutorial, we will create a Sketch plugin that lets designers export all document symbols as images. When a user presses a shortcut, they will see an interface showing all symbols to export. Clicking on an “Export” button will perform the export and close the plugin’s interface.
For the purpose of education, the result won’t be as stylish as the screenshot above. The functionality will be the same, however.
Sketch plugins are packaged as macOS bundles. Similar to a ZIP archive, a bundle appears as a single file but is actually made up of many. Let's create our plugin bundle now.
# Go to your Sketch plugins folder.
# Replace {{User}} below with your Mac user name
cd "/Users/{{User}}/Library/Application Support/com.bohemiancoding.sketch3/Plugins/"

# Create the folder with the .sketchplugin suffix
mkdir SymbolExport.sketchplugin
You may now open the folder you just created with your favorite text editor. You may also consider adding a shortcut to this folder on your desktop for quick access.
Recreate the following file tree:
SymbolExport.sketchplugin/
└── Contents/               Contains all plugin files.
    ├── Resources/          May contain images, fonts and other assets.
    └── Sketch/             Contains the plugin's code.
        ├── manifest.json   Metadata for the plugin, similar to package.json for npm.
        └── app.cocoascript The file to run when the plugin is activated.
At a bare minimum, every Sketch plugin requires a manifest.json file and a file to call when the plugin is initiated. Beyond these requirements, any directory or file structure may be used.
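The manifest listing itself isn't reproduced in this text. A minimal sketch of what Contents/Sketch/manifest.json might look like is below; the identifier, shortcut and display names are placeholder assumptions you should replace with your own.

```json
{
  "name": "Symbol Export",
  "identifier": "com.example.symbolexport",
  "version": "1.0",
  "description": "Exports all document symbols to image files.",
  "author": "Your Name",
  "authorEmail": "you@example.com",
  "commands": [
    {
      "script": "app.cocoascript",
      "handler": "onRun",
      "shortcut": "cmd shift e",
      "name": "Export All Symbols",
      "identifier": "symbolexport.run"
    }
  ]
}
```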
Fill out the metadata above to reflect your name and email address. Pay special attention to the commands array: this is where we tell Sketch which file and function to run and which shortcut to respond to.
Next, create app.cocoascript with the contents below:
/**
 * onRun
 *
 * Bootstraps the Sketch plugin when the user calls the plugin.
 * This method runs every time the plugin shortcut or menu item is fired.
 *
 * @param {object} context A generic object Sketch provides with information on the currently running Sketch instance.
 */
function onRun(context) {
    context.document.displayMessage('Hello there!');
}
Time for testing! Create a new Sketch project to see our hard work in action. You can initiate your plugin with the shortcut or by looking under the “Plugins” menu option. If all went well, you should see the message “Hello there!” at the bottom of your Sketch window each time the plugin is called.
A Primer On Sketch, Objective-C And CocoaScript
To be effective in creating a Sketch plugin, a developer must know the system APIs available for accessing user data. First, a short history lesson.
Objective-C was released in the 1980s as a general-purpose programming language. It’s since been used to create full-fledged applications for Apple products. Being an older language, Objective-C isn’t as intuitive as more modern ones.
Cocoa is a framework created to make working with Objective-C easier. Think of it as a library of APIs that streamline actions in Objective-C. You can often recognize Cocoa code by the NS naming convention: NSString, NSObject, NSWindow and others. The prefix comes from Cocoa's origins in NeXTSTEP.
CocoaScript is a bridge between JavaScript and Cocoa. It allows for scripting in Cocoa without needing to know as much of the language. CocoaScript simplifies the process of Objective-C development but is not a complete abstraction: Developers must still be aware of Cocoa APIs to create a Sketch plugin.
You can see an example of CocoaScript below. Don’t worry: If this is slightly confusing, that’s all right. Picking up small bits of Cocoa and CocoaScript will happen naturally as we create the Sketch plugin.
/**
 * Example of Objective-C/Cocoa and CocoaScript in action
 *
 * Saves a JavaScript object as a JSON file
 */

// A JavaScript object turned into a JSON string
var userConfigToSave = JSON.stringify({ name: 'John Doe' });

// Convert the JavaScript string to a Cocoa string (note the NS prefix)
var userConfigNSString = [NSString stringWithFormat:"%@", userConfigToSave];

// The path to save the file to
var savePath = '/path/to/userConfig.json';

// A Cocoa system call to save the file
[userConfigNSString writeToFile:savePath atomically:true encoding:NSUTF8StringEncoding error:nil];
Our main plugin class will eventually have many JavaScript methods. Understanding the flow of the application will be easier if the logic is moved to a separate file. Change the contents of app.cocoascript to the contents below:
@import 'app/SketchPlugin.cocoascript';

/**
 * @type {SketchPlugin} The Sketch plugin app class.
 */
var plugin = new SketchPlugin();

/**
 * onRun
 *
 * Bootstraps the Sketch plugin when the user calls the plugin.
 * This method runs every time the plugin shortcut or menu item is fired.
 *
 * @param {object} context A generic object Sketch provides with information on the currently running Sketch instance.
 */
function onRun(context) {
    plugin.init(context);
}
For the rest of development, our app.cocoascript file will stay the same. Its only responsibility is to load the SketchPlugin class, initialize it and then run it when the user calls the plugin.
Note the context parameter. This is a plain JavaScript object containing information on the currently running Sketch instance. It’s used to get file path information, user settings and other meta data.
On line 1, an import call is made to an uncreated file. Now, create the app folder and the SketchPlugin.cocoascript file inside it. Your directory structure should match what is below.
Add the code below to the SketchPlugin class created. The code represents a skeleton class; it doesn’t do anything useful quite yet.
/**
 * SketchPlugin Class
 *
 * Manages CocoaScript code for our plugin.
 *
 * @constructor
 */
function SketchPlugin() {
    // The Sketch context
    this.context = {};
}

/**
 * Init
 *
 * Sets the current app and plugin context, then renders the plugin.
 *
 * @param {object} context An object provided by Sketch with information on the currently running app and plugin.
 * @returns {SketchPlugin}
 */
SketchPlugin.prototype.init = function(context) {
    this.context = context;

    return this;
}
Rendering the WebView is a simple three-step process. Create a containing window, create a frame for the WebView and, finally, have the WebView load an HTML file.
A window in Cocoa is an NSWindow object. The window will be used as a container for the WebView and its frame. See the annotated code below for a new method to add to the SketchPlugin class.
SketchPlugin.prototype.init = function(context) {
    this.context = context;

    // Create a window
    this.createWindow();

    // Blastoff! Run the plugin.
    [NSApp run];

    return this;
}

/**
 * Create Window
 *
 * Creates an [NSWindow] object to hold a WebView in.
 */
SketchPlugin.prototype.createWindow = function() {
    this.window = [[[NSWindow alloc]
        initWithContentRect:NSMakeRect(0, 0, 800, 800)
        styleMask:NSTitledWindowMask | NSClosableWindowMask
        backing:NSBackingStoreBuffered
        defer:false
    ] autorelease];

    this.window.center();
    this.window.makeKeyAndOrderFront_(this.window);

    return this;
};
The new createWindow method introduces the NSWindow Cocoa object. With it, we are able to create a window with many different options. Some of those are worth explaining because the methods aren’t entirely obvious.
[[[NSWindow alloc] creates an NSWindow object. An important call here is alloc, which allocates memory for the window. Without it, we couldn't interact with the window.
JavaScript developers might be unfamiliar with memory management. In JavaScript, a “garbage collector” automatically releases objects in memory that are not in use. Unfortunately, a little extra leg work is required for Objective-C.
Think of alloc as telling the system, "Please keep this object around; I'm not done using it." The corresponding release later follows up with, "I'm done with this object; please free the memory it was using."
So, when do you explicitly allocate memory to an object? For most actions in CocoaScript, this will only be required for window objects that you want to keep around. For now, you only need to be familiar with the concept.
initWithContentRect:NSMakeRect(0, 0, 800, 800) creates the window with the specified position and size. NSMakeRect's parameters are, in order: x, y, width, height.
X and Y values are ignored here because we’re calling this.window.center() below it, which automatically sets these values for us.
styleMask:NSTitledWindowMask | NSClosableWindowMask sets the style of the window. It’s possible to set whether or not a user can close the window, to set a window title and to set other style options.
For this project, we will use a title bar and allow the user to close the window. Note that we have not declared support for resizing the window or minimizing it.
backing:NSBackingStoreBuffered defer:false are two options that specify how the window is rendered. defer:false says we want to create the window object now, not later. And the backing type specifies how the window contents are drawn in memory. Always use NSBackingStoreBuffered, which specifies that a memory buffer should be used. That’s what the system is optimized for, and it is the most performant.
] autorelease]; states that the window object should be cleared from memory when it’s closed (remember when we used alloc?). Recall that memory management is important in Cocoa and Objective-C. If objects are not released after being used, the app might crash from a lack of memory. This unfortunate scenario is referred to as a memory leak.
The calls this.window.center() and this.window.makeKeyAndOrderFront_(this.window) center the window and also make it the key window.
A “key window” is the window that responds to key events. Think of it as the active window.
Now, review the changes made to the init method. A createWindow call was added, as well as the [NSApp run] line. The call to NSApp runs our plugin and starts listening to events. The plugin does not stop until NSApp receives a message to terminate. In this case, the run command shows the window; clicking the close button on the window sends the “terminate!” message and releases our plugin from memory.
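The createWebView listing itself did not survive in this text. Below is a minimal sketch of what such a method might look like, consistent with the WebView calls referenced later in the tutorial; the frame size and file paths are assumptions.

```cocoascript
/**
 * Create WebView (sketch; paths and frame size are assumptions)
 *
 * Creates a WebView frame, loads our HTML interface into it,
 * and attaches it to the window created earlier.
 */
SketchPlugin.prototype.createWebView = function() {
    // A frame for the WebView to occupy (assumed to match the 800x800 window)
    var webviewFrame = NSMakeRect(0, 0, 800, 800);

    // Resolve the path to our HTML interface, relative to the plugin script
    var webviewFolder = this.context.scriptPath.stringByDeletingLastPathComponent() + '/app/webview/';
    var webviewHtmlFile = webviewFolder + 'index.html';
    var requestUrl = [NSURL fileURLWithPath:webviewHtmlFile];
    var urlRequest = [NSMutableURLRequest requestWithURL:requestUrl];

    // Create the WebView, frame, and set content
    this.webView = WebView.new();
    this.webView.initWithFrame(webviewFrame);
    this.webView.mainFrame().loadRequest(urlRequest);
    this.window.contentView().addSubview(this.webView);

    return this;
};
```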
It's worth pointing out that CocoaScript also accepts a dot-call syntax, which certainly feels more like JavaScript! The style is often a matter of personal preference, but being familiar with both forms will help you read the code of others.
We could just as easily have written the following code instead:
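The alternative listing is not reproduced in this text; presumably it used Cocoa's bracket syntax. A sketch of what the bracket-syntax equivalent of the dot-style window calls would look like:

```cocoascript
// Bracket-syntax equivalent of this.window.center() and
// this.window.makeKeyAndOrderFront_(this.window) (a sketch, not the original listing)
[this.window center];
[this.window makeKeyAndOrderFront:this.window];
```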
Don’t feel you need to know the names of objects, the methods or the syntax. For now, we only need to know that the WebView is working. Resources to learn more are listed at the end of this article.
Remember to add the call to createWebView in the init method. It should get called after the window is created:
// Create a window
this.createWindow();

// Create a WebView
this.createWebView();
Create the HTML Layout
If you peeked at the previous code closely, then you saw reference to an HTML file that doesn’t yet exist. Create a webview folder and an index file now.
So far, we have a working WebView to use as an interface but no data to fill it with. Next, let’s see how to communicate data between Sketch and the WebView.
This plugin will render a list of symbol data for previewing before exporting. To do that, we need to get the data from Sketch and then communicate it to the WebView.
Several options exist to communicate a large amount of data to a WebView. Keep in mind that it’s a requirement that the end user thinks the WebView is a native application, which is to say that all data should load instantaneously. The easiest means of meeting this requirement is by precompiling symbol data to a file before loading the request, then including the data as a JavaScript script.
Other options for communicating data do exist. One way is to set JavaScript variables through CocoaScript once the WebView loads. Unfortunately, this does add a slight lag, thus failing our requirement for a native-like experience.
How to Fetch Sketch Symbol Data
The first step is to grab all symbol data from Sketch. Add the following method to your file:
/**
 * Get All Document Symbols
 *
 * Gets every symbol in the document (in all pages, artboards, etc.)
 *
 * @returns {Array} An array of MSSymbolMaster objects.
 */
SketchPlugin.prototype.getAllDocumentSymbols = function() {
    var pages = this.context.document.pages();
    var symbols = [];

    // Loop through all pages
    for (var i = 0; i < pages.count(); i++) {
        var page = pages.objectAtIndex(i);

        // Loop through all artboard layers
        for (var k = 0; k < page.layers().count(); k++) {
            var layer = page.layers().objectAtIndex(k);

            if ('MSSymbolMaster' == layer.class()) {
                symbols.push(layer);
            }
        }
    }

    return symbols;
}
Sketch has many helper methods to access data in the current document. Sometimes, developers will need to do some heavy lifting, as is the case here. No helper method exists to grab all master symbols data in all pages. As a result, we must loop through data that Sketch does provide in order to find what we need.
It's worth noting the check for MSSymbolMaster. A symbol master differs from a symbol instance in that each instance references the master as the original copy. If, instead, we checked for any symbol type, we would get duplicates!
Methods such as context.document.pages() and type definitions such as MSSymbolMaster are detailed in Sketch's documentation. You may often find Sketch's documentation lacking; more resources for finding what you need are included at the end of this tutorial.
Create the JavaScript File for Storing Symbols Data
Following the illustration, this step requires that we create a JavaScript file and fill it with symbol data. Add the following method to your plugin file:
/**
 * Create Symbols JavaScript File
 *
 * Creates a JavaScript file representing all document master symbols.
 * This data is consumed by the WebView for rendering symbol information.
 *
 * Generated file path:
 * Contents/Sketch/app/webview/symbolData.js
 *
 * @returns {SketchPlugin}
 * @method
 */
SketchPlugin.prototype.createSymbolsJavaScriptFile = function() {
    /**
     * Build the content for the JavaScript file
     */
    var webviewSymbols = [];
    this.documentSymbols = this.getAllDocumentSymbols();

    // Push all document symbols to an array of symbol objects
    for (var i = 0; i < this.documentSymbols.length; i++) {
        var symbol = this.documentSymbols[i];

        webviewSymbols.push({
            name: '' + symbol.name(),
            symbolId: '' + symbol.symbolID(),
            symbolIndex: i
        });
    }

    /**
     * Create the JavaScript file, then fill it with symbol data
     */
    var jsContent = 'var symbolData = ' + JSON.stringify(webviewSymbols) + ';';
    var jsContentNSString = [NSString stringWithFormat:"%@", jsContent];
    var jsContentFilePath = this.context.scriptPath.stringByDeletingLastPathComponent() + '/app/webview/symbolData.js';

    [jsContentNSString writeToFile:jsContentFilePath atomically:true encoding:NSUTF8StringEncoding error:nil];

    return this;
};
The method loops through all document symbols, creates a new array of objects and then saves to the file path specified. Let’s review two items that could use more explanation.
webviewSymbols.push({
    name: '' + symbol.name(),
This code sample is an example of where JavaScript and Cocoa sometimes clash types. The return value of symbol.name() is a Cocoa object. Without casting to a JavaScript string, nothing at all would be assigned to name!
The '' + [Object] line is shorthand for casting the value to a JavaScript string.
No exception is thrown when a type conversion silently fails to happen. Avoid bugs by being aware of each object's type and documenting it in your code.
This code shows a common means of saving contents to a file. Developers do not always need to understand the underlying methods. Instead, think of some Cocoa code as a code recipe: Use this code as a template and switch out the file path and content value.
Indeed, much of CocoaScript code can be abstracted into a library for large projects.
function saveFile(content, path) {
    // Convert the JavaScript string to a Cocoa string
    var contentNSString = [NSString stringWithFormat:"%@", content];

    // A Cocoa system call to save the file
    [contentNSString writeToFile:path atomically:true encoding:NSUTF8StringEncoding error:nil];
}
For larger Sketch plugins, consider adding abstractions like saveFile above into a utility class.
As usual, call the new createSymbolsJavaScriptFile code in the init method. Add it before the WebView is created:
SketchPlugin.prototype.init = function(context) {
    this.context = context;

    // Generate symbol data for the webview
    this.createSymbolsJavaScriptFile();

    // Create a window
    this.createWindow();

    // Create a WebView
    this.createWebView();

    // Blastoff! Run the plugin.
    [NSApp run];

    return this;
}
Running Sketch and calling the plugin now creates the JavaScript data file for us. Give it a shot! If you don't see a file generated, ensure that you have created at least one master symbol in Sketch.
Communicating data to the WebView from Sketch was easy enough. Communicating events from the WebView to Sketch, such as a click event, requires slightly more work.
WebView Design Update
The fun in working with WebViews is that developers have full access to all CSS and JavaScript libraries desired. You should feel right at home in creating the interface. For simplicity’s sake, we won’t include any additional front-end frameworks. Instead, let’s go relive 1995.
<!doctype html>
<html>
<head>
    <script src="symbolData.js"></script>
</head>
<body>
    <!-- Header -->
    <h1>Export All Symbols as Assets</h1>
    <button id="export">Export Now</button>
    <hr/>

    <!-- Symbols List -->
    <div id="symbol-list"></div>

    <script>
        var symbolListDiv = document.getElementById('symbol-list');

        /**
         * Add all symbol data to the interface
         */
        // Ensure symbols exist
        if (typeof symbolData !== 'undefined' && true === Array.isArray(symbolData) && symbolData.length) {
            // Append each symbol as a new div
            for (var i = 0; i < symbolData.length; i++) {
                var newDiv = document.createElement('div');
                var newDivHdg = document.createElement('h3');
                var symbolNameNode = document.createTextNode(symbolData[i].name);

                newDivHdg.appendChild(symbolNameNode);
                newDiv.appendChild(newDivHdg);
                symbolListDiv.appendChild(newDiv);
            }
        } else {
            symbolListDiv.innerText = 'No symbols found!';
        }
    </script>
</body>
</html>
Should any design issues arise, recall that you have full access to the “Developer Console.”
Intercepting WebView Events in CocoaScript
With the stunning WebView design in place, it’s time to communicate back to Sketch when the export button is clicked.
The most reliable way to communicate a WebView event is slightly convoluted. The solution is to redirect the URL when a desired event is fired, intercept the request and then run CocoaScript code instead.
Why so convoluted? You'd expect better event management when communicating with a WebView. However, in CocoaScript we have limited access to event handlers; no listener for "the user clicked something on the WebView" exists. The solution adopted here is possible thanks to a CocoaScript event that fires any time the WebView changes its URL.
The WebView event is a simple onclick event on the export button. Add the following to your index.html script section:
/**
 * Export assets when button is clicked
 */
var exportBtn = document.getElementById('export');

exportBtn.onclick = function() {
    window.location.href = 'https://localhost:8080/symbolexport';
}
So far, the word “event” has been used to describe listeners in Objective-C. It’s time to learn about proper terminology: delegates. A “delegate” is an object that acts on another object’s behalf. It can receive messages and specify a callback when a certain message is sent. Put very simply, think of delegates as event listeners and handlers in JavaScript. From here on out, we will use the term “delegate” to refer to events on the Objective-C side.
To intercept the WebView redirect, create a delegate and assign it to the WebView. Add the createWebViewRedirectDelegate method below to your class file.
/**
 * Create Webview Redirect Delegate
 *
 * Creates a Cocoa delegate class, then registers a callback for the redirection event.
 */
SketchPlugin.prototype.createWebViewRedirectDelegate = function() {
    /**
     * Create a delegate class and register it
     */
    var className = 'MochaJSDelegate_DynamicClass_SymbolUI_WebviewRedirectDelegate' + NSUUID.UUID().UUIDString();
    var delegateClassDesc = MOClassDescription.allocateDescriptionForClassWithName_superclass_(className, NSObject);

    delegateClassDesc.registerClass();

    /**
     * Register the "event" to respond to and specify the callback function
     */
    var redirectEventSelector = NSSelectorFromString('webView:willPerformClientRedirectToURL:delay:fireDate:forFrame:');

    delegateClassDesc.addInstanceMethodWithSelector_function_(
        // The "event" - the WebView is about to redirect soon
        NSSelectorFromString('webView:willPerformClientRedirectToURL:delay:fireDate:forFrame:'),

        // The "listener" - a callback function to fire
        function(sender, URL) {
            // Ensure it's the URL we want to respond to.
            // You can also fire different methods based on the URL if you have multiple events.
            if ('https://localhost:8080/symbolexport' != URL) {
                return;
            }

            // A special method to export symbols - we haven't created it yet
            this.exportAllSymbolsToImages();
        }.bind(this)
    );

    // Associate the new delegate to the WebView we already created
    this.webView.setFrameLoadDelegate_(
        NSClassFromString(className).new()
    );

    return this;
};
This is the most complex part of the tutorial for anyone without Objective-C and Cocoa experience. Let’s look at some of the lines and explain further.
/**
 * Create a delegate class and register it
 */
var className = 'MochaJSDelegate_DynamicClass_SymbolUI_WebviewRedirectDelegate' + NSUUID.UUID().UUIDString();
var delegateClassDesc = MOClassDescription.allocateDescriptionForClassWithName_superclass_(className, NSObject);

delegateClassDesc.registerClass();
Because we don’t have a simple abstraction to work with delegates, any delegate code will require some basic knowledge of Cocoa.
This code creates a new Cocoa delegate class. It's important to note that some CocoaScript development involves quirks; the className line above is an example: any class name registered through CocoaScript that isn't unique will crash the application. This appears to be an issue with memory allocation at runtime. The simple solution is to append a unique string to the end of whatever class name we choose.
As you learn CocoaScript, you might come across more odd workarounds like this. Next, look at how the callback is registered:
/**
 * Register the "event" to respond to and specify the callback function
 */
var redirectEventSelector = NSSelectorFromString('webView:willPerformClientRedirectToURL:delay:fireDate:forFrame:');

delegateClassDesc.addInstanceMethodWithSelector_function_(
    // The "event" - the WebView is about to redirect soon
    NSSelectorFromString('webView:willPerformClientRedirectToURL:delay:fireDate:forFrame:'),

    // The "listener" - a callback function to fire
    function(sender, URL) {
        // Ensure it's the URL we want to respond to.
        // You can also fire different methods based on the URL if you have multiple events.
        if ('https://localhost:8080/symbolexport' != URL) {
            return;
        }

        // A special method to export symbols - we haven't created it yet
        this.exportAllSymbolsToImages();
    }.bind(this));

// Associate the new delegate to the WebView we already created
this.webView.setFrameLoadDelegate_(NSClassFromString(className).new());
Think of this code portion as a simple JavaScript event callback — the NSSelectorFromString being the event, and the anonymous function being the callback to run.
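To make the analogy concrete, here is a tiny plain-JavaScript sketch (illustration only, not the Sketch or Cocoa API) of the same wiring: the selector string plays the part of the event name, and the bound function plays the part of the callback.

```javascript
// A minimal event registry that mirrors the delegate wiring above.
// Names here (fakeDelegate, addInstanceMethod, trigger) are invented for illustration.
var fakeDelegate = {
    handlers: {},
    // Comparable to addInstanceMethodWithSelector_function_
    addInstanceMethod: function(selector, fn) {
        this.handlers[selector] = fn;
    },
    // Comparable to Cocoa invoking the delegate method when the event fires
    trigger: function(selector, payload) {
        if (this.handlers[selector]) {
            return this.handlers[selector](payload);
        }
    }
};

fakeDelegate.addInstanceMethod('webView:willPerformClientRedirectToURL:', function(URL) {
    // Ignore redirects we don't care about, just like the real callback does
    if (URL !== 'https://localhost:8080/symbolexport') {
        return 'ignored';
    }
    return 'exporting';
});
```

Calling `fakeDelegate.trigger('webView:willPerformClientRedirectToURL:', someURL)` then behaves like the WebView notifying its delegate of an imminent redirect.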
In the callback, notice the reference to another method, exportAllSymbolsToImages, that is not created just yet — we will create it in the next step.
// Associate a new delegate instance to the WebView
this.webView.setFrameLoadDelegate_(NSClassFromString(className).new());
The setFrameLoadDelegate_ method on our WebView allows us to associate a delegate.
This line creates a new instance of the delegate class, and it tells the WebView we want to use it to intercept events.
Take a deep breath and exhale: The most difficult part is over. Finally, add a call to the method that we just created inside the createWebView function.
// Create the WebView, frame, and set content
this.webView = WebView.new();
this.webView.initWithFrame(webviewFrame);
this.webView.mainFrame().loadRequest(urlRequest);
this.window.contentView().addSubview(this.webView);

// Assign a redirect delegate to the WebView
this.createWebViewRedirectDelegate();

return this;
}
The last step in this tutorial is to export all document symbols as images. Create the final method below.
SketchPlugin.prototype.exportAllSymbolsToImages = function() {
    /**
     * Loop through symbols; export each one as a PNG
     */
    for (var i = 0; i < this.documentSymbols.length; i++) {
        var symbol = this.documentSymbols[i];

        // Specify the path and filename to create
        var filePath = this.context.scriptPath.stringByDeletingLastPathComponent() + '/export/' + symbol.symbolID() + '.png';

        // Create preview PNG
        var slice = [[MSExportRequest exportRequestsFromExportableLayer:symbol] firstObject];
        [slice setShouldTrim:1];
        [slice setScale:1];
        [(this.context.document) saveArtboardOrSlice:slice toFile:filePath];
    }

    // Close the plugin and display a success message
    this.window.close();
    this.context.document.displayMessage('All symbols exported!');
};
If all goes well, calling the plugin in Sketch will create the export folder and images for you. You should see the “All symbols exported!” message flash once the operation completes. The file system will look similar to the screenshot below.
Go ahead and look at the generated PNG files and bask in the fruits of your hard labor.
Congratulations! You’ve created your first Sketch plugin! This foundation should give you the information needed to start making larger, more complex Sketch plugins.
Below are resources to aid you in future Sketch development. Go forth and create the next best Sketch plugin!
The most effective debugging tool is still careful thought, coupled with judiciously placed print statements.
– Brian Kernighan (Unix for Beginners, 1979)
Ideally, this tutorial has guided you well enough that you haven’t needed troubleshooting tools.
Take time to appreciate Kernighan’s quote. Engineers are spoiled today with great debugging tools. Unfortunately, for Sketch developers, the tool set for debugging a plugin isn’t quite there yet. Debugging often involves placing log calls throughout the plugin to see where the plugin is failing.
One utility class that helps greatly with step debugging is SketchPluginLog46. Using this utility, you are able to print information to the system log or to custom log files. Go through the setup guide and become familiar with logging before starting your next big Sketch plugin.
Sketch’s “Resources47” page lists resources for each technology involved — even JavaScript! It will provide ongoing education after you’ve reviewed the “Introduction48” for developers. The reference APIs and subsequent pages will help you find more information, but developers will quickly find that the documentation is incomplete.
Given the lack of documentation and books, the Sketch team recommends poring over open-source plugins. The quality of code varies wildly, as do the solutions to different problems, but learning from the code of established Sketch plugins is still the best way forward. The Plugin Directory49 on GitHub is a good reference of plugins to learn from.
The Sketch team doesn’t appear to respond directly to requests for support with plugin development, and StackOverflow isn’t very active for Sketch topics. Much of the work in plugin development will come from reviewing the code of others.
Both Objective-C and CocoaScript are difficult to use without learning the basics. For intensely complex applications, consider purchasing a book on building straight Cocoa applications. Most use cases in plugin development should be solved by GitHub’s code search and by looking at code samples.
Apple has an API reference50 that will frequently come in handy. You can match up some of the calls made in this demo to the NSWindow documentation51 for reference.
CocoaScript syntax, and how it relates to Cocoa, can sometimes be confusing as well. MagicSketch has an excellent guide on CocoaScript syntax52 to assist with translation.
What does the landscape look like for creating a Sketch plugin in the future?
Historically, the Sketch team has created some API-breaking changes with major releases. For the most part, the major updates have done well in not breaking plugins, but do keep in mind that you’ll need to test new major versions for each release.
This tutorial should remain effective through new Sketch updates, because it doesn’t touch on internal Sketch APIs to a large degree. This article will be updated for all major updates after version 42.
You might have noticed it already: in the past few weeks, Anselm’s Web Development Reading List1 issues have been missing here on SmashingMag. No worries — from now on, we’ll collect the most important news of each month in one handy, monthly summary for you. If you’d like to continue reading Anselm’s weekly reading list (and we encourage you to!), you can still do so via email2, on wdrl.info3 or via RSS4. — Editorial Team
Hello again! I’ll continue publishing this resource and am grateful for everyone who supports my ongoing work. And to celebrate the last weekly edition, I found a lot of great articles for you: biohacking news that sounds like science fiction, advances in deep learning with JavaScript, and a lot more. Happy reading!
The upcoming Chrome 615 (in beta channel now) brings support for JavaScript modules, the Payment Request API on desktop, smooth-scrolling in CSS, 8-digit hex colors (with alpha transparency), and the new Expect-CT HTTP header.
This week, Opera announced the end of Opera Max8, their data-saving browser product. The service will still stay active for a while but probably not for too long.
Andreya Grzegorzewski explains how we can use the Cache API for offline POST requests17 in Progressive Web Apps. This super cool trick allows us to queue POST requests, such as a form submission/data upload, cache it, and send it to the server once the user is back online.
If you want to use <details>/<summary> elements together with rem font-size values on your site, be aware that there’s a bug in Safari that renders parts of a website with that CSS combination useless. After tracking it down and debugging it, I finally summarized the case18.
deeplearn.js23 is a hardware-accelerated machine intelligence library for the web. You can use it to build and train neural networks in your browser, to play color sequences or detect objects in images, for example.
flatpickr24 is a dependency-free, lightweight and powerful datetime picker.
Darius Foroux on why you should spend less time in your head31, thinking, worrying, stressing, but exercise pragmatism instead. An article about mastering your mind and realizing that most of our thoughts cannot make it into practice.
Developers and organizations alike are looking for a way to have more agility with mobile solutions. There is a desire to decrease the time from idea to test. As a developer, I often run up against one hurdle that can slow down the initial build of a mobile hypothesis: user management.
Over the years, I have built at least three user management systems from scratch. Much of the approach can be based on a boilerplate, but there are always a few key items that need to be customized for a particular client. This is enough of a concern that an entire category of user management, authentication and authorization services has sprung up to meet the need. Services like Auth01 have entire solutions based on user and identity management that developers can integrate with.
One service that provides this functionality is Amazon Web Services’ (AWS’) Cognito. Cognito is a tool for enabling users to sign up for and sign into web and mobile applications that you create. In addition to this functionality, it also allows for storage of user data offline, and it provides synchronization of this data. As Amazon states, “With Amazon Cognito, you can focus on creating great app experiences instead of worrying about building, securing, and scaling a solution to handle user management, authentication, and sync across devices.”
Last year, Amazon introduced an addition to its Cognito service, custom user pools. This functionality now provides what I and other developers need in order to have a complete, customizable, cross-platform user management system, with the flexibility needed to fit most use cases. To understand why, we need to take a quick look at what user management is and what problems it solves.
In this article, we will spend a majority of our time walking through the process of configuring a user pool for our needs. Then, we will integrate this user pool with an iOS application and allow a user to log in and fetch the attributes associated with their user account. By the end, we’ll have a limited demo application, but one that handles the core of user management. In addition, after this is in place, there will be a follow-up article that takes this quite a bit deeper.
If you have a mobile or web app, what exactly do you need in terms of user management? While user log-in is probably the first thing you would think of, we cannot stop there. If we want a flexible user management system that would work for most web and mobile app use cases, it would need to have the following functionality:
username and password log-in;
secure password hashing and storage;
password changes;
password policy and validation;
user lifecycle triggers (welcome email, goodbye email, etc.);
user attributes (first name, last name, etc.);
configuration of required and optional attributes per user;
handling of forgotten passwords;
phone number validation through SMS;
email verification;
API access to endpoints based on permissions;
secure storage of access token(s) on mobile devices;
offline storage of user attributes for mobile devices;
synchronization of user attributes for online and offline states;
multi-factor authentication.
While user management might at first seem like a log-in system, the functionality must go far beyond that in order for the system to be truly flexible enough to handle most use cases. This clearly goes far beyond just a username and password.
One additional item needs to be called out here: security. One of the requirements of any user management system is that it needs to be continually evaluated for the security of the system as a whole. Many custom user management systems have vulnerabilities that simply haven’t been corrected. Within the last year, there have been security breaches of user management systems of companies such as Dropbox, Dailymotion, Twitter and Yahoo. If you choose to build a custom solution, you are on the hook for securing your system.
Amazon Cognito is a managed service that enables you to integrate a flexible and scalable user management system into your web and mobile applications. Cognito provides two distinct ways to utilize the service: federated identities, which allow for log-in via social networks such as Facebook, and user pools, which give you completely custom user management capabilities for a specific app or suite of applications.
Federated identities are great if you want users to be able to log in with Facebook (or Google, Amazon, etc.), but this means that a portion of the user management process will have been outsourced to Facebook. While this might be acceptable in some cases, users might not want to connect their Facebook account to your application. In addition, you might want to manage more of the user’s lifecycle directly, and for this, federated identities aren’t as flexible. For the purpose of today’s article, we will focus on user pools because they provide the flexibility needed for a robust user management platform that would fit most any use case. In this manner, you will have an approach that can be used in most any project.
Because this is an AWS service, there are other benefits of using Cognito. Cognito can integrate with API Gateway to provide a painless way to authorize API access based on the tokens that are returned from a Cognito log-in. In addition, if you are already leveraging other AWS services for your mobile application, you can use your user pool as an identity provider for your AWS credentials.
As with any other AWS service, there is a cost involved. Pricing for Cognito is based on monthly active users (MAUs). The great news for most developers is that there is an indefinite free tier that is capped at 50,000 MAUs when using a custom user pool. If you have a large application, this will give you a large number of users to pilot a new approach to user management. However, I suspect that many of you have experiences that will never go beyond 50,000 users. In this case, core user management will be pretty much free. The only exception to this is other AWS services that you will be leveraging as part of the user management process, such as Lambda, SNS and S3.
The first step in integrating a user pool into your mobile application is to create a Cognito user pool. This will give us the configuration values needed to plug into our example application. To create a new user pool, walk through the wizard provided in Amazon’s Cognito console5.
Let’s walk through the process of creating a user pool. I must warn you that this is a lengthy process. In many ways, this is a good thing because it shows areas of flexibility. However, you’ll want to grab a cup of coffee and buckle in for this one.
The initial step in creating a user pool involves setting a name for your user pool and selecting the approach you will be taking to create the user pool. You can either review the defaults or “step through” the settings. Because we want to have a good working knowledge of how the user pool is being configured, choose the option “Step through settings.”
Configuring attributes will require a bit of thought. For each user pool, you will need to determine which attributes will be stored in the system and which ones are required. Because the system will enforce required values, you cannot change this down the road. The best approach is to mark only truly essential values as required. In addition, if you want users to be able to log in with their email address, be sure to mark that one as an alias.
If you want to include custom values, you will need to do that here as well. Each custom value will have a type, optional validation rules, and an option to be mutable (changeable) or non-mutable (unchangeable). There is a hard limit of 25 custom attributes.
Finally, a point needs to be made here about usernames. The username value for each user is immutable (unchangeable). This means that, in most cases, making this value automatically generated would make sense. This is why the “preferred username” value exists. If you want users to have a username value that they can edit, just mark the “preferred username” attribute as an alias. If you want users simply to log in with their email address, be sure to mark the “email” attribute as both required and an alias.
For our demo application, I chose to make “email,” “given name” and “family name” required.
After configuring the attributes, you will be able to configure the policies for the account. The first policy to configure is the password policy. The policy allows you to configure both the length and whether you require numbers, special characters, uppercase letters or lowercase letters. This policy will be enforced on both passwords that users enter as well as passwords that administrators assign to users.
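To make the policy concrete, here is a rough JavaScript sketch (an illustration, not Cognito’s implementation) of the kind of checks such a password policy enforces; the policy object’s field names are assumptions for this example.

```javascript
// Hypothetical policy object mirroring the console options: minimum length
// plus toggles for numbers, special characters, uppercase and lowercase.
var policy = {
    minimumLength: 8,
    requireNumbers: true,
    requireSpecialCharacter: true,
    requireUppercase: true,
    requireLowercase: true
};

// Returns true only if the candidate password satisfies every enabled rule.
function meetsPolicy(password, policy) {
    if (password.length < policy.minimumLength) { return false; }
    if (policy.requireNumbers && !/[0-9]/.test(password)) { return false; }
    if (policy.requireSpecialCharacter && !/[^A-Za-z0-9]/.test(password)) { return false; }
    if (policy.requireUppercase && !/[A-Z]/.test(password)) { return false; }
    if (policy.requireLowercase && !/[a-z]/.test(password)) { return false; }
    return true;
}
```

For example, `meetsPolicy('Passw0rd!', policy)` passes every rule, while a password with no digit fails as soon as the numbers check runs.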
The next policies relate to user sign-up. For a public application, you will likely want to allow users to sign up themselves. However, depending on the type of application, you might want to restrict sign-up and have the system be invitation-only. In addition, you will have to configure how quickly these invitations will expire if they are not used.
For our demo application, I chose to use just the default values, with the exception that I don’t want users to be able to sign up on their own. With these values in place, we can proceed to verifications.
The verifications step allows you to set up multi-factor authentication, as well as email and phone verification. While this functionality is relatively easy to set up in the console, note that you will need to request a spending increase12 for AWS SNS if you want to either verify phone numbers or use multi-factor authentication.
For our demo application, I chose to use just the default values.
This step allows you to customize the email and SMS messages that your user pool will send, as well as the “from” and “reply to” email addresses. For the purpose of our demo application, I will leave the default values here and proceed.
If you are new to AWS, you might not need to specify any tags. However, in case your organization uses AWS regularly, tags provide a way to analyze spending and assign permissions with IAM. For example, some organizations specify tags per environment (development, staging, production) and by project.
No matter what you enter in this step, it won’t affect our demo application.
The next step allows you to define whether the user pool will remember your user’s devices. This is an additional security step that lets you see which devices a specific account has logged in from. It has extra value when you are leveraging multi-factor authentication (MFA): If the device is remembered, you can elect not to require an MFA token upon each log-in.
For the purpose of the demo application, I have chosen to set the value to “Always.”
For each application for which you want to use the user pool (such as an iOS application, web application, Android application, etc.), you should create an app. However, you can come back and create these after the user pool has been created, so there is no pressing need to add all of these just yet.
Each application has several values that you can configure. For this demo application, we will give the app a name and then leave the default values. Next, you can configure which user attributes each app can read and write.
You can set whichever values you like in this step, as long as the email address, family name and given name are all readable and writable by the application. Be sure to click the option to “Create App Client” before proceeding.
With triggers, you can use Lambda functions to completely customize the user lifecycle process. For example, if you only want users with an email address from your company’s domain to be able to sign up, you could add a Lambda function for the “Pre sign-up” trigger to perform this validation and reject any sign-up request that doesn’t pass.
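A sketch of such a “Pre sign-up” trigger might look like the following. The domain is made up for illustration, and the event/callback shapes follow the Cognito Lambda trigger convention; treat the details as assumptions to verify against the AWS documentation for your SDK version.

```javascript
// Hypothetical company domain; only addresses ending in this suffix may sign up.
var ALLOWED_DOMAIN = '@example.com';

// In a real Lambda, this function would be exported as: exports.handler = handler;
function handler(event, context, callback) {
    // Cognito passes the submitted attributes on event.request.userAttributes
    var email = String(event.request.userAttributes.email || '').toLowerCase();

    if (email.endsWith(ALLOWED_DOMAIN)) {
        // Returning the event unmodified lets the sign-up proceed.
        callback(null, event);
    } else {
        // Returning an error rejects the sign-up request.
        callback(new Error('Sign-up is restricted to ' + ALLOWED_DOMAIN + ' addresses.'));
    }
}
```

Attaching a function like this to the “Pre sign-up” trigger means the validation runs before Cognito creates the user, so rejected requests never enter the pool.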
For our demo application, I will not add any triggers.
I realize that this might have seemed like a lengthy and arduous process. But bear in mind that each step in creating a user pool has flexibility that allows for the solution to fit more use cases. And now for the news you’ve been waiting to hear: This is the last step.
Just review the settings to make sure you have configured them correctly for the demo application. From this screen, you can go back and edit any of the previous settings. Once the user pool is created, some configuration values (such as required attributes) cannot be changed.
With your new user pool created, you can now proceed to integrate it into a sample iOS application using the AWS SDK for iOS.
Setting Up Your iOS Application For Your User Pool
I have created a sample iOS application that integrates with Cognito to allow the user to log in, log out, enter their first and last name, and set a password. For this initial demo, user sign-up is not included, so I’ve used Cognito’s console to add a new user for testing.
This application uses CocoaPods28 for managing dependencies. At this point, the only dependencies are the specific pieces of the AWS iOS SDK that relate to Cognito user pools.
(A full description of CocoaPods is beyond the scope of this article; however, a resource29 on CocoaPods’ website will help you get up and running, in case this concept is new to you.)
The contents of the Podfile for this application can be seen below:
source 'https://github.com/CocoaPods/Specs.git'

platform :ios, '10.0'
use_frameworks!

target 'CognitoApplication' do
    pod 'AWSCore', '~> 2.5.5'
    pod 'AWSCognitoIdentityProvider', '~> 2.5.5'
end
Assuming that CocoaPods is installed on your machine, you can just run pod install, and the necessary dependencies will be installed for you.
The next step is to include the values for your user pool and client application. The demo application is configured to use a file, CognitoApplication/CognitoConfig.plist, from which to pull this information. Four values need to be defined:
region (string)
This is the region in which you created your user pool. This needs to be the standard region identifier, such as us-east-1 or ap-southeast-1.
poolId (string)
This is the ID of the user pool that you created.
clientId (string)
This is the clientId configured as a part of the app that you attached to the user pool.
clientSecret (string)
This is the clientSecret that is configured as a part of the app that you attached to the user pool.
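Putting those four keys together, CognitoApplication/CognitoConfig.plist would look something like the sketch below. Every value shown is a placeholder; substitute the region, pool ID, client ID and client secret from your own user pool and app client.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>region</key>
    <string>us-east-1</string>
    <key>poolId</key>
    <string>us-east-1_EXAMPLE</string>
    <key>clientId</key>
    <string>YOUR_APP_CLIENT_ID</string>
    <key>clientSecret</key>
    <string>YOUR_APP_CLIENT_SECRET</string>
</dict>
</plist>
```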
With that file and the proper values in place, the demo application can be launched. If any exceptions occur during launch, be sure that you have included each of the four values described above and that the file is placed in the correct directory.
The core of the integration with Amazon Cognito happens within the application’s AppDelegate. Our first step is to ensure that we have set up logging and have connected to our user pool. As a part of that process, we will assign our AppDelegate as the delegate of the user pool. For this basic example, we can keep this logic within the AppDelegate. For larger projects, it might make sense to handle this elsewhere.
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
    // set up logging for AWS and Cognito
    AWSDDLog.sharedInstance.logLevel = .verbose
    AWSDDLog.add(AWSDDTTYLogger.sharedInstance)

    // set up Cognito config
    self.cognitoConfig = CognitoConfig()

    // set up Cognito
    setupCognitoUserPool()

    return true
}

func setupCognitoUserPool() {
    // we pull the needed values from the CognitoConfig object
    // this just pulls the values in from the plist
    let clientId: String = self.cognitoConfig!.getClientId()
    let poolId: String = self.cognitoConfig!.getPoolId()
    let clientSecret: String = self.cognitoConfig!.getClientSecret()
    let region: AWSRegionType = self.cognitoConfig!.getRegion()

    // we need to let Cognito know which region we plan to connect to
    let serviceConfiguration: AWSServiceConfiguration = AWSServiceConfiguration(region: region, credentialsProvider: nil)

    // we need to pass it the clientId and clientSecret from the app and the poolId for the user pool
    let cognitoConfiguration: AWSCognitoIdentityUserPoolConfiguration = AWSCognitoIdentityUserPoolConfiguration(clientId: clientId, clientSecret: clientSecret, poolId: poolId)
    AWSCognitoIdentityUserPool.register(with: serviceConfiguration, userPoolConfiguration: cognitoConfiguration, forKey: userPoolID)
    let pool: AWSCognitoIdentityUserPool = AppDelegate.defaultUserPool()

    // we need to set the AppDelegate as the user pool's delegate, which will get called when events occur
    pool.delegate = self
}
After this configuration is in place, we need to configure the delegate methods for the user pool. The protocol that we are implementing is AWSCognitoIdentityInteractiveAuthenticationDelegate. This delegate will get called any time the user needs to log in, reset their password or provide a multi-factor authentication code or if we need to query the user about whether they would like their device to be remembered. For our example, we only need to implement the startPasswordAuthentication and the startNewPasswordRequired methods:
extension AppDelegate: AWSCognitoIdentityInteractiveAuthenticationDelegate {
    // This method is called when we need to log into the application.
    // It will grab the view controller from the storyboard and present it.
    func startPasswordAuthentication() -> AWSCognitoIdentityPasswordAuthentication {
        if (self.navigationController == nil) {
            self.navigationController = self.window?.rootViewController as? UINavigationController
        }

        if (self.loginViewController == nil) {
            self.loginViewController = self.storyboard?.instantiateViewController(withIdentifier: "LoginViewController") as? LoginViewController
        }

        DispatchQueue.main.async {
            if (self.loginViewController!.isViewLoaded || self.loginViewController!.view.window == nil) {
                self.navigationController?.present(self.loginViewController!, animated: true, completion: nil)
            }
        }

        return self.loginViewController!
    }

    // This method is called when we need to reset a password.
    // It will grab the view controller from the storyboard and present it.
    func startNewPasswordRequired() -> AWSCognitoIdentityNewPasswordRequired {
        if (self.resetPasswordViewController == nil) {
            self.resetPasswordViewController = self.storyboard?.instantiateViewController(withIdentifier: "ResetPasswordController") as? ResetPasswordViewController
        }

        DispatchQueue.main.async {
            if (self.resetPasswordViewController!.isViewLoaded || self.resetPasswordViewController!.view.window == nil) {
                self.navigationController?.present(self.resetPasswordViewController!, animated: true, completion: nil)
            }
        }

        return self.resetPasswordViewController!
    }
}
One key thing to note is that both of these methods return a view controller that implements a specific protocol. For example, the LoginViewController implements AWSCognitoIdentityPasswordAuthentication, which has a single method that gets called with the parameters needed to enable the user to complete the log-in process.
With all of these pieces in place in the demo application, you can now see the log-in process work from beginning to end. The main view of the application shows the username and the first name and last name of the user. To make this happen, the following steps occur:
In the AppViewController, we call the fetchUserAttributes method in the viewDidLoad method. If the user isn’t logged in, this will trigger the log-in process.
The startPasswordAuthentication method in the AppDelegate will be triggered. This method loads the LoginViewController and presents it.
The getDetails method of LoginViewController is called by the AWS SDK. This includes an object that is an instance of AWSTaskCompletionSource, which we can use to allow the user to attempt to log in.
When the user presses the “Log in” button, we pass the log-in credentials to that object. This will then call the didCompleteStepWithError method, and we can handle the result accordingly. If there is no error, we can dismiss the view controller.
If we created the user in the console, we will have another step to handle here. Because we gave the user a temporary password, they will need to set a more permanent one. In addition, because we set the given name and family name as required parameters, we need to allow the user to enter those, too. The AWS SDK will detect this and call the startNewPasswordRequired method in the AppDelegate. This will present the ResetPasswordViewController and set its instance of AWSTaskCompletionSource.
The ResetPasswordViewController works almost identically to the LoginViewController. We simply need to ask the user for the correct values and then submit those values. Once this process is completed successfully, we dismiss the view controller.
Once the entire log-in process has completed, the SDK will securely store the tokens returned by Cognito. Then, we will finally retrieve the user details, and we can use those to populate the AppViewController with the user’s username, given name and family name.
While the user pool set-up process might have several steps, those steps are easy to navigate. In addition, the amount of configuration possible should give you confidence that it can support a majority of use cases. In my day job at Universal Mind, I’ve worked with several clients who are moving their existing applications over to leverage the capabilities that Cognito provides for user management.
Regardless of whether you need to implement a user management system regularly, this is a tool that every mobile and web developer should have in their toolbox. In the next article in this series, we will begin to explore the capabilities of Cognito a bit more by implementing a more full-featured demo application that implements more of the common user management use cases.
With a bit of practice, you can go and impress all of your friends by setting up a new application that satisfies all of these user management use cases within a day. That’s pretty good for a day’s work.
Not all products are created equal. While we repeatedly buy some products almost mindlessly, for others, we take a lot of time to make a purchasing decision. For a price tag that meets a certain threshold or if we are particularly invested in the quality of a product, we want to be absolutely certain that we are making the right choice and are getting a good product for a good price. That’s where a feature comparison table makes all the difference.
Feature comparison tables are helpful not only in their primary function, though. When designed properly, they can aid in decision-making way beyond placing product specifications side by side. They can also add meaning to an otherwise too technical product specification sheet, explaining why a certain feature is relevant to the customer or how a certain product is better than the others.
After our close examination of accordions1, time and date pickers2 and sliders3, in this article we’ll look into all of the fine details that make a perfect, accessible and helpful feature comparison table. Please note that this article isn’t necessarily about pricing plans, nor is it about data visualization methods. Rather, it’s tailored specifically for the case where a customer wants to confirm their purchasing choice or can’t choose between one of multiple preselected items.
Before diving into design decisions, we need to properly understand the user’s goals, intentions and behavioral patterns.
In observing customers in a few e-commerce projects, I found it quite revealing to notice how seemingly irrelevant a comparison feature appears to be to many customers. Quite often users will say that it clutters the interface, and that they never use the feature. The reason for it is simple: While we tend to purchase small low-priced items quite often, we tend to buy large high-priced items not so frequently. In fact, there are just not that many situations where we actually need a feature comparison.
Not many customers would even think of comparing a few books or pairs of socks. However, relatively few customers would purchase a coffee machine or refrigerator without exploring their options thoroughly. A feature comparison is indeed irrelevant for “small” purchases, but it becomes important for “large” ones. In fact, when customers are committed to making a large purchase but can’t choose which product to buy, they are likely to end up not buying at all, locked in choice paralysis. As a retailer, we obviously want to avoid these deadlock situations, and that’s where a feature comparison element can be very useful, simplifying the decision-making process and filtering out items that don’t meet relevant criteria.
The latter can apply to very different settings: We could be comparing locations, venues, glasses, cars, luggage, watches, TV sets or even chemicals. However, for the scope of this article, we’ll be focusing on a very specific feature comparison among e-commerce retailers. The lessons we’ll learn here can be applied to any kind of comparison context, although the fine details might vary.
One way or another, in the end, it all boils down to what kind of purchase the customer is about to make. As Joe Leech states in his brilliant webinar on purchasing decisions, when shopping online, users have either a “non-considered” or a “considered” purchase in mind.
Non-considered purchases are quick, low-effort purchases that we tend to make when we need a quick solution, or run errands. Whenever we need a pack of batteries, ordinary stationery, a “good-enough” dryer or a quick weekend getaway, what we’re actually doing is checking a box off our to-do list and moving on. Few people get excited about selecting batteries or pencils, and so we are unlikely to explore different websites a few times just to buy that perfect pack. Instead, we tend to purchase such items quickly, often on the go, skimming over vendor reviews and shopping by price, shipping speed and convenience.
Considered purchases, on the other hand, are slow, high-effort purchases, purchases that need time and consideration. When we buy a bicycle, a watch, a refrigerator or health insurance, we explore our options thoroughly, making sure we don’t end up with something that isn’t good enough or that doesn’t fit or that would need to be replaced soon after. In such cases, we tend to keep exploring a possible purchase for quite a long time, often browsing many different retailers, comparing prices, reading reviews and examining pictures. We might even ask the opinion of our friends, colleagues and loved ones. Eventually, a final decision is made based on the expected quality and service, rather than convenience and speed, and it’s not necessarily influenced by price point alone.
Of course, the more expensive an item, the more consideration it requires. But considered purchases aren’t necessarily expensive: Any item with a certain attribute, such as longevity, speed or quality, has to be thoroughly considered as well. This includes gifts, flowers, wine and spirits, clothing, mortgages and health insurance. The reason is obvious: it’s hard to be truly disappointed by a pack of batteries, but an uncomfortable gift, the wrong flowers sending the wrong message, or an ill-fitting shirt that has to be returned can be quite a frustrating experience.
Not many people know exactly what they want or need up front, unless they receive a trusted recommendation. So, every considered purchase requires a lot of thinking and consideration, comparing different options and filtering for that perfect one. The problem is that comparison isn’t a particularly fun activity on the web. Details are often missing, prices are not transparent (how often do you add an item to the shopping cart and go through the entire checkout up to payment, only to see the real final price?) and model numbers (such as for appliances) are not consistent.
That’s where a well-designed feature comparison can increase sales and improve user satisfaction. If we manage to pick up an indecisive customer in a moment of doubt — before they leave the website or start looking around — and guide them skillfully to a sound decision, then we are striving for a better customer experience, while also accounting for a larger profit and a more loyal customer base for the business. After all, customers don’t have to shop around on other websites when purchasing (often) expensive items. That’s something that might bear fruit for the business for years to come.
At this point, it’s probably no big revelation that feature comparison is relevant mostly for considered purchases. They are particularly useful in cases where a product is relatively complex — potentially including details that might be confusing or ambiguous. Good examples of this are digital cameras and TVs — for an informed comparison of choices, one often needs an understanding of the technical capabilities of these devices. Another example would be a vacation or business trip — anything that requires many small decisions, such as availability, pricing, convenient departure and arrival times, budget, and a thorough planning of activities up front.
What exactly makes a comparison relevant for the customer? Well, it’s relevant if it helps users make a good, informed choice. A feature comparison could be designed to drive more customers towards “high-profit” margin sales, but if they aren’t a good fit or if the customer feels they are overpaying, then the retailer will have to deal with either a high volume of returns or users abandoning them altogether in the long term.
When we observed and interviewed users to find out how a feature comparison might be relevant to them, we found that it essentially boils down to one single thing: seeing the difference between options, or filtering out unnecessary details quickly so that the differences become more obvious. Unfortunately (and surprisingly), many feature comparisons out there aren’t particularly good at that.
If we wanted to compare two or more items against each other to find the better fit, what would be the most obvious way to do that? With clothes, we would try them on and pick the one that feels right. But what if trying things on isn’t an option? When purchasing products online, we can rely on our past experiences, ratings, expert reviews, customer reviews and trustworthy recommendations to reduce the scope of options to just a few candidates.
Still, at some point, you might be left with a few too similar items — maybe one a bit too expensive, the other missing an important quality, and the third a recommendation from a friend’s friend. So, what do you do? You list all options, examine their attributes side by side, and eliminate options until you have a winner. (Well, at least most people do that.)
Translated to common interface patterns, this naturally calls for a structured layout that aids in the quick scanning of options — probably a good ol’ comparison table, with columns for products, and rows for their attributes. Once the user has selected products and prompted the comparison view, we can just extract all attributes from all selected products and list them as rows in the table. Should be easy enough, right? Yes, but that’s not necessarily the best approach for meaningful comparison.
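That naive extraction can be sketched in a few lines. The `Product` shape, the attribute map and the em-dash placeholder below are assumptions for illustration, not any particular shop’s data model — but they show why the approach falls short: every attribute of every product becomes a row, whether it is comparable or not.

```typescript
// Hypothetical product shape: a name plus a flat map of attribute values.
interface Product {
  name: string;
  attributes: Record<string, string>;
}

// Naive comparison view: the union of all attribute keys becomes the rows,
// and a value missing for one product is rendered as a placeholder dash.
function buildRows(
  products: Product[]
): { attribute: string; values: string[] }[] {
  const keys = new Set<string>();
  for (const product of products) {
    for (const key of Object.keys(product.attributes)) keys.add(key);
  }
  return [...keys].map((attribute) => ({
    attribute,
    values: products.map((p) => p.attributes[attribute] ?? "—"),
  }));
}
```

Note how quickly the placeholder dashes appear once one product’s spec sheet is incomplete — which is exactly the problem discussed below.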
Ideally, we’d love to display only meaningful, comparable attributes that the customer cares about. Rather than extracting and lining up all product specs, we could determine and highlight all relevant product attributes, while keeping all other attributes accessible. This requires us to (1) find out what the user is interested in and (2) have consistent, well-structured data about our products.
While the first requirement is just a matter of framing the question properly in the UI, the second requirement is a tough nut to crack. In practice, having well-structured meta data often turns out to be remarkably difficult, not because of technical or design limitations, but because of content limitations.
Unless a retailer is using a specialized, actively maintained system that gathers, organizes and cleans up meta data about all products in their inventory, getting well-structured, complete and consistent attribute details — at least for products in the same category — turns out to be a major undertaking. You can surely manage meta data for a relatively small clothing store, but if you as a retailer rely on specs coming from third-party vendors, a meaningful comparison will require quite an effort.
This raises a question: How would you display a comparison table for two digital cameras if critical attributes were missing in one of them? In that case, meaningful comparison would be impossible, making it also impossible for the customer to make an informed decision. When faced with such a situation, rather than picking one of the options blindly, most customers will abandon the purchase altogether, because the worry about purchasing a wrong product outweighs the desire for a product at all.
Conrad lists all products in a table, with alternating row background colors. As in many other retail stores, meta data is often incomplete and inconsistent, leaving users in the dark. In the example above, the number of HDMI inputs, the weight, the highlights and the player dimensions aren’t available for two of the three compared products.
The same happens when items are difficult to compare — for instance, when noisy, ill-formatted data appears next to well-structured data for many attributes. It might be possible to spot the differences between products with enough time investment, but it requires just too much work. In usability sessions, you can see this pattern manifest itself when customers prompt a comparison view and scan the rows for a second or two, only to abandon the page a few seconds later. Moreover, once they’ve had this experience on a website, they will perceive its feature comparison to be “broken” in general and ignore it altogether in future sessions.
So, what do we do if some information is missing, incomplete or inconsistent? Rather than display the comparison table as is, it would be better to inform the user that comparison isn’t possible because some data about a particular product is missing, and then guide them to relevant pages (perhaps standalone reviews of the compared products) or ask them questions about attributes that are relevant to them, and suggest the “best” option instead.
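The guard described above boils down to one check before rendering: which of the attributes that matter are absent for at least one compared product? A minimal sketch, assuming flat attribute maps and a hand-picked list of required attributes:

```typescript
// Before rendering the comparison table, collect the attributes that are
// missing for at least one compared product. If any critical attribute is
// absent, the UI can warn the user and suggest alternatives instead of
// showing an incomplete, misleading table.
function missingAttributes(
  products: Record<string, string>[],
  required: string[]
): string[] {
  return required.filter((key) => products.some((attrs) => !(key in attrs)));
}
```

A non-empty result is the signal to fall back to reviews, guided questions or a suggested “best” option rather than a half-empty table.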
Comparing by attributes matters, but extracting and reorganizing data from a specification sheet alone might not be particularly useful for a not-so-savvy customer. In fact, it might be helpful to extend or even replace some attributes with data that the user would find more understandable — for example, by replacing technical jargon with practical examples from the user’s daily routine, or by extracting the advantages and disadvantages of products.
As noted by Nielsen Norman Group, on Amazon, technical details aren’t displayed as is. Instead, the comparison table translates technical attributes into language that is understandable by the average consumer. Interface copy matters: this goes for attributes as much as for wording on buttons, labels and thumbnails.
For every two compared items, Imaging Resource extracts the advantages and disadvantages of each product, along with their respective strengths and weaknesses, in a list. This might not be the fastest way to compare attributes, but it nicely separates qualities by default, prominently highlighting critical differences between options. The website also provides extracts from reviews and suggests other relevant comparisons.
Versus goes one step further, highlighting how the features of the selected products compare against other products on average in a bar chart. Rather than only displaying all attributes as a table, they are also shown in a list view, with a detailed explanation of each attribute. Even better, the website puts every attribute into context by highlighting how much better the best product in that category is performing. The bonus is that members of the community can upvote every single attribute if they find it relevant. That’s way more helpful for customers than single attribute values in a table.
Cool Blue has a fine feature comparison: Everything is just right. Not only does it display similar and different features prominently by default, it also highlights the pros and cons of each product and the pros and cons of each feature. The interface also granularly breaks down the rating for specific groups of features and customer reviews.
Flipkart provides feature comparison on most category pages and most product pages, with advantages, disadvantages and highlights extracted from reviews. That makes the feature comparison infinitely more relevant, and it might make it slightly easier to jump to a purchasing decision.
More often than not, a detailed spec sheet alone might not be good enough for meaningful comparison. Extending the comparison with further details, such as relevant reviews, helpful rewording, as well as advantages and disadvantages in direct comparison can go a long way in helping the customer make that tough decision.
All of the options above provide a quick, scannable view of advantages and disadvantages, but depending on the complexity of a product, you might end up with 70 to 80 attributes lined up in a list. Going through all of them to find the ones that a customer cares about most would require quite some work.
One way to improve the scannability of attributes would be to group attributes in sections and then show and collapse them upon a click or tap. That’s where accordion guidelines come into play: in too many interfaces, only the icon acts as a toggle; of course, the entire bar should expand or collapse the group of attributes. Additionally, an autocomplete search box or filter could allow customers to either jump to sections or to select and unselect categories for comparison.
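The grouping itself is straightforward once each attribute can be mapped to a section. A minimal sketch, where the `sectionOf` lookup is an assumption (in practice it would come from the product taxonomy):

```typescript
// A single comparison row: one attribute name, one cell value per product.
type AttrRow = { attribute: string; values: string[] };

// Group flat attribute rows into named sections that the UI can render as
// collapsible accordions, preserving the original row order within each group.
function groupAttributes(
  rows: AttrRow[],
  sectionOf: (attribute: string) => string
): Map<string, AttrRow[]> {
  const groups = new Map<string, AttrRow[]>();
  for (const row of rows) {
    const section = sectionOf(row.attribute);
    const bucket = groups.get(section) ?? [];
    bucket.push(row);
    groups.set(section, bucket);
  }
  return groups;
}
```

Each map entry then becomes one accordion section — and, per the guideline above, the whole section header bar should toggle it, not just the icon.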
Rather than just list all attributes, Home Depot groups them into “Dimensions,” “Details” and “Warranty / Certifications.” It also highlights differences between products and has a fancy print view (accessible via a tiny print icon — let’s see if you can find it!).
Sharp allows customers to select a category of interest from a list, or even to use autosuggest to quickly jump to a specific category. A checkbox on the right allows users to highlight the differences, too — although the highlight isn’t always visually clear.
For its feature comparison, Otto, a German retail store, not only groups all attributes but also turns each group into collapsible and expandable sections. Some sections additionally contain detailed information about an attribute, provided upon a tap or click.
Garmin goes even further. Rather than just displaying a dropdown at the top of the page, it floats it alongside the products as the user scrolls the page. That’s slightly better.
Rtings.com extends a dropdown with filtering functionality for the entire table. If a customer is interested in a particular group of attributes, they can select the exact values that interest them. That’s a level of granularity that a feature comparison table usually doesn’t provide, and it’s especially useful for lengthy comparison views.
Ultimately, a floating dropdown for selecting an attribute section would be just enough for most comparisons. In general, a slightly better organization of the attributes helps users navigate towards points of interest, but being able to easily see differences or similarities within those points of interest is just as useful.
Highlight Differences Or Similarities… Or Both?
Because being able to easily see differences is one of the central purposes of a comparison, it makes sense to consider adding a toggle — like in Sharp’s example above — to allow users to switch between seeing only differences, seeing only similarities and seeing all available attributes.
In fact, when users access a comparison table and notice the “show differences” button, they often first scroll down past the entire table just to see how time-consuming the comparison will be, only then returning to that shiny button, pressing it and exploring the updated view.
That feature seems to be used quite heavily, and it’s understandable why: Seeing the differences is exactly why customers prompt a comparison view in the first place. That means the option to highlight differences should be quite prominent. But how exactly would you design it, what options would you include, and what would the interaction look like?
On MediaMarkt, for example, customers can choose to see all attributes or only the attributes by which products differ. The button for “showing only differences” is located in the upper-left corner, next to the product thumbnails; keeping it closer to the table would make it more difficult to overlook. The German retail store uses alternating background colors for product rows, but not for headings. Many products have 10 to 15 groups of attributes, each of which can be expanded and collapsed. Also, each product has a link to its full specification sheet.
The problem with highlighting differences is that it takes just one slightly different character in one table cell for the entire row to remain in the “differences” view — even if all the other columns contain the identical value. Rather than displaying the row as is, it would be infinitely more useful to actually highlight the difference — perhaps collapsing all the “same” cells into one and highlighting the one cell that differs.
And then the question comes up: once “showing the differences” is selected, should identical attributes disappear altogether, or should they stay in the table with only different attributes being highlighted? It’s probably a matter of personal preference. If there are 60–80 attributes to compare, we’d probably remove similar rows for easier scanning. If the table is smaller, removing rows might not be necessary.
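Whichever option you choose, the underlying logic is a simple filter over the table rows. A minimal sketch, assuming each row carries one cell value per compared product:

```typescript
// A single comparison row: one attribute name, one cell value per product.
type Row = { attribute: string; values: string[] };

// Toggle between the three views: a row counts as "identical" when every
// cell matches the first one. "differences" drops identical rows,
// "similarities" keeps only them, "all" leaves the table untouched.
function filterRows(
  rows: Row[],
  mode: "all" | "differences" | "similarities"
): Row[] {
  if (mode === "all") return rows;
  const identical = (row: Row) => row.values.every((v) => v === row.values[0]);
  return rows.filter((row) =>
    mode === "similarities" ? identical(row) : !identical(row)
  );
}
```

Note that this uses exact string equality — which is precisely what makes near-identical numerical values problematic, as the next example shows.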
Electrolux, for instance, contains a button in the upper-left corner, acting as a toggle. The state is indicated with a checkmark which can be on or off. Rows with identical data aren’t removed from the table — instead, differences are highlighted with a light blue background.
BestBuy’s table contains a lot of exact numerical data, such as heights of “69.88 inches” and “69.9 inches”. Because of such minimal differences, most rows will never be omitted, making the comparison a bit more difficult.
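One way around this is to compare numeric attribute values with a tolerance rather than character by character, so that “69.88 inches” and “69.9 inches” collapse into the same bucket. A sketch — the 0.5% relative tolerance is an assumption and would need tuning per attribute:

```typescript
// Treat two attribute values as "the same" when their leading numbers differ
// by less than a relative tolerance; non-numeric values fall back to exact
// string comparison. parseFloat reads the leading number and ignores units.
function roughlyEqual(a: string, b: string, tolerance = 0.005): boolean {
  const na = parseFloat(a);
  const nb = parseFloat(b);
  if (Number.isNaN(na) || Number.isNaN(nb)) return a === b;
  return Math.abs(na - nb) <= tolerance * Math.max(Math.abs(na), Math.abs(nb));
}
```

Plugging a check like this into the “show differences” filter keeps rows with trivially different measurements out of the way, while genuinely different values still surface.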
Seeing only differences is useful, but would users also benefit from seeing only similarities? In fact, providing this option is not very common, but there are some good use cases for it. As it turns out, one important scenario is when selected products have too many differences to scan through easily.
Here’s an example. Let’s imagine the customer has selected four digital cameras to compare, with each product having 60–80 attributes. Before embarking on a long journey through dozens of attributes, some customers will attempt to eliminate options based on “simple” criteria, such as price or release date — “too weak,” “too expensive” or “not up to date.” Obviously, while eliminating those items, they will want to make sure they aren’t removing the wrong ones. In that particular case, seeing similarities gives users validation that they are “still” looking at products that are “worth comparing” or “worth investing time into.”
The main use case is a customer comparing a few strong, similar candidates. They might vary in a dozen attributes, yet the list of all 80 attributes is too lengthy to compare easily. With an option to see only similarities or only differences, the customer can break down the complexity into two parts. What you notice in such cases is that customers tend to take care of the “easier” task first: they will look into similarities first (just to be sure all options are “solid”), and then look specifically into the differences.
You might be wondering whether it’s necessary to provide an overview of all attributes at all — after all, customers check both similarities and differences. The answer is “yes.” Customers don’t want to miss out on important details, and because they want to be certain about all available attributes, they will seek out and examine the “all attributes” option as well, scanning it at least once during the session.
In terms of design, an obvious solution would be to use a group of mutually exclusive buttons or just one button or link that changes the contents and basically acts as a toggle.
Samsung allows customers not only to see all attributes, only similarities and only differences, but also to select which attributes are relevant and compare only by them, removing everything else. All attributes are grouped into accordions, which can all be expanded or collapsed with one click.
LG’s interface is similar to Samsung’s, but the “compare” links are a bit too small, and because different views remain clickable all the time, it’s not always clear what you are looking at. Also, I have yet to figure out what “locking” an item above the product thumbnails in the comparison view means — it probably means displaying the item first.
In practice, when encountering a feature to switch views, customers tend to alternate between all available options quite a lot. Seeing the differences and all attributes matters the most, but being able to see all similarities, while not necessary, might be reaffirming and supportive.
To highlight differences, we can remove similar or identical rows, but we could also use color-coding to indicate how different the compared items are, and which of them performs better. An obvious way to do this would be to use some kind of colors or patterns on table cells. Zipso, for instance, colors fragments of each row for each selected attribute. While it’s helpful for a few attributes, when many of them are selected, the presentation quickly becomes too difficult to compare.
Prisjakt uses color-coding of table cells to highlight differences by default. Also, customers can highlight relevant rows by tapping or clicking on them (although, on tap, the differences aren’t clear visually any longer). Every comparison also has a unique, shareable URL.
ProductChart uses background bars to indicate which of the candidates performs better for a certain attribute. The length of the bar indicates how much better one of the options performs. Slightly highlighting the winner, or providing an overall score and suggesting a winner, might be helpful here.
Digital Camera Database displays the differences between products with filled colored rectangles, to indicate the dimensions of difference. That’s useful for highly technical and detailed comparisons, but not necessarily so for every kind of feature comparison.
If your feature comparison table is likely to contain a lot of numerical data, it might be useful to highlight both the row and the column upon a tap or click, so that the user always knows they are looking at the right data point.
Color-coding is a simple way to highlight differences, but we also need to provide an accessible alternative, perhaps elaborating on the difference between products in a summary above the table.
The Thing That Never Goes Away: Floating Header
You’ve probably been in this situation before. If you have three obscurely labelled products to compare, with over 50 attributes being compared against, you might have a very difficult time remembering exactly which product a column represents. To double-check, you’ll need to scroll all the way back up to the headings, and then scroll back all the way down to continue exploring the attributes.
One obvious way to make mapping less strained is by having sticky column headers, following the customer as they scroll down the comparison table. We don’t necessarily need to keep all of the details in the header, but providing a product model’s name, with its rating and a small thumbnail might be good enough.
Sony keeps product labels and thumbnails floating above the comparison table as the user compares products. This gives customers a very clear mapping between attributes and a product. To compare, a quick look at the header is enough — no extra scrolling necessary!
Indesit solves the same problem in a slightly different way. The interface keeps thumbnails in a floating bar at the bottom of the screen, rather than at the top. As items are added, they are displayed in the bar at the bottom. To add items, though, users need to hit the comparison icon tucked in the upper-right corner of the product — it might not be easy to identify. Also, the entire “Compare models” bar should act as a toggle — in the current implementation, only the chevron icon triggers expansion and collapse.
So, if a floating bar is in use, should it float above or below the table — or does it even matter? Keeping headings above the content seems slightly more natural, especially when the thumb is hovering over the contents of the comparison view on narrow screens. Users need to be more careful when scrolling the page on narrow screens — which is why the bar in the Indesit example disappears entirely on mobile. Keeping the bar above the table just seems a bit more reliable.
Obviously, it’s going to be very difficult to display all selected products as columns at all times. A table view works well if you have two to three products to compare, but probably not so well if there are five products in the table. In that case, a common way to conduct the comparison would be by sliding horizontally.
No conversation about tables can omit a close look into their responsive behavior across screens. A discussion of tables deserves a separate post, but there are a few common tricks to keep a table meaningful on narrow screens. Quite often, each table row will become a collapsed card, or headings will jump all over the place, or the table will be restructured to expose the data better, or the user can select which columns they want to see.
Problem solved? Not so fast. The feature comparison table is a beast of a special kind. The main purpose of the element is comparison: Keeping both attribute headings and product headings visible is important — after all, the customer wants to see the products they are comparing and the features they are comparing against. This means that for easy comparison on narrow screens, we need to float product headings, while keeping the attribute column locked as the user scrolls down the page. That doesn’t leave us with a lot of space to display actual product details.
Sadly, almost every other retail website makes feature comparison unavailable on narrow screens. Selected products will often disappear altogether, the comparison feature will be hidden, and loading a shared comparison link will appear to be broken. In fact, it proved to be quite a challenge to find even a handful of examples out there.
Some interfaces try to make the best of what they have. Crutchfield’s interface, for example, is responsive, but that doesn’t mean it’s useful. In narrow views, items are displayed in a 2 × 2 grid, and so are product attributes. Because there is no visual relation to the actual product, comparing features becomes very difficult.
ProductReportCard displays products in sets of three at a time. The attributes of each product are squeezed into a 33% column on narrow screens, making reading quite tiring, and comparison quite difficult.
Urban Ladder allows its customers to shortlist and compare items in the product grid. Once the user hits the “Compare” button, they’re presented with a quick overview of auto-suggested similar products. On narrow screens, users can compare only two items at a time.
One way to manage this problem would be to avoid a table view altogether. Instead, we could highlight similarities and differences in a list by default, allowing customers to switch between these views.
Alternatively, we could ask the user to choose the attributes that they care about most, and once the input is done, we could highlight relevant features, and perhaps even pull some data from reviews, displaying both of them in a list. Every relevant attribute row could become an expanded card, while all less relevant attributes could be displayed as collapsed cards below.
As always, limited space requires a more focused view and since differences are usually what matter the most, highlighting them and removing everything else seems quite reasonable.
Admittedly, with all of these options, we are losing the big-picture view that a table can provide. If you’d like to keep a table, usually you’ll have at most one column to fill in with actual content — as another column has to be reserved for attribute headings. To make it work, you could provide a stepper navigation between products, so that the user is able to switch between products predictably. In the same way, sometimes floating arrows are used left and right, similar to a slider.
OBI allows customers to add as many products as they wish for comparison. In the comparison view, the navigation between products in the table happens via a stepper in the upper-left corner. Unfortunately, the feature comparison isn’t available on narrow views.
Alternatively, you could also extend the table with a segmented control or multi-combination selector at the top, allowing users to choose two or more products out of the product comparison list — and display them side by side. With two products, the user would end up with a beautifully readable, responsive comparison table, and with more selected items, they would get either a scrollable area or a summary of differences and similarities. The user could then choose what they’d rather see.
What to choose then? If the feature comparison table contains mostly numerical data, then it might be easier just to explain the differences in products up front. If that’s not the case, or if the contents of the table are unpredictable, an option with stepper navigation or a multi-combination selector might work well. And if the product is complex, with numerous and lengthy attribute descriptions, then extracting relevant data and highlighting it, rather than sending the user on a journey through dozens of attributes, might be a better option.
When talking about responsive behavior of components, we tend to focus on “regular” and “narrow” screens, but we could be exploring adjustments for “wide” screens as well. If we do have enough space to display a feature comparison prominently on wide screens, why not make the best use of it? As the user navigates the category page, for example, we could display the feature comparison as a floating pane on the right, while the left area could be dedicated to products highlighted in that category. As the customer adds an item for comparison, it could appear in the side-by-side comparison right away. In his article on “Responsive Upscaling,” Christian Holst mentions a good number of techniques applicable to e-commerce UX on large screens. They can be quite relevant for feature comparison as well.
What exactly happens before the comparison table appears? The customer will probably land on a category page, selecting a few items to compare, only to discover a button to prompt for the comparison. At this point, the customer might (or might not) know details about some of the selected items. In the same way, the order of selection for comparison might (or might not) be random. When displaying comparison results, a safe bet then is to display columns in the order of selection, because any different order might cause confusion.
As they are in the process of comparing, the customer will (hopefully) start to see things a bit more clearly, filtering out products that are clearly outperformed by selected competitors. To clear up the comparison view, we will allow the customer to remove a product from the comparison, of course, often indicated with an “x” in the upper-right corner of the column (or the floating header).
As it turns out, sometimes users will quickly dismiss one of the options, for example because it’s too expensive anyway, but they would want to keep that option in the comparison view for reference — just to put other candidates in context. That “reference” option might end up being stuck in the middle of the table, getting in the way of the comparison between two or more “real” candidates.
Obviously, the best arrangement for these options would be to display the main candidates first, side by side, followed by the “reference” candidates. In fact, you could even go as far as to allow the customer to downgrade or downvote some candidates and push them a bit to the side, displayed in a less prominent gray color.
A slightly more robust option would be to allow users to drag columns as they wish. That would help in the beginning when the customer has added quite a few items to the list but then, for instance, realized that the price difference was too high and so wanted to rearrange the products. It would also help in the case with “reference” candidates. In fact, in interviews, users sometimes compared product columns with cards or brochures or sticky notes that they could move around to group important ones against less important ones. A digital equivalent of the same experience in a feature comparison table would be draggable columns.
On Digital Photography Review, for example, users can move selected items left and right. That’s a nice accessible alternative to drag-and-drop.
The nature of SocialCompare requires users to be able to drag columns and rows as they wish. However, moving columns around like cards might be helpful for customers of retail websites as well.
It’s important to note that drag-and-drop isn’t accessible on its own, so keyboard and screen reader users need an alternative way to reorder columns, such as “move left/right” buttons in the column headings, a select dropdown or a group of radio buttons as a fallback.
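As a rough sketch of such a fallback, the reordering itself can be modeled as a small pure function driven by plain buttons, which work with keyboards and screen readers out of the box. The function name and data shape here are illustrative, not taken from any of the sites mentioned:

```javascript
// Accessible alternative to drag-and-drop: "move left/right" buttons
// call this pure function to shift a product one position in the
// comparison order. Returns a new array; the input stays untouched.
function moveColumn(order, productId, direction) {
  const from = order.indexOf(productId);
  const to = direction === "left" ? from - 1 : from + 1;
  if (from === -1 || to < 0 || to >= order.length) {
    return order; // unknown product, or already at the edge: no change
  }
  const next = order.slice();
  next[from] = next[to];
  next[to] = productId;
  return next;
}
```

The buttons calling `moveColumn` would live in each column heading, and could be disabled at the edges of the table so the user always knows which moves are possible.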
But what if, after a detailed comparison, a customer is dissatisfied with all the options presented in the comparison view? In addition to being able to remove items from the list, it’s important to be able to add relevant items to the comparison view — and “relevant” is important here. In most cases, the “add” button will simply return customers to the category page, where they would be asked to add more items for comparison. Instead, we could suggest products that are likely to fit the bill, perhaps by showing ones similar to the items selected.
On Car Showroom, customers can add new items by typing in the model reference and using autosuggest. Also notice that the interface provides navigation within the comparison — comfortable for quick jumps to relevant features.
Feature comparison is relevant mostly for purchases that take time: the more important the purchase, the more likely the customer is to explore the idea of buying an item over a long period of time. One thing we’ve noticed by observing shoppers is that, every now and again, in a moment of doubt, they will take a screenshot (or a series of screenshots) of the comparison table, and store it “for future reference,” until they’ve made a decision. Well, that’s not the full truth, because one of the main reasons for storing that screenshot is to send it over to friends and colleagues who have a better understanding of technical details and to ask for their second opinion.
Indeed, second opinions matter for many people — even from a close friend who isn’t that knowledgeable in whatever category the product belongs to. That precious screenshot will end up wandering through Facebook chats and Skype chats, email attachments and WhatsApp groups. If your data tells you that many of your customers need a second opinion before purchasing items (and that will surely be the case for electronics or appliances), make it possible to “save the comparison for later or share it,” enhanced with friendly and encouraging copy. This means that every comparison should have a unique URL, capturing all or selected attributes, the expanded and collapsed groups of attributes and the order of products.
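A minimal sketch of such a unique URL, assuming the comparison state is kept client-side and serialized with the standard URLSearchParams API. The parameter names are made up for illustration:

```javascript
// Encode the comparison state (selection order, view mode, collapsed
// attribute groups) into a shareable URL. Parameter names are assumptions.
function comparisonUrl(baseUrl, state) {
  const params = new URLSearchParams();
  params.set("products", state.products.join(",")); // preserves selection order
  params.set("view", state.view); // "all" | "differences" | "similarities"
  if (state.collapsed.length) {
    params.set("collapsed", state.collapsed.join(","));
  }
  return `${baseUrl}?${params.toString()}`;
}
```

The same parameters can be parsed back with `new URL(url).searchParams` when a shared link is opened, so that the recipient sees exactly the view the sender saw.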
It’s no secret that many customers misuse their shopping cart or wish lists to keep a selection of products intact for when they visit the website next time (often shortly afterwards). In the same way, storing the comparison table persistently (perhaps in localStorage or in a Service Worker) for some time is a good idea. In fact, no customer would be pleased if compared products were to disappear after they accidentally closed the tab.
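A minimal sketch of that persistence, assuming a browser with localStorage available; the in-memory fallback merely keeps the sketch runnable outside the browser, and the key name is made up:

```javascript
// Persist the comparison selection across visits.
const STORAGE_KEY = "comparison-items"; // illustrative key name

// Use localStorage when available, otherwise a minimal in-memory stand-in.
const storage =
  typeof localStorage !== "undefined"
    ? localStorage
    : (() => {
        const mem = new Map();
        return {
          getItem: (key) => (mem.has(key) ? mem.get(key) : null),
          setItem: (key, value) => mem.set(key, String(value)),
        };
      })();

function saveComparison(productIds) {
  storage.setItem(STORAGE_KEY, JSON.stringify(productIds));
}

function loadComparison() {
  try {
    return JSON.parse(storage.getItem(STORAGE_KEY)) || [];
  } catch (e) {
    return []; // corrupted or missing data: start with an empty list
  }
}
```

Calling `saveComparison` on every selection change is enough to survive an accidentally closed tab; `loadComparison` on page load restores the list.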
Eventually, once the user visits the page a few days (or weeks) later, you could open a little message bar stating that their recently viewed items and compared items are still available, with an option to “dismiss” it. Should the user choose to explore that comparison, they could do it from the message bar. Should they browse a category and choose other products for comparison, obviously the comparison view should be overwritten with the newly selected products.
Interaction with a feature comparison table might appear to be quite self-explanatory, but many tiny decisions have to be made before the user even gets to see the comparison.
For one, the comparison feature obviously has to be indicated, promoted or featured somehow — but where exactly? There are many options. It could appear on the home page, in the category list or on the product page. It could also be available on the shopping cart page or on search results pages. On most e-commerce websites, the option to compare is visible only on the category page, often for the obvious reason of not distracting the customer from the purchasing funnel. Is it always the best solution, though?
Well, we should ask ourselves first, when would a customer want to compare items in the first place? One common use case is when they are looking at similar options but can’t decide which one to choose. This usually isn’t the case on the home page (too early) or on the shopping cart page (too late), but it definitely is the case on a category page and (often) on the product page.
On top of that, one can spot an interesting behavioral pattern when observing customers navigate category pages. When exploring options for purchase, a good number of users will open every potential product candidate in a separate tab and examine each thoroughly one by one first, closing tabs only if the candidate is clearly not good enough. Now, these customers might find a strong candidate and head straight to the checkout, or (more commonly) they might lean towards a few options.
In the latter case, being able to add items for comparison on a product page would obviously save those annoying roundtrips between product pages and category pages. However, we would save not just clicks or taps — more importantly, we would avoid deadlocks, those situations where a customer is indecisive and can’t proceed to check out, abandoning the purchase altogether. If the customer remains undecided about the options, they will most likely not check out at all; and if they do anyway, you can expect a higher risk of costly refunds. In a way, feature comparison is an easy, helpful way to keep customers on the website by helping them make the right decision.
Another common use case is when a customer comes to a website with strong options in mind already but is looking for more detailed specifics of each option. In that situation, the customer is likely to search for these products right in the search field, often typing in obscure model numbers that they wrote down in a physical retail store. If the appliance can’t be found using search, some customers will still try to find it on the category page, but if their first attempts don’t bring the expected results, they will abandon the website altogether. Similar to the previous case, here we can guide potential customers by suggesting the products they might have meant and making it easier for them to make a decision. Perhaps we could even provide more competitive price and delivery options than a physical store can. Again, adding the comparison selection right in the search results might be a good option to consider as well.
There is another option, though. We could also highlight feature comparison as part of the global navigation. If you have a very limited range of products, each of them targeting a specific audience, it might be useful to clearly communicate what groups of customers each product is designed for.
For example, Konica Minolta provides a separate feature comparison link in the main navigation. Unfortunately, it’s nothing but a list of all specifications for all products in a side-by-side view. Perhaps explaining the advantages of each product and whom it’s best for would be more helpful. Still, customers can export and print out results for easy scanning and reading.
Vizio prominently integrates feature comparison in the main navigation. All products can be chosen for comparison, but every navigation section also contains a “Compare Sizes / Models” link, which features the entire spectrum of products, all broken down into groups, with filters for choosing the relevant ones. The attributes are broken down in groups, too, and displayed as accordions in a tabular view, while the products always remain visible in a floating bar.
Quite surprisingly, Amazon doesn’t display feature comparison as an option on the category page. In fact, it is quite difficult to notice on the product page as well. But rather than allowing customers to select the products they’d like to compare, Amazon allows them to only “Compare with similar products.” Only six attributes are displayed on mobile by default: the product’s title and its thumbnail, the customer rating, the price, shipping info and the retailer. The attributes are disclosed progressively, upon a tap or click.
Don’t get me wrong: of course the main goal of the website isn’t to bring as many people as possible to a comparison view, but rather to bring them to the checkout — with an item that will actually meet their needs. Because a comparison can help avoid deadlock, try enabling “adding to comparison” for product pages, category pages and search results, and then monitor conversion. If you have just a few products in the inventory, clearly labelling and targeting each group of customers might be a better (and simpler) option.
The Life Of A Lonely Checkbox, Or How To Indicate Comparison
Once we know which pages a feature comparison will appear on, we should ask ourselves how users will actually add items for comparison. This requires us to look very closely into the microscopic details of how the feature is indicated and how the user would interact with it.
While some designers choose to use a link or button with a label (for example, “Add to compare”), others use iconography (a plus sign or a custom “compare” icon) to indicate comparison. A more common option, though, seems to be a good ol’ checkbox with a label. A checkbox naturally communicates that an item can be selected and unselected, and with a proper label in place, it conveys the functionality unambiguously.
Now, where would you place that checkbox, you might ask? Frankly, if you look around e-commerce websites, you’ll find it pretty much everywhere — sometimes residing on top above headings, sometimes below thumbnails, sometimes in the bottom-right corner next to reviews, and quite often just above the price, where it’s difficult to miss. Admittedly, we couldn’t spot any significant difference; however, one thing was noticeable: The options with a checkbox seemed to consistently make feature comparison slightly more obvious and easy to find than plain text links.
Once the user has selected an item for comparison, it’s important to confirm the selection — a checkbox does a good job of that, but we could also change the wording (for example, from “Add to compare” to “Remove from comparison”) or change the background color (slightly highlighted) or fade in a label or a flag (“Shortlisted”) or a popover. We also have to indicate the change of state for screen readers.
Every selection should be easy to unselect with one tap as well, without resetting the entire selection. Unfortunately, the latter isn’t that uncommon, as some websites choose to disable the checkbox to prevent double-selection, effectively making it impossible to remove the product from comparison without prompting a comparison view.
Obviously, we also need to place a “compare” button somewhere, so that customers can easily proceed over to the comparison view. Now, that view wouldn’t make sense if there is no or only one item shortlisted for comparison. So, rather than displaying a disabled, grayed-out “comparison” button when there aren’t enough items to compare, we could display it only if there are at least two items in the list — perhaps inlined next to those “Add to compare” checkboxes or links of all of the candidates that the customer has selected.
Sony, for example, uses the text label “Select to compare” for all products in a category first, and if one item is selected, it changes the checkbox label on that item to “Select two or more to compare.” When one more item is added for comparison, the label changes to “Selected,” with a “Compare now” link appearing inline on all selected products.
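Deriving both the checkbox label and the button’s visibility from the current selection count keeps this logic in one place. A tiny sketch, loosely modeled on the Sony wording quoted above (the function name and shape are assumptions):

```javascript
// Derive the compare UI state from how many items are selected:
// the button appears only once a comparison is actually possible.
function compareControls(selectedCount) {
  return {
    showCompareButton: selectedCount >= 2,
    label:
      selectedCount === 0
        ? "Select to compare"
        : selectedCount === 1
          ? "Select two or more to compare"
          : "Selected",
  };
}
```

Because the state is derived rather than toggled imperatively, removing an item from the selection automatically hides the button again when fewer than two items remain.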
In fact, in practice, that “fancy” comparison button is unlikely to be very fancy, otherwise it would fight for attention with the primary buttons, such as “Add to cart.” Therefore, more often than not, it’s a subtle tertiary button that doesn’t fight for attention but is noticeable, close to the comparison checkboxes. Still, we could gently highlight it for a blink of a second with a subtle transition or animation once a new item has been added for comparison.
Wait a second! You might be thinking: well, if the feature comparison is so important, why not display a confirmation in a lightbox, prompting the customer to choose to go straight to the comparison or to continue browsing on the website? Well, the problem with this option is that it massively interrupts the flow. Rather than keeping the focus on the products, it directs the customer’s attention to a confirmation message that has to be responded to with every new added item.
Of course, we don’t know if the customer will add two or four or more items for comparison, but having to get rid of the lightbox to continue browsing products seems excessive and just plain unnecessary. With an inlined “comparison” button, we get the best of both options: Should the user want to continue browsing, they would do so seamlessly. Should they want to compare, they can compare easily as well. And the focus always stays on what matters the most: the products.
However, it’s not the best we can do. One issue we kept noticing in usability sessions is that as customers explore their options and add items for comparison, eventually they are ready to prompt the comparison view, but often can’t find the button to prompt it. In fact, they end up having to hunt down the products they’ve selected all over again, because that’s where the “Compare now” buttons are located. That’s especially difficult in a paginated category with a long list of scattered products.
We can solve this problem by displaying a semi-transparent comparison overlay at the bottom of the page. The overlay could appear when a customer adds the very first item for comparison and could fade away when the selection is cleared. By dedicating a portion of the screen to comparison, we regain just enough space to subtly confirm the user’s actions and to inform them about their next steps, without interrupting the flow.
Home Depot uses a 60px tall comparison overlay at the bottom to highlight thumbnails of the selected products. The overlay is used to guide users through the selection — for example, by explaining how many items are required for comparison. Customers don’t have to search for the selected items on a category page, but they can unselect options right from the overlay. That’s also where an omnipresent “Compare” button resides.
Electrolux displays notifications about selected items in the 75px tall bottom bar. It might be a bit too subtle to understand quickly. Rather than changing the text for “showing the differences” or “displaying all attributes,” it uses a pseudo-checkbox that users can toggle on and off.
Appliances Connection uses a slightly less subtle 40px tall bar at the bottom, with a clear link indicating comparison and access to recently viewed items. The comparison view slides in from the top, and users can switch to recently viewed items as well.
The design of showing and hiding similar features is slightly off, tucked in the upper-right corner. Also, customers can add “Stock ID or SKU” for comparison — but not many customers will know what that means.
Abcam implements the bottom bar slightly differently, as an accordion with items lined up in a vertical list. Unfortunately, once the user is in the comparison mode, it’s impossible to remove items or clear the selection.
Delta displays “Add to compare” only on hover, along with other important details, such as price. Unlike in previous examples, “Add to comparison” prompts an overlay at the top of the screen, where the customer can add more items for comparison.
The overlay seems to be quite a common solution, and it can be helpful in many ways. For instance, if only one item is shortlisted, we could use the space to suggest similar comparable items, or even items that other customers often check out as well (“Suggest similar or better options”).
We could also group similar items and complement a comparison list with a shortlisted selection of products. What’s the difference? Instead of prompting the customer to pick one type of product, then select specific items of that type and compare them, we could enable customers to add products of different kinds, group them in the background and keep them accessible for any time later — not necessarily only for comparison. Think of it as a sort of extended list of favorites, or wishlist, with each selection getting a label and perhaps even a shareable URL.
Digital Photography Review does just that. The user can “mark” any item for shortlist and then compare items in a particular category later. That’s a good example of resilient, forgiving design: Even if a customer selects batteries and laptops for comparison, they would never appear in a side-by-side comparison because they would be grouped separately. Each item can be removed individually, or the customer can remove an entire group, too.
While slightly more complex to implement, that’s a fairly complete solution that seems to work well. Alternatively, just having a “comparison” bar docked at the bottom of the page is surely a reliable solution as well.
While some interfaces are very restrictive, allowing exactly 2 items to be compared at a time, it’s more common to allow up to 4–5 items for comparison — usually because of space limitations in the comparison view. Admittedly, the comparison becomes very complex with more than 5 items in the list, with columns getting hidden and “showing differences” getting less useful. But what if the customer chooses to compare more items after all?
Well, not many customers are likely to do that, with one specific exception. Some customers tend to misuse the shopping cart and feature comparison as a wishlist, “saving items for later” as reference. If they choose to save a large number of items, we could of course let them navigate through products using a stepper, but perhaps by default we could reshape the table and extract highlights, advantages and disadvantages instead. That might be slightly less annoying than not being allowed to add an item for comparison at all.
Eventually, after tapping on those checkboxes or links, the customer hopefully will choose to see a comparison of the shortlisted options side by side. This comparison is usually a short-lived species: It’s used as long as it serves its purpose, potentially getting shared with friends and colleagues, only to disappear into oblivion a short while after. Now, the comparison could appear in different ways:
on the same page, as a full-page overlay;
on a separate new page, integrated in the website’s layout;
on a separate new page, standalone;
in a separate tab or window opened in addition to the tab that the user is currently on.
What’s best? In most situations, the second option might be difficult to pull off meaningfully, just because of the amount of space that a feature comparison needs to enable quick comparison of attributes. Both the first and the third options are usually easier to implement, but the first one might appear slightly faster because no navigation between pages is involved. However, it will also require proper implementation of the URL change based on the state of the comparison. With a standalone page, this problem would be slightly easier to solve. As an alternative, you could suggest to “save the comparison” and generate a link that can be shared.
The fourth option depends on your stake in the never-ending discussion of whether links should be opened in new tabs by default. That’s probably a matter of preference, but usually we must have a very good reason to open a window in addition to the existing one. While it might make sense for PDF files or any pages that might cause a loss of inputted data, it might not be critical enough for a comparison view.
Ideally, you could provide both options — the link could lead directly to the comparison view in the same tab, and a Wikipedia-like external-link icon could be used to indicate a view to be opened in a separate tab.
A Slightly Different Feature Comparison, Or Asking The Right Question In The Right Way
In the end, we just want to help users find relevant comparable attributes quickly. What better way to find them than by first asking the user to select the attributes that matter most to them?
For instance, we could extract some of those attributes automatically by looking into the qualities that appear in reviews for selected products, and suggest them in a small panel above the side-by-side comparison — pretty much like tags that the user can confirm or add.
Once the relevant attributes are defined, we could calculate the match score for all selected products (based on reviews and specifications), and if their average is way below expectations, suggest alternative products with a higher score instead.
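A naive sketch of such a match score, assuming each product carries ratings between 0 and 1 per attribute, already aggregated from reviews and specifications. The scoring scheme here is entirely an assumption, not a known implementation from any of the sites mentioned:

```javascript
// Average a product's ratings over only the attributes the customer
// picked as relevant. Missing ratings count as 0 rather than throwing.
function matchScore(product, preferredAttributes) {
  if (preferredAttributes.length === 0) return 0;
  const total = preferredAttributes.reduce(
    (sum, attr) => sum + (product.ratings[attr] ?? 0),
    0
  );
  return total / preferredAttributes.length;
}
```

Sorting the selected products by this score is then enough to surface a “recommended purchase,” or to notice that every candidate scores poorly and alternatives should be suggested instead.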
The option with the highest score could be suggested as the “recommended purchase” or as the winner, with the percentage of customers who have ended up buying that product in the category and maybe even scores from external professional reviews. There, we could show options to purchase the item or pick it up in a store nearby more prominently. To round things off, we could even complement the comparison with a lovely “battle” loading indicator to convey that we are “working hard” to find the best option.
Top Ten Reviews manages to display 10 products in a side-by-side comparison. Each product has a rating broken down by specific groups of features, but also an overall score. The winner is highlighted with a “Gold Award,” and on narrow screens its column is fixed, while other products are compared against it. That’s a slightly more opinionated design, but perhaps it’s also slightly easier to detect the winning candidate from the user’s perspective.
When looking into comparisons, we naturally think about feature comparison tables, but perhaps a filtered view or a visual view would be a better option for comparisons — especially for complex ones. Product Chart, for example, uses a matrix presentation of products, with price mapped against screen size for monitors. Features and attributes can be adjusted as filters on the left, and the fewer the candidates, the larger the thumbnails. That’s not an option for every website, but it’s interesting to see a comparison outside the scope of a tabular layout.
Feature comparison can, but doesn’t have to, be a complex task for customers. We could take care of some of the heavy lifting by suggesting better options based on the customer’s preferences. Unfortunately, I have yet to find an example of this concept in a real e-commerce interface.
But what if we drop the idea of having a dedicated feature comparison altogether — and use a slightly more integrated approach instead? Customers’ experiences reflected in reviews are often more valuable than product specs, so what if we let customers explore suggestions based on keywords extracted from reviews?
A product page could display extracted review keywords upon tap or click. On a category page, a product comparison would extend “common” filters with sorting by these keywords. Finally, instead of a feature comparison table, the customer could select the features they care about most and the overview would provide a list of “best” options for them.
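As a toy sketch of the keyword idea, one could simply count how often candidate keywords occur in review texts and use the counts for sorting or filtering. A real system would need proper text analysis; this is only meant to illustrate the concept, and all names are made up:

```javascript
// Count case-insensitive occurrences of candidate feature keywords
// across a product's review texts (one hit per review, per keyword).
function keywordCounts(reviews, keywords) {
  const counts = {};
  for (const kw of keywords) {
    counts[kw] = reviews.filter((r) => r.toLowerCase().includes(kw)).length;
  }
  return counts;
}
```

Products could then be ranked by the counts of the keywords a particular customer cares about, which is exactly the “sorting by these keywords” extension of common filters described above.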
In the same way, if a customer is looking for a set of products rather than just one standalone product, we could provide “recommended” options in a contextual preview. Based on measurements of an apartment, for example, we could suggest electronics and furniture that might work well. The feature might be particularly useful for the fashion industry as well.
These solutions basically provide a slightly extended filtering option, but they show how a feature comparison can go beyond a “traditional” side-by-side comparison. The better and smarter the filtering options, the less critical a side-by-side feature comparison could be.
While many of us would consider the table element to mark up a comparison table, in accessibility terms, sometimes that might not be the best idea. The comparison could be just an unordered list (a ul of li items) with headings — for instance, an h2 for the title of each product and h3 subheadings for the features of each product. Screen readers provide shortcuts for navigating between list items and headings, making it easier to jump back and forth to compare.
That way, we could basically create cards, collapsed or not by default, and then progressively enhance the list towards a tabular view for easier visual scanning. Highlighting differences would then mean just rearranging cards based on the customer’s preferences. Still, with labels and headings, a table might be a good option as well.
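A sketch of that list-based markup, rendered here as a string for brevity. The structure, an h2 per product and an h3 per feature, follows the suggestion above; the rendering function itself is hypothetical:

```javascript
// Render the comparison as a ul of li "cards": an h2 names each
// product, and each feature gets an h3 plus its value, so screen
// reader users can jump between products and features via headings.
function renderComparisonList(products) {
  const items = products.map((p) => {
    const features = Object.entries(p.features)
      .map(([name, value]) => `<h3>${name}</h3><p>${value}</p>`)
      .join("");
    return `<li><h2>${p.name}</h2>${features}</li>`;
  });
  return `<ul>${items.join("")}</ul>`;
}
```

CSS can then lay these cards out side by side on wide screens, giving sighted users a table-like view without giving up the heading navigation.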
As Léonie Watson, an accessibility engineer and W3C Web Platform WG co-chair, put it, “casting your eyes between two data sources is relatively easy to do, but a screen reader doesn’t have any really good way to emulate that behavior”. According to Léonie, “if there is a single comparison table (where the data to be compared is in different columns/rows), then the most important thing is that the table has proper markup. Without properly marked up row/column headers, it is hard to understand the context for the data in an individual table cell.
Screen readers have keys for moving up/down through columns, and left/right through rows. When a screen reader moves focus left/right into a table cell, the column header is automatically announced before the content of the cell. Similarly, when screen reader focus moves up/down into a cell, the row header is announced before the cell content.
If the data sources for comparison are in different places/tables, then things get a bit harder. You have to remember the data from one source long enough to be able to navigate to the data source for comparison, and frankly that’s a cognitive burden most people will struggle with.
A more general purpose solution is to offer customers choices of how the data is presented — for example, to choose to view all data in a single table, or to select certain objects for comparison.”
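The “proper markup” Léonie describes boils down to th elements with scope attributes on both axes, so that assistive technology can announce the right header before each cell. A hypothetical rendering sketch, again producing a string for brevity:

```javascript
// Render a comparison table with properly marked-up headers:
// scope="col" on product headers, scope="row" on attribute headers,
// so screen readers announce context when moving between cells.
function renderComparisonTable(attributes, products) {
  const head =
    `<tr><th scope="col">Feature</th>` +
    products.map((p) => `<th scope="col">${p.name}</th>`).join("") +
    `</tr>`;
  const rows = attributes.map(
    (attr) =>
      `<tr><th scope="row">${attr}</th>` +
      products.map((p) => `<td>${p.features[attr] ?? "n/a"}</td>`).join("") +
      `</tr>`
  );
  return `<table>${head}${rows.join("")}</table>`;
}
```

Note the "n/a" placeholder for missing attributes: an explicitly announced value is friendlier to screen reader users than a silently empty cell.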
Phew! That was quite a journey. Below you’ll find all of the design considerations one has to keep in mind when designing a feature comparison table. You thought it was easy? Think again.
Now, below is a list of features that a good comparison is likely to have. We covered most of them in the beginning of this article, but they are worth having in one place after all:
Every column contains the price (or price graph), a link to the standalone product page, ratings, the number of reviews, a thumbnail, the product’s model name, and a price-matching tooltip.
For every product, useful reviews, with major advantages and disadvantages as keywords, are extracted and highlighted above the comparison.
Attributes are consistent and have comparable meta data; they are grouped, and some of them are collapsed by default.
If there isn’t enough meaningful meta data to compare against, explain that to the customer and suggest third-party reviews instead. Irrelevant tables are frustrating.
The customer can switch to seeing only differences, only similarities or all attributes.
The customer can reset their selection and return back to the products (perhaps with breadcrumb navigation at the top).
The customer can add new products to the comparison (for example, if they are unsatisfied with the results of a comparison).
Columns and rows are highlighted upon hover or tap.
The customer can rearrange columns by dragging or moving them left and right.
Every action provides confirmation or feedback.
Customers can generate a shareable link for comparison (for example, “Save comparison as…”).
If the user spends too much time in the comparison view, a window with information for hotline support or chat is displayed.
Items are stored persistently after page refresh or abandonment.
The feature comparison is responsive, bringing focus to the differences and the advantages and disadvantages of products.
And here are the questions the team will have to consider when designing and implementing a comparison table.
How do you indicate that comparison is possible?
What happens when the first item is added for comparison?
Have you disabled the option to compare when only one item has been selected?
Once an item has been selected, do you change the link or highlight the selected product, or display a comparison bar, or display a lightbox?
How do users unselect a selected option?
If only one item has been added for comparison, should we suggest products to compare or enable users to “find similar products”?
When an item has been selected, do you provide visual feedback to reaffirm and reassure users about their choice? (For example, “Good choice! That’s one of the top-10 rated cameras in the category!”)
How many items may a customer add for comparison (usually three to five)? What happens to the comparison if no or only one item has been selected? What about more than five items?
As items are being compared, do we use animation or transitions to indicate comparison (such as a battle animation)?
Do we display the price (or price development), a link to the individual product page, ratings, reviews, a thumbnail, the product’s model name, and price-matching tooltip?
Can users switch to see only differences, only similarities or all attributes?
Do we group and collapse attributes by default?
Do we track whether attributes are consistent and have comparable meta data? Otherwise, seeing differences would be meaningless.
Do we highlight columns and rows upon hover or tap?
Can the user move columns left and right?
What if the user compares items in unrelated categories (for example, a laptop against batteries)?
How do we allow users to add more items for comparison?
How do we allow users to remove items from comparison?
Should we dynamically track how many items are in the comparison list, and prompt a message if there are none (“Oh, nothing to compare! Here are some suggestions.”) or one (“Boo-yah! You’ve got a winner!”) or two (“So, you have just two candidates now.”)?
Should we ask customers to choose what they care about most?
Do we suggest a “winner” among the products selected for comparison, perhaps based on the user’s most relevant attributes?
Does every action have visual and/or aural feedback to indicate change?
Have we provided a shareable link for comparison (for example, “Save comparison as…”)?
If the user spends too much time in the comparison view, should we prompt a window with information for hotline support or chat?
Are compared items stored persistently after the page is refreshed or abandoned?
Do we include a “Notify about price drop” option for email subscription?
Is the feature comparison accessible, coded as an unordered list?
How do we make the feature comparison behave responsively?
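Several of the questions above, such as switching to a view of only the differences, come down to a small amount of logic. Here is a minimal sketch (the function name and product data are hypothetical, not from any particular codebase):

```javascript
// Hypothetical sketch: given the attribute maps of the compared products,
// keep only the attributes whose values differ. This is the basis of a
// "show only differences" toggle in a comparison table.
function attributeDifferences(products) {
  const keys = new Set(products.flatMap((p) => Object.keys(p)));
  const diffs = {};
  for (const key of keys) {
    const values = products.map((p) => p[key]);
    // An attribute counts as a "difference" when the products
    // do not all share a single value for it.
    if (new Set(values).size > 1) {
      diffs[key] = values;
    }
  }
  return diffs;
}

const cameras = [
  { brand: "A", megapixels: 24, wifi: true },
  { brand: "B", megapixels: 24, wifi: false },
];
console.log(attributeDifferences(cameras));
// logs only the attributes that differ: brand and wifi
```

The inverse (only similarities) is the same loop with the condition flipped; "all attributes" is simply no filter at all.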
This article is part of a new ongoing series about design patterns here on yours truly, Smashing Magazine. We’ll be publishing an article in this series every two to three weeks. Don’t miss the next one, on builders and configurators. Ah, interested in a (printed) book covering all of the patterns, including the one above? Let us know in the comments, too — perhaps we can look into combining all of these patterns into a single book and publishing it on Smashing Magazine. Keep rockin’!
Huge thanks to Heydon Pickering, Léonie Watson, Simon Minter, Penny Kirby, Marta Moskwa and Sumit Paul for providing feedback on this article before publication.
Recently, I was leading a training session for one of our clients on best practices for implementing designs using HTML and CSS. Part of our time included a discussion of processes such as style-guide-driven development, approaches such as OOCSS and SMACSS, and modular design. Near the end of the last day, someone asked, “But how will we know if we’ve done it right?”
At first, I was confused. I had just spent hours telling them everything they need to “do it right.” But after thinking about it, I realized the question was rooted in a deeper need to guide and evaluate what is often a set of subjective choices — choices that are sometimes made by a lot of different people at different times. Stuff like consistent naming conventions and live style guides are the end result, but these “best practices” are rooted in a deeper set of values that aren’t always explicit. For example, specific advice like “Minimize the number of classes with which another class collaborates” isn’t as helpful without a broader appreciation for modularity.
I realized that in order to really know whether our work is any good, we need a higher level of principles that can be used as a measuring stick for implementing design. We need something that is removed from a specific language like CSS or an opinionated way of writing it. Much like the universal principles of design or Nielsen’s usability heuristics, we need something to guide the way we implement design without telling us exactly how to do it. To bridge this gap, I’ve compiled nine principles of design implementation.
This is not a checklist. Instead, it is a set of broad guidelines meant to preserve an underlying value. It can be used as a guide for someone working on implementation or as a tool to evaluate an existing project. So, whether you’re reviewing code, auditing CSS or interviewing candidates for a role on your team, these principles should provide a structure that transcends specific techniques and results in a common approach to implementing design.
Structured
The document is written semantically and logically, with or without styles.
Efficient
The least amount of markup and assets are used to achieve the design.
Standardized
Rules for common values are stored and used liberally.
Abstracted
Base elements are separated from a specific context and form a core framework.
Modular
Common elements are logically broken into reusable parts.
Configurable
Customizations to base elements are available through optional parameters.
Scalable
The code is easily extended and anticipates enhancements in the future.
Documented
All elements are described for others to use and extend.
Accurate
The final output is an appropriate representation of the intended design.
To make it easier to follow along and see how each principle applies to a project, we’ll use a design mockup from one of my projects as the basis for this article. It’s a special landing page promoting daily deals on an existing e-commerce website. While some of the styles will be inherited from the existing website, it’s important to note that the majority of these elements are brand new. Our goal is to take this static image and turn it into HTML and CSS using these principles.
The document is written semantically and logically, with or without styles.
The principle here is that the content of our document (HTML) has meaning even without presentational styles (CSS). Of course, that means we’re using properly ordered heading levels and unordered lists — but also using meaningful containers such as <header> and <article>. We shouldn’t skip out on using things such as ARIA labels, alt attributes and any other things we might need for accessibility.
It might not seem like a big deal, but it does matter whether you use an anchor tag or a button — even if they’re visually identical — because they communicate different meanings and provide different interactions. Semantic markup communicates that meaning to search engines and assistive technologies and even makes it easier to repurpose our work on other devices. It makes our projects more future-proof.
Creating a well-structured document means learning to write semantic HTML, familiarizing yourself with W3C standards and even some best practices from other experts, and taking the time to make your code accessible. The simplest test is to look at your HTML in a browser with no styles:
Is it usable without CSS?
Does it still have a visible hierarchy?
Does the raw HTML convey meaning by itself?
One of the best things you can do to ensure a structured document is to start with the HTML. Before you think about the visual styles, write out the plain HTML for how the document should be structured and what each part means. Avoid divs and think about what an appropriate wrapping tag would be. Just this basic step will go a long way toward helping you to create an appropriate structure.
<section>
  <header>
    <h2>Daily Deals</h2>
    <strong>Sort</strong>
    <span></span>
    <ul>
      <li><a href="#">by ending time</a></li>
      <li><a href="#">by price</a></li>
      <li><a href="#">by popularity</a></li>
    </ul>
    <hr />
  </header>
  <ul>
    <li aria-labelledby="prod7364536">
      <a href="#">
        <img src="prod7364536.jpg" alt="12 Night Therapy Euro Box Top Spring" />
        <small>Ends in 9:42:57</small>
        <h3>12" Night Therapy Euro Box Top Spring</h3>
        <strong>$199.99</strong>
        <small>List $299</small>
        <span>10 Left</span>
      </a>
    </li>
  </ul>
</section>
Starting with HTML only and thinking through the meaning of each element results in a more structured document. Above, you can see I’ve created the entire markup without using a single div.
The least amount of markup and assets are used to achieve the design.
We have to think through our code to make sure it’s concise and doesn’t contain unnecessary markup or styles. It’s common for me to review code that has divs within divs within divs using framework-specific class names just to achieve a block-level element that’s aligned to the right. Often, an overuse of HTML is the result of not understanding CSS or the underlying framework.
In addition to the markup and CSS, we might need other external assets, such as icons, web fonts and images. There are a lot of great methods and opinions about the best ways to implement these assets, from custom icon fonts to base64 embeds to external SVGs. Every project is different, but if you’ve got a 500-pixel PNG for a single icon on a button, chances are you’re not being intentional about efficiency.
When evaluating a project for efficiency, there are two important questions to ask:
Could I accomplish the same thing with less code?
What is the best way to optimize assets to achieve the smallest overhead?
Efficiency in implementation also overlaps with the following principles on standardization and modularity, because one way of being efficient is to implement designs using set standards and to make them reusable. Creating a mixin for a common box shadow is efficient, while also creating a standard that is modular.
Rules for common values are stored and used liberally.
Creating standards for a website or app is usually about establishing the rules that govern things like the size of each heading level, a common gutter width and the style for each button type. In plain CSS, you’d have to maintain these rules in an external style guide and remember to apply them correctly, so it’s best to use a preprocessor such as LESS or Sass, which lets you store them in variables and mixins. The main takeaway here is to value standards over pixel-perfect designs.
So, when I get a design mockup with a gutter width of 22 pixels, instead of the 15 pixels we’re using elsewhere, I’m going to assume that such precision is not worth it and instead will use the standard 15-pixel gutter. Taken a step further, all of the spacing between elements will use this standard in multiples. An extra wide space is going to be $gutter-width * 2 (equalling 30 pixels), rather than a hardcoded value. In this way, the entire app has a consistent, aligned feel.
Because we’re using standardized values derived from LESS variables or mixins, our CSS doesn’t have any numerical values of its own. Everything is inherited from a centralized value.
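As a minimal sketch of what this looks like in LESS (the variable names and values here are illustrative, not the project’s actual standards):

```less
// Illustrative standards, stored centrally:
@gutter-width: 15px;
@brand-color: #3b6ea5;

.deal-header {
  padding: @gutter-width;
  margin-bottom: @gutter-width * 2; // "extra wide" spacing derived from the standard
  color: darken(@brand-color, 10%); // derived from the standard color, not a hardcoded HEX
}
```

Note that the component itself contains no raw numbers or HEX values; change the variable once and every derived value follows.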
To check for standardization, review the CSS and look for any hardcoded unit: pixels, HEX colors, ems or pretty much any numerical value.
Should these units use an existing standard value or variable?
Is the unit reused such that it would benefit from a new variable? Perhaps you’ve realized that this is the second time you’ve applied a partially opaque background, and the same opacity is needed both times.
Could the unit be derived from the calculation of an existing variable? This is useful for variations on color — for example, using a standard color and performing a calculation on it to get something 10% darker, rather than hardcoding the resulting HEX value.
As often as possible, I use standard values and create new ones only as an exception. If you find yourself adjusting an element 5 pixels here and 1 pixel there, chances are your standards are being compromised.
In my experience, the majority of preprocessed CSS should use centralized variables and mixins, and you should see almost no numeric, pixel or HEX values in individual components. Occasionally, adding a few pixels to adjust the position of an individual component might be appropriate — but even those cases should be rare and cause you to double-check your standards.
Base elements are separated from a specific context and form a core framework.
I originally called this principle “frameworked” because, in addition to creating the one specific project you’re working on now, you should also be working toward a design system that can be used beyond the original context. This principle is about identifying larger common elements that need to be used throughout the entire project or in future projects. This starts with elements as broad as typography and form field inputs and extends all the way up to different navigation designs. Think of it this way: If your CSS were going to be open-sourced as a framework, like Bootstrap or Foundation, how would you separate the design elements? How would you style them differently? Even if you’re already using Bootstrap, chances are that your project has base elements that Bootstrap doesn’t provide, and those also need to be available in your project’s design system.
The key here is to think of each element in more generic terms, rather than in the specific context of your project. When you look at a particular element, break it into parts, and give each part overall styles that that element would have regardless of the specific implementation you’re working with now. The most common elements are typography (heading styles, line height, sizes and fonts), form elements and buttons. But other elements should be “frameworked” too, like the callout tag or any special price formatting that might have been designed for our Daily Deals but would also be useful elsewhere.
When checking your project for abstraction, ask:
How would I build this element if I knew it was going to be reused in another context with different needs?
How can I break it into parts that would be valuable throughout the entire application?
Thinking through a more general implementation of each element is key. These pieces should be stored as completely separate and independent classes or, better yet, as separate LESS or Sass files that can be compiled with the final CSS.
This principle is easier to follow in a web component or modular app architecture because the widgets are probably already separated in this way. But it has as many implications for our thinking as anything else. We should always be abstracting our work from the context it was derived from to be sure we’re creating something flexible.
Common elements are logically broken into reusable parts.
Overlapping with the “Abstracted” principle, making our designs modular is an important part of establishing a concrete design system that is easy to work with and maintain. There is a fine line between the two, but the difference is important in principle. The nuance is that, while global base elements need to be abstracted from their context, individual items in context also need to be reusable and to maintain independent styles. Modules may be unique to our app and not something we need available in the entire framework — but they still need to be reusable so that we aren’t duplicating code every time we use that module.
For example, if you’re implementing the product card list from the previous example for a Daily Deals landing page, instead of making the HTML and CSS specific to Daily Deals, with class names like daily-deal-product, instead create a more general product-cards class that includes all of the abstracted classes yet could also be reused outside of the Daily Deals page. This would probably result in three separate places where your component gets its styles:
base CSS
This is the underlying framework, including default values for typography, gutters, colors and more.
CSS components
These are the abstracted parts of the design that form the building blocks of the overall design but can be used in any context.
parent components
These are the daily-deal component (and any children) containing styles or customizations specific to Daily Deals. For many, this will be an actual JavaScript web component, but it could just be a parent template that includes the HTML necessary to render the entire design.
Of course, you can take this too far, so you’ll have to exercise judgement. But for the most part, everything you create should be as reusable as possible, without overcomplicating long-term maintenance.
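As a rough sketch of that three-layer split in LESS (class names and values are illustrative):

```less
// 1. Base CSS: global standards shared by everything
@gutter-width: 15px;

// 2. CSS component: abstracted, reusable in any context
.product-card {
  margin-bottom: @gutter-width;
}

// 3. Parent component: customizations specific to Daily Deals only
.daily-deal .product-card {
  margin-bottom: @gutter-width * 2; // extra breathing room on this page only
}
```

The reusable `product-cards` styles stay untouched; Daily Deals only layers its own adjustments on top.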
Customizations to base elements are available through optional parameters.
Part of building a design system is about thinking through options that the project might need now or in the future. It’s not enough to implement the design only as prescribed. We also have to consider what parts might be optional, turned on or off through a different configuration.
For example, the callout flags in our design show only three variations of color, all pointing to the left. Rather than create three separate classes, we’ll create a default class and add additional class names as the optional parameters. Beyond that, I think someone might come along and want to point the flag right for a different context. In fact, using our default brand colors for these callouts is also useful, even though the design doesn’t specifically call for it. We’d write the CSS specifically to account for this, including both left and right as options.
While you’re thinking through a particular design element, think about the options that might be valuable. An important part of understanding this is thinking critically about other contexts in which this element might be reused.
What parts could be configured, optional or enabled through an external variable?
Would it be valuable for the color or position of the element to be able to change?
Would it be helpful to provide small, medium and large sizes?
Using a methodology such as BEM, OOCSS or SMACSS to organize your CSS and establish naming conventions can help you make these decisions. Working through these use cases is a big part of building a configurable design system.
The code is easily extended and anticipates enhancements in the future.
In the same spirit as the principle of “Configurable,” our implementation also has to expect changes in the future. While building in optional parameters is useful, we can’t anticipate everything that our project will need. Therefore, it’s important to also consider how the code we’re writing will affect future changes and intentionally organize it so that it’s easy to enhance.
Building scalable CSS usually requires me to use more advanced features of LESS and Sass to write mixins and functions. Because all of our callout flags are the same except for the colors, we can create a single LESS mixin that generates the CSS for each callout without the need to duplicate code for each variation. The code is designed to scale and is simple to update in one place.
To make the callouts scalable, we’ll create a LESS mixin named .callout-generator that accepts values for such things as the background color, text color, angle of the point and border.
In the future, when a new requirement calls for a similar design pattern, generating a new style will be easy.
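Such a mixin might be sketched as follows (the parameter names and the border-triangle technique used for the flag’s point are assumptions for illustration, not the article’s actual code):

```less
.callout-generator(@bg; @color: #fff; @direction: left; @border: none) {
  background: @bg;
  color: @color;
  border: @border;
  position: relative;

  // The flag's point, drawn with a CSS border triangle.
  &:after {
    content: "";
    position: absolute;
    top: 0;
    border: solid transparent;
    border-width: 0.75em 0.5em;
  }
  & when (@direction = left) {
    &:after { right: 100%; border-right-color: @bg; }
  }
  & when (@direction = right) {
    &:after { left: 100%; border-left-color: @bg; }
  }
}

// New variations then take one line each:
.callout-sale { .callout-generator(#c0392b); }
.callout-new  { .callout-generator(#27ae60; #fff; right); }
```

Adding a fourth color, or a right-pointing variant, no longer means duplicating a block of CSS.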
To create a scalable design system, learn to anticipate the changes that are common in projects, and apply that understanding to make sure the code you write is ready for those changes. Common solutions include using variables and mixins, as well as storing values in arrays and looping through them. Ask yourself:
What parts of these elements are likely to change?
How can you write the code so that it’s easy to make those changes in the future?
All elements are described for others to use and extend.
Documenting design is undervalued and is often the first corner to be cut in a project. But creating a record of your work is about more than just helping the next person figure out what you intended — it’s actually a great way to get all of your stakeholders on board with the entire design system, so that you’re not reinventing the wheel every time. Your documentation should be a reference for everyone on the team, from developers to executives.
The best way to document your work is to create a live style guide, one that is generated directly from the comments in your code. We use an approach called style-guide-driven development, along with DocumentCSS, which pays for itself many times over. But even if your project can’t have a live style guide, creating one manually in HTML or even a print-formatted PDF is fine. The principle to remember is that everything we do must be documented.
To document your design system, write instructions to help someone else understand how the design has been implemented and what they need to do to recreate it themselves. This information might include the specific design thinking behind an element, code samples or a demo of the element in action.
How would I tell someone else how to use my code?
If I were onboarding a new team member, what would I explain to make sure they know how to use it?
What variations of each widget can I show to demonstrate all the ways in which it can be used?
The final output is an appropriate representation of the intended design.
Finally, we can’t forget that what we create has to look just as great as the original design concept it’s intended to reflect. No one is going to appreciate a design system if it doesn’t meet their expectations for visual appeal. It’s important to emphasize that the result can only be an appropriate representation of the design and will not be pixel-perfect. I’m not fond of the phrase “pixel-perfect” because to suggest that an implementation has to be exactly like the mockup, pixel for pixel, is to forget any constraints and to devalue standardization (never mind that every browser renders CSS a little differently). Complicating accuracy is the fact that static designs for responsive applications rarely account for every possible device size. The point is that a certain degree of flexibility is required.
You’ll have to decide how much of an appropriate representation is needed for your project, but make sure that it meets the expectations of the people you’re working with. In our projects, I’ll review major deviations from pixel-perfection with the client, just to be sure we’re on the same page. “The designs show a default blue button style with a border, but our standard button color is slightly different and has no border, so we opted for that instead.” Setting expectations is the most important step here.
The goal of these nine principles is to provide a guide for implementing design in HTML and CSS. It is not a set of rules or prescriptive advice as much as it is a way of thinking about your work so that you can optimize for the best balance between great design and great code. It’s important to give yourself a certain amount of flexibility in applying these principles. You won’t be able to achieve perfection with each one every time. They are ideals. There are always other distractions, principles, and priorities that prevent us from doing our best work. Still, the principles should be something to always strive for, to constantly be checking yourself against, and to aggressively pursue as you take your design from the drawing board to the final medium in which it will be experienced. I hope they will help you to create better products and build design systems that will last for many years.
I’d love to hear from you about your experience in implementing design. Post a comment and share any advice or other principles you might be using in your own work.
Editor’s Note: This article is targeted at readers experienced in using Google Analytics. If you’re new to Analytics, the following guide might be challenging.
Many websites use internal advertising in the form of banners or personalized product recommendations to bring additional products and services to the attention of visitors and to increase conversions and leads.
Naturally, the performance and effectiveness of internal marketing campaigns should be assessed, too, as this is one of the most powerful instruments for generating more leads, more conversions and more revenue on your website. In many cases, web analysts use Google Analytics’ UTM campaign parameters to track internal advertising.
For those of you who are not familiar with UTM parameters, this is what an internal link could look like if you add UTM parameters:
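(The domain and parameter values below are purely illustrative.)

```
https://www.example.com/deals/?utm_source=example.com&utm_medium=banner&utm_campaign=summer_sale
```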
The problem: UTM parameters are intended to be used in external campaigns (for example, promoted posts on Facebook). Unfortunately, they are not suitable for tracking internal campaigns. In this article, I’ll explain why you would corrupt your Google Analytics data when using UTM parameters for internal tracking purposes.
The solution: I have developed a set of URL parameters (called ITM parameters) that make tracking internal marketing just as simple and convenient as the familiar UTM parameters. Set-up takes a little time, but then you’d have a means of tracking internal marketing that both is easy to use and delivers accurate data. This article presents the solution and includes a precise description of all the necessary steps.
Can Internal Marketing Campaigns Be Tracked With UTM Parameters?
Well, yes and no. There’s no technical problem with using UTM parameters for internal links as well. However, in the context of web analytics, it is definitely not advisable! As soon as a visitor clicks on an internal link with UTM parameters, their current session in Google Analytics is ended and a new session is started with the source and medium information from the UTM parameters. This has disagreeable consequences for your data in Google Analytics:
The number of sessions counted increases artificially, because every click on an internal link with UTM parameters results in a new session.
A conversion can no longer be traced back to the original, external source of the traffic, because the new session uses the source and medium information from the UTM parameters, which makes it more difficult to quantify the contribution of various external traffic sources to your conversions.
If a new visitor clicks on an internal link with UTM parameters right on the landing page, this immediately results in a new session. The upshot of this is that the bounce rate for visitors from external sources increases artificially.
Naturally, you want to avoid these negative consequences, because they reduce the quality of your website’s analytics.
Nevertheless, the use of UTM parameters to track banner ads and product recommendations is widespread. For one thing, many web analysts are unaware of the problem described above. But the deciding factor is something else: The UTM parameters are easy to use, very flexible and accessible, even for the less technically adept among us, which is why they are so often misused for internal campaigns.
In my blog post “Tracking Teasers and Internal Links in Google Analytics” (in German only, unfortunately), I’ve already covered how clicks on internal banner ads can be monitored and evaluated using event tracking or enhanced e-commerce tracking.
In practice, I’ve discovered that setting up tracking is too much for many people and/or they don’t follow through with it systematically. As a result, they fall back on using the UTM parameters for internal purposes.
So, I asked myself the following question: If the UTM parameters are so popular, can I find a way to track and evaluate internal marketing campaigns using URL parameters?
Tracking And Evaluating Internal Marketing Campaigns Using URL Parameters
The answer is yes, it is possible!
Not with UTM parameters, because the type of evaluation you can do with these is predetermined in Google Analytics. What we can do, however, is define a new set of URL parameters that we then use to track internal marketing campaigns.
I call these parameters simply ITM parameters. Instead of using utm_source, utm_medium and so on, you would now use the parameters itm_source, itm_medium and so on for tracking your internal advertising. I deliberately modeled the names on the UTM parameters, so that existing URLs can be easily modified with “find and replace.”
OK, let’s get started. I’ll show you how to use Google Tag Manager and Google Analytics to create a set of URL parameters for tracking internal marketing campaigns and for analyses.
Drawing on the UTM parameters, we’ll use the following ITM parameters to track internal marketing campaigns:
itm_source
This identifies the source of the traffic. In the case of internal ads, this is usually your own domain. However, if you employ cross-domain tracking or different ad servers, different traffic sources can appear in this field.
itm_medium
This is the medium of the internal ad; for example, a banner or product recommendation.
itm_campaign
This is the name of the internal marketing campaign.
itm_content
This is a parameter to differentiate between similar content, or links in the same ad; for example, text_link or banner_468x60.
itm_term
This is the search keyword that triggered the internal ad. Alternatively, it could be a keyword to categorize the ad by content.
A URL that uses all five ITM parameters might look something like this:
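(A purely illustrative example; the domain and values are not from a real campaign.)

```
https://www.example.com/deals/?itm_source=example.com&itm_medium=banner&itm_campaign=summer_sale&itm_content=banner_468x60&itm_term=camera
```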
Now we have to make sure that the information we are collecting with the ITM parameters is passed to Google Analytics and available for analysis.
Note: To analyze ITM parameters in Google Analytics, we need to create custom reports. In this article, I’ll show you how to create these reports, but I’ve uploaded them to Google Analytics’ Solutions Gallery, too! You’re free to download the reports there.
Creating Custom Dimensions In Google Analytics
All of the information that we collect with the ITM parameters should be saved in Google Analytics. To this end, we’ll create custom dimensions in Google Analytics. Here, we have to consider their scope: Should we create the new dimensions at the session or hit level?
Let’s go back to the UTM parameters for a moment. The UTM parameters are session-level parameters — that is, the information regarding source, medium, campaign and so on applies to the entire session. What this means for internal campaign tracking is that only the information regarding the last link clicked is captured with UTM parameters.
However, if we want to measure the influence of internal ads on conversion rates, then we would be interested in all of the clicks on internal banners, not just the last one. For this reason, it is advisable to create the new dimensions in Google Analytics at the hit level (for example, pageviews) in order to record the individual clicks on banner ads accurately.
The catch is that session-based and hit-based dimensions can only be combined in custom reports in exceptional cases. That means that we can’t combine conversions (session level) with individual clicks on banners (hit level), and it means that an internal promotion report, like the one we know from enhanced e-commerce, is not available as a custom report.
So, how do we solve this problem?
We could capture the ITM parameters at the session level.
This allows us to use the predefined campaign reports in Google Analytics as templates for reports on internal marketing campaigns. This is similar to tracking with UTM parameters, which also don’t provide detailed information at the hit level. Not ideal, but easy to manage.
Or we could capture the information from the ITM parameters at the hit level.
Then, we can’t establish a direct relationship between the ITM data and conversions. However, we can use data segments to analyze precisely those sessions in which internal banners were clicked and, thus, determine the influence of internal marketing campaigns on conversions.
It’s up to you which method to use. If you capture the information from the ITM parameters on a session basis, it will be easier to manage the reports in Google Analytics, but the data will be less precise. If you go with the second variant, you will capture very accurate click data for internal banners but will have to invest more time to link this data to your conversions.
Alternatively, you could employ both methods in parallel by creating two custom dimensions for each ITM parameter in Google Analytics: one at the session level, the other at the hit level. Then, you will be free to decide how you want to analyze the data. The disadvantage of this solution is that you need 10 custom dimensions in Analytics to implement it, rather than 5. This might be a critical point, since the free version of Analytics allows a maximum of 20 custom dimensions.
In the following step, we’ll create two sets of dimensions: one at the hit level and one at the session level. If you’ve already decided whether to count clicks on internal ads at the session level only or at the hit level only, then you’ll only need to create the one set that you require.
When you add the ITM parameters to your URLs, these enhanced URLs will also be displayed in Google Analytics (for example, in the content reports). In this case, you would see the same page twice: once with the additional ITM parameters and once without. We want to prevent this, because the content of the pages with and without ITM parameters would be identical. Thankfully, Google Analytics provides a simple means of excluding query parameters.
First, select your “Property” in the “Admin” view, and then the desired view. Under the “View Settings” menu item, you’ll find the section “Exclude URL Query Parameters”:
Enter the ITM parameters that are to be used for tracking in the field provided: itm_source, itm_medium, itm_campaign, itm_content, itm_term.
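To make the effect of this setting concrete, here is a small sketch in plain JavaScript of what the exclusion amounts to: stripping the ITM parameters so that both URL variants report as the same page. The function name is illustrative; Google Analytics does this for you once the setting above is in place.

```javascript
// The five tracking parameters used throughout this article.
const ITM_PARAMS = ["itm_source", "itm_medium", "itm_campaign", "itm_content", "itm_term"];

// Remove the ITM parameters from a URL, mirroring what the
// "Exclude URL Query Parameters" setting does in Analytics.
function stripItmParams(pageUrl) {
  const url = new URL(pageUrl);
  ITM_PARAMS.forEach((param) => url.searchParams.delete(param));
  return url.toString();
}
```

For example, `stripItmParams("https://example.com/page?itm_source=home&ref=x")` keeps the unrelated `ref` parameter but drops `itm_source`, so the page is reported once rather than twice.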
Remember that Googlebot will also find your URLs with the ITM parameters. This could mean that these URLs land in Google’s index and cause duplicate content. We can prevent duplicates from arising with a minor tweak in Google Search Console. Select “URL Parameters” from the “Crawl” section in Search Console:
Now click on the “Add Parameter” button to let Google know that the ITM parameter doesn’t affect the page’s content:
Enter itm_source in the “Parameter” field.
Select the entry “No: Doesn’t affect page content (ex: tracks usage)” from the dropdown menu.
“Save.”
Repeat these steps for the parameters itm_medium, itm_campaign, itm_content and itm_term as well. When you’re finished, all of the ITM parameters should be configured as follows:
Setting Up Internal Campaign Tracking In Google Tag Manager
Now that we have laid the groundwork, we finally come to the point that probably interests you the most: How will the ITM parameters be read with Google Tag Manager and be passed to the custom dimensions in Google Analytics?
This is surprisingly easy. All we need to do is store the content of the ITM parameters in user-defined variables and then send them to Google Analytics with the pageview tag. That’s it!
For each of the five ITM parameters, we’re going to create a variable in Tag Manager. When a URL that contains one of these parameters is called, its value will be saved in the corresponding variable.
Click on “Variables” in the “Workspace” in Tag Manager and then on “New” under “User-Defined Variables”.
In the window that opens, we’ll define the first variable, itm_source URL-Parameter, where we’ll be storing the value from the URL parameter itm_source.
Name the variable itm_source URL-Parameter.
Select “URL” as the variable type.
Select “Query” as the component type.
The query key is itm_source. This is the name of the URL parameter that is going to be saved in this variable.
Make sure that “Page URL / Default” is selected under “URL Source.”
Click “Save” to finish defining the variable.
Repeat these steps for the four other parameters (itm_medium, itm_campaign, itm_content and itm_term).
Once you’re finished, there will be five variables available in Tag Manager to save the values from the ITM parameters.
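Outside Tag Manager, the behavior of such a “URL / Query” variable can be sketched in plain JavaScript (the function name is illustrative, not a GTM API):

```javascript
// Plain-JavaScript equivalent of a GTM "URL" variable with the
// component type "Query": read one query parameter from the page URL.
function getQueryParam(pageUrl, key) {
  const value = new URL(pageUrl).searchParams.get(key);
  // GTM variables resolve to undefined when the parameter is absent.
  return value === null ? undefined : value;
}
```

Calling `getQueryParam("https://example.com/?itm_source=home_banner", "itm_source")` returns `"home_banner"`, and a missing parameter yields `undefined`, which is exactly how the Tag Manager variable behaves.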
In this screenshot, you can see not only the five newly created variables (itm_xxx URL-Parameter), but also five additional variables with the type “Constant” (itm_xxx Index). Let’s take a closer look at the variable itm_source Index:
The variable itm_source Index is a constant and has the value 14.
What is it good for?
In the next step, we’re going to send the contents of the five itm_xxx URL-Parameter variables to the custom dimensions we prepared in Google Analytics. Each of these custom dimensions is addressed using an index number. In the example, the index number for the dimension itm_source is 14 (see the section “Creating Dimensions for the ITM Parameters”). In your case, the number will probably be different. If you use the custom dimensions in different tags, you will probably have to check the number of the relevant dimension every time in Google Analytics. That’s why I like to save the dimension number in its own variable — so that I don’t have to remember it. Saving the index numbers in separate variables is not necessary; you can also just address the dimensions in the tag directly with the respective number.
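The mapping between ITM parameters and dimension index numbers can be sketched as follows. The index numbers 14 to 18 are the ones from this example; yours will almost certainly differ, so treat them as placeholders.

```javascript
// Example mapping of ITM parameters to custom-dimension slots.
// The index numbers are illustrative; check yours in Google Analytics.
const ITM_DIMENSION_INDEX = {
  itm_source: 14,
  itm_medium: 15,
  itm_campaign: 16,
  itm_content: 17,
  itm_term: 18,
};

// Build the custom-dimension fields for a pageview hit from a URL,
// e.g. { dimension14: "home", dimension16: "marketing2017" }.
function buildItmDimensions(pageUrl) {
  const params = new URL(pageUrl).searchParams;
  const fields = {};
  for (const [param, index] of Object.entries(ITM_DIMENSION_INDEX)) {
    const value = params.get(param);
    if (value !== null) {
      fields["dimension" + index] = value;
    }
  }
  return fields;
}
```

Keeping this mapping in one place is the same idea as storing the index in its own constant variable in Tag Manager: change the number once, and everything that uses it stays correct.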
Enhancing the Pageview Tag for ITM Parameters
Even with the most basic configuration of Google Analytics and Tag Manager, you will have at least one tracking tag activated — namely, the tag used to send the pageviews to Google Analytics. This tag is ideally suited to be enhanced for the ITM parameters.
Click on “Tags” in the “Workspace” in Tag Manager, and then select the tag with the trigger “All Pages”:
Clicking on the name of the tag will display its details:
Make sure that the “Track Type” is set to “Pageview.” This is the tag used to send pageviews to Google Analytics.
Open the configuration page for the tag, and add the ITM parameters:
Open the “Custom Dimensions” area.
Click on the button “+ Add Custom Dimension.”
Enter the variable {{itm_source Index}} in the “Index” field and the variable {{itm_source URL-Parameter}} in the “Dimension Value” field. In this way, the content of the URL parameter itm_source — which we saved before in the variable itm_source URL-Parameter — will be passed to dimension number 14 in Analytics.
If you don’t use variables to save the index numbers, you can simply enter the respective number here (for example, 15). Add the four remaining dimensions for the variables itm_medium URL-Parameter, itm_campaign URL-Parameter, itm_content URL-Parameter and itm_term URL-Parameter.
“Save.”
Here, at the latest, you’ll have to decide whether the information from the ITM parameters should be saved at the session or hit level in Google Analytics. In the example above, I used the index numbers of the session-based dimensions. You will have to adjust this according to your needs.
Now that we have enhanced the pageviews tag, don’t forget to publish the changes in Tag Manager. Now, when the requested URL contains additional ITM parameters, Tag Manager will read them and pass the information to Google Analytics.
Evaluating Internal Marketing Campaigns In Google Analytics
After you have finished setting up ITM parameter tracking in Tag Manager, it’s time to take a closer look at the analysis options in Google Analytics. In doing so, we will have to decide whether to save the ITM parameters at the session or hit level (see the section “Creating Custom Dimensions in Google Analytics”).
If you save the ITM parameters in custom dimensions with the scope “Session,” then evaluation of the internal marketing campaign will work in much the same way as with UTM parameters. As I’ve already mentioned, the disadvantage of this method is that there is only one set of ITM data for the whole session, which is overwritten each time a URL with ITM parameters is clicked. As a result, you will only see the information relating to the last click on an internal banner. Of course, this was already the case when using UTM parameters.
Session-based tracking of ITM parameters is more convenient in terms of web analytics, but less accurate than capturing the ITM parameters at the hit level.
The simplest way to create a custom report for evaluating internal marketing campaigns is via the standard reports for campaigns. In Google Analytics, select the report via “Acquisition” → “Campaigns” → “All Campaigns.” You can then modify this report to suit your evaluation requirements by clicking on the “Customize” button.
I named the report “Internal Marketing Campaigns (session-based).”
The report has two tabs: an “Explorer,” which you can see here, and a “Flat Table,” which is visible in the next screenshot.
The “Explorer” tab can have various freely customizable metric groups.
The “Explorer” drills down through the dimensions itm_campaign_s → itm_content_s → itm_term_s. So, the internal marketing campaign level is immediately visible in this report. The two other dimensions, itm_source_s and itm_medium_s, can be added easily as required.
I included a filter, so that the report only displays sessions where the dimension itm_campaign_s is not empty.
Let’s look at the second tab in the report:
This tab features a flat table which is well suited for exporting the data.
The table uses all of the session-based dimensions that store ITM parameters: itm_source_s, itm_medium_s, itm_campaign_s, itm_content_s and itm_term_s.
The specified metrics are freely customizable.
Important tip: The table only displays sessions that have values in all five(!) ITM dimensions. If, for example, you don’t use the parameter itm_term, then the dimension itm_term_s will remain empty, which means that no sessions would be displayed in the table. We can eliminate this restriction in two ways: either by removing the unused dimensions from the report or by using Google Tag Manager to set a default value in the dimension when the corresponding ITM parameter is empty. The second option requires more work in Tag Manager, because we would then have to distinguish between mandatory parameters and optional parameters.
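The second option can be sketched as a small helper, the kind of thing you might put in a hypothetical GTM custom JavaScript variable. It substitutes a placeholder when an optional ITM parameter is missing, so the hit still shows up in the flat table; the fallback string "(not set)" is chosen here only because it matches the convention Analytics itself uses.

```javascript
// Fall back to "(not set)" when an optional ITM parameter is missing,
// so that hits are not filtered out of reports that use all dimensions.
function itmValueOrDefault(rawValue) {
  return rawValue === undefined || rawValue === null || rawValue === ""
    ? "(not set)"
    : rawValue;
}
```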
Session-Based Evaluation of Internal Marketing Campaigns
With the aid of some simple example data, I’d like to illustrate how the data from internal marketing campaigns is presented in this report.
In the “Explorer” tab, it is possible to combine various dimensions with one another; for example, itm_campaign_s (marketing campaign) and itm_content_s (ad content):
Or you can analyze the effectiveness of internal marketing campaigns in relation to the external traffic sources. This, for instance, is not possible with UTM parameters:
The table contained in the report provides you with a session-based view of all the ITM parameters and makes it simple to export data for further processing in Excel, Google Docs, etc.
Hit-based evaluation of internal marketing campaigns provides you with an accurate overview of how often specific advertising content has been clicked. However, an additional step is necessary in order to establish the relationship to the conversions achieved. This makes hit-based analysis more time-consuming than session-based evaluation, but it also enables more sophisticated analyses.
There is no standard report for hit-based evaluation that we can take and customize for our purposes, so we’ll have to create our own report that will provide us with information on how often the individual pieces of ad content have been clicked. Based on this report, we will then infer the data segments that we need for our analysis. But first, the report.
I named it “Internal Marketing Campaigns (hit-based).”
The report has two tabs: an “Explorer,” which you can see here, and a “Flat Table,” which is visible in the next screenshot.
I kept the “Explorer” tab simple; it should provide you with an overview of the usage of internal advertising over time. That’s why it only contains a single metric group with the metric “Hits.”
The “Explorer” drills down through the dimensions itm_source_h → itm_medium_h → itm_campaign_h → itm_content_h → itm_term_h.
I included a filter so that the report only displays sessions in which the dimension itm_source_h is not empty.
Let’s look at the second tab in the report:
This tab features a flat table that provides you with an overview of the clicks on different advertising content and that is well suited for exporting the data.
The table uses all of the hit-based dimensions that store ITM parameters: itm_source_h, itm_medium_h, itm_campaign_h, itm_content_h and itm_term_h.
I only used the metric “Hits.” The session-based metrics such as number of conversions cannot be combined with the selected dimensions.
Important tip: This table also only displays hits that have values in all five(!) ITM dimensions. If, for example, you don’t use the parameter itm_term, then the dimension itm_term_h would remain empty, which means that no hits would be displayed in the table. In this case, you would either have to remove the unused dimensions from the table or use default values for the parameters (see above).
Hit-Based Evaluation of Internal Marketing Campaigns
As I already mentioned, combining hit-based data (such as clicks on internal ads) and session-based data (such as conversions) directly is not possible in Google Analytics. That’s why we first use the custom report “Internal Marketing Campaigns (hit-based)” to identify data segments that we can then apply to the acquisition reports, for example.
Let’s look at the report, jumping straight to the table this time:
In the table, we see the number of clicks (hits) on internal ads. The data is grouped based on the combinations of ITM parameters that occurred. In a sense, this is our raw data. Now we have to consider which data segments we can identify:
In this example, there is only one internal marketing campaign, but this is already interesting as a data segment. We want to examine more closely how the number of conversions differs between sessions with and without clicks on internal ad content.
Then, we drill down a little deeper and differentiate based on the dimension itm_term_h, which identifies ad content with a relationship to web analytics (webanalyse) or AdWords (adwords).
Even with so little data, we can continue to slice it up into new segments. For example, you could create segments from the dimensions itm_term_h and itm_content_h to examine which type of ad content works better for itm_term_h = "webanalyse" or itm_term_h = "adwords". But that’s just a side note.
I’m not going to go into the details of creating the data segments. You can see the definition of the segments in their names.
We’ll start with the segment itm_campaign_h = marketing2017. To make the difference clearer, I also defined a segment that includes all of the users who didn’t click on internal ads. This segment is named itm_campaign_h != marketing2017. != is the operator for “not equal to.”
Even with these two simple segments, we can distinguish between users who clicked on internal advertising and those who didn’t.
We can see significant differences in the number of conversions and the conversion rate.
We can examine the effectiveness of the internal marketing campaign in relation to the traffic sources. This data provides us with valuable starting points for better targeting and optimizing our internal marketing campaigns. Performing this kind of analysis is impossible if you use UTM parameters!
In the second short analysis, we’ll consider the segments itm_term_h = webanalyse and itm_term_h = adwords. These segments enable us to differentiate based on the context of the internal ad with regard to content — in this example, therefore, AdWords (adwords) and Web Analytics (webanalyse).
We can now distinguish between internal ads with the subjects AdWords (adwords) and Web Analytics (webanalyse). Once again, I’ve activated the segment itm_campaign_h != marketing2017 (users without clicks on internal ads) as a control group.
If we look at the number of conversions, we’ll notice that the total across the individual segments (3 + 4 + 2 = 9) is greater than the total number of conversions. We might have expected the segments itm_term_h = webanalyse and itm_term_h = adwords to add up to six conversions, but together they account for seven. This is a consequence of hit-based evaluation: a user can click on banner ads for AdWords and banner ads for Web Analytics during a single session. These users are then part of both data segments!
Naturally, these examples are just a taste of the many analyses that are possible with data segments. But they clearly show how much greater the potential for analysis is when clicks on internal ads are captured at the hit level.
Don’t use UTM parameters to track internal marketing campaigns. They are intended for external campaigns. If you use these parameters internally, you’ll lose information about the sources of the users’ traffic. Moreover, you’ll only ever have information about a user’s last click on an ad.
If you already employ ecommerce tracking on your website, you should check whether you can monitor your internal marketing with enhanced ecommerce tracking.
If enhanced ecommerce tracking is not an option for you or you are looking for a solution that’s easy to manage, then consider implementing tracking using ITM parameters.
A little technical know-how is required to implement this method of tracking, but later on the parameters will be as easy to use as the UTM parameters.
Give some thought to capturing data: Should clicks on internal banners be saved at the hit or session level? Both options have their advantages and disadvantages (data quality versus the time and effort of analysis). If necessary, you could even use both options in parallel. Then, you’d need ten custom dimensions in Analytics instead of five. Using both methods in parallel would allow you to compare them directly.
The five ITM parameters that I presented here are not set in stone. I chose them because they make the switch from UTM parameters simple. If you require additional data, adding further tracking parameters is easy.
All done! This article ended up being much longer than I originally planned.
Thanks for sticking with me right to the end and for taking on the challenges of tracking and analyzing internal marketing campaigns with me. As you’ve seen, there are a number of decisions to be made and various settings to be configured.
But they’re worth the effort — internal advertising is one of the most powerful instruments for generating more leads, more conversions and more revenue on your website. In order to employ this instrument profitably, you’ll need data that allows for the evaluation and optimization of your internal marketing campaigns. Particularly if you’re offering personalized product recommendations and the like, detailed data is indispensable for identifying jumping-off points for optimizations.
All the best in the analysis of your internal marketing campaigns!
Creating good user experiences for apps inside messaging platforms poses a relatively new design challenge. When moving from desktop web to mobile interfaces, developers have had to rethink interaction design to work around a constrained screen size, a new set of input gestures and unreliable network connections. Like our tiny touchscreens, messaging platforms also shake up the types of input that apps can accept, change designers’ canvas size, and demand a different set of assumptions about how users communicate.
Our extended UX guide walks you through designing a good experience end to end, but here, we’ll focus on a few things:
Identify basic assumptions about users. Who are they? What do they know about and expect from your app? How do they expect to receive information?
Consider UI aspects that are specific to messaging platforms, including available components and expected behaviors.
Write app text for conversation (i.e. doing more with fewer words).
Apps are often abandoned due to a poorly designed interface or an overall negative experience. Make sure that you clearly show users why they need your app.
Basic Assumptions: Who Are The People Using My App?
This is probably the most critical step when it comes to building your user experience. In order to cater to your users, you need to know what they need and expect from the messaging service they’re using, and what they need from your app.
We’ll spare you any preaching on the value of user personas for consumer apps; in this article, though, we do want to call out specific needs that users have in enterprise messaging apps.
The most basic assumption about people using an enterprise messaging app is that they are using the messaging client at work, for work. It’s a big assumption but is key to everything else: You are building an app for an organization, and your app is helping that organization be productive.
There are many variables to consider:
Users might be from a big enterprise group or a small company.
Users might belong to different business units, such as engineering or HR.
Administrators might have different preferences or requirements when it comes to app installation, permissions and security.
Your app might target all of these people or just a subset of them, but your users could come from any company. They will be people of all ages, races, genders and ability levels. They might have poor Internet connections, use the messaging app only on mobile or be forced to use it by their boss.
The four factors below will ensure that they all have a great experience.
There are 5-person non-profits and 50,000-person enterprises that could use your app. Some teams keep their conversations in a dozen channels, while others create five channels for every new project or subteam across their organization.
To make your app work well for 5-person teams and 50,000-person teams, take into account how your app uses lists of users and channels. You might need to leave room in your UI for extra channels, or find ways to abbreviate or truncate what you display.
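One common tactic for handling both extremes is to show only the first few items and summarize the rest. A minimal sketch, with an illustrative function name:

```javascript
// Truncate a long list of channels for display, so the same UI works
// for a 5-person team and a 50,000-person enterprise.
function summarizeChannels(channels, maxShown) {
  if (channels.length <= maxShown) {
    return channels.join(", ");
  }
  const shown = channels.slice(0, maxShown).join(", ");
  return shown + " and " + (channels.length - maxShown) + " more";
}
```

A four-channel list with `maxShown = 2` renders as `"#general, #design and 2 more"`, keeping the message compact no matter how large the team grows.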
Even on a single team, users might be spread out across multiple time zones. If you’re planning to post notifications at a particular time of day, understand that some people might be seeing the notification at a different time than everybody else on their team.
People also use their work messaging apps while traveling, so they might be in one time zone when they installed your app but in a different time zone a few months later.
Access controls are extremely important in enterprise software. For example, team administrators might permit only certain people to install and manage apps according to company policy. Prompting a user to install your app when they don’t have permission to will fail; check their access levels before suggesting an action, or fail gracefully with informative error messages and an escalation path to their team administrator. If you can, test the installation flow and interactions in your app using a variety of user types.
Just because somebody installs your app on their team doesn’t necessarily mean that everybody else on the team knows about it. In fact, some people might not even realize they’ve installed an app and might think that your app is part of the messaging client’s built-in functionality. As the app developer, you are responsible for making a great first impression on those interacting with your app.
UI Considerations: Make It Pretty, Make It Work
Messages inside any enterprise messaging tool should be designed to help people get work done efficiently. Here are some general rules we’ve learned from seeing hundreds of apps.
Create a storyboard of your app’s interactions before you start coding it. It’s a great way to stay user-centric and provide the most valuable and pleasant experience possible. Mapping out a user’s journey can also help you spot any inefficient paths to completing a task, redundant flows or confusing cycles.
Keep Text Segments Bite-Sized and Conversational
People don’t read big blocks of text. Make text scannable by keeping it short.
Make Use of Each Platform’s Built-In UI Elements
When you use system features, you stay consistent with the platform’s UI (for example, fonts will look good together, buttons will stay in proportion with everything else on the screen), and you’ll also benefit from the work each company has done to optimize those features for display across mobile and desktop. Remember that your app is contained within the frame of another app, so completely rolling your own style would look more jarring than unique.
What’s more, UI elements such as buttons, attachments and menus give you a lot of flexibility to make your messages pop. If you’re having trouble keeping your messages short, think about using images or linkified text to make them easier to read.
For example, the message below uses links and colored buttons to highlight the key elements of the message: There is a task to be completed, related to a specific message belonging to a specific topic.
Besides making sure your app’s messages look great, you’ll want to do a few more things.
Help Users Choose by Picking Sensible Default Options
Save people work wherever you can by minimizing the choices they have to make. When you give menus and buttons good default values, you decrease the number of choices users have to make from many to just one: yes or no.
Say your app helps people buy coffee. Instead of presenting a full menu of choices every time someone orders, you could make the user’s last order the default option. In the best-case scenario, this reduces the coffee order to a single click.
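The coffee example can be sketched like this. The shape of the message object and the function name are hypothetical, not a real messaging-platform API; the point is the branching logic around a sensible default.

```javascript
// Default to the user's last order so that the common case is a single
// "Yes" click, while first-time users still get an open-ended prompt.
function buildOrderPrompt(lastOrder) {
  if (lastOrder) {
    return {
      text: "Your usual " + lastOrder + "?",
      buttons: ["Yes", "Change order"],
    };
  }
  return { text: "What would you like to order?", buttons: [] };
}
```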
Users will be interacting with your app asynchronously. They might begin a task, then get interrupted and revisit later. Your app should plan for that. Interruptions or periods of inactivity may or may not signal a loss of intent on the part of the user.
Decide whether anything in your workflow is time-sensitive. Should someone be able to jump in on a task again at any time? Should some content expire or need to be redone? Make a conscious choice about the lifespan of messages.
UI-heavy messages are great in the moment you receive them: They’re easy to read, good to look at and simple to interact with. They also take up a lot of space on the person’s screen.
In your storyboard, think about what a person will need to remember about their interaction with your app when they come back to it later, at the end of the message’s life or after an exchange of several messages. Do those buttons and menus need to stick around, or could you condense the message down to a simple text record of what happened? Be considerate and update your message after the interactive portion of the conversation has expired.
If your app has a conversational bot component, the way your bot communicates will become part of your brand’s voice. In the most literal interpretation of a chat UI, your bot is your company’s logo having a conversation with the end user — so, no matter what, you are expressing something about your brand and values, and you should be thoughtful about what you say.
If you strive to make your bot sound clear, concise and human, you can’t go far wrong. The bot’s voice could start out being more or less your own, progressively edited for clarity.
Believe it or not, if you’re doing any sort of natural language processing on your users’ responses, taking the time to write clear, concise bot copy will pay dividends. On nearly all platforms, your bot will speak to the user first. This helps set the tone for the conversation, since people generally mimic the way they’re spoken to when replying. If your bot speaks clearly and in grammatical English (or the language of your choice), you’re more likely to get grammatical replies back.
Write down some adjectives that you’d like people to use to describe your brand or your bot (for example, “friendly,” “authoritative”). Then you can go back and edit the words you would have used as yourself, and make sure they’re consistent with how someone communicates when they’re friendly or authoritative.
Small words can make a difference. Think about how you would reply to a bot that greets you like this…
Hello, how may I help you?
… versus like this:
Hey, how can I help you?
For further reading, MailChimp’s “Voice and Tone” guide is a very helpful way to frame how you approach writing for your bot.
Your bot’s primary purpose is not to sound clever or to entertain users, but to help them accomplish a task — even if that task is inherently entertaining, like finding just the right cat GIF. Even if your app is useful, if it feels like more work chatting with your bot than just doing the same task outside of the messaging interface, then you’re going to lose some users.
A bot with personality can help you stand out, but within limits:
Don’t construct a personality that requires you to add a lot of fluff to messages in order to express character or humor. Get to the point.
Try to avoid puns or wordplay if they detract from the meaning.
Informality is good, but getting overly friendly will be charming to a very small number of people. The rest will find it grating or even culturally insensitive (particularly in a workplace).
If you decide to give a gender to your bot (and it’s very easy not to), then be appropriate with the kinds of things they say and don’t say. Don’t be tempted to use it as an excuse to get lazy about behaviors or about phrases stereotypical to one gender or another (or to particular age groups, or anything else). You’ll end up driving people away.
Using contractions and conversational cadence is a good way to lightly infuse your bot with human personality — “You’ll be able to” rather than “You will be able to.”
A little goes a long way. We cannot say this enough.
Don’t add a joke or aside just to add one. Almost every word your bot says should facilitate an interaction. (Courteous parts of speech, such as greetings, are also useful.)
Don’t do this:
Try this instead:
The second example still has plenty of distinctive personality, but gets straight to the point.
Try to write copy for your interactions that someone who doesn’t speak your language fluently could easily understand. That means:
avoid over-relying on jargon and slang;
avoid culturally specific references, such as jokes from movies;
stick to common, simple words;
don’t replace words with emoji.
No:
Yes:
The emoji combination is also potentially confusing and might stall some users as they try to decipher it (“Fire?… Meat?… Firemeat?”).
This button copy is similarly difficult to understand and, in the worst case, could prevent users from selecting a response at all. Try to use standard combinations on buttons (“Yes” and “No,” “Confirm” and “Cancel”) to help users have a smooth, simple experience with your app.
Read over the copy and ask yourself, “Is there anywhere a user might pause in confusion?” Better yet, watch someone read your copy and see if they get confused.
Make an effort to test your bot with users from a variety of backgrounds, in different settings (on mobile, with flaky Wi-Fi, etc.).
Don’t assume any level of technical fluency from your users.
Writing for a broad audience takes a bit of practice if you’re new to it, and it is usually easier if you have a team of people from diverse backgrounds working on your bot from the start.
Words and copy used in your interactions should be easily understood even by someone who doesn’t speak the same language fluently.
Avoid jargon and slang.
For basic actions, stick to common, simple words.
Write button copy in the active voice, and reflect the user’s outcome (“Save,” “Book Flight,” “Place Order”).
Be concise. Button copy should only rarely exceed two words.
Avoid vague, non-actionable text like “Click here” or “Settings.”
Above all, don’t get too attached to anything you write for your bot. Good writing is always in the editing; be open to changes and to feedback, and you’ll iterate your way to success.
Taking your app to a new platform requires that you adapt to your users’ expectations and needs in that new medium. We hope this guide has clarified some of the things that are most impactful to enterprise users of messaging apps: the context of work, UI primitives and how to speak in messages.
We don’t have all the answers. Everything we’ve recommended here is what we’ve seen work for us (building Slackbot) and for successful apps on Slack. If you have your own ideas on what works, we’d love to hear them — drop us a line!
This is an excerpt adapted from Slack’s new UX guide — a comprehensive look at what goes into building a great app for Slack. You can read the full version here.
Have you ever read a post that has left you feeling wholly inadequate because you know you can’t live up to the high standards they lay out? Well, that is how I feel when I read posts about how much to charge my clients.
When Smashing Magazine asked me to write an article sharing my thoughts on pricing my services, I agreed without much thought. But now that I sit down to write it, I’m faced with a conundrum. Do I write about how you should price projects, or do I tell you the truth about the unorthodox approach I take?
I have read many posts about pricing projects, from value-based pricing to billing around Agile cycles. These are all great approaches that I aspire to but have somehow never managed to implement. I suspect I am not alone.
So instead of intimidating you with complex value-based pricing formulas or boring you to death with project Gantt charts, I am going to share with you the rather inelegant approach I take to the subject. Inelegant it may be, but it has allowed me to run a lucrative business for the last 15 years.
It begins by knowing the minimum you have to charge per day.
Okay, so this is the one bit of the process I have made an effort to approach professionally. It is important to know the minimum you have to charge per day to live. That isn’t going to be the rate I charge my clients. It is just the rate I am not willing to go below.
So how do we calculate that rate? Fortunately, it isn’t rocket science. Just follow these steps:
Start with the minimum salary you would like to take out of the business. How much do you want to earn a year? Remember, this is the minimum figure you could survive on. We will increase that number later.
Next, calculate your costs, including recurring costs such as software services, rent, etc., but also a budget for one-off costs such as that shiny new Mac you want. Work out how much this all comes to over a year.
Don’t forget to plan for the future. Set aside some money each month for pension and savings. Calculate the total over a year and add it to your costs and salary.
The figure you now have tells you how much money you must earn before tax. Now, add to that total the tax you would have to pay on that revenue. That will give you the minimum figure you need to earn over the year.
Unfortunately, we still cannot simply take our target revenue and divide it by the roughly 230 working days in a year (what is left once weekends, holidays and sick days are taken out). That is because you will not be able to charge out every hour you work.
There are many other factors to consider. You will spend time marketing, writing proposals, doing admin and managing your finances, none of which is chargeable. Realistically, you can’t expect to charge yourself out more than about 60% of the time.
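To make the arithmetic concrete, here is a tiny sketch of the whole calculation. Every figure in it is a made-up assumption for illustration, not a recommendation:

```javascript
// All numbers below are illustrative assumptions, not recommendations.
const salary = 30000;  // minimum yearly salary you could survive on
const costs = 8000;    // recurring + one-off yearly costs
const savings = 4000;  // yearly pension and savings contributions
const taxRate = 0.25;  // rough effective tax rate, assumed

// Revenue needed before tax to cover salary, costs and savings,
// grossed up so that tax is accounted for.
const preTax = salary + costs + savings;       // 42000
const target = preTax / (1 - taxRate);         // 56000

// You can't bill every working day: assume ~230 working days,
// of which only about 60% is chargeable.
const chargeableDays = 230 * 0.6;              // 138
const minimumDailyRate = target / chargeableDays;

console.log(Math.round(minimumDailyRate)); // → 406
```

Swap in your own salary, costs and tax figures; the shape of the calculation is what matters, not these numbers.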
All of this work will leave you with a minimum daily rate. Now we need to calculate how long our project will take. That is where you will begin to see my rather ad-hoc approach.
It’s embarrassing but true. When it comes to estimating how long a project will take, I make an educated guess.
Sure, if it is a big project I break it down a bit and try to price each part separately. But mainly it is a gut feel for how long things will take. I guess I have the advantage of doing this job for over 20 years, so the chances are I have done a similar project before. But I don’t want to lie to you and pretend I have some clever system. I don’t.
I found that, for me, spending hours calculating how long things would take wasn’t worth it. Whether I was bad at it or just unlucky, there always seemed to be a curve ball that ended up making my carefully crafted figure inaccurate. I appear to do just as well taking an educated guess.
I make my best guess and multiply it by my minimum rate per day, and that gives me my minimum price. But this is not what I charge. In fact, if a client can only pay my minimum price, then in most cases I walk away. After all, my calculation of the length of the project isn’t exactly accurate, so I need to ensure I have a markup.
My minimum price tells me if the project is viable, but it isn’t an amount the client will ever see. I decide on that pricing using three very professional (sarcasm) factors that I outline below.
You know what it is like: some projects get you excited, and about others you couldn’t care less. So why not factor that into your pricing? Projects that sound awesome and that you desperately want to win will hardly be marked up at all. Projects that look as dull as ditchwater get a hefty wedge added on top of the minimum rate.
My rates reflect my level of interest in the project. The more interested, the lower my price will be.
That makes a lot of sense if you think about it. It encourages more of the kind of work you want to be known for and that you love. It might seem unfair, but if the client doesn’t want to pay your premium for a tedious project, they can always go elsewhere.
Next up we have “the client is an ass” tax. Always, always, take every opportunity you can to speak to the client before pricing their project. Ask lots of questions and get them talking. Do your best to ascertain what kind of client they are likely to be.
If you think they are going to be difficult to work with, charge them more. Again this makes sense. Demanding clients require you to put in more effort to deliver and so should be charged a premium for that extra effort.
If you think a client is going to be challenging, make sure you charge accordingly.
Finally, we come to the most interesting one – what can I get away with charging? I work with a huge range of clients. Some are multinational conglomerates, and others are small charities or public sector organizations.
In theory, you could argue that I should charge all of those clients the same, but I don’t. The multinational will pay more because they have more. I never got on with value-based pricing, but I do recognize that I have value and that my value is proportional to the organization. The larger the organization, the more they can do with the value I bring, and so the more I am going to charge them for it.
Value is relative. What is expensive to one client will not be to another and cost is often equated to expertise.
There is a related factor here too. People often equate value with how much they pay. Therefore the more I charge them, the more they value what I produce. They take me more seriously if they are paying top dollar for me.
But what people perceive as expensive varies based on their situation. My charge out rate would seem cheap to a large multinational and costly to that smaller charity. Hence, I change my rates depending on the client.
It is important to note that none of the factors I use to decide on a price reflects how desperate I am for work. I don’t charge less if I need the work and neither do I charge more if I am busy. That just feels like a slippery slope to me and stinks of desperation.
If times are tough, I would prefer to put my energies into sales and marketing rather than working on a project that I might not even break even on.
I am conscious that this post may reflect poorly on me. I nearly didn’t write it for fear it would seem unprofessional. But I know I am not alone in taking this kind of approach. We just don’t talk about it.
In truth, pricing projects is almost impossible to do accurately or in an entirely ‘fair’ way. There are just too many variables, too many things that can go wrong. All we can do is take our best guess.
At the end of the day, pricing is about supply and demand. It isn’t a matter of calculating a rate based on hours spent or return generated. It’s your time, and if people are willing to pay, you can charge whatever you like.
Using voice commands has become pretty ubiquitous nowadays, as more mobile phone users use voice assistants such as Siri and Cortana, and as devices such as Amazon Echo and Google Home invade our living rooms. These systems are built with speech-recognition software that allows their users to issue voice commands. Now, our web browsers are becoming familiar with the Web Speech API, which allows users to integrate voice data in web apps.
With the current state of web apps, we can rely on various UI elements to interact with users. With the Web Speech API, we can develop rich web applications with natural user interactions and minimal visual interface, using voice commands. This enables countless use cases for richer web applications. Moreover, the API can make web apps accessible, helping people with physical or cognitive disabilities or injuries. The future web will be more conversational and accessible!
In this tutorial, we will use the API to create an artificial intelligence (AI) voice chat interface in the browser. The app will listen to the user’s voice and reply with a synthetic voice. Because the Web Speech API is still experimental, the app works only in supported browsers. The features used in this article, both speech recognition and speech synthesis, are currently available only in Chromium-based browsers, including Chrome 25+ and Opera 27+, while Firefox, Edge and Safari support only speech synthesis at the moment.
This video shows the demo in Chrome, and this is what we are going to build in this tutorial!
A simple AI chat bot demo with Web Speech API
To build the web app, we’re going to take three major steps:
Use the Web Speech API’s SpeechRecognition interface to listen to the user’s voice.
Send the user’s message to a commercial natural-language-processing API as a text string.
Once API.AI returns the response text back, use the SpeechSynthesis interface to give it a synthetic voice.
In your project directory, run this command to initialize your Node.js app:
$ npm init -f
The -f flag accepts the default settings; without it, you can configure the app manually. This will also generate a package.json file that contains the basic information for your app.
Now, install all of the dependencies needed to build this app:
$ npm install express socket.io apiai --save
With the --save flag added, your package.json file will be automatically updated with the dependencies.
We are going to use Express, a Node.js web application server framework, to run the server locally. To enable real-time bidirectional communication between the server and the browser, we’ll use Socket.IO. We’ll also install API.AI, a natural-language-processing service, in order to build an AI chatbot that can hold an artificial conversation.
Socket.IO is a library that enables us to use WebSocket easily with Node.js. By establishing a socket connection between the client and server, our chat messages will be passed back and forth between the browser and our server as soon as text data is returned by the Web Speech API (the voice message) or by API.AI (the “AI” message).
Now, let’s create an index.js file and instantiate Express and listen to the server:
Now, let’s work on our app! In the next step, we will integrate the front-end code with the Web Speech API.
Receiving Speech With The SpeechRecognition Interface
The Web Speech API has a main controller interface, named SpeechRecognition, to receive the user’s speech from a microphone and understand what they’re saying.
The UI of this app is simple: just a button to trigger voice recognition. Let’s set up our index.html file and include our front-end JavaScript file (script.js) and Socket.IO, which we will use later to enable the real-time communication:
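A minimal version of that index.html might look like the sketch below. The element ids and file paths are illustrative assumptions, not the tutorial’s exact markup; the Socket.IO client script is served automatically by the Socket.IO server:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Voice chat bot</title>
</head>
<body>
  <!-- A single button triggers voice recognition -->
  <button id="start">Start listening</button>
  <div id="result"></div>

  <!-- Socket.IO client, served by the Socket.IO server itself -->
  <script src="/socket.io/socket.io.js"></script>
  <script src="script.js"></script>
</body>
</html>
```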
We’re including both prefixed and non-prefixed objects, because Chrome currently supports the API with prefixed properties.
Also, we are using some ECMAScript 6 syntax in this tutorial, because the syntax we need, including const and arrow functions, is available in the browsers that support both Speech API interfaces, SpeechRecognition and SpeechSynthesis.
Optionally, you can set a variety of properties to customize speech recognition:
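For instance, the recognition language, interim results and number of alternatives can all be tuned. The helper below is a hypothetical wrapper of my own for illustration; the property names are the real SpeechRecognition ones, and in Chrome you would pass it a new webkitSpeechRecognition() instance:

```javascript
// Hypothetical helper that applies a few common SpeechRecognition
// settings to an instance and returns it.
function configureRecognition(recognition) {
  recognition.lang = 'en-US';          // language to recognize
  recognition.interimResults = false;  // only deliver final results
  recognition.continuous = false;      // stop after a single utterance
  recognition.maxAlternatives = 1;     // one best transcript per result
  return recognition;
}
```

In the browser you would call configureRecognition(new webkitSpeechRecognition()) before starting recognition.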
Once speech recognition has started, use the result event to retrieve what was said as text:
recognition.addEventListener('result', (e) => {
  let last = e.results.length - 1;
  let text = e.results[last][0].transcript;

  console.log('Confidence: ' + e.results[0][0].confidence);

  // We will use the Socket.IO here later…
});
This will return a SpeechRecognitionResultList object containing the result, and you can retrieve the text in the array. Also, as you can see in the code sample, this will return confidence for the transcription, too.
Now, let’s use Socket.IO to pass the result to our server code.
Socket.IO is a library for real-time web applications. It enables real-time bidirectional communication between web clients and servers. We are going to use it to pass the result from the browser to the Node.js code, and then pass the response back to the browser.
You may be wondering why we are not using simple HTTP or AJAX instead. You could send data to the server via POST. However, we are using WebSocket via Socket.IO because sockets are the best solution for bidirectional communication, especially when pushing an event from the server to the browser. With a continuous socket connection, we won’t need to reload the browser or keep sending AJAX requests at frequent intervals.
Numerous platforms and services enable you to integrate an app with an AI system using speech-to-text and natural language processing, including IBM’s Watson, Microsoft’s LUIS and Wit.ai. To build a quick conversational interface, we will use API.AI, because it provides a free developer account and allows us to set up a small-talk system quickly using its web interface and Node.js library.
Once you’ve created an account, create an “agent.” Refer to step one of the “Getting Started” guide.
Then, instead of going the full customization route by creating entities and intents, simply click the “Small Talk” preset in the left menu, then toggle the switch to enable the service.
Customize your small-talk agent as you’d like using the API.AI interface.
Go to the “General Settings” page by clicking the cog icon next to your agent’s name in the menu, and get your API key. You will need the “client access token” to use the Node.js SDK.
Let’s hook up our Node.js app to API.AI using the latter’s Node.js SDK! Go back to your index.js file and initialize API.AI with your access token:
const apiai = require('apiai')(APIAI_TOKEN);
If you just want to run the code locally, you can hardcode your API key here. There are multiple ways to set your environment variables, but I usually set up an .env file to hold the variables. In the source code on GitHub, I’ve hidden my own credentials by listing the file in .gitignore, but you can look at the .env-test file to see how it is set up.
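As a sketch, such an .env file needs nothing more than placeholder values like these (the variable names match the ones used in index.js; the values here are obviously not real credentials):

```
APIAI_TOKEN=your-client-access-token
APIAI_SESSION_ID=any-unique-session-string
```

If you use the dotenv package, calling require('dotenv').config() at the top of index.js loads these values into process.env.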
Now we are using the server-side Socket.IO to receive the result from the browser.
Once the connection is established and the message is received, use the API.AI APIs to retrieve a reply to the user’s message:
io.on('connection', function(socket) {
  socket.on('chat message', (text) => {
    // Get a reply from API.AI
    let apiaiReq = apiai.textRequest(text, {
      sessionId: APIAI_SESSION_ID
    });

    apiaiReq.on('response', (response) => {
      let aiText = response.result.fulfillment.speech;
      socket.emit('bot reply', aiText); // Send the result back to the browser!
    });

    apiaiReq.on('error', (error) => {
      console.log(error);
    });

    apiaiReq.end();
  });
});
When API.AI returns the result, use Socket.IO’s socket.emit() to send it back to the browser.
Giving The AI A Voice With The SpeechSynthesis Interface
Let’s go back to script.js once again to finish off the app!
Create a function to generate a synthetic voice. This time, we are using the SpeechSynthesis controller interface of the Web Speech API.
The function takes a string as an argument and enables the browser to speak the text:
function synthVoice(text) {
  const synth = window.speechSynthesis;
  const utterance = new SpeechSynthesisUtterance();
  utterance.text = text;
  synth.speak(utterance);
}
In the function, first, create a reference to the API entry point, window.speechSynthesis. You might notice that there is no prefixed property this time: This API is more widely supported than SpeechRecognition, and all browsers that support it have already dropped the prefix for SpeechSynthesis.
Then, create a new SpeechSynthesisUtterance() instance using its constructor, and set the text that will be synthesized when the utterance is spoken. You can set other properties, such as voice, to choose from the set of voices that the browser and operating system provide.
Finally, use SpeechSynthesis.speak() to let it speak!
Now, get the response from the server using Socket.IO again. Once the message is received, call the function.
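That wiring can be sketched as a small helper. The function name and the empty-reply fallback below are my own additions for illustration; in script.js you could equally attach the listener inline:

```javascript
// Sketch: wire the Socket.IO "bot reply" event to a speaking function.
// `speak` would be the synthVoice() function defined above; factoring
// the handler out makes it easy to exercise without a browser.
function attachBotReplyHandler(socket, speak) {
  socket.on('bot reply', (replyText) => {
    // Speak a placeholder if the server returns an empty reply.
    speak(replyText || '(No answer)');
  });
}
```

In script.js you would then call attachBotReplyHandler(socket, synthVoice).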
Note that the browser will ask you for permission to use the microphone the first time. Like other web APIs, such as the Geolocation API and the Notification API, the browser will never access your sensitive information unless you grant it, so your voice will not be secretly recorded without your knowledge.
You will soon get bored with the conversation because the AI is too simple. However, API.AI is configurable and trainable. Read the API.AI documentation to make it smarter.
I hope you’ve enjoyed the tutorial and created a fun chatbot!
Voice interaction has transformed the way users control computing and connected devices. Now with the Web Speech API, the user experience is transforming on the web, too. Combined with AI and deep learning, your web apps will become more intelligent and provide better experiences for users!
This tutorial has covered only the core features of the API, but the API is actually pretty flexible and customizable. You can change the language of recognition and synthesis, the synthetic voice, including the accent (like US or UK English), the speech pitch and the speech rate. You can learn more about the API here: