No, this is not a broken page or a sneaky landing page. Today marks an important milestone in Smashing Magazine’s life, and this very page is an early preview of what’s coming up next: many experiments, new challenges, but still a good ol’ obsession with quality content. A complete overhaul, both visually and technically, a fine new printed magazine, and a shiny new Smashing Membership, with nifty features and goodies for you, our lovely community. Curious? Well, fasten your seatbelt and browse around — it’s going to be quite a journey!
Today, we are happy to announce the public open beta of the next Smashing Magazine, next.smashingmagazine.com, with a new design, but one with a few bugs and issues that have to be… you know, smashed first. (Sorry about that!) The big-bang release is scheduled for later this year (probably early May to June). Some things now might be broken or just plain weird — we still have a bit of work to do, so do please let us know if you encounter any issues (and you definitely will!).
What’s different? Well, everything. The website won’t be running on WordPress anymore; in fact, it won’t have a back end at all. We are moving to the JAMstack: articles published directly to Netlify’s CDN, with a custom shop based on GoCommerce, an open-source headless e-commerce API, and a job board that’s all just static HTML; content editing with Netlify’s new open-source, Git-based CMS; real-time search powered by Algolia; full HTTP/2 support; and the whole website running as a progressive web app with a service worker in the background (thanks to the awesome Service Worker Toolbox library). Booo-yah!
We don’t really have a back end anymore. Instead: static HTML and advanced JavaScript APIs, running as a progressive web app with a service worker in the background and blazingly fast performance — served from a CDN near you.
How does it work? Quite simple, actually. Content is stored in Markdown files. HTML is pre-baked using the static site generator Hugo, combined with a modern asset pipeline built with Gulp and webpack, all based on the Victor Hugo boilerplate.
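For instance, each article lives in a Markdown file whose front matter Hugo turns into page metadata. Here is a sketch of what such a file could look like (the field names are illustrative, not the site’s actual schema):

```markdown
---
title: "A Smashing Article"
slug: "a-smashing-article"
author: "jane-doe"
date: 2017-04-20
tags: ["design", "performance"]
---

The article body is written in plain Markdown, and Hugo pre-bakes it
into static HTML at build time.
```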
We’ve spiced it all up with a handful of fancy APIs, including ones by Stripe for payments, Algolia for search, Cloudinary for responsive images, and Netlify’s open-source APIs GoCommerce (a headless e-commerce API), GoTrue for authentication, and GoTell for our more than 150,000 comments.
Every time content changes, it’s all pushed to Netlify’s CDN nodes close to you. We do our best to ensure that content is accessible and enhanced progressively, with performance in mind. If JavaScript isn’t available or if the network is slow, then we deliver content via static fallbacks (for example, by linking directly to Google search), as well as a service worker that persistently stores CSS, JavaScript, SVGs, font files and other assets in its cache. The dynamic JavaScript components are all based on Preact, and most pages you’ll see on the website will have been fully pre-rendered, prebuilt and deployed, ready to be served from a CDN near you.
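The asset-caching part of such a service worker can be sketched roughly like this (the asset paths, cache name and the `shouldCache()` helper are our own illustration, not the site’s actual code):

```javascript
// Sketch of a cache-first service worker for static assets (paths are made up).
const CACHE_NAME = 'smashing-static-v1';
const PRECACHE = ['/css/main.css', '/js/app.js', '/fonts/body.woff2', '/images/logo.svg'];

// Pure helper: persist only static assets (CSS, JavaScript, SVGs, fonts),
// never HTML documents, so article content stays fresh.
function shouldCache(url) {
  const path = new URL(url, 'https://example.com').pathname;
  return /\.(css|js|svg|woff2?)$/.test(path);
}

// Handlers are registered only when running inside an actual worker context;
// the guard simply keeps this sketch inert elsewhere.
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('install', (event) => {
    event.waitUntil(caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE)));
  });

  self.addEventListener('fetch', (event) => {
    if (!shouldCache(event.request.url)) return; // let HTML go to the CDN
    event.respondWith(
      caches.match(event.request).then((cached) => cached || fetch(event.request))
    );
  });
}
```

With this pattern, repeat visits serve fonts, styles and scripts straight from the cache, while pages themselves still come from the nearest CDN node.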
Why Such A Big Change?
That’s a good question to ask. There are a couple of reasons. In the past, we were using WordPress as a CMS, our job board was running on Ruby, and at one point we switched to Shopify from Magento for our online shop. Not only was maintenance of four separate platforms incredibly complicated, but designing a consistent, smashing experience the way we envisioned it proved to be nearly impossible due to technical restrictions or requirements imposed by these platforms.
Indeed, developing for every platform separately was quite expensive and slow, and in many ways, creating a cohesive experience where a user would literally “flow” from one area to another was remarkably difficult. Because some areas were more important from the business perspective, they grew and evolved quickly, while others decayed in the dark. This led to an inconsistent, incohesive and, frankly, quite annoying and frustrating experience. And even once we’d committed to making major changes to unify the experience, it turned out to be a big, challenging undertaking, since we were using different platforms, different stacks and, at some points, even different designs.
Performance was another reason. With a proper CDN, full HTTP/2 support and service workers in place, last year we managed to beat the best performance results we’d ever had. Yet even compared with that finely tuned nginx setup, the performance we could get with a pre-built page enhanced with JavaScript was nothing short of breathtaking. We aren’t quite where we want to be yet, and we look forward to optimizing HTTP/2 delivery, adding some minor and major improvements, and measuring the results. But initial tests showed a Start Render time of around 500–600ms, and a Time to Interactive steadily below 1s. We weren’t able to reach that level of performance with WordPress and a LAMP stack in place.
For the first time in 10 years, we were able to define a smashing experience from scratch and implement it from the very start to finish. It’s not a big revelation: when it comes to interaction design, asking all the right questions is not enough; it also matters how and when those questions are asked. With the redesign, we had the freedom to explore slightly unconventional ways of asking these questions. If you look closely, you will find some of these (unusual) decisions pretty much everywhere on the site.
Visual Overhaul: It’s All About Cats
As you might have noticed, visually we’ve introduced some major changes as well. In the previous design, we always struggled to find good spots to prominently display our products: books, eBooks, job board postings and conferences. A major goal of the redesign was to change that. We looked for a way to bring our products to the forefront, but without making them feel like blatant, boring, cheap advertising; they had to fit the overall visual language and layout we were developing.
The truth is that you’re probably using an ad blocker (in fact, more than 60% of our readers do), so if most users don’t see the ads, what’s the point of having them in the first place? Instead of pushing advertising over the edge, we’ve taken the drastic decision of removing advertising almost entirely, focusing instead on featuring our lovely products.
As part of the relaunch, we took our time to study and rediscover our signature and personality, and we’ve highlighted it prominently in every component of every page. Have you noticed something consistently shining through in the new design yet? Yep, that’s right: it’s cats. We designed 56 different cats (yes, cats) that will appear throughout the website in various places, and if you take the time to find them all, you’ll probably deserve a free ticket to one of our conferences. We’ve also leaned even more heavily towards the signature defined by our logo: most elements on the page are tilted at just the right angle. And we’re now using transitions and animations (work in progress!) to keep interactions smooth and clear, which we didn’t do before.
Now, the redesign is just a part of the story. We’ve got something new cookin’, too: the Smashing Membership, with webinars, workshops and whopping discounts, and Smashing Magazine Print, our new physical, printed magazine. We wanted both to become an integral part of the site, and after years of refining the ideas, it’s finally happening. However, they deserve a standalone article, just like a detailed overview of the design process and what we’ve learned in the last 18 months.
Up For The Next Part Of Our Journey?
So, here we go! Please feel free to browse around — and be prepared to discover all kinds of cats… oh, things. As is always the case with an open beta, there are still a good number of bugs to be smashed, and we are on it. Still, please do report issues and bugs on GitHub — the more, the merrier! We can’t wait to read your feedback on good ol’ social media and in the comments. Meow! And look out for the next articles in the series! 😉
Sometimes we tend to think of our designs as if they are pieces of art. But if we think of them this way, it means they won’t be ready to face the uncertain conditions of the “real world.” However, there is also beauty in designing an interface that is ready for changes — and, let’s admit it, interfaces do change, all the time.
One of the things I like most about designing a mobile app is that it is a process with many steps, from the initial concepts to the fine-tuning and polishing of all the interface details. As a designer, I’m involved in this process together with several other members of the team, from researchers to illustrators to developers. But this also means that a lot of decisions have to be made at each and every stage — and some of them aren’t always as fun to make as others.
As UX specialists, we have varied and diverse backgrounds, but visual interfaces are what we spend most of our time on (and are what is most often attributed to us). We are visual thinkers with a highly trained eye. That’s why it’s tempting sometimes to jump straight to the visual UI design stage when starting a new project, and one of the reasons why we may be bored by some other tasks.
This also means that we often postpone (or, worse yet, neglect) other important parts of our process and workflow: defining user needs and goals, sketching task flows, working on all the details of the information and interaction design, etc. These are critically important, too, but they are more abstract, and it’s more difficult for many people to visualize how they will become a tangible part of the final product.
When we’re working on a visual design, the so-called pixel-perfect philosophy can be a trap that makes us spend more time than necessary crafting the little details until even the smallest of them is in the “perfect” place in the interface. This leads to a generation of designers who use Dribbble and Behance mainly to show polished screens of apps and websites, and who are more concerned with looks than with how a design actually works. And in the real world, things tend not to go as well as we expect.
I see designer after designer focus on the fourth layer without really considering the others. Working from the bottom up rather than the top down. The grid, font, colour, and aesthetic style are irrelevant if the other three layers haven’t been resolved first. Many designers say they do this, but don’t walk the walk, because sometimes it’s just more fun to draw nice pictures and bury oneself in pixels than deal with complicated business decisions and people with different opinions. That’s fine, stay in the fourth layer, but that’s art not design. You’re a digital artist, not a designer.
Personally, I think the best designs (when speaking about user interface design) are the ones that not only look and feel good, but also respond elegantly to variable conditions and even unpredictable situations.
On the long road of building a product, there are phases when designers need to be more collaborative and less focused on the visual design. And this is precisely what I’m going to focus on, for the sake of this article’s length (I don’t want you to fall asleep at the keyboard!). In the next few paragraphs, I’ll give you some hints and tips on how to put that app design you are working on to the test, to see whether it’s ready to be released into the wild.
When I was studying graphic design in college, they taught us about the beauty of balance, alignment, proportion and tension and how to position elements in space in such a way that they are harmonious and pleasing to the eye. With this knowledge, my life changed and I started to look at the world with different eyes. Later on, I started designing interfaces and I tried to put those same principles into action — all of the information on the screen should form a visual composition that’s highly satisfying to look at.
If you apply these principles to mobile app design, then you’ll find yourself displaying just the right amount of information. For example, if a screen has to list people’s names, the designer will usually select a few short and common ones and arrange them together perfectly — leaving no room for an unexpectedly long name that could break the design or make it fall apart later.
This approach is based on the assumption that there is no beauty in chaos and imperfection — even though these two aspects appear frequently in the real world. But visual interfaces are not static pieces of art to be admired; they are dynamic, functional spaces that change and adapt for each person using them. We should not succumb to the temptation to design purely for aesthetics, because we can never control everything an interface must present to (and also do for) people.
Instead, we must design for change! This is what the Japanese call wabi-sabi, a “worldview centered on the acceptance of transience and imperfection.”
Because of this, it’s important to think and design differently:
try to present data in your design in many ways;
whenever possible, use real data.
When you try to present data in a few ways, including some unpredictable ones, you will be able to test whether the interface is ready to handle situations beyond the design’s “comfort zone.” Also, be prepared for extreme cases (when there is no information at all, or a lot of it, for example), and try to avoid the “centers” (when everything looks good and balanced).
If you have already launched the product, this will be easier because you can pay attention to the real data and use it in your ongoing design process as a reference. But if you are working on something new, then you will have to dig a bit deeper, do some research and try to understand how (and what kind of) information will be presented later on. You can also talk about this with a developer from your back-end team, who will be able to better explain to you what kinds of data will be stored and presented.
I’ll give you one last, more graphic example with something that a developer friend of mine calls “the pretty friend syndrome.” When we are designing a screen that will contain pictures of people, such as user profiles, we tend to use stock photos of people who look good and fit well within the design. Yet when he sees such designs, my friend says, “I wish I had friends so handsome.”
So, an alternative to “perfect” imagery could be to use more random photos of people with varying colors. That way, you will be able to test how overlaid elements look with different kinds of backgrounds, allowing you to see whether contrast and legibility are still intact.
We are optimists by nature about how an app is going to work. We assume that everything will go quickly and smoothly and without interruption because… why not? That’s why we sometimes forget to design for, and handle, some of the potentially not-so-good situations that the user might face later on.
Just to name a few, what would happen if suddenly the Internet connection drops? Or what if there’s an error while the browser is trying to connect to the API when executing a task? And if the connection is too slow, will there be a loading indicator (such as a spinner or progress bar), or will there be some placeholders to (temporarily) fill the display blocks while the actual data is being loaded? And what about the possibility of refreshing certain screens of the app? When (and in which cases) would this be possible?
As you can see, I’m not talking about errors made by the user (for example, making a mistake when filling a form), but about errors that are out of their control but that happen nevertheless. In this case, it’s even more important to talk to developers to know and understand what could go wrong on different screens, and then to devise an approach that could get the user out of trouble easily, giving them the option to try again later or to perform a different action.
In any case, it’s always a good idea to identify the specific conditions that trigger each error and to design a helpful message for each individual case. These messages will help the user to respond appropriately and to know what to do next to fix the problem. Even if it’s tempting, avoid a generic error message at all costs.
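One way to keep those cases distinct is to map each failure mode of an API call to an explicit state that the design can then answer with its own message. A minimal sketch (the function and state names are invented for illustration):

```javascript
// Map each failure mode of an API call to a distinct, designable state.
// fetchFn is passed in so the behavior can be tested with a fake fetch.
async function loadWithStates(fetchFn, url) {
  try {
    const res = await fetchFn(url);
    if (!res.ok) {
      // The server answered, but with an error: "please try again later" territory.
      return { state: 'server-error', status: res.status };
    }
    return { state: 'ready', data: await res.json() };
  } catch (err) {
    // The request never completed: the connection dropped or the network is down.
    return { state: 'offline' };
  }
}
```

Each state can then be paired with its own copy and recovery action — a retry button for `offline`, a “try again later” note for `server-error` — instead of one generic “Something went wrong.”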
An interface comprises many elements that together form the whole layout of the application. However, when we focus on the user interface as a whole, we often forget that some elements also have smaller tasks to perform that contribute to the general goal.
Speaking of goals, this is like football (or soccer, if you happen to live in the US). You see, I’m a big fan of this sport, like most people in Argentina, where I’m from. In order for the team as a whole to win, the coach needs to know what to expect from each player and what task they will perform at different moments — even when the unpredictability of some of them (I’m thinking about the magic of Messi) can make this harder.
Moving forward and (hopefully) forgetting about my sports analogies, let’s translate this to our design cases. If there is a button or item that triggers some kind of interaction, then look ahead and think about the next step: Will a loading state be displayed while the action is being performed? Could it be disabled for some reason? What if the user is holding down the button for a while; will there be any feedback? Just as there are different states for whole screens, the same should apply to individual elements as well.
In addition, remember to consider how the logic of the product matches the user’s mental model, helping them to accurately and efficiently achieve their goal and to complete their tasks in a meaningful and predictable way.
What I do to address all of these points is to just stop what I’m doing, pause, step back and look at the bigger picture of the entire flow of multiple screens and states over a sequence of steps and actions. I’ll look for the multiple paths that lead to that point, and the multiple paths that lead away from it.
You can do the same while using a prototype, performing the actions slowly, conscientiously and carefully. If this is too challenging for you — because you have probably done this several times before and it’s become kind of an automated task now — borrow some fresh eyes (not literally, of course!) and simply ask a colleague, friend or active user to look at the design or prototype. Seeing someone else use and interact with your design can be illuminating, because we are often too close to and too familiar with it and, so, can overlook things.
When I’m designing, I usually have my phone next to me, so that I can preview my work and make adjustments in real time. To do this, I use Mira, Crystal or Sketch Mirror, depending on whether I’m designing for Android or iOS.
I think this is a good practice, but it also makes it easy to forget about all of the phones other than yours that people may be using. A lot of different screen sizes are out there (especially on the Android platform); try to consider all possible variations.
One way of knowing where to start is to check what kinds of devices your actual users have.
And when preparing your design for those various screen sizes and orientations, it’s not just about stretching boxes and repositioning elements. Carefully consider how to make the most of each situation and, furthermore, how to make the necessary adjustments even when it means deviating a bit from the original design.
In these cases, the same principles that we’ve discussed before still apply: unpredictable situations, different kinds of content, variable amounts of information, missing data and so on — you have to design for all kinds of possible scenarios. Don’t fall into the trap of designing screens as separate, individual parts of the product — they are all connected.
This will be helpful not only to you, but also to your developer friend, who will need to know many of the possible scenarios in order to write the code and prepare the interface to tackle these situations.
You may have noticed that the goal of many of the points in this article is to reduce the unexpected. Even so, there will be many situations in which you won’t have a clear answer. Developers will often ask, “So, what would happen if I do this instead of that?” — pointing to a potential outcome that you hadn’t considered before.
If this happens, then you will have to solve that particular issue for only one case and only one screen. But always try to think globally, and consider how the answer to that particular problem could be designed to work in a flexible way so that you could potentially reuse it later on.
After all, this is what we, UX designers, do — we design and define flexible systems that adapt to unanticipated states, conditions and flows. Think of your interface as a living ecosystem of moving, changing smart parts, instead of a collection of individual blocks of pixels.
During this part of the process, you’ll need to work very closely with the developers on your team, mostly to define a set of rules of behavior for many different situations. But try not to over-design things; set your own limits with a dash of common sense. You need to strike a good balance between functionality and consistency. Remember that a good design system is flexible and is prepared for some exceptions to the rules on certain occasions.
On the other hand, think of how elements you have already designed could be tweaked to fit new situations. You will see this more easily if you maintain a library of design components, so that, with just a quick overview of the library, you will know whether you need to design something from scratch or can use something ready-made.
If you want to prepare your designs to face the unpredictable and the unknown — and if you want to also, hopefully, get along better with your lead developer by providing them with everything they will need beforehand — here are some final tips.
One of my best experiences with a developer so far (hi, Pier!) came about simply from sitting right next to him. This is very important, and it will make a huge difference, because it improves communication. Talk often in order to better understand the product, how it works and what the developers will need from you. Ask, and ask over and over again, in order to see the bigger picture of all possible outcomes.
Get involved in your own design in a conscious, critical way, paying attention to details and small interactions. Get involved and commit yourself. At every step, think of the purpose of each element, which interactions define it and what would happen if something either goes well or goes wrong.
See how things behave in circumstances different from the ones you have while working on the design, beyond the design’s comfort zone. Leave your desk and speak with the actual people who are using (or will be using) the app. And, if possible, bring along others from your team with you. This is also very important — everybody has to be in touch with the real world, so that you all understand better the situations of real users.
The key to good communication is to understand each other well. We sometimes use fancy words to sound “smarter” or to justify our work, but more important is for everybody on the team to be on the same page. Mutual understanding is key here.
Developers have jargon of their own, but most of the time you will be talking about the same things, only with different words. For example, what you call a “screen” is a “view” to them, and what you call a “button” is a “control,” and so on. So, try to align and agree on the terminology that you will be using, to make the exchange of information easier.
This is just as important when you’re talking to product managers and business partners. Designers need to be “multilingual” and understand everyone.
Did I mention that I am a big advocate of design systems? Using a component library, I can design a new screen in literally five minutes (don’t tell my boss!), because I already have what I need to make it. Sometimes you will need to define those components the first time they arise, but they can be reused later for similar cases, not just to save the day in an emergency. Even though it might look like a waste of time in the beginning, it really will pay off in the long run.
Note: This article is not focused on components or pattern libraries, but here’s an excellent (and very detailed) read that I recommend in case you would like to learn more: “Taking Pattern Libraries to the Next Level.”
Last but not least, don’t reinvent the wheel unless you have to. By taking advantage of common patterns and elements of the operating system and the app itself, you will design faster, the developers will build the screens more easily and, finally, the user’s learning curve will be less steep.
Discussion Is Better When Something’s on the Table
Do some design and prototyping even before you know whether the idea is feasible. It’s always better to show what you have in mind with a (hopefully) working live prototype as a means of communication.
Responding to a tangible proposal is always easier for people than imagining something theoretical. Don’t fall in love with your idea too early because it could be easily dismissed, but at least the prototype will help everyone to see what you’re talking about. (I wrote a little something about prototyping and prototyping tools not long ago: “Choosing the Right Prototyping Tool”.)
Having a clearly defined problem with an elegant solution based on a design system will make the visual design part of our work even more fun, because we can focus on the refinements, polish and delight of the interface, without having to iterate endlessly. When we jump to the visuals too soon, we have to solve the problem and craft the interface at the same time, which often leads to frustration and burnout.
Changing your workflow might be challenging in the beginning, but after a while you will enjoy working within the constraints. This will also transform the way you think, and hopefully help you to move away from focusing on the visual details. You will become a more complete and capable UX designer, using the appropriate deliverables, and not just churning out an endless stream of visual mockups and compositions.
Good luck, dear reader! Do tell me what your thoughts are in the comments below, or ping me on Twitter. I’d like to hear your feedback!
Pull-to-refresh is one of the most popular gestures in mobile applications right now. It’s easy to use, natural and so intuitive that it is hard to imagine refreshing a page without it. In 2010, Loren Brichter created Tweetie, one of numerous Twitter applications. Diving into the pool of similar applications, you won’t see much difference among them; but Loren’s Tweetie stood out then.
It was one simple animation that changed the game — pull-to-refresh, an absolute innovation for the time. No wonder Twitter didn’t hesitate to buy Tweetie and hire Loren Brichter. Wise choice! As time went on, more and more developers integrated this gesture into their applications, and finally, Apple itself brought pull-to-refresh to its system application Mail, to the joy of people who value usability.
Today, most clients wish to see this gesture in their apps, and most designers want to create prototypes with an integrated pull-to-refresh animation, preferably a custom one. This tutorial explains how to build such a prototype in Flinto, a tool that makes swipe-gesture animation possible (and obviously you cannot create a pull-to-refresh animation without a pull). To be fair, Flinto is not the only tool that offers the swipe gesture; Facebook Origami and POP are worth mentioning, too. After we create the prototype, we will implement it in an Android application.
This tutorial will help you master Flinto, understand the logic of creating prototypes of this kind, and learn the process of coding these prototypes in your application. To follow the steps, you will need macOS, Sketch for Mac and Flinto for Mac to create the prototype, and Android Studio and JDK 7+ to write the code.
For the prototype, I am using screens of ChatBoard, an Android chat application by Erminesoft. The list of user chat rooms would be a perfect place to integrate a refresh animation to check new messages. Let’s begin!
We’ll make all of the designs in Sketch. For the first step, we’ll need to create one screen with any list of items the user will be able to refresh. Now we need to export the screen to Flinto. We have two options here:
Let’s move to Flinto for Mac, which you can buy for $99, or you can download a free trial from the website. To make a simple pull-to-refresh animation, we need five screens. At this point, we can add a custom image or use standard Flinto forms (a rectangle or circle) to create the animated element. For this project, I am using three standard circles. Stop right there: Don’t search for a circle form. Use a rectangle (R), make a square out of it, and set a maximum corner radius. There you go — you’ve got a circle!
The first animation frame requires a separate layer with the list of content. Behind it, we’ll place the animated element in the starting position; in our case, there will be three circles placed on the same X and Y coordinates. That’s screen 1.
On screen 2, we need to move the content down the Y-axis, revealing the animated element hidden behind the list of content.
Additionally at this step (and all following steps), the transition timer (“Timer Link”) should be turned on and set to 0 milliseconds, to eliminate any lag in transition to the next animation screen. Just click on an artboard title to see the timer transition settings.
The previous screen (screen 2) shows only one circle, but remember that three circles are placed at the same X and Y coordinates. At this point (screen 3), our task is to move one of the circles 30 pixels left along the X-axis, and another circle 30 pixels right along the X-axis. Don’t forget to set the transition timer to 0 milliseconds.
Let’s move on to screen 4. Repeat step 5, doing the same thing but moving the circles along the Y-axis instead of the X-axis, by the same 30 pixels. The X coordinates of all of the elements should be the same and center-aligned. Don’t forget about the transition timer.
All of the preparations are done, and we can now move to the animations. Create a new transition. Select the layer of content on the home screen, press F, and link it to screen 2.
(By the way, the key F refers to the name of the program itself, “Flinto.” It is its signature key.)
Now we get to the custom transition animation section. The first thing to do here is to lay one screen above the other. This creates the impression that it is one animated screen instead of two screens, even though, technically, it is two.
At this point, we need to set the connections between elements throughout the screens in order for the program to associate them. For example, the element named “Circle-1” on the home screen is the same object on all of the screens. We just need to select two identical elements and click “Connect Layers.”
We have to connect all identical elements in this way for our “New Transition.” You can try out various kinds of animations in the “Effects” section, but in this particular case, I advise you to use “Spring,” to make our circles bounce.
Click “Save & Exit.” Now we need to select this transition type for all of the transitions in our project, including our timers.
(An interesting fact: In Principle, another prototyping tool, layers are connected automatically when the program finds two elements with identical names. I find the automatic connection more convenient for those who keep their Sketch layer names in order. Flinto is a better choice for the lazy ones who prefer to connect all animated elements while creating the prototype.)
Additionally, to achieve a more realistic effect, you can make the refreshed screen show an update or an additional item.
Despite the simplicity of this animation, it delivers surprising dynamics and responsiveness to the prototype. It also gives a feeling of product completeness, and it is essential to making a prototype feel as product-like as possible.
Prototyping is a crucial stage in application development, not only impressing the client and verifying the design concept, but also helping to establish a hand-off process between the designers (who create the animations) and the developers (who implement them). Prototypes can become a valuable asset of communication between team members because they ensure that coders understand the project’s specifications and can implement the designer’s custom animations.
Now, let’s proceed to code our prototype in a Java application for Android mobile devices.
The element has now been formed. The next step is to build the animated movement of these elements. Let’s jump to the anim folder (or create it if it’s absent), and add two files, named left_step_anim.xml and right_step_anim.xml.
The following code listing is for left_step_anim.xml:
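The original listing is not reproduced here, so the following is a minimal sketch of what a view animation resource like left_step_anim.xml typically contains; the exact delta and duration values are assumptions rather than the article’s originals. right_step_anim.xml would mirror it, with android:toXDelta set to a positive value.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/anim/left_step_anim.xml: slide the view 30px to the left.
     Delta and duration values are illustrative assumptions. -->
<translate xmlns:android="http://schemas.android.com/apk/res/android"
    android:fromXDelta="0"
    android:toXDelta="-30"
    android:duration="300"
    android:fillAfter="true" />
```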
We need to create an item to use in our custom ListView. To do this, navigate to the layout folder and create a file named list_item.xml, containing one TextView element. This is what it should look like:
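The original listing is missing here, so this is a minimal sketch of a one-TextView item layout; the id, padding and text-size values are placeholder assumptions, not the article’s originals.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/layout/list_item.xml: a single TextView per list row.
     Attribute values are illustrative assumptions. -->
<TextView xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/list_item_text"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:padding="16dp"
    android:textSize="16sp" />
```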
Now, let’s add a file with our ListView to the layout folder. In our case, it is a file named main.xml in a folder named layout. It should read as follows:
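The original listing is missing here, so this is a hedged sketch of what main.xml might look like: it simply hosts the custom PullToRefreshListView full-screen. The package name com.example.pulltorefresh is an assumption; it must match wherever the class actually lives in your project.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/layout/main.xml: hosts the custom pull-to-refresh list.
     The package prefix below is an assumed example. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <com.example.pulltorefresh.PullToRefreshListView
        android:id="@+id/pull_to_refresh_listview"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</LinearLayout>
```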
The last step of this process involves binding together all of the elements above. We need to create two classes. The first class is named PullToRefreshListViewSampleActivity and is used when launching the application. The second class, PullToRefreshListView, will contain our element.
In the PullToRefreshListViewSampleActivity class, our attention is on the onRefresh() callback set up in the onCreate() method. This callback is exactly where all of the ListView refreshing magic will happen. Because this is an example, we’ve added our own test data with the loadData() method of the internal class PullToRefreshListSampleAdapter. The remaining code of the PullToRefreshListViewSampleActivity class is relatively simple.
Let’s move on to the PullToRefreshListView class. Because the main functionality is built on the standard ListView, we’ll add extends ListView to its declaration. The class is quite simple, although the animation involves a few constants whose values were found by experimentation. Besides that, the class defines an interface with an onRefresh() method.
public interface OnRefreshListener {
    void onRefresh();
}
This method will be used to refresh the ListView. Our class also contains several constructors to create the View element.
public PullToRefreshListView(Context context) {
    super(context);
    init(context);
}

public PullToRefreshListView(Context context, AttributeSet attrs) {
    super(context, attrs);
    init(context);
}

public PullToRefreshListView(Context context, AttributeSet attrs, int defStyle) {
    super(context, attrs, defStyle);
    init(context);
}
The class also includes the onTouchEvent and onScrollChanged event handlers. These are standard solutions that have to be implemented. You will also need the private class HeaderAnimationListener, which handles animation in the ListView.
private class HeaderAnimationListener implements AnimationListener {
    ...
}
This tutorial is intended to encourage designers and developers to work together to integrate a custom pull-to-refresh animation and to make it a small yet nice surprise for users. It adds a certain uniqueness to an application and shows that the developers are dedicated to creating an engaging experience for the user above all else. It is also the foundation for more complex animations, limited only by your imagination. We believe it’s important to experiment with custom animations, adding a touch of creativity to every project you build!
Over the last five years, Node.js has helped to bring uniformity to software development. You can do anything in Node.js, whether it be front-end development, server-side scripting, cross-platform desktop applications, cross-platform mobile applications, Internet of Things, you name it. Writing command line tools has also become easier than ever before because of Node.js — not just any command line tools, but tools that are interactive, useful and less time-consuming to develop.
If you are a front-end developer, then you must have heard of or worked on Gulp, Angular CLI, Cordova, Yeoman and others. Have you ever wondered how they work? For example, in the case of Angular CLI, by running a command like ng new <project-name>, you end up creating an Angular project with basic configuration. Tools such as Yeoman ask for runtime inputs that eventually help you to customize a project’s configuration as well. Some generators in Yeoman help you to deploy a project in your production environment. That is exactly what we are going to learn today.
In this tutorial, we will develop a command line application that accepts a CSV file of customer information and, using the SendGrid API, sends emails to the people listed in it.
This tutorial assumes you have installed Node.js on your system. In case you have not, please install it. Node.js also comes with a package manager named npm. Using npm, you can install many open-source packages. You can get the complete list on npm’s official website. For this project, we will be using many open-source modules (more on that later). Now, let’s create a Node.js project using npm.
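In a terminal, creating the project might look like the following sketch; npm init prompts interactively for each field, so the exact transcript will vary.

```shell
$ mkdir broadcast
$ cd broadcast
$ npm init
# npm init asks for the package name, version, description,
# entry point (set to broadcast.js here), Git repository,
# license and author, then writes package.json.
```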
I have created a directory named broadcast, inside of which I have run the npm init command. As you can see, I have provided basic information about the project, such as name, description, version and entry point. The entry point is the main JavaScript file from where the execution of the script will start. By default, Node.js assigns index.js as the entry point; however, in this case, we are changing it to broadcast.js. When you run the npm init command, you will get a few more options, such as the Git repository, license and author. You can either provide values or leave them blank.
Upon successful execution of npm init, you will find that a package.json file has been created in the same directory. This is our configuration file. At the moment, it holds the information that we provided while creating the project. You can explore more about package.json in npm’s documentation.
Now that our project is set up, let’s create a “Hello world” program. To start, create a broadcast.js file in your project, which will be your main file, with the following snippet:
console.log('hello world');
Now, let’s run this code.
$ node broadcast
hello world
As you can see, “hello world” is printed to the console. You can run the script with either node broadcast.js or node broadcast; Node.js is smart enough to understand the difference.
According to package.json’s documentation, there is an option named dependencies in which you can mention all of the third-party modules that you plan to use in the project, along with their version numbers. As mentioned, we will be using many third-party open-source modules to develop this tool. In our case, package.json looks like this:
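The dependencies listing itself did not survive extraction, so here is an illustrative sketch of what this package.json might contain; the version numbers are assumptions, not the article’s exact ones.

```json
{
  "name": "broadcast",
  "version": "0.0.1",
  "main": "broadcast.js",
  "dependencies": {
    "async": "^2.1.4",
    "chalk": "^1.1.3",
    "commander": "^2.9.0",
    "csv": "^1.1.0",
    "inquirer": "^2.0.0",
    "sendgrid": "^4.7.1"
  }
}
```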
Reading command line arguments is not difficult. You can simply use process.argv to read them. However, parsing their values and options is a cumbersome task. So, instead of reinventing the wheel, we will use the Commander module. Commander is an open-source Node.js module that helps you write interactive command line tools. It comes with very handy features for parsing command line options, and it supports Git-like subcommands, but the thing I like best about Commander is the automatic generation of help screens. You don’t have to write extra lines of code — just parse the --help or -h option. As you start defining various command line options, the --help screen will get populated automatically. Let’s dive in:
$ npm install commander --save
This will install the Commander module in your Node.js project. Running the npm install with --save option will automatically include Commander in the project’s dependencies, defined in package.json. In our case, all of the dependencies have already been mentioned; hence, there is no need to run this command.
var program = require('commander');

program
  .version('0.0.1')
  .option('-l, --list [list]', 'list of customers in CSV file')
  .parse(process.argv);

console.log(program.list);
As you can see, handling command line arguments is straightforward. We have defined a --list option. Now, whatever values we provide followed by the --list option will get stored in a variable wrapped in brackets — in this case, list. You can access it from the program variable, which is an instance of Commander. At the moment, this program only accepts a file path for the --list option and prints it in the console.
You must also have noticed a chained method that we have invoked, named version(). Whenever we run the command with the --version or -V option, whatever value is passed to this method will get printed.
$ node broadcast --version
0.0.1
Similarly, when you run the command with the --help option, it will print all of the options and subcommands defined by you. In this case, it will look like this:
$ node broadcast --help

  Usage: broadcast [options]

  Options:

    -h, --help         output usage information
    -V, --version      output the version number
    -l, --list [list]  list of customers in CSV file
Now that we are accepting file paths from command line arguments, we can start reading the CSV file using the CSV module. The CSV module is an all-in-one solution for handling CSV files. From creating a CSV file to parsing it, you can achieve anything with this module.
Because we plan to send emails using the SendGrid API, we are using the following document as a sample CSV file. Using the CSV module, we will read the data and display the name and email address provided in the respective rows.
First name   Last name   Email
Dwight       Schrute     dwight.schrute@dundermifflin.com
Jim          Halpert     jim.halpert@dundermifflin.com
Pam          Beesly      pam.beesly@dundermifflin.com
Ryan         Howard      ryan.howard@dundermifflin.com
Stanley      Hudson      stanley.hudson@dundermifflin.com
Now, let’s write a program to read this CSV file and print the data to the console.
const program = require('commander');
const csv = require('csv');
const fs = require('fs');

program
  .version('0.0.1')
  .option('-l, --list [list]', 'List of customers in CSV')
  .parse(process.argv);

let parse = csv.parse;
let stream = fs.createReadStream(program.list)
  .pipe(parse({ delimiter : ',' }));

stream
  .on('data', function (data) {
    let firstname = data[0];
    let lastname = data[1];
    let email = data[2];
    console.log(firstname, lastname, email);
  });
Using the native File System module, we are reading the file provided via the command line arguments. The resulting read stream emits predefined events, one of which is data, fired whenever a chunk of data has been read. The parse method from the CSV module splits the CSV file into individual rows and fires multiple data events. Every data event sends an array of column data. Thus, in this case, it prints the data in the following format:
$ node broadcast --list input/employees.csv
Dwight Schrute dwight.schrute@dundermifflin.com
Jim Halpert jim.halpert@dundermifflin.com
Pam Beesly pam.beesly@dundermifflin.com
Ryan Howard ryan.howard@dundermifflin.com
Stanley Hudson stanley.hudson@dundermifflin.com
Runtime User Inputs
Now we know how to accept command line arguments and how to parse them. But what if we want to accept input during runtime? A module named Inquirer.js enables us to accept various types of input, from plain text to passwords to a multi-selection checklist.
For this demo, we will accept the sender’s email address and name via runtime inputs.
…

let questions = [
  {
    type : "input",
    name : "sender.email",
    message : "Sender's email address - "
  },
  {
    type : "input",
    name : "sender.name",
    message : "Sender's name - "
  },
  {
    type : "input",
    name : "subject",
    message : "Subject - "
  }
];

let contactList = [];

let parse = csv.parse;
let stream = fs.createReadStream(program.list)
  .pipe(parse({ delimiter : "," }));

stream
  .on("error", function (err) {
    return console.error(err.message);
  })
  .on("data", function (data) {
    let name = data[0] + " " + data[1];
    let email = data[2];
    contactList.push({
      name : name,
      email : email
    });
  })
  .on("end", function () {
    inquirer.prompt(questions).then(function (answers) {
      console.log(answers);
    });
  });
First, you’ll notice in the example above that we’ve created an array named contactList, which we’re using to store the data from the CSV file.
Inquirer.js comes with a method named prompt, which accepts an array of questions that we want to ask during runtime. In this case, we want to know the sender’s name and email address and the subject of their email. We have created an array named questions in which we are storing all of these questions. This array accepts objects with properties such as type, which could be anything from an input to a password to a raw list. You can see the list of all available types in the official documentation. Here, name holds the name of the key against which user input will be stored. The prompt method returns a promise object that eventually invokes a chain of success and failure callbacks, which are executed when the user has answered all of the questions. The user’s response can be accessed via the answers variable, which is sent as a parameter to the then callback. Here is what happens when you execute the code:
$ node broadcast -l input/employees.csv
? Sender's email address -  michael.scott@dundermifflin.com
? Sender's name -  Michael Scott
? Subject -  Greetings from Dunder Mifflin
{ sender:
   { email: 'michael.scott@dundermifflin.com',
     name: 'Michael Scott' },
  subject: 'Greetings from Dunder Mifflin' }
Asynchronous Network Communication
Now that we can read the recipient’s data from the CSV file and accept the sender’s details via the command line prompt, it is time to send the emails. We will be using SendGrid’s API to send email.
…

let __sendEmail = function (to, from, subject, callback) {
  let template = "Wishing you a Merry Christmas and a " +
    "prosperous year ahead. P.S. Toby, I hate you.";

  let helper = require('sendgrid').mail;
  let fromEmail = new helper.Email(from.email, from.name);
  let toEmail = new helper.Email(to.email, to.name);
  let body = new helper.Content("text/plain", template);
  let mail = new helper.Mail(fromEmail, subject, toEmail, body);

  let sg = require('sendgrid')(process.env.SENDGRID_API_KEY);
  let request = sg.emptyRequest({
    method: 'POST',
    path: '/v3/mail/send',
    body: mail.toJSON(),
  });

  sg.API(request, function (error, response) {
    if (error) {
      return callback(error);
    }
    callback();
  });
};

stream
  .on("error", function (err) {
    return console.error(err.response);
  })
  .on("data", function (data) {
    let name = data[0] + " " + data[1];
    let email = data[2];
    contactList.push({
      name : name,
      email : email
    });
  })
  .on("end", function () {
    inquirer.prompt(questions).then(function (ans) {
      async.each(contactList, function (recipient, fn) {
        __sendEmail(recipient, ans.sender, ans.subject, fn);
      });
    });
  });
In order to start using the SendGrid module, we need to get an API key. You can generate this API key from SendGrid’s dashboard (you’ll need to create an account). Once the API key is generated, we will store it in an environment variable named SENDGRID_API_KEY. You can access environment variables in Node.js using process.env.
In the code above, we are sending asynchronous email using SendGrid’s API and the Async module. The Async module is one of the most powerful Node.js modules. Handling asynchronous callbacks often leads to callback hell: there comes a point when there are so many asynchronous calls that you end up writing callbacks within callbacks, often with no end in sight. Handling errors gets even more complicated, even for a JavaScript ninja. The Async module helps you to overcome callback hell, providing handy methods such as each, series, map and many more. These methods help us write code that is more manageable and that, in turn, reads almost like synchronous code.
In this example, rather than sending a synchronous request to SendGrid, we are sending an asynchronous request in order to send an email. Based on the response, we’ll send subsequent requests. Using the each method in the Async module, we are iterating over the contactList array and calling a function named __sendEmail. This function accepts the recipient’s details, the sender’s details, the subject line and the callback for the asynchronous call. __sendEmail sends emails using SendGrid’s API; you can explore more about the SendGrid module in the official documentation. Once an email is successfully sent, the asynchronous callback is invoked, and the next object from the contactList array is processed.
That’s it! Using Node.js, we have created a command line application that accepts CSV input and sends email.
Decorating The Output
Now that our application is ready to send emails, let’s see how we can decorate the output, such as errors and success messages. To do so, we’ll use the Chalk module, which is used to style command line output.
…

stream
  .on("error", function (err) {
    return console.error(err.response);
  })
  .on("data", function (data) {
    let name = data[0] + " " + data[1];
    let email = data[2];
    contactList.push({
      name : name,
      email : email
    });
  })
  .on("end", function () {
    inquirer.prompt(questions).then(function (ans) {
      async.each(contactList, function (recipient, fn) {
        __sendEmail(recipient, ans.sender, ans.subject, fn);
      }, function (err) {
        if (err) {
          return console.error(chalk.red(err.message));
        }
        console.log(chalk.green('Success'));
      });
    });
  });
In the snippet above, we have added a final callback to the asynchronous each loop, which is called when the loop either completes or is broken by a runtime error. If the loop does not complete, the callback receives an error object, which we print to the console in red. Otherwise, we print a success message in green.
If you go through Chalk’s documentation, you will find many options to style this output, including a range of console colors (magenta, yellow, blue, etc.), underlining and bold text.
Making It A Shell Command
Now that our tool is complete, it is time to make it executable like a regular shell command. First, let’s add a shebang at the top of broadcast.js, which will tell the shell how to execute this script.
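The shebang line itself is not shown above; for a Node.js script it is conventionally written with env, so that it resolves the node binary from the PATH wherever it is installed. This one line goes at the very top of broadcast.js:

```shell
#!/usr/bin/env node
```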
We have added a new property named bin, in which we have provided the name of the command from which broadcast.js will be executed.
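A sketch of how that bin entry might look in package.json; the command name broadcast maps onto our entry script, and the relative path here is an assumption:

```json
{
  "bin": {
    "broadcast": "./broadcast.js"
  }
}
```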
Now for the final step. Let’s install this script at the global level so that we can start executing it like a regular shell command.
$ npm install -g
Before executing this command, make sure you are in the same project directory. Once the installation is complete, you can test the command.
$ broadcast --help
This should print all of the available options that we get after executing node broadcast --help. Now you are ready to present your utility to the world.
One thing to keep in mind: During development, any change you make in the project will not be visible if you simply execute the broadcast command with the given options. If you run which broadcast, you will realize that the path of broadcast is not the same as the project path in which you are working. To get around this, simply run npm link in your project folder. This will automatically establish a symbolic link between the executable command and the project directory. Henceforth, whatever changes you make in the project directory will be reflected in the broadcast command as well.
Beyond JavaScript
The scope of these kinds of CLI tools goes well beyond JavaScript projects. If you have some experience with software development and IT, then Bash scripts have probably been a part of your development process. From deployment scripts to cron jobs to backups, you could automate almost anything using Bash. In fact, before Docker, Chef and Puppet became the de facto standards for infrastructure management, Bash was the savior. However, Bash scripts have always had some issues. They do not easily fit into a development workflow. Usually, we use anything from Python to Java to JavaScript; Bash has rarely been a part of core development. Even writing a simple conditional statement in Bash requires going through endless documentation and debugging.
However, with JavaScript, this whole process becomes simpler and more efficient. All of the tools automatically become cross-platform. If you want to run a native shell command such as git, mongodb or heroku, you could do that easily with the Child Process module in Node.js. This enables you to write software tools with the simplicity of JavaScript.
I hope this tutorial has been helpful to you. If you have any questions, please drop them in the comments section below or tweet me.
Editor’s Note: Making big changes doesn’t necessarily require big efforts — it’s just a matter of moving in the right direction. We can’t wait for Paul’s new book on the User Experience Revolution (free worldwide shipping starting from April 18!), and in this article, Paul shares some of the little tricks and techniques that can bring about a big UX revolution in your company — with a series of small, effective steps.
It feels like everywhere I turn somebody is saying that user experience is the next frontier in business, that we have moved beyond the age of features to creating outstanding experiences.
But for many of us who work on in-house teams, the reality feels a million miles away from this. Getting management to understand the importance of user experience seems so tough. Even colleagues don’t seem to see the benefit. For those of us in-house, how are we going to get to this golden age of user experience design that people keep promising us?
After all, design-led companies have outperformed the rest of the market by 228% over 10 years. 89% of customers say they’ve stopped doing business with a company after a bad experience. Why then aren’t companies falling over themselves to create a better experience? Why is your job so frustrating at times?
The answer is simple. Change is hard. The world has changed. Digital has changed it, and people’s expectations are higher than ever. People and not companies now have the power. Yet many managers still live in the past, in a world of mass production and mass market.
We need to provide the wake-up call our clients and management need. We need to help them realize the potential of user experience design, its potential to reshape their business and provide a competitive advantage. But how?
Sometimes we designers are our own worst enemy. We sit around moaning that nobody gets it, that nobody understands. But that isn’t true. Designers are not the only people who see the value of a great customer experience. Others might not know the term UX, but that doesn’t mean they don’t care.
Marketers and sales people understand; they know that a satisfied customer is the best form of advertising. Customer support staff know; they know that happy customers are less likely to call them with problems. Even the finance people get it; they understand that happy customers are less likely to return products, that they are more likely to make repeat purchases.
Neither are we the only people with insights into the user. Marketers have done their market research. Customer-support staff talk to customers every day.
If we want to bring about change in our organization, we cannot continue to be the lone wolf. We need to find allies. We need to bring together all those who care about the user’s experience.
This was part of the reason why Google’s culture shifted from being engineering focused to embracing the importance of design. Designers across the company started to reach out to each other. They started to talk to one another and united around shared values. You need to do the same.
Take the time to seek out people who share your belief in customer experience and start talking to them. Create a Slack channel or mailing list. Get together over lunch. Keep in touch. Share experiences and ideas.
Of course, chatting over lunch or in Slack isn’t going to bring about change. For that, you need to start a movement. You need to unite the group around a common goal, around a common cause.
There is power in numbers. Management is more likely to listen to you if you have a clear, consistent message.
Once you have formed your group, outline some principles on how the company could work to create a better user experience.
But be careful. This shouldn’t turn into a list of everything you perceive to be wrong with the company. That would do nothing but anger and threaten. Also, managers spend their lives hearing problems from employees. If you want to get their attention, you need to be more positive.
Instead, create a positive set of values that will encourage change, the kind of values that put user needs at the centre of decision-making, values such as “Design with data” and “If in doubt, test.”
On day one, these principles aren’t going to have teeth. Management won’t have bought into them. They aren’t something you can enforce. All of that can come later. For now, they will unite the group and strengthen your resolve.
But most of all, they will give you a clear message to take to the rest of the company, a message that has the customer at its heart.
The next step in your campaign for change is to raise the profile of the customer. It is shocking how little most organizations talk about customer needs — or at least talk about it in any meaningful way.
Take a look at the average office wall. Companies cover them with certificates, product shots, executives shaking hands and motivational posters — all inward-looking, all focusing on the achievements of the organization. Rarely do you see the user.
Meanwhile, they shove the things that give insights into the user in a drawer somewhere: the personas, customer journey maps and user research. Why aren’t these things on the wall? Why don’t you make sure they are?
Instead of lecturing colleagues about users, make it impossible to ignore them. Turn that user research into attractive infographics. Cover the walls of your office with them. Sneak in at night and replace those inward-looking wall hangings. Put personas, data and quotes about the user in their place.
But don’t stop there. Open up those usability sessions that you run (you do run them, don’t you?) and invite anybody to come along. Bribe them with food if you have to, anything to get them to see users firsthand.
Failing that, record those sessions. Edit together the highlights, and distribute the video to colleagues.
Also, start sharing best practices on the user experience. Send out a weekly newsletter. Use it to highlight what others are doing to improve the customer experience. Quote experts and research in the field. Share testimonials from customers you have interviewed.
Consider running lunchtime sessions, presentations in which you share best practices, but also in which you share your research. Once again, lay out some food to encourage people to attend. If the budget allows, bring in the occasional outside speaker, someone who will add some credibility to the proceedings.
In short, create some buzz around the customer experience. Treat it like a product launch or marketing campaign. Get imaginative and make it impossible for your colleagues to escape the user.
But don’t target management. The biggest mistake I see people make is going to management for permission too soon. If you want to win over management, timing is everything. You need momentum and numbers behind you. You also need a clear vision of a better future.
As user experience designers, we tend to be visual people. We find it easy to imagine what could be. But not everybody is like that. Most people need to see the potential, rather than be told.
If we want management to care, we need to excite them. To excite them, we need to show them a better future, something they can see potential in. That is why, before we ever approach management, we need to have something to show.
Employees at Disney had an idea. They envisioned a MagicBand that visitors to their parks would wear. The band would allow visitors to pay for anything, unlock their hotel room door and more. It would track one’s position so that Mickey could walk up to a child and wish them a happy birthday by name. It would allow a maître d’ to greet a patron personally as they arrived at the restaurant.
Winning over the executives would be tough. The investment was going to be large. So, they built a prototype. They converted an empty warehouse into a cardboard park. It had rides and hotel rooms and restaurants, everything they needed to show their idea.
They invited along the executives. They strapped paper prototypes of the MagicBand around their wrists. They then guided them around the cardboard park, giving them a sense of what the experience would be like.
By showing the executives, rather than telling them, they created excitement. That is what you need to do. A document or slide presentation isn’t going to do the job.
Build a prototype showing management a better way. Visualize a better user journey. Do whatever it takes to excite them about the potential.
You might have to work evenings to get it done. You might have to squeeze it between other work. But it will be worth it when you finally approach management to get them on board.
You are going to need management’s support if you want to see the company become user-centric. No amount of grassroots change is going to get the job done without them. So, when the moment comes, you need to make sure you don’t blow it.
First, put off talking to management until you have to. The longer you leave it, the better prepared you will be, the more momentum among colleagues will be behind you. Remember that management’s job is to shoot down half-baked ideas, so we want to give them no excuse.
Your prototype or vision of the future will help, but it won’t be enough. Some managers get excited by concepts and potential. Others are more risk-averse and prefer hard numbers. That is why you will want some data to back up your proposal. If possible, test the prototype with real users.
You can further mitigate risk by referring to outside data and experts. Third-party sources add credibility to your argument. Management will also consider them more impartial. That is why many choose to bring in an external consultant at this stage.
As you sit down to talk to management, bear in mind one important thing: They don’t care about the user experience, and no amount of arguing will change that. Tim Cook once said:
Most business models have focused on self-interest instead of user experience.
Although you want to change that in the long term, you are not there yet. So, focusing on improvements to the user experience will not convince them.
Instead, find out what your executives are already convinced of. If they are any good at what they do, they likely have something they want to improve. It’s likely to be related to improving revenue, reducing costs, attracting new customers, increasing sales to existing customers, or increasing shareholder value. Good UX can help with each of those things.
Whatever you want to do to improve the user experience, frame it with what management cares about. If you don’t know what that is, find out. Dig out that company strategy you never bothered to read. That will tell you exactly what they want to achieve. Now all you need to do is show how your ideas will move the company closer to those goals.
Of course, the big question is, what should you ask management for? What is it you want them to do? This is where things can go wrong.
If you ask them for wholesale change, you will overwhelm them. If you ask for a big investment, they will be hesitant. That would require you to have a very compelling case or a great track record of delivering on big-budget projects.
Instead, start small. Earn their trust and build their confidence. Avoid overwhelming them.
Start by outlining your vision of the future. Get them excited. But to avoid overwhelming them, you need to make the next step easy.
Instead of asking for wholesale change, ask them to take one small step. Ask permission to build a proof of concept, something to show that user experience can make a difference to the business.
A proof of concept is a chance to prove yourself and user experience design to management. It will show how you can make a difference to the business. So, getting the right proof of concept is critical.
There are three considerations in picking a project:
You need a project that is not expensive to build. Otherwise, management will be hesitant to say yes.
You need something that supports one of management’s goals and that uses UX design to achieve it.
You want something measurable, so that you can prove whether the project works.
For example, suppose management wants to increase the number of leads the company gets. You might agree to run a project that encourages newsletter signups, but to run it in a way that provides value to the user, rather than tricking people into signing up. This will give you a chance to show that putting the user first provides better returns.
This project would work well because you can tie it to a management goal. It would also be inexpensive to build, and its effectiveness would be easy to measure.
The key to a successful proof of concept is to gather as much evidence of success as possible. Measure relentlessly. Even trial more than one approach for comparison. This will provide the evidence that management needs to be confident in you and in the benefits of user experience. It will give them the confidence to take the next step.
A proof of concept is still only the beginning of the journey. You will need to run many such projects that guide management step by step towards your vision of the future.
Remember that this is a marathon and not a sprint. It will take time. It will be frustrating. But what is so obvious to you isn’t to everybody. They will need to go on that journey at their own pace. We need to stand with them, encouraging folks to take that next step, keeping them focused on the end goal.
That is why we cannot hope to do it alone. We need to unite with others around this common aim and vision of the future. We need to work hard to raise the profile of the customer and to approach management with care. But most of all, we need a plan, a plan that starts with some simple pilot projects to build trust, but also one that shows the importance of user experience.
You might conclude that it is not worth the hassle, that you would prefer to work somewhere that already gets it. That is fine. This role is not for everybody. But if you do persevere and change the culture of your company, you will become invaluable. It is the kind of journey that can make a career and transform a company. From my perspective at least, that is worth it.
In part 1 of this article, we looked at where in the world the new entrants to the World Wide Web are, and at some of the new technologies the standards community has worked on to address the challenges that the next 4 billion people face when accessing the web. In short, we’ve tried to make some supply-side improvements to web standards so that websites can be made to better serve the whole world, not just the wealthy West.
But there are other challenges to surmount, such as the creaky infrastructure of developing markets (which can be worked around with stopgap technological solutions, such as proxy browsers). We’ll also look at some of the reasons why the offline billions remain offline, and at what can be done to address this.
Proxy Browsers
A common problem people encounter in emerging economies relates to networks. Networks are getting better, but they’re not there yet. In 2016, Ericsson reported:
While cellular networks have improved… smartphone users are still facing issues as frequently as they did in 2013. Globally, 26 percent of smartphone users say they face video streaming issues daily, increasing to over one third in markets like Brazil, India and Indonesia.
This is an excellent statement of the problem. Infrastructure is expensive to upgrade, especially in countries like Indonesia, which is made up of thousands of islands, and India, which is huge and has vast mountain ranges. And as soon as infrastructure is upgraded, more people come online and want to consume video rather than boring old text, and so much more bandwidth is required, and the newly upgraded network crawls again.
In places where bandwidth is seriously constrained (in congested Asian megacities, not just rural areas; I’m in the heart of an Indian city with an international airport, and power cuts and Internet outages occur daily), a lot of people opt to use proxy browsers. Proxy browsers do a lot of the heavy lifting of rendering web pages on their servers and sending compressed versions down to the user, resulting in often significant reductions in data consumption. This is obviously a very appealing proposition for consumers in territories where bandwidth is expensive. Because the data transferred is smaller, websites render faster, too.
Scientia Mobile reported in 2016 that, for global market share of proxy browsers, Opera Mini is at 42%, Opera Turbo is at 9%, Chrome is at 39% and UCWeb is at 6%; there are also Puffin, Silk and others. Opera reports more than 250 million active monthly users of Opera Mini.
Websites compressed by proxy browsers and sent as binary blobs also have a better chance of getting through congested networks. In 2013, Avendus reported (in “India’s Mobile Internet: The Revolution Has Begun,” no longer online):
In India, only 96k of the 736k cell towers are 3G enabled… only 35k of those towers have a fiber-optic connection to the backbone.
(If you’re a real networking anorak, you’ll find the report 2G Network Issues Observed While in India to be even more fun than a game of Werewolf at a Star Wars convention.)
Limitations of Proxy Browsers
So, if proxy browsers are so great, why doesn’t everyone use them? The answer is that such compression comes at a cost; websites can look very different in a proxy browser, and JavaScript can often behave unexpectedly.
In a proxy browser, everything happens on the server, so every user interaction requires a round trip to the server. Opera Mini on Android and iOS has two modes: one uses proprietary compression techniques and the device’s standard web view, so JavaScript isn’t affected. In Opera Mini’s extreme mode, all of the rendering is done on Opera’s server farms, where JavaScript is allowed to run for 5 seconds and is then throttled. (Disclosure: I was Deputy CTO of Opera until November 2016 and can talk authoritatively about its technology. However, I have no relationship with the company now.)
Therefore, to make websites work in Opera Mini’s extreme mode, treat JavaScript as an enhancement, and ensure that your core functionality works without it. Of course, it will probably be clunkier without scripts, but if your website works for Opera Mini’s quarter of a billion users and your competitors’ websites don’t, you’ll get the business.
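As a sketch of that principle (the endpoint name and markup here are hypothetical, not taken from any particular site): the core task works as a plain HTML form, and JavaScript, when it runs, merely layers an enhancement on top.

```html
<!-- Core functionality: a plain form that submits to the server
     and works with no JavaScript at all (for example, in Opera
     Mini's extreme mode). "/search" is a hypothetical endpoint. -->
<form action="/search" method="get">
  <label for="q">Search</label>
  <input type="text" id="q" name="q">
  <button type="submit">Go</button>
</form>

<script>
  // Enhancement only: browsers that run scripts fetch results
  // in-page; everyone else falls back to the normal form submit.
  var form = document.querySelector('form');
  if (form && window.fetch) {
    form.addEventListener('submit', function (event) {
      event.preventDefault();
      fetch('/search?q=' + encodeURIComponent(form.q.value))
        .then(function (response) { return response.text(); })
        .then(function (html) {
          // Inject the returned results into the page here.
        });
    });
  }
</script>
```

Either way, every user completes the search; the script only makes it smoother where it happens to run.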
Mini’s extreme mode has design constraints, too: it doesn’t do CSS rounded corners or gradients, because it can’t rely upon every client device to draw those things successfully; so, to render it on the server, it would need to convert them into bitmaps, which would bloat the page. But that’s OK; CSS is for design, and people in highly bandwidth-constrained situations are happy to get the words.
Similarly, Mini’s extreme mode doesn’t do CSS or SVG animations, only showing the first frame. The reason for this is that animations consume CPU cycles, and CPU cycles consume battery. Yes, your animations are lovely, but if you are sitting on a bus in Lagos in a traffic jam and need to phone your sister to ask her to pick up your children from school, you need that battery life more than you need pretty animations.
Neither does Mini’s extreme mode download web fonts, which can be huge files that are primarily for aesthetics. On many very small monochrome screens, the system fonts tend to be designed for those screens and work better. If you want icons, use SVG rather than icon fonts because, well, that’s what SVG is for.
Revolutionary New Development Technique
The best way to ensure that your website or web app (is there a real difference?) will work for people on proxy browsers, conventional browsers with very slow connections and everyone else is to adopt a revolutionary new development methodology.
… we’ve launched our first Holy Grail app… It looks exactly the same as the app it replaced, however initial pageload feels drastically quicker…
What voodoo magic did they employ?
… we serve up real HTML instead of waiting for the client to download JavaScript before rendering. Plus, it is fully crawlable by search engines.… It feels 5x faster.
Who knew? The best way to ensure that everyone gets your content is to write real, semantic HTML, to style it with CSS and ensure sensible fallbacks for CSS gradients, to use SVG for icons, and to treat JavaScript as an enhancement, ensuring that core functionality works without scripts. Package up your website with a manifest file and associated icons, add a service worker, and you’ll have a progressive web app in conforming browsers and a normal website everywhere else.
I call this amazing new technique “progressive enhancement.”
You heard it here first, folks!
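To make the recipe concrete, here is a minimal sketch of such a page; /manifest.json, /sw.js and the page content are hypothetical placeholders, not any site’s actual files.

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Article</title>
  <!-- Manifest: name, icons and colors for "Add to Home Screen" -->
  <link rel="manifest" href="/manifest.json">
</head>
<body>
  <!-- Real, semantic HTML: readable even if CSS and JavaScript never arrive -->
  <main>
    <h1>Content first</h1>
    <p>The core content is served as plain HTML.</p>
  </main>
  <script>
    // Enhancement: register a service worker where supported;
    // browsers without support simply get a normal website.
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js');
    }
  </script>
</body>
</html>
```

Conforming browsers get a progressive web app; everything else gets a perfectly ordinary, fast website.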
What If We Threw A Party And Nobody Came?
We have supply-side improvements such as HTML responsive images, progressive web apps and renewed CSS work on vertical text. We have the methodology of progressive enhancement. We have proxy browsers that compress websites for people on low-bandwidth connections and expensive data plans. We have an explosion of ever-cheaper smartphones (and I’m not talking about the Galaxy Note 7 here). So, why aren’t the next 4 billion already on the web?
There are demand-side problems in emerging markets.
One of those is that the market for smartphones is either flat or declining, depending on which statistics you read. There are many potential reasons for this: it could be the slowdown in the Chinese economy (China, India and the US are the largest consumers of smartphones), or it could be a classic case of market saturation, with everyone who wants a smartphone and can afford one already having one.
It could also have to do with devices like the one shown below, which was given to me at the Mobile World Congress SynergyFest.
It’s a feature phone, made in China; it doesn’t have Wi-Fi but is dual-SIM (very important in places like Africa and Asia); it has an FM radio; and it can connect to the web with a WAP-like browser. Its retail price in Africa and Latin America is $2.36 USD.
In a country like Cambodia, where a garment worker’s minimum wage is $140 a month, or Liberia, where an unskilled worker’s minimum wage is $4 a day, even a $60 to $70 entry-level smartphone is practically unaffordable. But $2.36? That’s affordable.
But it is not just affordability that stops people from coming to the web. As GSMA Intelligence wrote in July 2016:
Despite the fact that Africa has the lowest income per capita of any region, affordability was only identified as the most important barrier in one out of 13 markets in our survey.
Network coverage was not perceived as an issue in most countries, reflecting the increasing availability of mobile networks. However, mobile broadband (3G or 4G) coverage remains low in most parts of Africa.
The problem is much more profound, and doesn’t have a technological solution. From the same report:
A lack of awareness and locally relevant content was considered the most important barrier to internet adoption in North Africa and the second biggest barrier in Sub-Saharan Africa.
There’s also a worrying lack of digital skills that prevents people from using the web in Africa:
A lack of digital skills was identified as the biggest barrier to internet adoption in Sub-Saharan Africa and the second biggest in North Africa.
In Africa, seven in ten people who do not use the internet say they just don’t know how to use it, and almost four in ten say they do not know what the internet is.
This is true not only of developing economies, by the way. The World Bank continues:
In high-income Poland and the Slovak Republic, one-fifth of adults cannot use a computer.
41% [of American seniors] do not use the internet at all, 53% do not have broadband access at home, and 23% do not use cell phones.
The lack of digital skills is felt acutely in Asia, too. In its January 2016 Asia survey, GSMA Intelligence wrote:
A lack of awareness and locally relevant content was the most commonly cited barrier to internet adoption: 72% of non-internet users across the six survey markets felt this was a barrier. … 50% of websites worldwide are in English, a language spoken by only 10% of people in the survey countries. By way of contrast, only 2% of websites worldwide are in Mandarin and less than 0.1% are in Hindi.
Below is a video made in a user-testing lab in rural Pakistan, featuring a man in his 20s. He has used a feature phone but never a smartphone. He’s given a simple-sounding task: Go to Google and search for the name of your favourite actress. Watch the video. Watch it all.
Presumably, he’s stuck because he’s been accustomed to a feature phone with physical keys, and he simply doesn’t know how to call up the virtual keyboard. Anytime you believe that doing something in your web app is “obvious” or “intuitive”, watch this video again.
Digital Divide
Just as there is a digital divide between “the West” and developing economies, there are digital divides across the developing world: by income, by age, between rural and urban populations, and between women and men. The World Bank illustrates the divide by income, gender, age and location (rural versus urban).
In February 2016, 53% of urban areas had mobile Internet connectivity, a growth of 71% in one year. In the same period, Internet usage in rural India increased by 93%, but that means that only 9% of people in rural India have access. As I write this, there is a copy of The Hindu newspaper next to my laptop, which today reports that the Indian government is recommending a community model of Wi-Fi hotspots, such as in neighborhood grocery stores, in rural areas where connectivity is poor and laying cables is not feasible.
The World Bank reports that in many countries (Cuba, Cambodia, Brazil and others), computers are taxed as luxury goods.
Many countries, such as Fiji, Bangladesh and Pakistan, tax mobile phones as luxuries, too.
Pressure For Change
The World Bank recommends that:
Making the internet universally accessible and affordable should be a global priority.
There are moves afoot to make this happen, such as an initiative called FASTAfrica. FAST stands for fast, affordable, secure and transparent. In a series of grassroots events across Africa in 2016, the organization has demanded:
fair and transparent taxes on information and communications technology (ICT),
greater effort from governments and donors,
agreement on affordability (1 GB for 2% of disposable income),
prioritization of getting women online.
World Wide Web Women
The World Bank writes:
Online work can prove particularly beneficial for women, youth, older workers, and the disabled, who may prefer the flexibility of working from home or working flexible hours.
In India… women are 62% less likely to use the internet than men. Many of the underlying reasons for this — affordability, skills and content — are the same as for men; they are simply felt more acutely by women.
Yet we know that the web empowers women. Across the world in non-agricultural employment, women make up 25% of the work force, but in online work, women make up 44% of the work force.
When asked why online work is advantageous, women overwhelmingly cited the ability to work flexible hours from home as the primary advantage. 32% of women, compared with 23% of men, said that the primary disadvantage of online work is that payment is not high enough, which suggests that Africa, too, has an unjustifiable difference between what women and men get paid.
One successful government initiative is the Kudumbashree project, here in Kerala, India, where I’m writing this. “Kudumbashree” means “prosperity of the family” in the local Malayalam language. The World Bank reports:
The government of Kerala, India, outsources information technology services to cooperatives of women from poor families … Average earnings were US$45 a month, with close to 80 percent of women earning at least US$1 a day. Nine in ten of the women had previously not worked outside the home. Samasource, RuralShores, and Digital Divide Data are three private service providers. Samasource splits jobs into microwork for almost 6,400 workers, mostly in Ghana, Haiti, India, Kenya, and Uganda, on average more than doubling their previous income.
A similar story is happening in China:
Online shop owners using Alibaba in China, on average, employ 2.6 additional workers. Four in ten shop owners are women, 19 percent were previously unemployed, 7 percent were farmers, and about 1 percent are persons with disabilities.
What Can Be Done, And How Can You Help?
The World Bank says that:
Access to the internet is critical, but not sufficient. The full benefits of the information and communication transformation will not be realized unless countries continue to improve their business climate, invest in people’s education and health and promote good governance.
Should you happen to be a politician in Africa, Asia or Latin America, please take note of the above. But because you’re reading this, I expect you’re a web professional, and you can play your part! Make sure your websites are ready for the next 4 billion people, with their (potentially) slow networks, and browsers and devices you may never have heard of.
Progressively enhance to make sure your core functionality works without JavaScript.
Use feature detection, rather than browser sniffing. You will never, ever be able to maintain a list of all devices in the world and their UA strings.
Compress images properly and remove unnecessary metadata with a tool such as ImageOptim.
Use HTML responsive images (and generate WebP versions of all of your assets), and send them to supporting browsers with the picture element.
Focus relentlessly on performance.
Write a progressive web app, rather than a native app.
Test in a proxy browser, such as Opera Mini.
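For the responsive-images point above, a sketch (the file names and breakpoints are made up for illustration): browsers that understand WebP download the smaller file via the picture element, while everyone else falls back to the JPEG in the img tag.

```html
<picture>
  <!-- Served only to browsers that support WebP -->
  <source type="image/webp"
          srcset="hero-small.webp 400w, hero-large.webp 1200w"
          sizes="(min-width: 600px) 50vw, 100vw">
  <!-- Fallback for every other browser, old or new -->
  <img src="hero-small.jpg"
       srcset="hero-small.jpg 400w, hero-large.jpg 1200w"
       sizes="(min-width: 600px) 50vw, 100vw"
       alt="Describe the image for people who cannot see it">
</picture>
```

This is feature detection built into the markup itself: no browser sniffing, and no broken images anywhere.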
Developing countries are home to 94% of the global offline population. These people may be your next potential customers. But if they can’t buy from you because your website is a Wealthy Western Website rather than a World Wide Website, you can bet that your competitors will be happy to take their money.
An increase in internet maturity similar to the one experienced in mature countries over the past 5 years creates an increase in real GDP per capita of $500 on average during this period. It took the industrial revolution of the 19th century 50 years to produce the same result.
But it’s not just about money. It’s about doing the right thing and keeping the web open, democratic and global.
Millions of people in Bangladesh, Nepal and India have miles to walk to visit a doctor, so a feature phone and the free online book Where There Is No Doctor (often translated into their local language) become first-line medical care.
Millions of people in Sub-Saharan Africa can’t afford school textbooks, but Worldreader has tens of thousands of books available for free online, and the books work fine even on old feature phones.
And millions of people in despotic regimes have the web as their only way to contact the outside world.
Please, do your part to ensure the health of the web that has provided you with so much, and pay it forward so the next people can benefit.
Thanks to Mrunmaiy Abroal, until recently Opera’s Head of Comms in India, and Peko Wan, Head of PR & Communications for Opera in Asia, for their help in collecting some of the information in this article. Thanks to Karin Greve-Isdahl, VP Communications at Opera, for allowing me to use charts and illustrations made while I was at Opera. Big thanks to Clara at Damcho Studio for helping to prepare this article.
This week was a big week in terms of web development news. We got much broader support for CSS Grid and WebAssembly, for example, and I also stumbled across some great resources that teach us a lot of valuable things.
With this Web Development Reading List, we’ll dive deep into security and privacy issues, take a look at a lightweight virtual DOM alternative, and get insights into how we can overcome our biases (or at least how we can better deal with them). So without further ado, let’s get started!
Chrome 57 was released this week, and it brings us CSS Grid, the Media Session API, and WebAssembly. Also new is that Chrome will return an error for SHA-1 certificates from now on.
A new Firefox version was released to the public this week: Firefox 52. The new version will display a prominent warning if a user fills in their password on a non-secure page. rel="noopener" was implemented, too, along with broad support for CSS Grid, WebAssembly, and async/await. They also disabled all plugins except for Adobe Flash.
The Samsung Internet browser beta is now available in the Google Play Store and via Samsung Galaxy Apps. It runs on Chromium 51 and has support for progressive web apps, service workers, and content blockers.
By analyzing the large-scale issues the web faced in the past month (think of Amazon’s S3 outage causing downtime for millions of websites, Cloudflare’s data leak that required users of very popular websites to change their passwords, or Google’s accidental WiFi reset, which wiped out customers’ Internet profiles), Tristan Louis reflects on the question of whether we are breaking the Internet. The trend towards a few services hosting the majority of the Internet’s infrastructure is causing more and more large-scale problems. If we want to avoid issues like these, we need to rethink this new kind of centralization and fix it.
Bruce Lawson wrote about the “World Wide Web, Not Wealthy Western Web”. It’s about the bias of western web developers, about ignoring other continents, and why we need to see the bigger picture instead. A piece you should definitely take the time to read.
It’s easy to build a standard card design, but we could do so much more with cards. Andrew Coyle wrote about designing better cards, a component we use in nearly every design today.
In an attempt to participate in Airbnb’s bug bounty program, Brett Buerhaus and Ben Sadeghipour analyzed Airbnb’s web service. And indeed, they stumbled upon some pretty good examples of how to bypass a lot of the security measures that were already in place.
We know backups are crucial in IT operations. But what we often don’t think about is the backup’s security. A company that’s responsible for a lot of email spam recently exposed its backups to the public for over a month. At first, we might think that’s great, as this mishap makes it relatively easy to bring their operations to a halt, but others have probably already picked up all the data to use for their own operations, thus producing even more spam.
Tobias Lauinger and his co-workers conducted the first comprehensive study of client-side JavaScript library usage and the security implications it brings along. Based on data from over 133K websites, they found that 37% of websites include at least one library with a known vulnerability. Time to reconsider our use of external dependencies and how we keep them up to date.
Another big data leak was revealed by WikiLeaks this week; this time it’s called “Vault 7: CIA Hacking Tools Revealed”. And, well, it does in fact confirm the worst fears of privacy researchers: The CIA is spying on Samsung TVs, and it’s extremely likely that Amazon’s Alexa is no exception, just like a lot of other centralized, not end-to-end-encrypted services. The findings also caused a lot of discussion about whether messaging apps like WhatsApp and Signal are safe, since their encryption was reportedly broken as well. But you need to differentiate here: in the case of the messaging apps, the encryption itself was not broken; instead, selected target devices were infected with malware. Together with the news about decrypted PGP messages, the published data shows that apps like Signal do indeed work as expected: They prevent third parties from mass-capturing private data and instead force them to target individual devices.
Since this week, we’re able to play around with CSS Grid in a lot more browsers (Chrome, Firefox, and Edge with the old spec). When you do, this quite complete guide to CSS Grid might come in handy.
Did you know you can use CSS to lint your HTML markup? Ire Aderinokun shared a couple of use cases and some very neat tricks, such as how to check for unlabelled form elements or inaccessible viewport attributes.
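In that spirit, a couple of lint-style rules might look like the following (a sketch of the general technique, not necessarily Aderinokun’s exact selectors): elements with accessibility problems are made visually obvious during development.

```css
/* Flag images that are missing alternative text */
img:not([alt]) {
  outline: 3px dotted red;
}

/* Surface a viewport declaration that disables zooming;
   head and meta are hidden by default, so force them visible */
head,
meta[name="viewport"][content*="user-scalable=no"] {
  display: block;
}
meta[name="viewport"][content*="user-scalable=no"]::after {
  content: "Warning: zooming is disabled on this page";
  color: red;
}
```

Drop a stylesheet like this into a development build only; it costs nothing at runtime and makes mistakes jump off the screen.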
Jelmer Mommers recently stumbled across a video from the oil company Shell showing that they were aware of the dangers of global warming more than 25 years ago. Unfortunately, for financial reasons, they decided to focus on short-term solutions nevertheless. This great article shows how money can make us ignore important facts. I really believe that you and I can do better than Shell.
What would life be without surprises? Pretty plain, wouldn’t you agree? Today, we are happy to announce a freebie that bubbles over with its friendly, optimistic spirit, bound to sprinkle some unexpected sparks of delight into your projects: Ballicons 3. If that name rings a bell, well, it’s the third iteration of the previous Ballicons icon set created by the folks at Pixel Buddha.
This icon set covers a vibrant potpourri of subjects: 30 icons ranging from nature, travel and leisure motifs to tech and office. All icons are available in five formats (AI, EPS, PSD, SVG, and PNG), so you can resize and customize them until they match your project’s visual style perfectly. Whether you like it bright and bubbly or rather sleek and simple, the set has the makings of a real all-rounder in your digital tool belt.
Please note that this icon set is released under a Creative Commons Attribution 3.0 Unported license. This means that you may modify the size, color and shape of the icons. No attribution is required; however, reselling of bundles or individual icons is not cool. If you want to spread the word in blog posts or anywhere else, feel free to do so, but please remember to credit the designers and provide a link to this article.
“Creating icons is something special for our team, as it’s our first icon set, the one that made us step away from client work and concentrate on crafting content for other specialists. Ballicons and Ballicons 2 have become really popular, and today, after 2.5 years, we’ve realized we’re ready to present another iteration of the icons in this series, even more striking and interesting.”
A big thank you to the folks at Pixel Buddha for designing this wonderful icon set; we sincerely appreciate your time and efforts! Keep up the brilliant work!
The past year has seen quite a rise in UI design tools. While existing applications, such as Affinity Designer, Gravit and Sketch, have improved drastically, some new players have entered the field, such as Adobe XD (short for Adobe Experience Design) and Figma.
For me, the latter is the most remarkable. Due to its similarity to Sketch, Figma was easy for me to grasp right from the start, but it also has some unique features to differentiate it from its competitor, such as easy file-sharing, vector networks, “constraints” (for responsive design) and real-time collaboration.
In this article, I’d like to compare both apps in detail and highlight where each of them shines.
The greatest weakness of Sketch has always been its lock-in to the Apple ecosystem. You need a Mac not only to design with it but also to open and inspect files. If you are on Windows or Linux, you’re out of luck. (Technically, you could get Sketch running on Windows, as Martijn Schoenmaker and Oscar Oto Mir describe, but it might not be worth the hassle for some.) That’s why, over time, a few paid services have emerged that enable you to provide teammates with specifications of Sketch files, but that’s still an unfortunate extra step.
That’s probably one of the reasons why Figma took a different route right from the start: It can be used right in your web browser, on basically any device in the world. No installation whatsoever is needed: Just open Chrome (or Firefox), set up an account and start designing. While I had been hesitant about browser-based tools before, Figma blew my doubts away in a single stroke. It’s as performant as Sketch (if not more so), even with a multitude of elements on the canvas.
Standalone desktop apps of Figma are even available for Windows and Mac, but they are basically no more than wrappers around the browser version. While they give you better support for keyboard shortcuts, they don’t save you from having to be online all the time.
Basically, Figma can be best described as an in-browser version of Sketch. It offers mostly the same features, the interface looks similar, and even the keyboard shortcuts are mostly taken from its competitor. If you knew Sketch already, you could have easily found your way around Figma when it was released at the beginning of 2016.
Figma has improved on many aspects since then, and has even surpassed Sketch in parts, but some features are still missing, such as plugins, integration with third-party tools and the ability to use real data (which Sketch offers with the help of the Craft plugin). Nevertheless, at the pace Figma is improving, it may be just a matter of time before it’s on par with Sketch.
Winner: A draw
Why: At the moment, they don’t differ enough. Still, Sketch is more mature and offers more odds and ends to make you more productive.
Sketch initially started with a one-time fee of around $99 USD for each new major version. But having finally realized that this business model is not sustainable, Bohemian Coding has recently changed its pricing to a kind of subscription-based model: For $99 USD, you get one year of free updates. After that, you can keep using the app as is or invest the same amount of money to get another year of updates.
This price increase might seem inappropriate at first, but what’s $99 for your main tool as a designer? Right, not much, and still cheaper than Adobe’s Creative Cloud subscription price. (Also, note that when you stop paying for your Creative Cloud subscription, you cannot keep using any of the Adobe CC apps, whereas your existing version of Sketch will continue running with no problems at all.)
For the time being, Figma is still free to use for everybody, but that might change later this year. I’d bet that it will also be subscription-based, with a monthly fee of around $10 to $15 USD (don’t hold me to that, though). One thing’s for sure: Figma will always be free for students (similar to how Sketch offers a reasonable discount to students and teachers).
The main differentiator of Figma is probably the real-time collaboration feature, called “multiplayer.” It allows a user not only to edit a file at the same time as others but also to watch somebody else fiddle around in a design and communicate changes with the built-in commenting system. For one, this simultaneous editing is great for presentations and remote teams, but it also ensures that everyone involved sees the same state, and it prevents unintended overwrites.
Sketch doesn’t offer anything like that (not even with the help of plugins), and it’s probably the better for it. Designers are skeptical of this feature, and rightly so, because it can lead to the dreaded “design by committee” and to a mentality of “Make this button bigger. No, wait! I’ll just do it by myself!” Of course, I’m exaggerating a bit here. This feature can be somewhat valuable, but it has risks if you don’t set certain rules up front.
Winner: Figma
Why: Allows multiple people to work on the same file at the same time.
Somewhat related to real-time collaboration is the sharing of design files. Because everything in Figma happens in the browser, every file naturally has a URL that you can send to your peers and bosses (in view-only mode, too). Another huge advantage of this browser-based approach is that everybody, developers foremost, can open a design and inspect it right away, with no third-party tool needed.
That’s exactly what you need in Sketch in order to offer teammates all of the necessary information on a file, such as font sizes, colors, dimensions, assets and so on. However, as mentioned, to help you here, you’ll need paid tools such as Zeplin, Sympli, InVision’s Inspect, Avocode or Markly. To be fair, the first three are free for a single project, but that might not be enough when you’re working on more complex stuff.
If you need more or don’t want to invest money in such (admittedly basic) functionality, there are also the free plugins Marketch and Sketch Measure. Many paid apps also allow you to add comments and create style guides automatically.
Alternatively, you could even import your Sketch files into Figma and inspect them there, because Figma supports the Sketch file format.
If you just need to let your teammates have a look at a design, then the built-in Sketch Cloud does exactly that. To view a design on your iOS device, you have Sketch Mirror and Figma Mirror; for an Android device, you can use Mira and Crystal. Figma doesn’t offer an Android-based app, but opening www.figma.com/mobile-app on the device after selecting a frame or component (on your desktop computer) will give you a kind of preview.
Winner: Figma
Why: Files are easily shareable, right in the browser, without the need for a third-party tool.
Thanks to the organizational capabilities that Sketch gives with its pages, you can design an entire app or website in a single file, and switch between its pages quickly. Figma is left behind here: It doesn’t provide pages, but you can at least combine files into a project.
Another important concept of Sketch — artboards — has been taken on as well, but in an enhanced form. In Figma, you can apply fills, borders and effects to artboards, and they can also be nested and be easily created from groups. For one, this allows for containers of elements that are bigger than their combined bounds, which can be helpful for the “constraints” functionality. Furthermore, each artboard can have its own nested layout grid, which helps with responsive design.
It’s worth mentioning that Sketch also allows you to nest artboards, but the implementation is not nearly as powerful as Figma’s.
Winner: A draw
Why: Sketch has pages, but Figma’s artboards (frames) are more powerful.
A feature that Figma introduced (and that Sketch followed with months later) is responsive design, or the ability to set how elements respond to different artboard or device sizes. There have been plugins that offer such functionality in Sketch, but they’ve always felt a bit clunky. Only the native “group resizing” feature has been easy to use, competing with Figma’s “constraints” feature.
They do basically the same thing but are very different in execution. In Sketch, you just have a dropdown menu with four options, and it can be a bit hard to imagine what they do. Figma offers a more visual way to define these properties; for example, to pin an element to the edge of an artboard or to let it resize proportionally.
A huge advantage of Figma is that these constraints work in combination with layout grids. Elements can be pinned to certain columns (or rows), for example, or take on their relative width. Furthermore, you are able to change these settings independently for each axis, and they work not only for groups but also for artboards. None of that is possible in Sketch.
Winner: Figma
Why: Its constraints functionality is far more advanced and offers more options.
One major feature of Figma when it was released was “vector networks.” No longer is a vector just a sequence of connected points (nodes). Now, you can easily branch off from any node with a new line. Creating an arrow, for example, is as easy as pie.
But there’s much more to it. Parts of a vector can be easily filled (or removed) with the Paint Bucket tool. You can move entire segments (or bend them) with the help of the Bend tool. And copying lines (as well as points) from one vector to another is a snap.
This feature alone sets Figma apart from Sketch in a very special way and makes it so much more suitable for icon design. Compared to it, the tools available in Sketch feel almost archaic. While Sketch might have everything you’ll ever need to create and manipulate vector objects, as soon as more complex vector operations are involved, it can get a bit too tricky.
While most of the time you’ll need to use Boolean operations to create certain types of shapes in Sketch, much of that can be achieved more easily with vector networks in Figma. The fact that in Sketch it’s often hard to join two vector objects or even flatten a shape after a Boolean operation has been applied makes things even worse; the app certainly has a lot of ground to cover in this regard.
Winner: Figma
Why: Vector networks give a whole new dimension to vector manipulation, something still unmatched in Sketch.
Whenever a new application appears, you often face the problem of having to redo everything if you want to migrate a project. Not so in Figma: It can open Sketch files in an impressively accurate way, which gives it an immediate advantage.
Unfortunately, it doesn’t work in the other direction. Once you start designing in Figma, you are bound to it. It can’t export back to Sketch, nor is it capable of creating PDF files (or importing them). There is an option to export to SVG, but that’s not the ideal format in which to exchange complex files between design applications.
Another limiting factor is that you can’t set the quality level when exporting JPG images in Figma, so you might need a separate application to optimize images. The latest version of Sketch improves the exporting options even further. It allows you to set both a prefix and a suffix for file names now, as well as provides exporting presets, which makes exporting for Android much easier.
Let’s look at the export file formats that each application supports.
Sketch:
PNG
JPG
TIFF
WebP
PDF
EPS
SVG
Figma:
PNG
JPG
SVG
Winner: Sketch
Why: More file formats, and better saving and exporting options.
Though Figma shares many keyboard shortcuts with Sketch, it doesn’t have the same level of accessibility. In Sketch, almost everything can be achieved with a key press. And if you are not satisfied with a certain shortcut combination, you can set your own in the system’s preferences (which largely applies to commands that have no shortcut assigned by default).
Note: You can learn how to change shortcuts in Sketch in the article “Set Custom Keyboard Shortcuts,” published on SketchTips.
Because Figma lives in the browser, none of that is possible. The desktop app makes up for this to some extent, but certain functions simply can’t be assigned to the keyboard due to a lack of certain menu bar commands.
Winner: Sketch
Why: Almost all features in the application are accessible with keyboard shortcuts, and they can be customized.
An important feature has only just been introduced to Figma: symbols, or, as they are called there, components. Components allow you to combine several elements into a single compound object that can be easily reused in the form of “instances.” As soon as the master component is changed, all instances inherit the change. (However, certain properties can still be modified individually, allowing for variations of the same component.)
Recently, it has even become possible to create a library of components in Figma that can be shared across files and with multiple users, a huge step towards a robust design system.
In Sketch, the modifications of individual instances are handled in the form of overrides in the Inspector panel, allowing you to change text values, image fills and nested symbols separately for each instance. Figma lets you update the properties of an element directly on the canvas, including the fill, border and effects (shadows, blurs).
The true strength of symbols (and components) becomes evident with responsive design. In both apps, you can define how elements within a symbol react when their parent is resized, making it possible to use symbols in different places or screen sizes.
An area where Figma is still behind is layer and text styles. You can’t save the styling of shapes or text layers and sync it with other elements. What you can do, however, is copy and paste the styling from one element to another.
Winner: A draw
Why: The implementations are quite comparable, but Figma has a slight advantage with its shareable components.
Then there’s Figma, which has none of that so far. To be fair, Sketch probably didn’t offer many of these helpers early on in its development either, and I’d bet that a plugin ecosystem will be introduced to Figma in the future.
A word of caution: While plugins give Sketch a huge advantage, they can be somewhat of a pitfall, too. They tend to break with each new release, and the Sketch team needs to be very cautious to avoid alienating developers in the future.
Winner: Sketch
Why: It’s simple: Figma doesn’t have plugins at all.
Unlike Adobe XD, neither Sketch nor Figma has any native prototyping capabilities. So far, there are no plans to add them to Figma, but Sketch has the Craft plugin, which brings at least basic prototyping functionality. Furthermore, Sketch’s popularity has led to broad integration with other prototyping tools: InVision, Marvel, Proto.io, Flinto, and Framer, to name a few. The idea with these is that you can easily import a design and sync up the apps.
The only application that works with Figma so far is Framer. It’s hard to say what support for similar tools will look like in the future, but it may well be that such interfaces are already in the works.
Winner: Sketch
Why: Though the Craft plugin doesn’t provide compelling prototyping capabilities, Sketch is well integrated with many prototyping services.
Among the countless plugins that Sketch offers, you will also find some so-called “content generators.” For one, they let you fill a design with dummy content. But what’s even more interesting is that these little helpers enable you to pull in real data from various sources, or even insert content in the form of JSON files. This allows you to make more informed decisions and account for edge cases early in the design process.
By far, the best in this category is the Craft plugin from InVision. As soon as you use it to populate an element with content, you can easily duplicate and vary it in a single step. Craft also allows you to save design assets and share them with your team, or to create a style guide automatically. For an in-depth look at the Craft plugin, have a look at my recent article “Designing With Real Data In Sketch Using The Craft Plugin.”
Figma doesn’t offer such functionality yet, but I think it’s just a matter of time. It will be interesting to see which approach it takes: integrating it in the app, as Adobe XD teased some time ago, or relying on a plugin system.
Winner: Sketch
Why: Compensating for a lack of native functionality, various plugins allow you to use real data in Sketch easily.
One thing I constantly do is undo my changes one by one with Command + Z to see what an earlier version of a design looks like — either to have a point of reference or to duplicate as a base for a new iteration. You might suppose that Figma is at a dead end here because of the browser’s limited resources, but in reality it is in no way inferior to Sketch.
Both apps save automatically, with the ability to browse and restore old versions. Sketch’s implementation, however, is far from ideal, and I don’t even know where to start. It’s laggy; you need to restore a version before you can actually inspect it; and it can lead to wasted space on the hard drive. (If you encounter this problem, Thomas Degry’s article “How Sketch Took Over 200GB of Our MacBooks” might help.) To cap it off, things can get screwed up pretty quickly when multiple people are working on the same file in the cloud (via Dropbox, etc.).
Figma lets you restore previous versions instantaneously, which is especially useful if multiple people are editing the same file, helping you to isolate the changes of each person. You can then jump back to a previous state or duplicate the version as the starting point for a new idea.
Winner: Figma
Why: Sketch’s versioning is so imperfect that the winner is obvious.
As in other regards, the handling of text is quite similar in both applications, but some differences are worth mentioning. By default, Figma just gives you Google Fonts to choose from, but with a simple installer, you can access system fonts, too.
Something I constantly miss in Sketch is the ability to change the border position of text layers; “center” is the only option here. Figma also offers “inside” and “outside,” like it does for the shape layers.
Furthermore, Figma lets you set the vertical alignment of text, allowing for easy vertical centering inside the frame. In contrast, Sketch has more typography options and enables you to apply OpenType features such as small caps, old-style figures, stylistic alternates and ligatures (if a font supports them).
Winner: A draw
Why: Both applications have their distinct advantages. Sketch might be ahead a bit, due to its advanced typography options, but not enough to win outright.
Sketch has been on the market now for over six years, which has allowed the community to accumulate a huge number of useful resources (articles, tutorials, blog posts, etc.). And due to the similarities between the two apps, some Sketch concepts can be applied to Figma, which makes some resources suitable for both applications. I’ll mention only a few here:
SketchTips: My very own project, where I regularly publish articles about Sketch.
The discontinuation of Adobe Fireworks in 2013 left a huge gap in the world of UI design tools, a gap that Sketch gladly filled almost immediately. Sketch became so successful that it closed in on its main competitor, Photoshop, over time and even topped it last year as the tool of choice among user interface designers.
Sketch would probably be even more successful if there were a Windows version, though; the lack of one was a problem for other tools, too. So, Affinity Designer and Adobe XD (two new competitors that started as Mac-only apps) quickly stretched out to support Windows as well. There is also Gravit Designer, which works in the browser and now also exists as a desktop app (for Windows, Mac and Linux); but like Affinity Designer and Adobe XD, it’s hard for it to compete with Sketch, which is probably the most mature tool of them all at the moment.
And now we have Figma: Not only is its feature set similar to Sketch’s, but it supports multiple platforms, because the only thing you need to run it is a modern browser. Moreover, Figma allows you to open Sketch files, which means you can basically continue in Figma where you left off in Sketch, without much effort. (Figma requires you to be signed in and online to use it, though, which is a serious limitation in some cases.)
But as different as all of these tools are, they all have one thing in common: They want a slice of the pie, and so they are challenging Sketch in every possible way. But the Sketch team isn’t resting on its laurels. It is constantly improving the application and pushing new updates often, with the goal of making it the best possible UI design tool out there. So far, they are succeeding, but the pressure is growing.
In a world driven by the Internet, mobile apps need to share and receive information from their products’ back end (for example, from databases) as well as from third-party sources such as Facebook and Twitter. These interactions are often made through RESTful APIs. When the number of requests increases, the way these requests are made becomes very critical to development, because the manner in which you fetch data can really affect the user experience of an app.
In this article, I’d like to take you through my experience of using networking libraries in Android, focusing on APIs. I’ll start with the basics of synchronous and asynchronous programming and cover the complexities of Android threads. We’ll then dive into the AsyncTask module, understand its architectural flows and look at code examples to learn the implementation. I’ll also cover the limitations of the AsyncTask library and introduce Android Volley as a better approach to making asynchronous network calls. We will then delve deeper into Volley’s architecture and cover its valuable features with code examples.
Still interested? Conquering Android networking will take you far in your journey toward becoming a skillful app developer.
Note: A few more Android libraries with networking capabilities, such as Retrofit and OkHttp, are not covered in this article. I recommend going through them to get a glimpse of what they offer.
Programming Approaches Simplified: Sync And Async
“Hold on, Mom, I’m coming,” said Jason, still on his couch, waiting for a text from his girlfriend, whom he had texted an hour earlier. “You could clean your room while you wait for a reply from your friend,” replied Jason’s mother with a hint of sarcasm. Isn’t her suggestion an obvious one? The same applies to synchronous and asynchronous HTTP requests. Let’s look at them.
Synchronous requests behave like Jason, staying idle until there is a response from the server. Synchronous requests block the interface, increase computation time and make a mobile app unresponsive. (Not always, though; sometimes it doesn’t make sense to go ahead, such as with banking transactions.) A smarter way to handle requests is the one suggested by Jason’s mother. In the asynchronous world, when the client makes a request to the server, the server dispatches the request to an event handler, registers for a callback and moves on to the next request. When the response is available, it is delivered to the client through that callback. This is a far better approach, because asynchronous requests let you execute tasks independently.
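The contrast can be sketched in plain Java, with no Android classes involved. Here, fetchFromServer is only a stand-in for a real network call: the synchronous path blocks the calling thread, while CompletableFuture registers a callback and frees the thread to do other work.

```java
import java.util.concurrent.CompletableFuture;

public class SyncVsAsync {
    // Stand-in for a slow network call; a real request would take far longer.
    static String fetchFromServer(String request) {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "response to " + request;
    }

    public static void main(String[] args) {
        // Synchronous: the calling thread idles, like Jason on his couch.
        String syncResult = fetchFromServer("GET /user");
        System.out.println("sync: " + syncResult);

        // Asynchronous: register a callback and move on, like Jason's mother suggests.
        CompletableFuture<Void> done = CompletableFuture
                .supplyAsync(() -> fetchFromServer("GET /feed"))
                .thenAccept(response -> System.out.println("async: " + response));
        System.out.println("free to do other work while the request is in flight...");
        done.join(); // wait here only so the demo doesn't exit before the callback runs
    }
}
```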
The diagram above shows how both programming approaches differ from each other in a client-server model. In Android, the UI thread, often known as the main thread, is based on the same philosophy as asynchronous programming.
Threads are sets of instructions that are managed by the operating system. Multiple threads run under a single process (a Linux process in the case of Android) and share resources such as memory. In Android, when the app runs, the system creates a thread of execution for the whole application, called the “main” thread (or UI thread). The main thread works on a single-threaded model. It is in charge of dispatching events to UI widgets (drawing events), interacting with components from the UI toolkit, such as View.OnClickListener(), and responding to system events, such as onKeyLongPress().
The UI thread runs an infinite loop and monitors the message queue to check whether the UI needs to be updated. Let’s consider an example. When the user touches a button, the UI thread dispatches the touch event to the widget, which in turn sets its pressed state and posts a request to the message queue. The UI thread dequeues the request from the message queue and notifies the widget to take action; in this case, to redraw itself to indicate the button has been pressed. If you’re interested in delving deeper into the internals of the UI thread, you should read about the Looper, MessageQueue and Handler classes, which accomplish the tasks discussed in our example. As you’d imagine, the UI thread has a lot of responsibilities.
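The loop-and-queue idea behind Looper and MessageQueue can be modeled in a few lines of plain Java. This is only an illustration of the concept, not Android's actual implementation: one dedicated thread drains a queue of messages and handles them in order, while any other thread may post to the queue.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MiniLooper {
    private final BlockingQueue<Runnable> messageQueue = new LinkedBlockingQueue<>();
    private final Thread loopThread;

    public MiniLooper() {
        // The "UI thread": an infinite loop that monitors the message queue.
        loopThread = new Thread(() -> {
            try {
                while (true) {
                    messageQueue.take().run(); // dequeue one message and handle it
                }
            } catch (InterruptedException e) {
                // quit() interrupts the loop and lets the thread exit
            }
        });
        loopThread.start();
    }

    // Any thread can post work; only the loop thread ever executes it.
    public void post(Runnable message) {
        messageQueue.add(message);
    }

    public void quit() {
        loopThread.interrupt();
    }

    public static void main(String[] args) throws InterruptedException {
        MiniLooper looper = new MiniLooper();
        looper.post(() -> System.out.println("button pressed: redraw widget"));
        Thread.sleep(200); // give the loop time to drain the queue
        looper.quit();
    }
}
```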
When you think about it, your single-threaded UI thread performs all of its work in response to user interactions. Because everything happens on the UI thread, time-consuming operations such as database queries and network calls will block the UI. The app will perform poorly, and the user will feel that the app is unresponsive. If these tasks take time and the UI thread is blocked for 4 to 5 seconds, Android will throw an “Application Not Responding” (ANR) error. Referring to such an Android app as being not user-friendly would be an understatement, not to mention the poor ratings and uninstalls.
Using the main thread for long tasks would hold things up. Your app will always remain responsive to user events if your UI thread is non-blocking. That is why, if your application requires making network calls, the calls need to be performed on the worker threads that run in the background, not on the main thread. You could use a Java HTTP client library to send and receive data over the network, but the network call itself should be performed by a worker thread. But wait, there’s another issue with Android: thread safety.
The Android UI toolkit is not thread-safe. If the worker thread (which performs the task of making network calls) updates the Android UI toolkit, it could result in undefined and unexpected behavior. This can be difficult and time-consuming to track down. The single-thread model ensures that the UI is not modified by different threads at the same time. So, if we have to update the ImageView with an image from the network, the worker thread will perform the network operation in a separate thread, while the ImageView will be updated by the UI thread. This makes sure that the operations are thread-safe, with the UI thread providing the necessary synchronization. It also helps that the UI thread is always non-blocking, because the actual task is performed in the background by the worker thread.
In summary, follow two simple rules in Android development:
The UI thread should not be blocked.
The UI toolkit should not be directly updated from a non-UI worker thread.
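The two rules reduce to one handoff pattern, sketched here in plain Java under stated assumptions: a single-threaded executor stands in for the UI thread, a thread pool stands in for the worker threads, and downloadImage is a placeholder for a real network call.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TwoRules {
    // A single-threaded executor plays the role of the UI thread.
    static final ExecutorService uiThread = Executors.newSingleThreadExecutor();
    static final ExecutorService workers = Executors.newFixedThreadPool(4);

    // Stand-in for downloading an image destined for an ImageView.
    static String downloadImage(String url) {
        return "bitmap from " + url;
    }

    public static void main(String[] args) throws InterruptedException {
        workers.submit(() -> {
            // Rule 1: the slow network call runs off the UI thread.
            String bitmap = downloadImage("https://example.com/thumb.png");
            // Rule 2: only the "UI" thread touches the toolkit; hand it the result.
            uiThread.submit(() -> System.out.println("imageView.setImage(" + bitmap + ")"));
        });
        workers.shutdown();
        workers.awaitTermination(5, TimeUnit.SECONDS);
        uiThread.shutdown();
    }
}
```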
When you talk about making requests from an “activity,” you will come across Android “services.” A service is an app component that can perform long operations in the background without the app being active or even when the user has switched to another app. For example, playing music or downloading content in the background can be done well with services. If you choose to work with a service, it will still run in your application’s main thread by default, so you’ll need to create a new thread within the service to handle blocking operations. If you need to perform work outside of your main thread while the user is interacting with your app, you are better off using a networking library such as AsyncTask or Volley.
Performing tasks in worker threads is great, but as your app starts to perform complex network operations, worker threads can get difficult to maintain.
It’s quite clear now that we should use a robust HTTP client library and ensure that the network task is achieved in the background using worker threads — essentially, with non-UI threads.
Android does have a resource to help handle network calls asynchronously. AsyncTask is a module that allows us to perform asynchronous work and deliver the results to the user interface.
AsyncTask performs all of the blocking operations in a worker thread, such as network calls, and publishes the results once it’s done. The UI thread gets these results and updates the user interface accordingly.
Here is how I implemented an asynchronous worker thread using AsyncTask:
Subclass AsyncTask to implement the onPreExecute() method, which will create a toast message suggesting that the network call is about to happen.
Implement the doInBackground(Params...) method. As the name suggests, doInBackground is the worker thread that makes network calls and keeps the main thread free.
Because the worker thread cannot update the UI directly, I implemented the onPostExecute(Result) method, which delivers the results from the network call and runs on the UI thread so that the user interface can be safely modified.
The progress of the background task can be published from the worker thread with the publishProgress() method and can be updated on the UI thread using the onProgressUpdate(Progress...) method. These methods are not implemented in the example code but are fairly straightforward to work with.
Finally, call the asynchronous task using the execute() method from the UI thread.
Note: execute() and onPostExecute() both run on the UI thread, whereas doInBackground() runs on a non-UI worker thread.
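The lifecycle described in the steps above can be modeled in plain Java. To be clear, this is not Android's AsyncTask but a minimal toy model of its contract (with the Progress type omitted): onPreExecute() runs on the calling thread, doInBackground() on a worker, and onPostExecute() receives the result when the work is done.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A toy model of AsyncTask<Params, Progress, Result>, Progress omitted.
abstract class MiniAsyncTask<P, R> {
    // Daemon threads so the JVM can exit even if the pool is still alive.
    private static final ExecutorService WORKER = Executors.newCachedThreadPool(r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    protected void onPreExecute() {}               // runs on the calling thread
    protected abstract R doInBackground(P param);  // runs on a worker thread
    protected void onPostExecute(R result) {}      // receives the result

    public final void execute(P param) {
        onPreExecute();
        WORKER.submit(() -> {
            R result = doInBackground(param);
            // The real AsyncTask posts this back to the UI thread via a Handler.
            onPostExecute(result);
        });
    }
}

public class MiniAsyncTaskDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        new MiniAsyncTask<String, Integer>() {
            @Override protected void onPreExecute() {
                System.out.println("about to make the network call...");
            }
            @Override protected Integer doInBackground(String url) {
                return 200; // pretend we made an HTTP request and got a status code
            }
            @Override protected void onPostExecute(Integer status) {
                System.out.println("status: " + status);
                done.countDown();
            }
        }.execute("https://example.com/api");
        done.await(); // keep the demo alive until the result is delivered
    }
}
```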
In the context of my app, I make a POST request on a REST API to start a calling session for a campaign. I also pass the access token in the request header and the campaign ID in the body. If you look at the code, java.net.HttpURLConnection is used to make a network call, but the actual work is done in the doInBackground() method of AsyncTask. In the example above, we also make use of the application context to pop up toast messages, but AsyncTasks can be defined as inner classes in activities if they are small enough, avoiding the need for the Context property.
A generic type is a generic class or interface that is parameterized over types. Just like how we define formal parameters used in method declarations, type parameters help you to reuse the same code with different input types. While inputs to methods are values, inputs to type parameters are types. There are three types used by an asynchronous task:
Params
The type of the parameters sent to the task upon execution.
Progress
The type of the progress units published during the background computation.
Result
The type of the result of the background computation.
This is how I have extended AsyncTask with types:
public class MyAsync extends AsyncTask<String, Void, Integer>
So, the Params sent to the task are of type String; Progress is set to Void; and the Result is of type Integer. In our implementation, we’re passing the URL (type String) to the doInBackground(String... params) method; while we don’t set a Progress type, we pass the status code of the response (type Integer) to onPostExecute(Integer integer). Not all types are used by an asynchronous task; and to mark a type as unused, we use type Void.
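To see type parameters at work outside of AsyncTask, here is a plain generic class reused with different input types. Box is a made-up example class, parameterized over one type the way AsyncTask is parameterized over three.

```java
// A generic container: T is filled in by the caller, so the same code
// works for a String URL and an Integer status code alike.
class Box<T> {
    private final T value;
    Box(T value) { this.value = value; }
    T get() { return value; }
}

public class GenericsDemo {
    public static void main(String[] args) {
        Box<String> url = new Box<>("https://example.com"); // T = String
        Box<Integer> status = new Box<>(200);               // T = Integer
        System.out.println(url.get() + " -> " + status.get());
    }
}
```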
Working with AsyncTask is pretty nice until you start doing more complex operations with it. A few instances where AsyncTask would not be useful are highlighted below:
If a background task is running using AsyncTask and the user rotates the screen, the entire activity will be destroyed and recreated. As a result, the reference to the activity will be lost, and the result of AsyncTask will update UI elements that don’t exist anymore. By the way, if we have to handle this in AsyncTask, we have to check in the onPostExecute() method whether the activity has been destroyed.
Cancelling requests with AsyncTask ensures that the onPostExecute() method is not called. Unfortunately, it doesn’t actually cancel the request every time; the background work may still run to completion. This behavior is not implicit, and it’s the job of the developer to explicitly cancel asynchronous tasks.
AsyncTask does not provide the facility of caching results, which can be a setback. Often, an image such as a thumbnail is displayed several times. To reduce bandwidth when displaying this image, we can use the help of caching mechanisms.
There are limits on how many parallel tasks you can run with AsyncTask. It can handle 128 concurrent tasks, with a queue length of 10. So, be aware when you’re crossing those limits. These limits are derived from ThreadPoolExecutor.
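A pool with the same shape can be built directly with the standard ThreadPoolExecutor, which makes the limits concrete. The core size of 5 is illustrative here (it has varied between Android versions); the point is what happens when the maximum pool and the queue are both full.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolLimits {
    public static void main(String[] args) {
        // Up to 128 concurrent threads, with a queue of 10 waiting tasks.
        // Once 128 threads are busy AND 10 tasks are queued, further
        // submissions are rejected with a RejectedExecutionException.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                5, 128, 1, TimeUnit.SECONDS, new ArrayBlockingQueue<>(10));
        System.out.println("max concurrent tasks: " + pool.getMaximumPoolSize());
        System.out.println("queue capacity: " + pool.getQueue().remainingCapacity());
        pool.shutdown();
    }
}
```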
Even though AsyncTask does a good job of performing asynchronous operations, its utility can be limiting due to the reasons mentioned above. Luckily, we have Volley at our disposal, an Android module for making asynchronous network calls.
Volley is a networking library developed by Google and introduced at Google I/O 2013. In Volley, all network calls are asynchronous by default, so you don’t have to worry about performing tasks in the background anymore. Volley considerably simplifies networking with its cool set of features.
Before looking at the code, let’s get ourselves elbow-deep in Volley and understand its architecture. Below is a high-level architectural diagram of Volley’s flow. It works in a very simple way:
Volley runs one cache processing thread and a pool of network dispatch threads.
Network call requests are first triaged by the cache thread. If the response can be served from the cache, then the cached response is parsed by CacheDispatcher and delivered back to the main thread, the UI thread.
If the result is not available in the cache, then a network request needs to be made to get the required data, for which the request is placed in the network queue.
The first available network thread (NetworkDispatcher) takes the request from the queue. It then performs the HTTP request, parses the response on the worker thread and writes the response to cache. It then delivers the parsed response back to the main thread.
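The triage in the steps above can be sketched in plain Java. Everything here is a stand-in: the map plays the role of the disk cache, and performNetworkRequest the role of a network dispatch thread. It only illustrates the cache-first flow, not Volley's actual threading.

```java
import java.util.HashMap;
import java.util.Map;

public class MiniVolleyFlow {
    // Stand-in for DiskBasedCache: URL -> cached response body.
    private final Map<String, String> cache = new HashMap<>();

    // Stand-in for the HTTP request a NetworkDispatcher would perform.
    private String performNetworkRequest(String url) {
        return "fresh response for " + url;
    }

    // Triage: serve from cache if possible; otherwise hit the network,
    // write the response to cache and deliver it.
    public String dispatch(String url) {
        String cached = cache.get(url);
        if (cached != null) {
            return cached;                            // cache-dispatcher path
        }
        String response = performNetworkRequest(url); // network-dispatcher path
        cache.put(url, response);                     // cache it for next time
        return response;
    }

    public static void main(String[] args) {
        MiniVolleyFlow volley = new MiniVolleyFlow();
        System.out.println(volley.dispatch("/api/user")); // goes to the network
        System.out.println(volley.dispatch("/api/user")); // served from cache
    }
}
```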
If you carefully analyze Volley’s architecture, you’ll see that it solves issues that we face with AsyncTask:
With Volley, we don’t have to worry about getting work done in the background (doInBackground() from AsyncTask) because the library makes asynchronous network calls and manages it for you in NetworkDispatcher.
Updating the results back to the UI thread from the worker thread is handled by Volley, even without your noticing it.
Volley frees you from having to write boilerplate code and allows you to concentrate on the tasks specific to your app.
Volley has a nice set of features, too. To name a few:
Volley caches API responses, so if you make the same request twice, the results are fetched from the cache. This is really useful in cases such as loading the same image multiple times, where we can reduce bandwidth usage by getting the image from the cache. We’ll address how to implement the cache in Volley soon.
Volley helps you to prioritize requests. If there are multiple network calls, you could prioritize a given network call based on its impact and importance.
You can easily cancel or retry requests. You have the flexibility to cancel a single request or to cancel blocks of requests.
Volley has strong ordering, which makes it easy to populate the UI in sequence while still fetching data asynchronously.
Volley can handle multiple request types, such as string and JSON. In fact, Volley is perfect for API calls such as JSON objects and lists, and it makes working with RESTful applications very easy.
Image loading is one of the more useful features of Volley. You can write a custom ImageLoader and complement it with the LRU bitmap cache to make network calls for images. We’ll talk about that a bit later.
Volley is maintained by Google, which takes care of fixing bugs. That definitely doesn’t hurt, does it!
Let’s see how to make asynchronous calls using Volley. Start by including Volley in your Android project.
Another way to do this is by cloning the Volley repository. Build Volley with Ant, copy the built volley.jar file into the libs folder, and then create an entry in build.gradle to use the jar file. Here’s how:
git clone https://android.googlesource.com/platform/frameworks/volley
cd volley
android update project -p .
ant jar
You can find the generated volley.jar in Volley’s bin folder. Copy it to your libs folder in Android Studio, and add the entry below to app/build.gradle:
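The build.gradle entry would look something like this (assuming the jar was copied to app/libs):

```groovy
dependencies {
    // Use the locally built Volley jar from the libs folder
    compile files('libs/volley.jar')
}
```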
And you’re done! You have added Volley to your project without any hassle. To use Volley, you must add the android.permission.INTERNET permission to your app’s manifest. Without this, your app won’t be able to connect to the network.
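The permission is declared in AndroidManifest.xml, inside the manifest element:

```xml
<uses-permission android:name="android.permission.INTERNET" />
```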
“Hello World” With Volley: Handling Standard String Requests
The code example below shows you how to make a request to https://api.ipify.org/?format=json, get the response and update the text view of your app. We use Volley by creating a RequestQueue and passing it Request objects. The RequestQueue manages the worker threads and makes the network calls in the background. It also takes care of writing to the cache and parsing the response. Volley takes the parsed response and delivers it to the main thread. Appropriate code constructs are highlighted with comments in the code snippet below. I haven’t implemented caching yet; I’ll talk about that in the next example.
To set up the cache, we have to implement a disk-based cache and add the cache object to the RequestQueue. I set up an HttpURLConnection to make the network requests. Volley’s toolbox provides a standard cache implementation via the DiskBasedCache class, which caches the data directly on the hard disk. So, when the button is clicked for the first time, a network call is made, but on the next button click, the data is fetched from the cache. Nice!
If you have to fire network requests in multiple Android activities, you should avoid using Volley.newRequestQueue.add(), as we did in the first example. You can develop a singleton class for the RequestQueue and use it across your project. Creating the RequestQueue as a singleton is recommended, so that it lasts for the lifetime of your app. It also ensures that the same RequestQueue is used even when the activity is recreated, as in the case of a screen rotation.
```java
package com.example.chetan.androidnetworking;

import android.content.Context;
import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.ImageLoader;
import com.android.volley.toolbox.Volley;

public class VolleyController {
    private static VolleyController mInstance;
    private RequestQueue mRequestQueue;
    private static Context mCtx;

    private VolleyController(Context context) {
        mCtx = context;
        mRequestQueue = getRequestQueue();
    }

    public static synchronized VolleyController getInstance(Context context) {
        // If instance is not available, create it. If available, reuse and return the object.
        if (mInstance == null) {
            mInstance = new VolleyController(context);
        }
        return mInstance;
    }

    public RequestQueue getRequestQueue() {
        if (mRequestQueue == null) {
            // getApplicationContext() is key. It should not be activity context,
            // or else RequestQueue won't last for the lifetime of your app.
            mRequestQueue = Volley.newRequestQueue(mCtx.getApplicationContext());
        }
        return mRequestQueue;
    }

    public void addToRequestQueue(Request req) {
        getRequestQueue().add(req);
    }
}
```
You can now use the VolleyController in your MainActivity like this: VolleyController.getInstance(getApplicationContext()).addToRequestQueue(stringRequest);. Or you can create a queue in this way: RequestQueue queue = VolleyController.getInstance(this.getApplicationContext()).getRequestQueue();. Note the use of ApplicationContext in these examples.
In Volley, you can set up a custom JSON request by extending the Request class. This will help you to parse and deliver network responses. You can also do more with this custom class, such as set request priorities and set up cookies. Below is the code for creating a custom JSONObject request in Volley. You can handle ImageRequest types in the same manner.
With asynchronous tasks, you can’t know when the response will arrive from your API. You need to execute a Volley request and wait for the response in order to parse and return it. You can do this with the help of a callback. Callbacks can be easily implemented with Java interfaces. The code below shows how to build your callback with the help of the VolleyCallback interface.
Volley offers the following classes for requesting images:
ImageRequest
This is used to get an image at the given URL. It also helps with resizing images to the size you need, and all of this happens in the worker thread.
ImageLoader
This class handles the loading and caching of images from remote URLs.
NetworkImageView
This replaces ImageView when the image is being fetched from a URL via the network call. It also cancels pending requests if the ImageView detaches and is no longer available.
For caching images, you should use the in-memory LruBitmapCache class, which extends LruCache. LRU stands for “least recently used”; this type of caching makes sure that the least recently used objects are evicted first when the cache gets full. So, when loading a bitmap into an ImageView, the LruCache is checked first. If an entry is found, it is used immediately to update the ImageView; otherwise, a background thread is spawned to process the image. Just what we want!
Volley does retry network calls if you have set a retry policy for your requests. We can change the retry values for each request using setRetryPolicy(). This is implemented in Volley’s DefaultRetryPolicy class. You can set the retry policy for a request in this manner:
DEFAULT_TIMEOUT_MS is the default socket timeout in milliseconds. DEFAULT_MAX_RETRIES is the maximum number of retries you want to perform. And DEFAULT_BACKOFF_MULT is the default backoff multiplier, which determines by how much the socket timeout grows on every retry attempt.
Volley can catch network errors very easily, and you don’t have to bother much with cases in which there is a loss of network connectivity. In my app, I’ve chosen to handle network errors with the error message “No Internet access.”
The code below shows how to handle NoConnection errors.
To make API calls to third-party REST APIs, you need to pass API access tokens or have support for different authorization types. Volley lets you do that easily. Add the headers to the HTTP GET call using the headers.put(key,value) method call:
Setting priorities for your network calls is required in order to differentiate between critical operations, such as fetching the status of a resource and pulling its metadata. You don’t want to compromise a critical operation, which is why you should implement priorities. Below is an example that demonstrates how you can use Volley to set priorities. Here, we are using the CustomJSONObjectRequest class, which we defined earlier, to implement the setPriority() and getPriority() methods, and then in the MainActivity class, we are setting the appropriate priority for our request. As a rule of thumb, you can use these priorities for the relevant operations:
```java
public void setPriority(Priority priority) {
    mPriority = priority;
}

@Override
public Priority getPriority() {
    // Priority is set to NORMAL by default
    return mPriority != null ? mPriority : Priority.NORMAL;
}
```
Volley is a useful library and can save the day for a developer. It’s an integral part of my toolkit, and it would be a huge win for a development team in any project. Let’s review Volley’s benefits:
It is responsive to user interactions, because all network processes happen asynchronously. The UI thread always remains free to handle any user interaction.
It handles networking tasks asynchronously. Whenever a network request is made, a background thread is created to process network calls.
Volley reduces the lag between requests because it can make multiple requests without the overhead of thread management.
Google has made considerable efforts to improve the performance of the Volley library by improving memory usage patterns and by passing callbacks to the main thread in a batch. This reduces context switching and will make your app faster.
The UI thread, or main thread, in Android dispatches events to the UI toolkit and is responsible for dequeueing requests from the message queue to notify widgets to take action. That’s why it’s important that the UI thread always be non-blocking.
Android has its own HTTP client libraries, such as HttpURLConnection, which help you to perform synchronous network calls. To keep the main thread non-blocking, the network calls need to be performed in worker threads that run in the background.
Android’s AsyncTask library helps you run tasks in the background and ensures that your main thread is non-blocking. It also ensures that a background task doesn’t directly update the UI. Instead, it returns the result to the UI thread.
AsyncTask has its limitations, such as not being able to cache responses and not being able to handle parallel requests. It also doesn’t gracefully handle scenarios such as screen rotation when background tasks are running.
Making asynchronous network calls using Volley is a cleaner solution to developing Android apps. Volley has an awesome set of features, such as caching, request cancellation and prioritization.
Volley can handle multiple request types, such as JSON, images and text, and it performs better than AsyncTask.
I hope you’ve enjoyed the article. All of the code examples are available for downloading. The complete app is hosted on GitHub.
What exactly are the benefits of a content hub strategy? Well, first of all, when done correctly, a content hub will capture a significant volume of traffic. And that’s what most online businesses want, right?
We have recently introduced several clients to the concept of a content hub and would like to share our experience in this article. Content hubs are high-quality portals filled with targeted, valuable and often evergreen articles that users can return to time and again.
Sometimes these are hosted on a separate domain, but the focus is usually on providing supporting, information-led content, rather than sales-driven pages. L’Oreal’s Makeup.com, Ricoh’s Workintelligent.ly, and Nasty Gal’s Nasty Galaxy are great examples of this in action.
A hub also acts as a tool to reinforce your brand. This is an opportunity to show your expertise in your field, providing knowledge and insight to your visitors. This traffic will also generate a substantial amount of very useful data. You’ll quickly learn the most popular subjects and gain an understanding of your key audience.
Effective content is a considerable asset. Once you have a solid reputation, there is great potential for cross-promotion with other brands and individuals. To help you get started with your own content hub, or indeed any large-scale content project, here is our comprehensive guide to getting it right.
Here are the topics we’ll be going through in detail:
From a commercial point of view, simply creating thousands of new web pages will not necessarily help you sell more products or services or deliver more value to your users. At the highest level, content hubs are a big investment and not a path to be taken lightly, especially given the amount of resources required, including design, development, SEO and content, as well as buy-in from senior stakeholders. Typically, these stakeholders will be marketing, SEO, and design and development managers or directors. Each will have their own personal objectives, which could include a focus on a particular product or area of the business, or concerns about resource or time allocation. Bear this in mind when building your case.
Despite all of this, it is not just large-scale businesses that would benefit from this digital marketing strategy. While a significantly sized content hub might be out of reach for some, the key principles here, such as understanding the possible return on investment (ROI) such content provides, as well as how to effectively research and deliver useful information to your target audience, remain applicable for businesses that don’t have a large budget to invest.
Assessing the cost versus potential return is the first hurdle to overcome. This might include assessing a desire for this scale and depth of content among your user base, as well as benchmarking against keyword difficulty and your competitors. Overall — and we cannot stress this enough — providing something unique and of value to your target audience is important. This will ensure that both aims of the content hub are met: reinforcing brand trust and optimizing effectively for search engines. Every new page should contribute something towards establishing your brand as an authority in its sector, as well as one that knows what makes its customers tick.
Let’s take Workintelligent.ly as another example here. Below are some articles taken from its current website. Each piece is written clearly and well targeted to its audience of professionals and business leaders, offering practical, actionable advice.
With all of this in mind, you’ll need to establish an outline budget early on. Later on in this article, we will discuss how to create a detailed quote, including for project management, editing and administration. At the earliest stage, though, the key figures you will need to establish are cost per page and a rough total number of articles, so that feasibility can be discussed. Again, these are both areas we will examine in more detail.
We work with many clients that rely heavily on organic search. These businesses would benefit a lot from content hubs, due to the large number of pages that are created for their websites, which bring in significant traffic from targeted long- and short-tail keywords. While there are wider SEO benefits, too, such as potentially reducing the problem of thin content, increasing dwell time and attracting inbound links and social shares, this might be the area that attracts the most interest from the various stakeholders in the process.
To help with the business case, we have developed a simple formula to calculate the potential value of a large-scale content project:
(number of pages) × (average number of visits per page per month) × (average conversion from organic traffic) × (average order value) = potential monthly return on content.
For example, if a website has 1,000 pages and traffic of around 75,000 views per month, this gives us roughly 75 views per page each month. With an average conversion rate of 1.5% and an average order value of £100, each page gives us a potential monthly return of £112.50. Over the course of a year, this works out to £1,350. If your production costs are in the region of £100 per page, this will provide a return very quickly.
Obviously, this is a very broad calculation. Only a fraction of page types, such as products and services, might drive revenue. In this case, you can apply the calculation to various categories and build it into your equation.
At the same time, the model can be used to provide some useful estimates and forecasts. By varying your expected conversion rate, you can quickly carry out cost-benefit analysis for design work on key areas of your website. The prospect of upping potential traffic volumes could also be used to provide a business case for SEO or other marketing work streams.
For many businesses, attribution modeling might also be worth considering at this point. Very often, a sale is not the result of a single search — instead, a user’s path to conversion will consist of multiple visits across pages and channels, including your social media accounts. It’s worth understanding these interactions and how they relate to your content, especially when prioritizing the kind of content to produce. Often we’ve seen that high-quality blog or information pages are visited in the middle of a sales journey. This insight is missing from the usual high-converting page reports in Google Analytics, for example, yet can be vital when planning one’s approach. This is also discussed in more detail later in this document.
Planning The Project
Once the project is agreed upon, it might be tempting to dive in and start writing. However, don’t create any content until you’ve taken stock of the current situation. We can’t emphasize enough that this should happen at the very outset of the project, because anything missed could cost you dearly down the line. As with major offline content projects, such as magazine and book production, remedying mistakes or adding complexity when additional pages need to be created or modified can be both time-consuming and expensive. If, for example, you quote for the delivery of 5,000 pages and then discover that another 1,000 have to be created, that difference will probably come off your bottom line. If new templates are required, extra costs and time will be required, too.
For the first exercise, look at the pages that already exist on your website. Set up a spreadsheet to record the pages on the website, the types of pages, the subjects, the keywords, the word counts and even the images on those pages and their associated properties.
By conducting this exercise, you should be in a position to identify any gaps in your content, and any areas that have been spread too thin and that could be consolidated. While you might have covered a particular subject extensively, could the website benefit from a section of related information? For instance, we are currently working on a content hub project for our client Holiday Hypermarket, and while there are pages covering worldwide holiday destinations, we have identified a need for supplementary pages covering nightlife, restaurants and things to do in those areas, as well as in-depth information about each hotel. By doing this, we are in the process of creating a comprehensive guide to tourist hotspots that visitors can refer to both before and after booking their next vacation.
Taking Stock
If you have a large website, we recommend running an audit using crawling software, such as Screaming Frog, to make sure you’ve caught every page, including non-HTML content and non-200 response codes. Xenu’s Link Sleuth is another good free tool, and although it hasn’t been updated for years, it could yield valuable insight. DeepCrawl is another thorough SEO package and well worth a look.
These tools work by traveling from link to link, so be aware that if any of your pages have been orphaned by a lack of internal links, they won’t be found. This problem can be tricky to overcome, but looking at all of your Google Analytics landing pages over a 12-month period, for example, might shine a little light. If there are no analytics or similar tracking data, then server logs can be a useful resource.
This research could also reveal useful information, including paths users take through the website, which pages are most frequently landed on and where traffic is coming from, giving you a full picture of how your website is being used. Conversion-rate optimization (CRO) testing software such as Visual Website Optimizer can be useful, too, especially with its new visitor analysis function.
CMS Reviews
We also recommend assessing your content management system (CMS) at this point. Any limitations it has will define your path through the project, so have an open discussion with the relevant team as soon as possible. Ask as many questions as you can. Will you be able to bulk upload? What are the requirements on formatting? Does the layout have any flexibility? Are there word count limits? Identifying potential roadblocks early on is always a sensible move.
The longer it takes to upload a piece of content, the more labour-intensive and costly the project becomes. Including images, a 500-word page should, as a rule of thumb, take no longer than three to four minutes to add to the CMS. If you are likely to be going past this point, then a cost-benefit analysis might be worth carrying out, weighing the investment of development time against the benefit of faster uploads.
For most projects such as this, organic traffic will be a priority. For this reason, keyword research needs to begin early on in the process. This will enable you to home in on opportunities with your potential subject matter, and also give you an idea of the sorts of traffic figures and return on investment you can expect. This is obviously a massive field, so take the time to get it right, and consider outsourcing the work if you don’t have the expertise in-house. If you’ve never tried it before, Moz has a pretty definitive guide.
If you’re keen to raise your search engine rankings organically with a content hub, then benchmark your website before starting. Tools such as Serpfox and Ahrefs will tell you where your key landing pages rank before you launch your content hub, so that you can monitor improvement.
Competitor Analysis
By this point, you should have a detailed view of your current content and of any glaring shortfalls. Of course, no website stands in isolation, so the next phase is competitor research. Here, you’re looking for ways to stand out against websites in your target market, whether through high-quality content, better design or more targeted copy.
The majority of the steps discussed above — save viewing data from analytics software and server logs, or heatmapping and CRO testing — can be used for competitor research, too. The scale and value of your project will define the amount of detail to go into with competitors, but as always, err on the side of caution.
Look at what your closest competitors are doing well, and identify ways in which you can improve upon it. A good way to do this is by seeing what gets shared on social media; it might be that a particular subject resonates with their audience and that you could produce even more in-depth content that your joint audience might find valuable — or produce a whole host of content that answers every question users could possibly imagine.
Tools such as Riffle, FollowerWonk and Simply Measured can help you to identify the competition’s most popular social media updates. Next, look at the content itself. How many words are they writing for the most popular subjects? Is it significantly more or less than you are currently writing? Can you add to the content with even more valuable information?
Look at the keywords they are targeting, too. We often use Searchmetrics to see which terms, both paid and organic, are driving traffic to these websites, as well as keywords that we may have missed in our own hubs. This tool is unusual in that it shows overall search visibility, rather than just visibility for keywords you are tracking. It does this by monitoring a vast database of keywords — several billion in total — and then pulling from this data when requested. Because Google has stopped providing detailed keyword reporting in Analytics, this information is invaluable, and being able to see the same insight for competitors can be very useful, too.
Next, it’s time to think sideways. See what organizations in related industries are up to. To continue with our Holiday Hypermarket example, we chose to investigate the activities of tourist boards and travel magazines to see what works for them and whether we knew any subjects well enough to create a huge range of pages.
Customer Research
Wherever possible, carry out some market research on your customers. For instance, you might want to run a test with a tool such as What Users Do, so that you can find out what information customers are looking for and whether they’ve had any problems using your website. Think carefully about the types of questions to ask them. We typically ask whether they have frustrations using the website, whether any information is missing, and about things they’d like to see. On e-commerce websites, we ask how many other websites they typically use before making a purchase and what those websites are. If your budget restricts this, then sending your existing customers a survey or asking them questions on social media is always worthwhile. Incentivize these comments to make sure you get enough feedback to work from.
Again, this research often reveals information that you have never considered and uncovers competitors that have never crossed your mind. If this happens, then it’s a good time to loop back and examine each of the elements in more detail.
Finally, don’t forget to ask your internal teams what they think of the website and what’s missing. These teams will have a wealth of expertise, in both your own and related industries. Brainstorming sessions focused on topic areas and on your industry can elicit great ideas from people with years of experience in the field.
Setting Targets
By the end of this process, you should have a solid idea of the subjects to cover in your content hub. This is the point when you should identify what success looks like. Draw up a list of key performance indicators (KPIs) that you’d like to track, within a range of time scales. Perhaps you want to drive 50% more traffic to your website within six months of launch, or get 500 social media mentions after publication, or even double the number of sales that come via the content hub itself.
Also, identify how you will measure these outcomes. You might need to set up additional tools to keep track. You will also want to look at the current situation to set a benchmark, so that you can measure improvement over the months and years ahead.
Planning The Content
And now for possibly the most important part of the whole process: creating the content. Before you start writing, identify what the content should look like, from word count to target keywords and, if applicable, page design.
One of the most crucial aspects here is content modeling. At a high level, this is a framework of the various types of pages you intend to create at the outset. For developers, this is essential because it will define the various templates that are used in the CMS, their attributes and how they interact with each other. This area has been covered in some detail elsewhere on Smashing Magazine, so we won’t go into depth here, but we highly recommend Andy Fitzgerald’s content-first approach.
As a content producer, your input here is vital. You will need to know not only which section of your website the copy will live in, but also whether different types of pages are required and what their purpose will be. To continue with the travel example, suppose you have a destination content hub, which sits in the top navigation and where website visitors will find information about the given country, the regions within that country, as well as places to visit, the best beaches, the best restaurants, and a guide to all of the hotels in each of those regions.
In this case, the hierarchy would be:
home page;
country landing page;
region landing page;
subregion landing page;
things to do, restaurants, nightlife, hotel pages.
The research conducted on your own and competitors’ websites will enable you to pinpoint a word count for each page. If these similar pages are doing well, then this is the amount of content your audience would most like to read and share. Of course, if you can increase the length by adding useful information, then you should do so to add value for your readers.
Once you have a document outlining these points, you’re ready to look at the design of your content hub. Draw up a wireframe of each page, roughly illustrating how it should look. Bear in mind all of your findings, not only from your own website, but your competitors’ pages, too. How did they lay out the information? How do people use your website? Consider their frustrations, and be mindful to find solutions to these. Wireframe.cc is an easy-to-use tool to map out initial ideas, so that designers can refine the layout and start building the pages.
Precise Costs And Budgets
By this point, you should have a spreadsheet showing all of the content on the website, as well as the content you would like to edit or create.
Now that you know the size and scale of the project, it’s time to determine exactly how much the content hub will cost to complete. To avoid any unexpected expenses, consider not just the development time, but the number of people involved and how much time it will take each of them to complete their section, along with your own time and any on the client’s part.
In our experience, we have to factor in not only writers and developers, but also researchers and editors, plus the time to upload the content to the website — for each and every page. Once we have that information, we can set a deadline for the completion of the content hub, before adding a margin of contingency time in case of illness or unexpected issues thrown up by the building and production of the hub.
The next stage is to work out how many people we’ll need to complete the project on time and how much it would cost to employ each of those people. We opt for a mix of full-time, part-time and freelance employees, which gives us flexibility with the project. Add a 5% margin of error to your costs to cover for unexpected issues, such as images that are hard to find, illnesses and vacations.
With writers and editors, it’s a good idea to commission a few sample pages, to get a feel for how long they will take. Note their output per hour, but bear in mind that this could go down once the team becomes more familiar with the content and process, or up if extra levels of research or other complexities are introduced.
Some essential numbers to have at this point are the cost per page type, the cost per word and the editing and uploading costs. Together, these will give you an overall cost per page. At this point, we will refer back to our return-on-content model to see how this compares. If the expected return is much greater than the per-page outlay, then we’ll know that the project will likely succeed.
Building The Team
Of course, the project can’t get started without having a team in place. Having scoped how many people need to be involved, you can now easily identify whether additional human resources are needed to get the job done. Needless to say, any new hires must have a track record in their respective fields. We recommend assigning a trial piece of writing to be completed before a contract is signed, to ensure they are able to work from a brief. Having a bank of freelancers is also invaluable for picking up additional work and hitting deadlines.
Expert contributors are a less common but equally vital part of the team. In our experience, this is the area that can have the biggest impact on the overall quality of the project. High-level insight and knowledge about a subject isn’t always in great supply, and your writing team probably does not consist of experts in the field you are covering.
Hiring experts on an ad-hoc basis is a good solution. Typically, we ask for bullet points of information or notes, which can then be written up in-house. Training each and every contributor to understand the style guide would add too much time to the production process. By not paying them to write full articles, we keep our costs down.
We find these people by searching freelance databases and by putting out calls on PR wires and social media. For the travel content hubs, we might put out a call for an expert on the destination or even contact the tourist board for suggestions.
Getting experienced, professional writers who can follow a tight brief will ensure that the content you receive is in a format you can work with — and accurate. This might not be practical in many industries, though. For example, a qualified psychiatrist might not be interested in spending hours writing a thousand-word article for your medically focused website, but they might be willing to put together a brief document for your writers or to edit the completed work. Needless to say, if you do hire experts for this work, check their credentials thoroughly, and listen to their suggestions. After all, that’s what you’re paying them for.
The Production Process
Request a style guide from the client at the very start of each project, or create one if no document is in place; this will ensure that any content hits the mark in tone and branding. At a broad level, the client might want short sentences broken up by bullet points and headings. Other brands might want long-form content with little interruption. There might even be banned words or other small things that the client doesn’t allow.
Regardless, everyone on the team should familiarize themselves with the style guide from the outset; a workshop session is a great way to get everyone on board. To advance the process, present a preliminary batch of content to the client to ensure they are happy with the style, tone and structure of each page type. From there, update and amend the style guide to ensure that you have a keen awareness of how the content should read. It also helps to pin down a tighter brief for external writers, so that you can identify common misunderstandings and get all of the content right the first time.
A range of tools are available to help you manage production. It might seem basic, but a spreadsheet is a great tool for allocating work in some projects. Every single page of the hub can be listed, along with the writer, first editor, second editor and uploader. Using a color code, mark where each page currently is in the production process: for instance, yellow for underway, green for complete and red for late. In case it helps, you can refer to one based on one of our recent projects49.
For a larger project, tools such as Beegit50 and GatherContent51 can also be used to track and store each file, so that the project manager has an overview of the hubs and can monitor progress. We are big fans of Beegit, and the team’s receptiveness to feature requests is very impressive. Over the past few months, we’ve asked for numerous updates to improve our workflow, including reporting and tracking of each file update, all of which have been implemented.
A good task-management tool is also essential, especially if you’re not using content delivery software. A popular choice here is Redbooth54. We’ve used it for some time and found it to be easy and quick to use.
We also produce weekly reports to monitor the time writers spend on each page and to ensure we’re meeting deadlines.
It might be worth getting your editors to use a timesheet to keep an eye on how long the project is taking. Redbooth has a built-in time-tracker, which will help you keep track of each part of the process, but a number of tools are available.
While the editing is taking place, we’ll also typically have a member of the team undertake photo research to ensure that the content is ready to go live by the deadline. Alongside stock galleries, images can often be obtained from an industry’s official organizations, and sometimes user-generated content is suitable. For instance, with Holiday Hypermarket and the travel sector, many images were sourced from tourist boards and holidaymakers.
We use Google Reverse Image Search to ensure that any images we’re considering haven’t been used elsewhere online, by competitor brands, or in contexts we would rather avoid. Of course, if you have access to a library of unique client images, then all the better.
The photo researcher is also responsible for making sure images are in the right format (either JPG or PNG), with the right dimensions for the wireframe, and that file sizes are small, so that pages don’t take long to load. Once they are satisfied, the images are stored in the project-management tool, ready for uploading.
Managing Time
Of course, we have more than one client, so balancing the production of the content hubs with the wider needs of the business is vital.
Time management has been key to ensuring that content production doesn’t go over deadline or budget. As mentioned, we leave some slack in the budget in case we need to get external help to cover anything unexpected (for instance, an illness among the team) that could affect the project’s completion.
The team is also encouraged to give feedback on progress and any sticking points, so that solutions can be found before they escalate. A key example of this has been the staggering of content delivery. Hubs are often broken down into key stages, as are projects within them. For instance, if there’s a lot to be said about a particular category, this will be handled over the course of a month. Then, the editors will have time to check during the following month, before delivery to the client.
We’ve also seen freelance writers miss the brief or fall short of a word count; so, once the editors get around to reviewing this content, it has to be sent back. Having a month for editing allows us to have those discussions with freelancers and for them to get the content back to us before the month’s end. If this is not possible, then the editors need time to make modifications or rewrites before the deadline.
Measuring Success
Measuring the success of content is notoriously difficult. However, it is by no means impossible, and given the scale of a content hub project, it certainly cannot be dismissed.
Our typical approach to measuring success is as follows.
Define the key metrics that relate to success, and understand why they relate.
You might find it helpful to group these attributes into the categories of commercial, tactical and brand, as recommended by Smart Insights57.
At the top level, usually a hub will be developed to meet a specific commercial goal, such as boosting purchases, increasing market share or generating leads. These will be easy to define and report using sales data, your CRM system or Google Analytics.
The next level covers tactical elements such as page views, unique users and search engine rankings. All of this will offer useful insight, but these should all be seen as part of the picture that makes up your overall commercial goals. Don’t focus too much on these numbers and lose sight of the big picture.
The visibility of your brand is another key area and can be monitored by tracking brand mentions, sentiment and social interactions. Tools such as Mention.com58 and Brandwatch59 are useful here.
Be consistent in how you measure, across the business.
Choose your metrics and stick to them. If you chop and change the elements that you track, you’ll lose visibility of trends, even in areas you are not currently focusing on. For this reason, automate as much as possible; no one wants to manually update spreadsheets every week or month.
On a simpler level, a host of free Google Analytics dashboards can be easily plugged into your account. Simply click “Dashboards” → “New Dashboard” → “Create From Gallery,” and enter your criteria.
The Content Analysis Dashboard62, shown above, is a good place to start. As with all of these dashboards, it is fully customizable.
As part of the research process, target keywords should have been defined at the outset. Again, Searchmetrics is a good tool to use here because it will show overall visibility, rather than just the terms you are tracking, which can be very useful if you’re working with thousands of pages.
Create reports that meet the particular needs of the various stakeholders. They should offer actionable insight, too, rather than fluffy numbers.
Reports on word counts and completed pages might be of interest to your delivery team, but likely wouldn’t appeal to a CEO or sales director. Speak with each stakeholder and find out what information would be most useful to them. Reports should clearly identify any problems and outline solutions, too — don’t leave figures open to interpretation.
Of course, some areas are easier to cover in a report than others. Even if a page has a clearly defined goal — such as the purchase of a product — conversion data doesn’t always tell the full story. Rarely does a consumer buy on the first visit to a website, especially a large purchase, such as a vacation, and information-focused content such as blogs and resource sections can often drive the decision-making process.
As mentioned earlier, we often recommend attribution modelling as a way to gain insight into content performance. This is a detailed subject unto itself, and a good introduction can be found over on The Drum63. Yet the premise is simple: Google Analytics and other packages enable you to string together the various paths to your goals, whether they are across social media, pay-per-click (PPC) or organic search.
This is an ideal way to measure a content hub. You can see, for example, how many people visit your hub or download a brochure before making a purchase within a 30-day period.
Attribution is not an exact science, but it does enable you to make informed decisions about what works and what doesn’t. With marketing channels ever converging online, this insight is crucial.
As with any major project, a content hub should not be taken lightly. Being prepared is key, and that means digging deep into your website to understand both the scale of the task at hand and what will be required to achieve your goals. This rigor and depth of understanding are not reserved for massive hubs, though — any website that relies on content would benefit from all or part of the methods discussed.
“There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy,” said Shakespeare’s Hamlet, in the famous scene in which Hamlet teaches Horatio to be a web designer.
Horatio, as every schoolchild knows, is a designer from Berlin (or sometimes London or Silicon Valley) who has a top-of-the-line MacBook, the latest iPhone and an unlimited data plan over the fastest, most reliable network. But, as Hamlet points out to him, this is not the experience of most of the world’s web visitors.
The World Bank reports1 that 1.1 billion people across the world have access to high-speed Internet; 3.2 billion people have some kind of access to the web; 5.2 billion own a mobile phone; and 7 billion live within coverage of a mobile network.
Unsurprisingly, many of those currently unconnected are in India, China, Indonesia — these being the biggest countries in the world. But being unconnected (for whatever reason) isn’t only a reality in developing economies; 51 million people in the US are not connected.
When I speak at conferences in rich Western countries, I often ask people, “Where will your next customers come from?” You don’t know. In our truly worldwide web, you can’t know.
Take Ignighter, a dating website set up by three Jewish guys in the US, with a culturally targeted model: Instead of a boy and girl going out on a date, 10 guys and 10 girls would go out together on organized group dates.
Ignighter got 50,000 registrations4, but it wasn’t enough to reach critical mass, and the founders considered abandoning their business. Then, they noticed they were getting as many sign-ups a week from India as they did in a year in the USA.
Perhaps the group-dating model that they anticipated for Jewish families really resonated with conservative Muslim, Hindu and Sikh families in India, Singapore and Malaysia, so they rebranded as Stepout, relocated to Mumbai and became India’s biggest dating website.
I’d bet that if you had asked them when they set up Ignighter, “What’s your India strategy?,” they would have said something like, “We don’t have one. We don’t care. We are focusing on middle-class New York Jewish people.” It’s also worth noting that if Ignighter had been an iOS app, they would not have been able to pivot their business, because iOS use in subcontinental Asia is very low. The product was discovered by their new customers precisely because they were on the web, accessible to everybody, regardless of device, operating system or network conditions.
You can’t predict the unpredictable, but I’ll make a prediction anyway: Many of your next customers will come from the area circled below, if only because there are more human beings alive in this circle5 than in the world outside the circle.
Asia has 4 billion people right now (out of 7.2 billion globally). The United Nations predicts8 that, by 2050, the population of Asia will reach 5 billion. By 2050, the population of Africa is set to double to 2 billion, and by 2100 (which is a bit late for me and perhaps for you), the population of Africa alone will reach 5 billion.
By 2100, the population of the planet will stabilize at 11 billion, and 50% of the world will live in just these 10 countries highlighted below, only one of which is in what we now consider the developed West.
Over the same period, the population of the West will actually drop, due to declining birthrates. So, it makes sense to target people as your next customers in countries where the population is growing.
But it’s not only a question of head counts. Many of the developing economies are growing extraordinarily fast, with a rapidly expanding middle class that has increasing disposable income. Let’s examine some of those countries now, concentrating for the moment on Asia.
China
China has 1.4 billion people. Its economy saw 6.6% growth11 in gross domestic product (GDP). I don’t know the GDP growth of your country, but I’d imagine that your politicians would love to have 6.6% GDP growth.
So much money changes hands in China. For comparison, in 2014, on Black Friday and Cyber Monday combined, $2.9 billion changed hands in the US. In the same year in China, on Singles’ Day (November 11th), $9.2 billion changed hands. It is predicted that, by 2019, e-commerce will be worth $1 trillion a year12 in China.
Indonesia
Indonesia has 258 million people and GDP growth of 4.9%. 75% of mobile phone subscribers are on 2G or EDGE networks, and half of all smartphone users say they experience network problems daily13. This is very much tied to geography: Indonesia consists of thousands of islands. In 2015, GBD Indonesia wrote14:
Indonesia is still predominantly a 2G market, and leapfrogging from there to 4G is a huge task that will require substantial investment in towers and equipment.
Southeast Asia is the fastest-growing Internet market in the world, and Indonesia is the fastest-growing country. The Internet economy in Southeast Asia will reach $200 billion by 2025 — 6.5 times what it is now, as estimated by Google and Temasek17 in 2016.
Myanmar
Myanmar has 57 million people and 8.1% GDP growth, largely fuelled by the government’s democratic reforms (or, perhaps more accurately, reforms designed to appear democratic). One of the reasons for this growth is that five years ago a SIM card cost $200018 in Myanmar; last August it went down to $1.50, which, of course, is fuelling growth in mobile phones.
India
As I write this, I’m sitting in a coffee shop in Kochi, Kerala State, India. The country has a population of 1.3 billion people, with a GDP growth of 7.6%. Boston Consulting Group estimates19 that the number of Internet users will double from 190 million to 400 million by 2018 and that the web will contribute $200 billion to India’s GDP by 2020. Indian (and Indonesian) smartphone users are particularly sensitive about data consumption; 36% of Asia-Pacific20 smartphone users block advertisements, whereas two thirds do in India and Indonesia.
What Do These Nations Have In Common?
Apart from China (because of its now-abandoned policy of one child per family), the populations of these nations are young. Of course, young people are always on their smartphones, looking for Pokémon, taking selfies, Instagramming their coffee: A young population is an Internet-savvy population.
56% of people in emerging economies see themselves first and foremost as global citizens, rather than national citizens, the BBC reported21 last year. This is particularly pronounced in Nigeria, China, Peru and India.
And, of course, the people coming to the web are coming on smartphones. According to MIT22, of the 690 million Internet users in China, 620 million go online with a mobile device.
There is a more profound commonality as well. Below are the top-10 domains that Opera Mini users in the US visited in September 2016. (These figures are from Opera’s internal reporting tools; I was Deputy CTO of Opera until November 2016. Now I have no relationship with Opera.)
google.com
facebook.com
youtube.com
wikipedia.org
yahoo.com
twitter.com
wellhello.com
addthis.com
wordpress.com
apple.com
The top-10 handsets used to view those websites were:
Apple iPhone
Apple iPad
Samsung Galaxy S Duos 2
Samsung Galaxy S3
Samsung Galaxy Grand Prime
Samsung Galaxy Grand Neo Plus
Samsung Galaxy Grand Neo GT
Nokia Asha 201
Samsung Galaxy Note III
TracFone LG 306G
The top-10 domains visited in Indonesia during the same period were:
facebook.com
google.com
google.co.id
wordpress.com
youtube.com
blogspot.co.id
wikipedia.org
indosat.com
liputan6.com
xl.co.id
Note the commonalities — keeping in touch with friends and family; search; video; uncensored news and information (Wikipedia) — as well as the local variations.
The top-10 handsets in Indonesia are lower-end than those used in the US:
Nokia X201
Nokia Asha 210
Nokia C3-00
Generic WAP
Nokia Asha 205.1
Samsung Galaxy V SM-G313HZ
Nokia 215
Nokia X2-02
Samsung GTS5260 Star 2
Nokia 5130 XpressMusic
In Nigeria last month, almost the same kinds of websites were viewed — again, with local variations; Nigeria is football-crazy, hence goal.com.
google.com.ng
facebook.com
google.com
naij.com
youtube.com
bbc.com
opera.com
wikipedia.org
goal.com
waptrick.com
But the top-10 handsets in Nigeria are lower-end than in Indonesia.
Nokia Asha 200
Nokia Asha 210
Nokia X2-01
Nokia C3-00
TECNO P5
Nokia Asha 205
Nokia Asha 201
TECNO M3
Infinix Hot Note X551
Infinix Hot 2 X510
This suggests that across the world, regardless of disposable income, regardless of hardware or network speed, people want to consume the same kinds of goods and services. And if your websites are made for the whole world, not just the wealthy Western world, then the next 4 billion people might consume the stuff that your organization makes.
Better Standards, Better Browsers
In Browserland and Web Standards World (not theme parks — yet — but wouldn’t they be great ones?), we are trying to make better standards and better browsers to make using the web a better experience for the next 4 billion people.
Let’s take a quick tour of some of the stuff we’ve been working on. My goal isn’t to give you a tutorial on these technologies (plenty of those are available elsewhere), but to explain why we’ve developed these standards, and to show that the use cases they address are not just nice-to-haves for Horatio and his Western colleagues, but that they address important needs for the rest of the world, too.
Progressive Web Apps
We know that end users love to install apps to the home screen, each app with its own icon that they can tickle to life with a digit. But native apps work only on single platforms; they are generally only available from a walled-garden app store (with a 30% fee going to the gatekeeper); and they’re often heavy downloads. Facebook found23 that a typical 20 MB Android application package (APK) takes more than 30 minutes to download over a 2G connection, and that download often fails because of flaky networks.
Most installed apps are not used. According to Google24, the average smartphone user has 36 apps on their device. One in four are used daily, and one in four are never used. But we know that people in emerging markets use cheaper phones, and cheaper phones have less storage. Even now, 25% of all new Android shipments go out with only 512 MB of RAM and maybe only 8 GB of storage.
The World Bank asked people across 30 nations in Africa what they use their phone for.
Unsurprisingly, phone calls and text messages were the primary use case, followed by missed calls. Across Africa and Asia, businesses encourage potential customers to send them a “missed call” — that is, to dial their number and then hang up. The business then phones the customer back, so that the cost of the contact is borne by the business, not the customer.
Here’s an example I photographed today in Kochi, India:
The next most popular uses of mobile phones in Africa are games, music and transferring airtime. (In many countries, carrying cash can be a little risky, and many people don’t have access to banks, so people pay for goods and services by transferring airtime from their phone to the vendor’s phone.)
Then you have photos and videos, etc. Like everybody else, they are unlikely to delete video of their family or their favourite MP3s to make room for your e-commerce app. Birdly29, in a blog post explaining why you shouldn’t bother creating a mobile app, said, “We didn’t stand a chance as we were fighting with both our competitors and other apps for a few more MB of room inside people’s phone.”
Wouldn’t it be super and gorgeous if we could offer the user experience of native apps with the reach of the web? Well, dear reader, now we can!
Progressive web apps (PWAs) allow users to “install” your app to their home screen (on supporting devices and browsers). Your PWA can launch in full-screen, portrait or landscape mode, just like a native app. But, crucially, your app lives on the web — it’s fully part of the web, and like the web, it’s based on the principles of progressive enhancement.
Recently, my ex-Opera colleague Andreas Bovens and I interviewed a Nigerian and a Kenyan developer who made some of the earliest progressive web apps. Constance Okoghenun said30:
Nigerians are extremely data sensitive. People side-load apps and other content from third parties [or via] Xender. With PWAs […], without the download overhead of native apps […] developers in Nigeria can now give a great and up-to-date experience to their users.
Kenyan developer Eugene Mutai said:
[PWAs] may solve problems that make the daily usage of native mobile applications in Africa a challenge; for example, the size of apps and the requirement of new downloads every time they are updated, among many others.
We are seeing the best PWAs come out of India, Nigeria, Kenya and Indonesia. Let’s look briefly at why PWAs are particularly well suited to emerging economies.
With a PWA, all the user downloads is a manifest file, which is a small text file with JSON information. You link to the manifest file from the head element in your HTML document, and browsers that don’t understand it just ignore it and show a normal website. This is because HTML is fault-tolerant. The vital point here is that everybody gets something, and nobody gets a worse experience.
(Making a manifest file is easy, and a lot of the information required is probably already in your head elements in proprietary meta tags. So, Stuart Langridge and I wrote a manifest generator31: Give it a URL, and it will spider your website and write a manifest file for you to download or copy and paste.)
The manifest just gives the browser the information it needs to install the PWA (an icon for the home screen, the name of the app and the URL to go to when it launches) and is, therefore, very small. The actual app lives on your server. This means there is no lag with distributing updates. Usually, users receive notifications saying that new versions of their native apps have been released, but weeks might go by before they go to a coffee shop with free Wi-Fi to install the updates, or they might never download the updates at all — disastrous if one of the updates corrects a security flaw. But because PWAs are web apps, when you make an update, the next time the user starts the app on their device, they will automatically get the newest version.
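To make the scale of that "download" concrete, here is a minimal sketch of what such a manifest might contain. All of the names, colors and icon paths below are hypothetical, not taken from any real app; the point is simply how little data is involved:

```javascript
// Sketch of a web app manifest (hypothetical values throughout).
// The browser reads this small JSON file to install the PWA:
// an icon, a name and a start URL are essentially all it needs.
const manifest = {
  name: "Example Shop",              // full name shown at install time
  short_name: "Shop",                // label under the home-screen icon
  start_url: "/?source=pwa",         // URL opened when the app launches
  display: "standalone",             // full-screen, without browser chrome
  background_color: "#ffffff",
  theme_color: "#e85d47",
  icons: [
    { src: "/icons/icon-192.png", sizes: "192x192", type: "image/png" }
  ]
};

// The entire install payload is just this text:
const json = JSON.stringify(manifest, null, 2);
console.log(json.length); // well under a kilobyte
```

Compare that with a 20 MB APK: the app itself stays on your server, and the manifest is all the user commits to up front.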
Crucially, a PWA is just a normal website on Safari, Windows phones and Opera Mini. Nobody is locked out — that’s why they are called progressive web apps; they are progressively enhanced websites.
Flipkart Lite
Flipkart is a major e-commerce website in India (competing with Amazon). A couple of years ago, they decided to abandon their mobile website and redirect users to the app stores to download native apps. Only 4%32 of people who actually took the trouble to type the website’s URL (and, therefore, presumably were actively shopping) ever downloaded the app. With 96% of users failing to download the apps, Flipkart reversed its policy and replaced its website with a progressive web app, called Flipkart Lite. Since its launch, Flipkart reports 40% returning visitors week over week, 63% increased conversions from home-screen visits, and a tripling of the time that visitors browse the website.
Flipkart’s commitment to PWAs was expressed by Amar Nagaram, of Flipkart engineering, at its PWA summit in Bangalore, where I spoke:
We want Flipkart Lite available on every phone over every flaky network in India.
One great thing about a PWA is that, like any other secure website, it works offline, using the magic of service workers33. This further closes the gap between native and web apps; an offline experience for the web is (I hate to use the phrase) a “paradigm shift.” Until now, when your web browser is disconnected from the Internet, you get a boring browser-derived “Sorry” message. Now, with service workers sitting between a page and the network, you can give visitors a meaningful offline experience. For example, when the user goes to your website for the first time, you can download images of the 10 most popular products to the cache, and upon subsequent offline visits, you could say, “I’m sorry. You are offline, but you can browse our top products and press ‘Buy,’ and we will background sync later.” The offline experience you provide will obviously depend on what your app does, but service workers give you all the flexibility you need.
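The strategy described above ("serve from cache, fall back to the network, show something meaningful offline") can be sketched as plain logic. In a real service worker this would live in a fetch event handler using the Cache API; here a Map stands in for the cache, and all names are hypothetical:

```javascript
// Sketch of a cache-first offline strategy. A plain Map stands in for
// the service worker Cache API so the logic is easy to follow.
function respond(request, cache, online) {
  const cached = cache.get(request);
  if (cached) return cached;                        // serve instantly from cache
  if (online) return `fresh response for ${request}`; // otherwise hit the network
  return "offline page: browse our top products";   // meaningful offline fallback
}

// Pretend we pre-cached the top products on the user's first visit.
const cache = new Map([["/products/top", "top 10 products (pre-cached)"]]);

console.log(respond("/products/top", cache, false)); // cached copy, even offline
console.log(respond("/checkout", cache, false));     // graceful fallback, not a browser error
```

The real Cache API is asynchronous and promise-based, but the decision tree is the same: cache, then network, then a fallback you control.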
Additionally, service workers give you:
push notifications
Please don’t spam and pollute the ecosystem for everyone by making consumers sick of notifications!
background sync
This could allow the user to press a button to buy, and when they go back online, the buying process just automatically syncs up.
Currently, PWAs are supported on Chrome for Android, Microsoft Edge and Opera for Android. (Opera may have a small market share where you are, but it’s long been a significant player in the developing world.) Mozilla has signalled that it’s implementing PWAs on Firefox for Android. Safari for iOS has a non-standard mechanism for adding websites to the home screen but as of yet doesn’t support service workers.
To recap, the advantages of a PWA are these:
With no app store or gatekeeper, the browser can offer to add a web app to the home screen when the user visits your website.
It is searchable, indexable and linkable.
It works offline.
Visitors without supporting browsers get a normal website; no one is left behind.
If you want to see some real PWAs, check out the community-curated website (itself a PWA) PWA.Rocks34.
Responsive Images
Around 2011, at any conference I went to, everybody would tell me about the responsive images problem: How can we send “Retina-quality” images (much bigger in file size) to devices that can display them properly and send smaller images to non-Retina devices? At the time, we couldn’t; the venerable img element can point to only one source image, and that’s the only one that could be sent to all devices.
But solving this problem is vital if we want to save bandwidth for consumers whose devices aren’t Retina, and also to save battery life; sending unnecessarily large images and asking the browser to resize them with the conventional img {max-width:100%} trick requires a lot of CPU cycles, which causes delays and drains the battery. As Tim Kadlec wrote35:
On the test page with 6x images (not unusual at the moment on many responsive sites), the combination of resizes and decodes added an additional 278ms in Chrome and 95.17ms in IE (perhaps more …) to the time it took to display those 10 images.
In many parts of the world, battery life is a considerable problem. If you have a two-hour commute across Lagos or Nairobi to get to work, and a two-hour commute back, you wouldn’t be able to recharge your device, which you’d need to do if you wanted to make phone calls.
A third of Indian citizens, especially in the rural parts of the country, remain without power, as do 6% of the urban population. During peak hours, the shortage was 9.8%.
Battery life is so important that in India it has become a secondary industry unto itself. Alok Gupta, managing director and chief executive of The Mobile Store, India’s largest mobile phone retailer, said in October 201537:
Nearly 30 per cent of our annual smartphone unit sales have power banks bundled in. Two years ago, less than 1 per cent of our annual smartphone sales had power banks bundled in.
So (spurred on by a slight post-conference-season hangover), in December 2011, I wrote a blog post38 with a straw man suggestion for a new HTML picture element to solve the problem. My idea wasn’t fully thought out and wouldn’t have worked properly in its initial incarnation (damn hangovers), but cleverer people than me — Yoav Weiss (now at Akamai), Mat Marquis of Bocoup, Tab Atkins of Google, Marcos Cáceres of Mozilla, Simon Pieters of Opera — saw the utility in it and worked to make a proper specification. It was implemented, and now it is in every modern browser — even Safari.
This isn’t the place to talk about the nuts and bolts of HTML responsive images39, but if you use them, you’ll get significant savings on your images.
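As a rough illustration of where those savings come from: the markup offers the browser several candidate files, and the browser picks the smallest one adequate for the device. A sketch that generates such a srcset attribute (the file-naming scheme here is an assumption for the example):

```javascript
// Sketch: building a srcset attribute, so a non-Retina narrow screen can
// fetch the 320px-wide file instead of being forced to download 1280px.
function buildSrcset(base, widths) {
  return widths.map(w => `${base}-${w}.jpg ${w}w`).join(", ");
}

const srcset = buildSrcset("/img/hero", [320, 640, 1280]);
console.log(`<img src="/img/hero-640.jpg" srcset="${srcset}" alt="">`);
```

The `w` descriptors tell the browser each file's intrinsic width; combined with a sizes attribute, that is enough for it to avoid the oversized download and the CPU-heavy resize.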
I did a talk about responsive images in Bristol last June, and the next day a developer in the audience named Mike Babb used the techniques and reduced his web page by 70%40. This is important because the average web page (page, not full app or website) is 2.3 MB, of which 1.6 MB are images41. If you can save data, your website will be faster.
Mike saved 70%, and that 70% matters, because not everybody is like us and has a big data plan. In Germany, buying an entry-level mobile data plan of 500 MB per month takes one hour of work at minimum wage. In the US, it takes six hours, and in Brazil, it takes 34 hours of work42.
If your bloated images are eating up people’s data plan, then you are literally making them work more hours, and that is hugely discourteous. As well as being rude, it’s bad business: They simply won’t go back to your website. (If you’d like to know more about the cost of accessing your website, check Tim Kadlec’s utility What Does My Site Cost?45)
More Next Time!
In this article, we’ve explored where the next 4 billion connected people will come from, as well as some of the innovations that the standards community has made to better serve them. In the next part, we’ll look at some of the demand-side problems that prevent people from accessing the web easily and what can be done to overcome them.
The population projections in this article are originally from the United Nations, but I got them from the excellent, humane documentary named Don’t Panic: The Facts About Population46 by Hans Rosling, a hero of mine who died while I was writing this article. Thanks to Clara at Damcho Studio47 for helping to prepare this article.
It’s almost time to leave winter behind us here in the Northern Hemisphere. Most of the time, the weather can’t quite make up its mind, and so the days pass by with half of the sky sunny and the other half gray. Nature usually tends to have a strong impact on my mood, and so these days I feel like I’m in a gray zone between winter and spring.
I’m not sure about you, but with springtime lurking around the corner, my need for extra inspiration is even bigger. So, I hope that this month’s set will give you just that spark you need to cheer you up and boost your creativity.
Exciting times ahead for cycling fans. This nice design, however, was created to celebrate Chris Froome winning Le Tour for the third time. Love how the body is arched over.
I have seen this technique of combining real products with simple shape paper-cut elements a couple of times already and the result is really beautiful.
To celebrate the 50th anniversary of the original Star Trek television series on CBS, the Philadelphia-based design studio The Heads Of State created these stamps.
An illustration for Umami, a chain of restaurants based in Zagreb, Croatia. It was made as part of a poster series centered on curiosity and the exploration of different tastes and flavors.
Wonderful color choices. Love this special 2D approach where things are viewed from two different angles, top and front. It makes it hard to wait for summer to arrive!
Talk about making the most of a limited set of colors. The neon lightbulbs are so well done. An incredible piece of work. Be sure to watch the process video!
Quite busy, I admit, but I admire how everything flows into everything else. Well done, especially since it’s not that easy to pull off with only a limited number of colors.
Big neon sign for Teddy’s Nacho Royale, a burrito joint on the campus of a large social media company. Had no clue that they could create such attractive neon signs.
In this piece, the artist’s style is obviously inspired by 1940s comic book art, as well as by the Russian avant-garde movement and printed materials from the 1950s and ’60s.
As web developers, we all approach our work very differently. And even when you take a look at yourself, you’ll notice that the way you do your work varies all the time. I, for example, have not reported a single bug to a browser vendor in the past year, despite having stumbled over a couple. I was just too lazy to write them up, report them, create a test case, and deal with follow-up comments.
This week, however, when integrating the Internationalization API for dates and times, I noticed a couple of inconsistencies and specification violations in several browsers, and I reported them. It took me an hour, but now browser vendors can at least fix these bugs. Today, I filed two new issues, because I’ve become more aware again of things that work in one browser but not in others. I think it’s important to change the way we work from time to time. It’s as easy as caring more about the issues we face and reporting them.
Web annotations are now a web standard, with a defined data model, vocabulary, and protocol. Let’s hope that many browser vendors (Microsoft Edge, for example) and service platforms will adopt the standard soon. For us developers, it’s a huge opportunity, too: we can build standardized, interoperable annotations that communicate with each other.
Google Chrome now allows verified owners of a Chrome extension to override selected user settings. This means that a browser extension vendor who has verified their domain via Google Webmaster Tools can override user settings such as the homepage or the default search provider via their website. After reading this attack scenario, I fear this could take the DNS subdomain attack to a new level.
Sometimes it’s the small things that help you a lot. Did you know that you can save time and avoid confusion in CSS by using the :not(:last-of-type) selector instead of two separate rules, for example? Timothy B. Smith explains how this little trick works.
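The trick can be sketched like this (the .list-item class name is just an illustrative example, not from the linked article):

```css
/* Instead of setting a margin on every item and then
   resetting it on the last one: */
.list-item {
  margin-bottom: 1.5em;
}
.list-item:last-of-type {
  margin-bottom: 0;
}

/* …a single selector applies the margin to all but the last: */
.list-item:not(:last-of-type) {
  margin-bottom: 1.5em;
}
```

One rule instead of two means there is no override to forget when the spacing value changes later.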
The free Sustainable UX conference took place two weeks ago. To get some insights into how we can achieve sustainability in tech, you can now watch the conference talks for free.