In a world driven by the Internet, mobile apps need to share and receive information from their products’ back end (for example, from databases) as well as from third-party sources such as Facebook and Twitter. These interactions are often made through RESTful APIs. As the number of requests grows, the way they are made becomes critical to development, because how you fetch data can really affect the user experience of an app.
In this article, I’d like to take you through my experience of using networking libraries in Android, focusing on APIs. I’ll start with the basics of synchronous and asynchronous programming and cover the complexities of Android threads. We’ll then dive into the AsyncTask module, understand its architectural flows and look at code examples to learn the implementation. I’ll also cover the limitations of the AsyncTask library and introduce Android Volley as a better approach to making asynchronous network calls. We will then delve deeper into Volley’s architecture and cover its valuable features with code examples.
Still interested? Conquering Android networking will take you far in your journey toward becoming a skillful app developer.
Note: A few other Android libraries with networking capabilities, such as Retrofit and OkHttp, are not covered in this article. I recommend looking into them as well to get a glimpse of what they offer.
“Hold on, Mom, I’m coming,” said Jason, still on his couch, waiting for a text from his girlfriend, whom he had texted an hour earlier. “You could clean your room while you wait for a reply from your friend,” replied Jason’s mother with a hint of sarcasm. Isn’t her suggestion an obvious one? The same is true of synchronous and asynchronous HTTP requests. Let’s look at them.
Synchronous requests behave like Jason, staying idle until there is a response from the server. Synchronous requests block the interface, increase computation time and make a mobile app unresponsive. (Not always, though — sometimes it doesn’t make sense to proceed until a response arrives, as with banking transactions.) A smarter way to handle requests is the one suggested by Jason’s mother. In the asynchronous world, when the client makes a request to the server, the server dispatches the request to an event handler, registers for a callback and moves on to the next request. When the response is available, the client receives the results through the callback. This is a far better approach, because asynchronous requests let you execute tasks independently.
The diagram above shows how both programming approaches differ from each other in a client-server model. In Android, the UI thread, often known as the main thread, is based on the same philosophy as asynchronous programming.
Threads are sets of instructions that are managed by the operating system. Multiple threads run under a single process (a Linux process in the case of Android) and share resources such as memory. In Android, when the app runs, the system creates a thread of execution for the whole application, called the “main” thread (or UI thread). The main thread works on a single-threaded model. It is in charge of dispatching events to UI widgets (drawing events), interacting with components from the UI toolkit, such as View.OnClickListener(), and responding to system events, such as onKeyLongPress().
The UI thread runs on an infinite loop and monitors the message queue to check whether the UI needs to be updated. Let’s consider an example. When the user touches a button, the UI thread dispatches the touch event to the widget, which in turn sets its pressed state and posts a request to the message queue. The UI thread dequeues the request from the message queue and notifies the widget to take action — in this case, to redraw itself to indicate the button has been pressed. If you’re interested in delving deeper into the internals of the UI thread, you should read about the Looper, MessageQueue and Handler classes, which accomplish the tasks discussed in our example. As you’d imagine, the UI thread has a lot of responsibilities.
When you think about it, your single-threaded UI thread performs all of its work in response to user interactions. Because everything happens on the UI thread, time-consuming operations such as database queries and network calls will block it. The app will perform poorly, and the user will feel that the app is unresponsive. If the UI thread stays blocked for about five seconds, Android will throw an “Application Not Responding” (ANR) error. Calling such an app not user-friendly would be an understatement, not to mention the poor ratings and uninstalls that follow.
Using the main thread for long tasks would hold things up. Your app will always remain responsive to user events as long as your UI thread is non-blocking. That is why, if your application makes network calls, they need to be performed on worker threads that run in the background, not on the main thread. You could use a Java HTTP client library to send and receive data over the network, but the network call itself should be performed by a worker thread. But wait, there’s another issue with Android: thread safety.
The Android UI toolkit is not thread-safe. If a worker thread (the one making the network calls) updates the Android UI toolkit, the result can be undefined and unexpected behavior, which can be difficult and time-consuming to track down. The single-threaded model ensures that the UI is never modified by different threads at the same time. So, if we have to update an ImageView with an image from the network, the worker thread performs the network operation in the background, while the ImageView is updated by the UI thread. This keeps the operations thread-safe, with the UI thread providing the necessary synchronization. It also helps keep the UI thread non-blocking, because the actual work happens in the background on the worker thread.
In summary, follow two simple rules in Android development:

- Do not block the UI thread. Long-running work, such as a network call, belongs on a worker thread.
- Do not access the Android UI toolkit from outside the UI thread. Widgets are updated from the main thread only.
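A bare-bones sketch of both rules together, using a plain Thread and View.post(). Here, fetchFromNetwork() is a placeholder for whatever blocking call you need to make, and textView is assumed to be a field of the enclosing activity:

// Rule 1: do the blocking work on a worker thread, never on the main thread.
new Thread(new Runnable() {
    @Override
    public void run() {
        // fetchFromNetwork() stands in for a blocking network call.
        final String result = fetchFromNetwork();
        // Rule 2: only the UI thread may touch widgets, so hand the update back to it.
        textView.post(new Runnable() {
            @Override
            public void run() {
                textView.setText(result);
            }
        });
    }
}).start();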
When you talk about making requests from an “activity,” you will come across Android “services.” A service is an app component that can perform long operations in the background without the app being active or even when the user has switched to another app. For example, playing music or downloading content in the background can be done well with services. If you choose to work with a service, it will still run in your application’s main thread by default, so you’ll need to create a new thread within the service to handle blocking operations. If you need to perform work outside of your main thread while the user is interacting with your app, you are better off using a networking library such as AsyncTask or Volley.
Performing tasks in worker threads is great, but as your app starts to perform complex network operations, worker threads can get difficult to maintain.
It’s quite clear now that we should use a robust HTTP client library and ensure that the network task is achieved in the background using worker threads — essentially, with non-UI threads.
Android does have a resource to help handle network calls asynchronously. AsyncTask is a module that allows us to perform asynchronous work in the background and update the user interface properly.
AsyncTask performs all of the blocking operations in a worker thread, such as network calls, and publishes the results once it’s done. The UI thread gets these results and updates the user interface accordingly.
Here is how I implemented an asynchronous worker thread using AsyncTask:
- Implement the onPreExecute() method, which will create a toast message suggesting that the network call is about to happen.
- Implement the doInBackground(Params...) method. As the name suggests, doInBackground is the worker thread that makes network calls and keeps the main thread free.
- Implement the onPostExecute(Result) method, which will deliver the results from the network call and run in the UI thread so that the user interface can be safely modified.
- Progress can be published using the publishProgress() method and handled on the UI thread using the onProgressUpdate(Progress...) method. These methods are not implemented in the example code but are fairly straightforward to work with.
- Start the task by calling the execute() method from the UI thread.

Note: execute() and onPostExecute() both run on the UI thread, whereas doInBackground() is a non-UI worker thread.
In the context of my app, I make a POST request to a REST API to start a calling session for a campaign. I also pass the access token in the request header and the campaign ID in the body. If you look at the code, java.net.HttpURLConnection is used to make the network call, but the actual work is done in the doInBackground() method of AsyncTask. In the example above, we also make use of the application context to pop up toast messages, but AsyncTasks can be defined as inner classes in activities if they are small enough, avoiding the need for the Context property.
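The complete class ships with the downloadable project; as a rough sketch (with the endpoint URL, token and campaign ID below as placeholder values, not the real ones from my app), the task could look like this:

import android.content.Context;
import android.os.AsyncTask;
import android.widget.Toast;

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class MyAsync extends AsyncTask<String, Void, Integer> {

    private static final String ACCESS_TOKEN = "placeholder-token"; // placeholder
    private final Context context;

    public MyAsync(Context context) {
        this.context = context;
    }

    @Override
    protected void onPreExecute() {
        // Runs on the UI thread before the background work starts
        Toast.makeText(context, "Starting the network call...", Toast.LENGTH_SHORT).show();
    }

    @Override
    protected Integer doInBackground(String... params) {
        // Runs on a worker thread; the main thread stays free
        HttpURLConnection conn = null;
        try {
            conn = (HttpURLConnection) new URL(params[0]).openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Authorization", "Bearer " + ACCESS_TOKEN);
            conn.setDoOutput(true);
            conn.getOutputStream().write("campaign_id=123".getBytes("UTF-8")); // placeholder body
            return conn.getResponseCode();
        } catch (IOException e) {
            return -1;
        } finally {
            if (conn != null) conn.disconnect();
        }
    }

    @Override
    protected void onPostExecute(Integer statusCode) {
        // Back on the UI thread: safe to update the user interface here
        Toast.makeText(context, "Server responded with status " + statusCode, Toast.LENGTH_SHORT).show();
    }
}

The task would then be started from the UI thread with new MyAsync(getApplicationContext()).execute("https://api.example.com/session/start"), again with a placeholder URL.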
A generic type is a generic class or interface that is parameterized over types. Just as formal parameters in a method declaration let you reuse the same code with different input values, type parameters let you reuse the same code with different input types. While inputs to methods are values, inputs to type parameters are types. An asynchronous task uses three of them:

- Params, the type of the parameters sent to the task upon execution
- Progress, the type of the progress units published during the background computation
- Result, the type of the result of the background computation
This is how I have extended AsyncTask with types:
public class MyAsync extends AsyncTask<String, Void, Integer>
So, the Params sent to the task are of type String; Progress is set to Void; and the Result is of type Integer. In our implementation, we’re passing the URL (type String) to the doInBackground(String... params) method; while we don’t set a Progress type, we pass the status code of the response (type Integer) to onPostExecute(Integer integer). Not all types are always used by an asynchronous task, and to mark a type as unused, we use the type Void.
The code is available for downloading on GitHub.
Working with AsyncTask is pretty nice until you start doing more complex operations with it. A few instances where AsyncTask would not be useful are highlighted below:

- Changes to the device’s orientation destroy and recreate the activity. The running task keeps a reference to the old activity instance, so the results delivered in the onPostExecute method never reach the new instance, and the old one can leak.
- Destroying an activity does not stop a running task. You can call cancel(), in which case the onPostExecute() method is not called, but unfortunately even that doesn’t actually cancel the request every time. This behavior is not implicit, and it’s the job of the developer to explicitly cancel asynchronous tasks.

Even though AsyncTask does a good job of performing asynchronous operations, its utility can be limiting due to the reasons mentioned above. Luckily, we have Volley at our disposal, an Android module for making asynchronous network calls.
Volley is a networking library developed by Google and introduced at Google I/O 2013. In Volley, all network calls are asynchronous by default, so you don’t have to worry about performing tasks in the background anymore. Volley considerably simplifies networking with its cool set of features.
Before looking at the code, let’s get ourselves elbow-deep in Volley and understand its architecture. Below is a high-level architectural diagram of Volley’s flow. It works in a very simple way:

- When you add a request to Volley’s request queue, it is picked up by the cache thread and triaged.
- If the response can be served from cache, it is parsed on the cache thread (CacheDispatcher) and delivered back to the main thread, the UI thread.
- If the request misses the cache, it is placed on the network queue, where the first available network thread (NetworkDispatcher) takes the request from the queue. It then performs the HTTP request, parses the response on the worker thread and writes the response to cache. It then delivers the parsed response back to the main thread.

If you carefully analyze Volley’s architecture, you’ll see that it solves issues that we face with AsyncTask:

- You no longer have to manage worker threads yourself (remember doInBackground() from AsyncTask), because the library makes asynchronous network calls and manages them for you in the NetworkDispatcher.
- Responses are delivered back on the main thread, so updating the UI is safe, and caching comes built in.

Let’s see how to make asynchronous calls using Volley. Start by including Volley in your Android project.
The easiest way to add Volley to your project is to add the following dependency to your app’s build.gradle file:

dependencies {
    compile 'com.android.volley:volley:x.y.z'
}
Another way to do this is by cloning the Volley repository. Build Volley with Ant, copy the built volley.jar file into the libs folder, and then create an entry in build.gradle to use the jar file. Here’s how:

git clone https://android.googlesource.com/platform/frameworks/volley
cd volley
android update project -p .
ant jar
You can find the generated volley.jar in Volley’s bin folder. Copy it to your libs folder in Android Studio, and add the entry below to app/build.gradle:

dependencies {
    compile files('libs/volley.jar')
}
And you’re done! You have added Volley to your project without any hassle. To use Volley, you must add the android.permission.INTERNET permission to your app’s manifest. Without this, your app won’t be able to connect to the network.

<uses-feature android:name="android.hardware.wifi" android:required="true" />
<uses-permission android:name="android.permission.INTERNET" />
The code example below shows you how to make a request to https://api.ipify.org/?format=json, get the response and update the text view of your app. We use Volley by creating a RequestQueue and passing it Request objects. The RequestQueue manages the worker threads and makes the network calls in the background. It also takes care of writing to cache and parsing the response. Volley takes the parsed response and delivers it to the main thread. Appropriate code constructs are highlighted with comments in the code snippet below. I haven’t implemented caching yet; I’ll talk about that in the next example.
package com.example.chetan.androidnetworking;

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.support.v7.widget.Toolbar;
import android.view.View;
import android.widget.Button;
import android.widget.TextView;

import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.Response;
import com.android.volley.VolleyError;
import com.android.volley.toolbox.StringRequest;
import com.android.volley.toolbox.Volley;

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Set the title of the Toolbar
        Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
        setSupportActionBar(toolbar);

        // Get the button that will perform the network call
        Button btn = (Button) findViewById(R.id.button);
        assert btn != null;
        btn.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                String url = "https://api.ipify.org/?format=json";
                final TextView txtView = (TextView) findViewById(R.id.textView3);
                assert txtView != null;

                // Request a string response from the URL resource
                StringRequest stringRequest = new StringRequest(Request.Method.GET, url,
                        new Response.Listener<String>() {
                            @Override
                            public void onResponse(String response) {
                                // Display the response string
                                txtView.setText("Response is: " + response);
                            }
                        },
                        new Response.ErrorListener() {
                            @Override
                            public void onErrorResponse(VolleyError error) {
                                txtView.setText("Oops! That didn't work!");
                            }
                        });

                // Instantiate the RequestQueue and add the request to the queue
                RequestQueue queue = Volley.newRequestQueue(getApplicationContext());
                queue.add(stringRequest);
            }
        });
    }
}
To set up the cache, we have to implement a disk-based cache and add the cache object to the RequestQueue. I set up an HttpURLConnection to make the network requests. Volley’s toolbox provides a standard cache implementation via the DiskBasedCache class, which caches the data directly on the hard disk. So, when the button is clicked for the first time, a network call is made, but on the next occurrence of a button click, I get the data from the cache. Nice!
package com.example.chetan.androidnetworking;

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.support.v7.widget.Toolbar;
import android.view.View;
import android.widget.Button;
import android.widget.TextView;

import com.android.volley.Cache;
import com.android.volley.Network;
import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.Response;
import com.android.volley.VolleyError;
import com.android.volley.toolbox.BasicNetwork;
import com.android.volley.toolbox.DiskBasedCache;
import com.android.volley.toolbox.HurlStack;
import com.android.volley.toolbox.StringRequest;

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Set the title of the Toolbar
        Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
        setSupportActionBar(toolbar);

        // Get the button that will perform the network call
        Button btn = (Button) findViewById(R.id.button);
        assert btn != null;
        btn.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                String url = "https://api.ipify.org/?format=json";
                final TextView txtView = (TextView) findViewById(R.id.textView3);
                assert txtView != null;

                // Set up a 1 MB disk-based cache for Volley
                Cache cache = new DiskBasedCache(getCacheDir(), 1024 * 1024);

                // Use HttpURLConnection as the HTTP client
                Network network = new BasicNetwork(new HurlStack());

                StringRequest stringRequest = new StringRequest(Request.Method.GET, url,
                        new Response.Listener<String>() {
                            @Override
                            public void onResponse(String response) {
                                // Display the string response on the UI
                                txtView.setText("Response is: " + response);
                            }
                        },
                        new Response.ErrorListener() {
                            @Override
                            public void onErrorResponse(VolleyError error) {
                                txtView.setText("Oops! That didn't work!");
                            }
                        });

                // Instantiate the RequestQueue with the cache and network,
                // start the request and add it to the queue
                RequestQueue queue = new RequestQueue(cache, network);
                queue.start();
                queue.add(stringRequest);
            }
        });
    }
}
If you have to fire network requests in multiple Android activities, you should avoid using Volley.newRequestQueue.add(), as we did in the first example. Instead, you can develop a singleton class for the RequestQueue and use it across your project. Creating the RequestQueue as a singleton is recommended, so that it lasts for the lifetime of your app. It also ensures that the same RequestQueue is used even when the activity is recreated, as in the case of a screen rotation.
package com.example.chetan.androidnetworking;

import android.content.Context;

import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.Volley;

public class VolleyController {

    private static VolleyController mInstance;
    private static Context mCtx;
    private RequestQueue mRequestQueue;

    private VolleyController(Context context) {
        mCtx = context;
        mRequestQueue = getRequestQueue();
    }

    public static synchronized VolleyController getInstance(Context context) {
        // If the instance is not available, create it; otherwise, reuse the existing object
        if (mInstance == null) {
            mInstance = new VolleyController(context);
        }
        return mInstance;
    }

    public RequestQueue getRequestQueue() {
        if (mRequestQueue == null) {
            // getApplicationContext() is key: it should not be an activity context,
            // or else the RequestQueue won't last for the lifetime of your app
            mRequestQueue = Volley.newRequestQueue(mCtx.getApplicationContext());
        }
        return mRequestQueue;
    }

    public <T> void addToRequestQueue(Request<T> req) {
        getRequestQueue().add(req);
    }
}
You can now use the VolleyController in your MainActivity like this: VolleyController.getInstance(getApplicationContext()).addToRequestQueue(stringRequest);. Or you can create a queue in this way: RequestQueue queue = VolleyController.getInstance(this.getApplicationContext()).getRequestQueue();. Note the use of the application context in these examples.
In Volley, you can set up a custom JSON request by extending the Request class. This will help you to parse and deliver network responses. With this custom class, you can also do more, such as set request priorities and set up cookies. Below is the code for creating a custom JSONObject request in Volley. You can handle ImageRequest types in the same manner.
package com.example.chetan.androidnetworking;

import com.android.volley.NetworkResponse;
import com.android.volley.ParseError;
import com.android.volley.Request;
import com.android.volley.Response;
import com.android.volley.Response.ErrorListener;
import com.android.volley.Response.Listener;
import com.android.volley.toolbox.HttpHeaderParser;

import org.json.JSONException;
import org.json.JSONObject;

import java.io.UnsupportedEncodingException;
import java.util.Map;

public class CustomJSONObjectRequest extends Request<JSONObject> {

    private Listener<JSONObject> listener;
    private Map<String, String> params;
    Priority mPriority;

    public CustomJSONObjectRequest(int method, String url, Map<String, String> params,
                                   Listener<JSONObject> responseListener, ErrorListener errorListener) {
        super(method, url, errorListener);
        this.listener = responseListener;
        this.params = params;
    }

    @Override
    protected Map<String, String> getParams() throws com.android.volley.AuthFailureError {
        return params;
    }

    @Override
    protected Response<JSONObject> parseNetworkResponse(NetworkResponse response) {
        try {
            String jsonString = new String(response.data,
                    HttpHeaderParser.parseCharset(response.headers));
            return Response.success(new JSONObject(jsonString),
                    HttpHeaderParser.parseCacheHeaders(response));
        } catch (UnsupportedEncodingException e) {
            return Response.error(new ParseError(e));
        } catch (JSONException je) {
            return Response.error(new ParseError(je));
        }
    }

    @Override
    protected void deliverResponse(JSONObject response) {
        listener.onResponse(response);
    }
}
With asynchronous tasks, you can’t know when the response will arrive from your API. You need to execute a Volley request and wait for the response in order to parse and return it. You can do this with the help of a callback. Callbacks can be easily implemented with Java interfaces. The code below shows how to build your callback with the help of the VolleyCallback interface.
package com.example.chetan.androidnetworking;

import org.json.JSONException;
import org.json.JSONObject;

public interface VolleyCallback {
    void onSuccess(JSONObject result) throws JSONException;
    void onError(String result) throws Exception;
}
Now, let’s make a network call using the custom JSON request class and update the UI with the response.
public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
        setSupportActionBar(toolbar);

        Button btn = (Button) findViewById(R.id.button);
        assert btn != null;
        btn.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                String url = "https://api.ipify.org/?format=json";
                final TextView txtView = (TextView) findViewById(R.id.textView3);
                assert txtView != null;

                makeRequest(url, new VolleyCallback() {
                    @Override
                    public void onSuccess(JSONObject result) throws JSONException {
                        Toast.makeText(getApplicationContext(), "Hurray!!", Toast.LENGTH_LONG).show();
                        txtView.setText(String.format("My IP is: %s", result.getString("ip")));
                        txtView.setTextColor(Color.BLUE);
                    }

                    @Override
                    public void onError(String result) throws Exception {}
                });
            }
        });
    }

    // Custom JSON request handler
    public void makeRequest(final String url, final VolleyCallback callback) {
        CustomJSONObjectRequest rq = new CustomJSONObjectRequest(Request.Method.GET, url, null,
                new Response.Listener<JSONObject>() {
                    // Pass the response to the success callback
                    @Override
                    public void onResponse(JSONObject response) {
                        Log.v("Response", response.toString());
                        try {
                            String ip = response.getString("ip");
                            // getString() returns the literal "null" for a JSON null,
                            // so compare with equals() rather than ==
                            if (!"null".equals(ip)) {
                                callback.onSuccess(response);
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                },
                new Response.ErrorListener() {
                    @Override
                    public void onErrorResponse(VolleyError error) {}
                }) {
            @Override
            public Map<String, String> getHeaders() throws AuthFailureError {
                HashMap<String, String> headers = new HashMap<String, String>();
                return headers;
            }

            @Override
            protected Map<String, String> getParams() {
                Map<String, String> params = new HashMap<String, String>();
                return params;
            }
        };

        // Add the request to the RequestQueue
        VolleyController.getInstance(getApplicationContext()).addToRequestQueue(rq);
    }
}
Volley offers the following classes for requesting images:

- ImageRequest, a canned request for retrieving an image at a given URL and delivering the decoded bitmap in a callback.
- ImageLoader, a helper that orchestrates multiple ImageRequests, for instance when displaying a list of thumbnails, and adds in-memory caching on top of them.
- NetworkImageView, which replaces the ImageView while the image is being fetched from a URL via the network call. It also cancels pending requests if the ImageView detaches and is no longer available.

For caching images, you should use the in-memory LruBitmapCache class, which extends LruCache. LRU stands for “least recently used”; this type of caching makes sure that the least used objects are removed first from the cache when it gets full. So, when loading a bitmap into an ImageView, the LruCache is checked first. If an entry is found, it is used immediately to update the ImageView; otherwise, a background thread is spawned to process the image. Just what we want!
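Volley’s toolbox doesn’t bundle an LruBitmapCache for you; a minimal implementation along the lines described above, which plugs the memory cache into Volley’s ImageLoader, might look like this (the cache size you choose is up to you):

import android.graphics.Bitmap;
import android.support.v4.util.LruCache;

import com.android.volley.toolbox.ImageLoader;

public class LruBitmapCache extends LruCache<String, Bitmap>
        implements ImageLoader.ImageCache {

    public LruBitmapCache(int maxSizeInBytes) {
        super(maxSizeInBytes);
    }

    @Override
    protected int sizeOf(String key, Bitmap value) {
        // Measure entries by their byte size, not by entry count
        return value.getRowBytes() * value.getHeight();
    }

    @Override
    public Bitmap getBitmap(String url) {
        return get(url);
    }

    @Override
    public void putBitmap(String url, Bitmap bitmap) {
        put(url, bitmap);
    }
}

You would then pass it to an ImageLoader backed by your RequestQueue, for example new ImageLoader(queue, new LruBitmapCache(4 * 1024 * 1024)), and hand that loader to NetworkImageView.setImageUrl().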
Volley retries network calls if you have set a retry policy for your requests. We can change the retry values for each request using setRetryPolicy(). The defaults are implemented in Volley’s DefaultRetryPolicy class. You can set the retry policy for a request in this manner:
rq.setRetryPolicy(new DefaultRetryPolicy(
        DefaultRetryPolicy.DEFAULT_TIMEOUT_MS,
        DefaultRetryPolicy.DEFAULT_MAX_RETRIES,
        DefaultRetryPolicy.DEFAULT_BACKOFF_MULT));
DEFAULT_TIMEOUT_MS is the default socket timeout in milliseconds. DEFAULT_MAX_RETRIES is the maximum number of retries you want to perform. And DEFAULT_BACKOFF_MULT is the default backoff multiplier, which determines by how much the socket timeout grows with each retry attempt.
Volley can catch network errors very easily, and you don’t have to bother much with cases in which network connectivity is lost. In my app, I’ve chosen to handle connectivity errors with the error message “No Internet access.” The code below shows how to handle NoConnectionError errors.
public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
        setSupportActionBar(toolbar);

        Button btn = (Button) findViewById(R.id.button);
        assert btn != null;
        btn.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                String url = "https://api.ipify.org/?format=json";
                final TextView txtView = (TextView) findViewById(R.id.textView3);
                assert txtView != null;

                makeRequest(url, new VolleyCallback() {
                    @Override
                    public void onSuccess(JSONObject result) throws JSONException {
                        Toast.makeText(getApplicationContext(), "Hurray!!", Toast.LENGTH_LONG).show();
                        txtView.setText(String.format("My IP is: %s", result.getString("ip")));
                        txtView.setTextColor(Color.BLUE);
                    }

                    @Override
                    public void onError(String result) throws Exception {
                        Toast.makeText(getApplicationContext(), "Oops!!", Toast.LENGTH_LONG).show();
                        txtView.setText(result);
                        txtView.setTextColor(Color.RED);
                    }
                });
            }
        });
    }

    public void makeRequest(final String url, final VolleyCallback callback) {
        CustomJSONObjectRequest rq = new CustomJSONObjectRequest(Request.Method.GET, url, null,
                new Response.Listener<JSONObject>() {
                    @Override
                    public void onResponse(JSONObject response) {
                        Log.v("Response", response.toString());
                        try {
                            String ip = response.getString("ip");
                            // getString() returns the literal "null" for a JSON null
                            if (!"null".equals(ip)) {
                                callback.onSuccess(response);
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                },
                new Response.ErrorListener() {
                    @Override
                    public void onErrorResponse(VolleyError error) {
                        Log.v("Response", error.toString());
                        String err = null;
                        if (error instanceof com.android.volley.NoConnectionError) {
                            err = "No Internet access!";
                        }
                        try {
                            // err stays null unless we recognized the error above
                            if (err != null) {
                                callback.onError(err);
                            } else {
                                callback.onError(error.toString());
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                }) {
            @Override
            public Map<String, String> getHeaders() throws AuthFailureError {
                HashMap<String, String> headers = new HashMap<String, String>();
                headers.put("Content-Type", "application/json");
                return headers;
            }

            @Override
            protected Map<String, String> getParams() {
                Map<String, String> params = new HashMap<String, String>();
                return params;
            }
        };

        rq.setPriority(Request.Priority.HIGH);
        VolleyController.getInstance(getApplicationContext()).addToRequestQueue(rq);
    }
}
To make API calls to third-party REST APIs, you need to pass API access tokens or support different authorization types. Volley lets you do that easily. Add the headers to the HTTP GET call using the headers.put(key, value) method call:

@Override
public Map<String, String> getHeaders() throws AuthFailureError {
    HashMap<String, String> headers = new HashMap<String, String>();
    headers.put("Content-Type", "application/json");
    return headers;
}
Setting priorities for your network calls is necessary in order to differentiate between critical operations, such as fetching the status of a resource, and lighter ones, such as pulling its metadata. You don’t want to compromise a critical operation, which is why you should implement priorities. Below is an example that demonstrates how you can use Volley to set priorities. Here, we are using the CustomJSONObjectRequest class, which we defined earlier, to implement the setPriority() and getPriority() methods, and then in the MainActivity class, we are setting the appropriate priority for our request. As a rule of thumb, you can use these priorities for the relevant operations:
- Priority.LOW // images, thumbnails
- Priority.NORMAL // standard queries
- Priority.HIGH // descriptions, lists
- Priority.IMMEDIATE // login, logout

public void setPriority(Priority priority) {
    mPriority = priority;
}

@Override
public Priority getPriority() {
    // Priority is set to NORMAL by default
    return mPriority != null ? mPriority : Priority.NORMAL;
}
// Set the priority to HIGH
rq.setPriority(Request.Priority.HIGH);
Volley is a useful library and can save the day for a developer. It’s an integral part of my toolkit, and it would be a huge win for a development team in any project. Let’s review Volley’s benefits:

- Android’s stock HTTP clients, such as HttpURLConnection, help you to perform synchronous network calls; to keep the main thread non-blocking, those calls need to be performed in worker threads that run in the background. Volley manages this for you, making every network call asynchronous by default.
- It adds transparent caching, request prioritization, retry policies and easy customization through custom request classes, as we saw above.

I hope you’ve enjoyed the article. All of the code examples are available for downloading. The complete app is hosted on GitHub.
What exactly are the benefits of a content hub strategy? Well, first of all, when done correctly, a content hub will capture a significant volume of traffic. And that’s what most online businesses want, right?
We have recently introduced several clients to the concept of a content hub and would like to share our experience in this article. Content hubs are high-quality portals filled with targeted, valuable and often evergreen articles that users can return to time and again.
Sometimes these are hosted on a separate domain, but the focus is usually on providing supporting, information-led content, rather than sales-driven pages. L’Oreal’s Makeup.com, Ricoh’s Workintelligent.ly and Nasty Gal’s Nasty Galaxy are great examples of this in action.
A hub also acts as a tool to reinforce your brand. This is an opportunity to show your expertise in your field, providing knowledge and insight to your visitors. This traffic will also generate a substantial amount of very useful data. You’ll quickly learn the most popular subjects and gain an understanding of your key audience.
Effective content is a considerable asset. Once you have a solid reputation, there is great potential for cross-promotion with other brands and individuals. To help you get started with your own content hub, or indeed any large-scale content project, here is our comprehensive guide to getting it right.
Below, we’ll go through each stage of the process in detail.
From a commercial point of view, simply creating thousands of new web pages will not necessarily help you sell more products or services or deliver more value to your users. At the highest level, content hubs are a big investment and not a path to be taken lightly, especially given the amount of resources required, including design, development, SEO and content, as well as buy-in from senior stakeholders. Typically, these stakeholders will be marketing, SEO, and design and development managers or directors, and each will have their own personal objectives, which could include a focus on a particular product or area of the business, or will have concerns about resource or time allocation. Bear this in mind when building your case.
Despite all of this, it is not just large-scale businesses that would benefit from this digital marketing strategy. While a significantly sized content hub might be out of reach for some, the key principles here, such as understanding the possible return on investment (ROI) such content provides, as well as how to effectively research and deliver useful information to your target audience, remain applicable for businesses that don’t have a large budget to invest.
Assessing the cost versus potential return is the first hurdle to overcome. This might include assessing a desire for this scale and depth of content among your user base, as well as benchmarking against keyword difficulty and your competitors. Overall — and we cannot stress this enough — providing something unique and of value to your target audience is important. This will ensure that both aims of the content hub are met: reinforcing brand trust and optimizing effectively for search engines. Every new page should contribute something towards establishing your brand as an authority in its sector, as well as one that knows what makes its customers tick.
Let’s take Workintelligent.ly as another example here. Below are some articles taken from its current website. Each piece is written clearly and well targeted to its audience of professionals and business leaders, offering practical, actionable advice.
With all of this in mind, you’ll need to establish an outline budget early on. Later on in this article, we will discuss how to create a detailed quote, including for project management, editing and administration. At the earliest stage, though, the key figures you will need to establish are cost per page and a rough total number of articles, so that feasibility can be discussed. Again, these are both areas we will examine in more detail.
We work with many clients that rely heavily on organic search. These businesses would benefit a lot from content hubs, due to the large number of pages that are created for their websites, which bring in significant traffic from targeted long- and short-tail keywords. While there are wider SEO benefits, too, such as potentially reducing the problem of thin content, increasing dwell time and attracting inbound links and social shares, this might be the area that attracts the most interest from the various stakeholders in the process.
To help with the business case, we have developed a simple formula to calculate the potential value of a large-scale content project:
(number of pages) × (average number of visits per page per month) × (average conversion from organic traffic) × (average order value) = potential monthly return on content.
For example, if a website has 1,000 pages and traffic of around 75,000 views per month, this gives us roughly 75 views per page each month. With an average conversion rate of 1.5% and an average order value of £100, each page gives us a potential monthly return of £112.50. Over the course of a year, this works out to £1,350 per page. If your production costs are in the region of £100 per page, this will provide a return very quickly.
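If it helps to sanity-check the numbers, here is the same calculation as a small Java snippet; the figures are the illustrative ones above, not real client data:

public final class ContentReturnModel {

    // (pages) x (visits per page per month) x (conversion rate) x (average order value)
    static double monthlyReturn(int pages, double visitsPerPageMonth,
                                double conversionRate, double avgOrderValue) {
        return pages * visitsPerPageMonth * conversionRate * avgOrderValue;
    }

    public static void main(String[] args) {
        // 1,000 pages, 75 views per page per month, 1.5% conversion, £100 average order
        double siteMonthly = monthlyReturn(1000, 75, 0.015, 100); // £112,500 across the site
        double perPageMonthly = siteMonthly / 1000;               // £112.50 per page
        System.out.printf("Per page: £%.2f per month, £%.2f per year%n",
                perPageMonthly, perPageMonthly * 12);
    }
}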
Obviously, this is a very broad calculation. Only a fraction of page types, such as products and services, might drive revenue. In this case, you can apply the calculation to various categories and build it into your equation.
At the same time, the model can be used to provide some useful estimates and forecasts. By varying your expected conversion rate, you can quickly carry out cost-benefit analysis for design work on key areas of your website. The prospect of upping potential traffic volumes could also be used to provide a business case for SEO or other marketing work streams.
For many businesses, attribution modeling might also be worth considering at this point. Very often, a sale is not the result of a single search — instead, a user’s path to conversion will consist of multiple visits across pages and channels, including your social media accounts. It’s worth understanding these interactions and how they relate to your content, especially when prioritizing the kind of content to produce. Often we’ve seen that high-quality blog or information pages are visited in the middle of a sales journey. This insight is missing from the usual high-converting page reports in Google Analytics, for example, yet can be vital when planning one’s approach. This is also discussed in more detail later in this document.
Once the project is agreed upon, it might be tempting to dive in and start writing. However, don’t create any content until you’ve taken stock of the current situation. We can’t emphasize enough that this should happen at the very outset of the project, because anything missed could cost you dearly down the line. As with major offline content projects, such as magazine and book production, remedying mistakes or adding complexity when additional pages need to be created or modified can be both time-consuming and expensive. If, for example, you quote for the delivery of 5,000 pages and then discover that another 1,000 have to be created, that difference will probably come off your bottom line. If new templates are required, extra costs and time will be required, too.
For the first exercise, look at the pages that already exist on your website. Set up a spreadsheet to record the pages on the website, the types of pages, the subjects, the keywords, the word counts and even the images on those pages and their associated properties.
By conducting this exercise, you should be in a position to identify any gaps in your content, and any areas that have been spread too thin and that could be consolidated. While you might have covered a particular subject extensively, could the website benefit from a section of related information? For instance, we are currently working on a content hub project for our client Holiday Hypermarket, and while there are pages covering worldwide holiday destinations, we have identified a need for supplementary pages covering nightlife, restaurants and things to do in those areas, as well as in-depth information about each hotel. By doing this, we are in the process of creating a comprehensive guide to tourist hotspots that visitors can refer to both before and after booking their next vacation.
If you have a large website, we recommend running an audit using crawling software, such as Screaming Frog, to make sure you’ve caught every page, including non-HTML content and non-200 response codes. Xenu’s Link Sleuth is another good free tool, and although it hasn’t been updated for years, it could yield valuable insight. DeepCrawl is another thorough SEO package and well worth a look.
These tools work by traveling from link to link, so be aware that if any of your pages have been orphaned by a lack of internal links, they won’t be found. This problem can be tricky to overcome, but looking at all of your Google Analytics landing pages over a 12-month period, for example, might shine a little light. If there are no analytics or similar tracking data, then server logs can be a useful resource.
This research could also reveal useful information, including paths users take through the website, which pages are most frequently landed on and where traffic is coming from, giving you a full picture of how your website is being used. Conversion-rate optimization (CRO) testing software such as Visual Website Optimizer can be useful, too, especially with its new visitor analysis function.
We also recommend assessing your content management system (CMS) at this point. Any limitations it has will define your path through the project, so have an open discussion with the relevant team as soon as possible. Ask as many questions as you can. Will you be able to bulk upload? What are the requirements on formatting? Does the layout have any flexibility? Are there word count limits? Identifying potential roadblocks early on is always a sensible move.
The longer it takes to upload a piece of content, the more labour-intensive and costly the project becomes. Including images, a 500-word page should, as a rule of thumb, take no longer than three to four minutes to add to the CMS. If you are likely to be going past this point, then a cost-benefit analysis might be worth carrying out, weighing the investment of development time against the benefit of faster uploads.
For most projects such as this, organic traffic will be a priority. For this reason, keyword research needs to begin early on in the process. This will enable you to home in on opportunities with your potential subject matter, and also give you an idea of the sorts of traffic figures and return on investment you can expect. This is obviously a massive field, so take the time to get it right, and consider outsourcing the work if you don’t have the expertise in-house. If you’ve never tried it before, Moz has a pretty definitive guide.
If you’re keen to raise your search engine rankings organically with a content hub, then benchmark your website before starting. Tools such as Serpfox and Ahrefs will tell you where your key landing pages rank before you launch your content hub, so that you can monitor improvement.
By this point, you should have a detailed view of your current content and of any glaring shortfalls. Of course, no website stands in isolation, so the next phase is competitor research. Here, you’re looking for ways to stand out against websites in your target market, whether through high-quality content, better design or more targeted copy.
The majority of the steps discussed above — save viewing data from analytics software and server logs, or heatmapping and CRO testing — can be used for competitor research, too. The scale and value of your project will define the amount of detail to go into with competitors, but as always, err on the side of caution.
Look at what your closest competitors are doing well, and identify ways in which you can improve upon it. A good way to do this is by seeing what gets shared on social media; it might be that a particular subject resonates with their audience and that you could produce even more in-depth content that your joint audience might find valuable — or produce a whole host of content that answers every question users could possibly imagine.
Tools such as Riffle, FollowerWonk and Simply Measured can help you to identify the competition’s most popular social media updates. Next, look at the content itself. How many words are they writing for the most popular subjects? Is it significantly more or less than you are currently writing? Can you add to the content with even more valuable information?
Look at the keywords they are targeting, too. We often use Searchmetrics to see which terms, both paid and organic, are driving traffic to these websites, as well as keywords that we may have missed in our own hubs. This tool is unusual in that it shows overall search visibility, rather than just visibility for keywords you are tracking. It does this by monitoring a vast database of keywords — several billion in total — and then pulling from this data when requested. Because Google has stopped providing detailed keyword reporting in Analytics, this information is invaluable, and being able to see the same insight for competitors can be very useful, too.
Next, it’s time to think sideways. See what organizations in related industries are up to. To continue with our Holiday Hypermarket example, we chose to investigate the activities of tourist boards and travel magazines to see what works for them and whether we knew any subjects well enough to create a huge range of pages.
Wherever possible, carry out some market research on your customers. For instance, you might want to run a test with a tool such as What Users Do, so that you can find out what information customers are looking for and whether they’ve had any problems using your website. Think carefully about the types of questions to ask them. We typically ask whether they have frustrations using the website, whether any information is missing, and about things they’d like to see. On e-commerce websites, we ask how many other websites they typically use before making a purchase and what those websites are. If your budget restricts this, then sending your existing customers a survey or asking them questions on social media is always worthwhile. Incentivize these comments to make sure you get enough feedback to work from.
Again, this research often reveals information that you have never considered and uncovers competitors that have never crossed your mind. If this happens, then it’s a good time to loop back and examine each of the elements in more detail.
Finally, don’t forget to ask your internal teams what they think of the website and what’s missing. These teams will have a wealth of expertise, in both your own and related industries. Brainstorming sessions focused on topic areas and on your industry can elicit great ideas from people with years of experience in the field.
By the end of this process, you should have a solid idea of the subjects to cover in your content hub. This is the point when you should identify what success looks like. Draw up a list of key performance indicators (KPIs) that you’d like to track, within a range of time scales. Perhaps you want to drive 50% more traffic to your website within six months of launch, or get 500 social media mentions after publication, or even double the number of sales that come via the content hub itself.
Also, identify how you will measure these outcomes. You might need to set up additional tools to keep track. You will also want to look at the current situation to set a benchmark, so that you can measure improvement over the months and years ahead.
And now for possibly the most important part of the whole process: creating the content. Before you start writing, identify what the content should look like, from word count to target keywords and, if applicable, page design.
One of the most crucial aspects here is content modeling. At a high level, this is a framework of the various types of pages you intend to create at the outset. For developers, this is essential because it will define the various templates that are used in the CMS, their attributes and how they interact with each other. This area has been covered in some detail elsewhere on Smashing Magazine, so we won’t go into depth here, but we highly recommend Andy Fitzgerald’s content-first approach.
As a content producer, your input here is vital. You will need to know not only which section of your website the copy will live in, but also whether different types of pages are required and what their purpose will be. To continue with the travel example, suppose you have a destination content hub, which sits in the top navigation and where website visitors will find information about the given country, the regions within that country, as well as places to visit, the best beaches, the best restaurants, and a guide to all of the hotels in each of those regions.
In this case, the hierarchy would be: the country guide at the top; beneath it, the regions within that country; then, for each region, pages covering places to visit, the best beaches and the best restaurants; and finally a guide to each hotel.
The research conducted on your own and competitors’ websites will enable you to pinpoint a word count for each page. If these similar pages are doing well, then this is the amount of content your audience would most like to read and share. Of course, if you can increase the length by adding useful information, then you should do so to add value for your readers.
Once you have a document outlining these points, you’re ready to look at the design of your content hub. Draw up a wireframe of each page, roughly illustrating how it should look. Bear in mind all of your findings, not only from your own website, but your competitors’ pages, too. How did they lay out the information? How do people use your website? Consider their frustrations, and be mindful to find solutions to these. Wireframe.cc is an easy-to-use tool to map out initial ideas, so that designers can refine the layout and start building the pages.
By this point, you should have a spreadsheet showing all of the content on the website, as well as the content you would like to edit or create.
Now that you know the size and scale of the project, it’s time to determine exactly how much the content hub will cost to complete. To avoid any unexpected expenses, consider not just the development time, but the number of people involved and how much time it will take each of them to complete their section, along with your own time and any on the client’s part.
In our experience, we have to factor in not only writers and developers, but also researchers and editors, plus the time to upload the content to the website — for each and every page. Once we have that information, we can set a deadline for the completion of the content hub, before adding a margin of contingency time in case of illness or unexpected issues thrown up by the building and production of the hub.
The next stage is to work out how many people we’ll need to complete the project on time and how much it would cost to employ each of those people. We opt for a mix of full-time, part-time and freelance employees, which gives us flexibility with the project. Add a 5% margin of error to your costs to cover unexpected issues, such as images that are hard to find, illnesses and vacations.
With writers and editors, it’s a good idea to commission a few sample pages, to get a feel for how long they will take. Note their output per hour, but bear in mind that this could go down once the team becomes more familiar with the content and process, or up if extra levels of research or other complexities are introduced.
Some essential numbers to have at this point are the cost per page type, the cost per word and the editing and uploading costs. Together, these will give you an overall cost per page. At this point, we will refer back to our return-on-content model to see how this compares. If the expected return is much greater than the per-page outlay, then we’ll know that the project will likely succeed.
Of course, the project can’t get started without having a team in place. Having scoped how many people need to be involved, you can now easily identify whether additional human resources are needed to get the job done. Needless to say, any new hires must have a track record in their respective fields. We recommend assigning a trial piece of writing to be completed before a contract is signed, to ensure they are able to work from a brief. Having a bank of freelancers is also invaluable to picking up additional work and hitting deadlines.
Expert contributors are a less common but equally vital part of the team. In our experience, this is the area that can have the biggest impact on the overall quality of the project. High-level insight and knowledge about a subject isn’t always in great supply, and your writing team probably does not consist of experts in the field you are covering.
Hiring experts on an ad-hoc basis is a good solution. Typically, we ask for bullet points of information or notes, which can then be written up in-house. Training each and every contributor to understand the style guide would add too much time to the production process. By not paying them to write full articles, we keep our costs down.
We find these people by searching freelance databases and by putting out calls on PR wires and social media. For the travel content hubs, we might put out a call for an expert on the destination or even contact the tourist board for suggestions.
Getting experienced, professional writers who can follow a tight brief will ensure that the content you receive is in a format you can work with — and accurate. This might not be practical in many industries, though. For example, a qualified psychiatrist might not be interested in spending hours writing a thousand-word article for your medically focused website, but they might be willing to put together a brief document for your writers or to edit the completed work. Needless to say, if you do hire experts for this work, check their credentials thoroughly, and listen to their suggestions. After all, that’s what you’re paying them for.
Request a style guide from the client at the very start of each project, which will ensure that any content hits the mark in tone and branding, or create one if no document is in place. At a broad level, the client might want short sentences broken up by bullet points and headings. Other brands might want long-form content with little interruption. There might even be banned words or other small things that the client doesn’t allow.
Regardless, everyone on the team should familiarize themselves with the style guide from the outset; a workshop session is a great way to get everyone on board. To advance the process, present a preliminary batch of content to the client to ensure they are happy with the style, tone and structure of each page type. From there, update and amend the style guide to ensure that you have a keen awareness of how the content should read. It also helps to pin down a tighter brief for external writers, so that you can identify common misunderstandings and get all of the content right the first time.
A range of tools are available to help you manage production. It might seem basic, but a spreadsheet is a great tool for allocating work in some projects. Every single page of the hub can be listed, along with the writer, first editor, second editor and uploader. Using a color code, mark where in the production process each page currently is — for instance, yellow for underway, green for complete and red for late. In case it helps, you can refer to one based on one of our recent projects.
For a larger project, tools such as Beegit and GatherContent can also be used to track and store each file, so that the project manager has an overview of the hubs and can monitor progress. We are big fans of Beegit, and the team’s receptiveness to feature requests is very impressive. Over the past few months, we’ve asked for numerous updates to improve our workflow, including reporting and tracking of each file update, all of which have been implemented.
A good task-management tool is also essential, especially if you’re not using content delivery software. A popular choice here is Redbooth. We’ve used it for some time and found it to be easy and quick to use.
We also produce weekly reports to monitor the time writers spend on each page and to ensure we’re meeting deadlines.
It might be worth getting your editors to use a timesheet to keep an eye on how long the project is taking. Redbooth has a built-in time-tracker, which will help you keep track of each part of the process, but a number of tools are available.
While the editing is taking place, we’ll also typically have a member of the team undertake photo research to ensure that the content is ready to go live by the deadline. Alongside stock galleries, images can often be obtained from an industry’s official organizations, and sometimes user-generated content is suitable. For instance, with Holiday Hypermarket and the travel sector, many images were sourced from tourist boards and holidaymakers.
We use Google Reverse Image Search to ensure that any images we’re considering haven’t been used elsewhere online, by competitor brands or in contexts we would rather avoid. Of course, if you have access to a library of unique client images, then all the better.
The photo researcher is also responsible for making sure images are in the right format (either JPG or PNG), with the right dimensions for the wireframe, and that file sizes are small, so that pages don’t take long to load. Once they are satisfied, the images are stored in the project-management tool, ready for uploading.
Of course, we have more than one client, so balancing the production of the content hubs with the wider needs of the business is vital.
Time management has been key to ensuring that content production doesn’t go over deadline or budget. As mentioned, we leave some slack in the budget in case we need to get external help to cover anything unexpected (for instance, an illness among the team) that could affect the project’s completion.
The team is also encouraged to give feedback on progress and any sticking points, so that solutions can be found before they escalate. A key example of this has been the staggering of content delivery. Hubs are often broken down into key stages, as are projects within them. For instance, if there’s a lot to be said about a particular category, this will be handled over the course of a month. Then, the editors will have time to check during the following month, before delivery to the client.
We’ve also seen freelance writers miss the brief or fall short of a word count; so, once the editors get around to reviewing this content, it has to be sent back. Having a month for editing allows us to have those discussions with freelancers and for them to get the content back to us before the month’s end. If this is not possible, then the editors need time to make modifications or rewrites before the deadline.
Measuring the success of content is notoriously difficult. However, it is by no means impossible, and given the scale of a content hub project, it certainly cannot be dismissed.
Our typical approach to measuring success is as follows.
Define the key metrics that relate to success, and understand why they relate.
You might find it helpful to group these attributes into the categories of commercial, tactical and brand, as recommended by Smart Insights57.
At the top level, usually a hub will be developed to meet a specific commercial goal, such as boosting purchases, increasing market share or generating leads. These will be easy to define and report using sales data, your CRM system or Google Analytics.
The next level covers tactical elements such as page views, unique users and search engine rankings. All of this will offer useful insight, but these should all be seen as part of the picture that makes up your overall commercial goals. Don’t focus too much on these numbers and lose sight of the big picture.
The visibility of your brand is another key area and can be monitored by tracking brand mentions, sentiment and social interactions. Tools such as Mention.com58 and Brandwatch59 are useful here.
Be consistent in how you measure, across the business.
Choose your metrics and stick to them. If you chop and change the elements that you track, you’ll lose visibility of trends, even ones outside the area you are currently focusing on. For this reason, automate as much as possible — no one wants to manually update spreadsheets every week or month.
On a simpler level, a host of free Google Analytics dashboards can be easily plugged into your account. Simply click “Dashboards” → “New Dashboard” → “Create From Gallery,” and enter your criteria.
The Content Analysis Dashboard62, shown above, is a good place to start. As with all of these dashboards, it is fully customizable.
As part of the research process, target keywords should have been defined at the outset. Again, Searchmetrics is a good tool to use here because it will show overall visibility, rather than just the terms you are tracking, which can be very useful if you’re working with thousands of pages.
Create reports that meet the particular needs of the various stakeholders. They should offer actionable insight, too, rather than fluffy numbers.
Reports on word counts and completed pages might be of interest to your delivery team, but likely wouldn’t appeal to a CEO or sales director. Speak with each stakeholder and find out what information would be most useful to them. Reports should clearly identify any problems and outline solutions, too — don’t leave figures open to interpretation.
Of course, some areas are easier to cover in a report than others. Even if a page has a clearly defined goal — such as the purchase of a product — conversion data doesn’t always tell the full story. Rarely does a consumer buy on the first visit to a website, especially a large purchase, such as a vacation, and information-focused content such as blogs and resource sections can often drive the decision-making process.
As mentioned earlier, we often recommend attribution modelling as a way to gain insight into content performance. This is a detailed subject unto itself, and a good introduction can be found over on The Drum63. Yet the premise is simple: Google Analytics and other packages enable you to string together the various paths to your goals, whether they are across social media, pay-per-click (PPC) or organic search.
This is an ideal way to measure a content hub. You can see, for example, how many people visit your hub or download a brochure before making a purchase within a 30-day period.
Attribution is not an exact science, but it does enable you to make informed decisions about what works and what doesn’t. With marketing channels ever converging online, this insight is crucial.
As with any major project, a content hub should not be taken lightly. Being prepared is key, and that means digging deep into your website to understand both the scale of the task at hand and what will be required to achieve your goals. This rigor and depth of understanding are not reserved for massive hubs, though — any website that relies on content would benefit from all or part of the methods discussed.
(al, il)
“There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy,” said Shakespeare’s Hamlet, in the famous scene in which Hamlet teaches Horatio to be a web designer.
Horatio, as every schoolchild knows, is a designer from Berlin (or sometimes London or Silicon Valley) who has a top-of-the-line MacBook, the latest iPhone and an unlimited data plan over the fastest, most reliable network. But, as Hamlet points out to him, this is not the experience of most of the world’s web visitors.
The World Bank reports1 that 1.1 billion people across the world have access to high-speed Internet; 3.2 billion people have some kind of access to the web; 5.2 billion own a mobile phone; and 7 billion live within coverage of a mobile network.
Unsurprisingly, many of those currently unconnected are in India, China, Indonesia — these being the biggest countries in the world. But being unconnected (for whatever reason) isn’t only a reality in developing economies; 51 million people in the US are not connected.
When I speak at conferences in rich Western countries, I often ask people, “Where will your next customers come from?” You don’t know. In our truly worldwide web, you can’t know.
Take Ignighter, a dating website set up by three Jewish guys in the US, with a culturally targeted model: Instead of a boy and girl going out on a date, 10 guys and 10 girls would go out together on organized group dates.
Ignighter got 50,000 registrations4, but it wasn’t enough to reach critical mass, and the founders considered abandoning their business. Then, they noticed they were getting as many sign-ups a week from India as they did in a year in the USA.
Perhaps the group-dating model that they anticipated for Jewish families really resonated with conservative Muslim, Hindu and Sikh families in India, Singapore and Malaysia, so they rebranded as Stepout, relocated to Mumbai and became India’s biggest dating website.
I’d bet that if you had asked them when they set up Ignighter, “What’s your India strategy?,” they would have said something like, “We don’t have one. We don’t care. We are focusing on middle-class New York Jewish people.” It’s also worth noting that if Ignighter had been an iOS app, they would not have been able to pivot their business, because iOS use in subcontinental Asia is very low. The product was discovered by their new customers precisely because they were on the web, accessible to everybody, regardless of device, operating system or network conditions.
You can’t predict the unpredictable, but, like, whatever, now I’m making a prediction: Many of your next customers will come from the area circled below, if only because there are more human beings alive in this circle5 than in the world outside the circle.
Asia has 4 billion people right now (out of 7.2 billion globally). The United Nations predicts8 that, by 2050, the population of Asia will reach 5 billion. By 2050, the population of Africa is set to double to 2 billion, and by 2100 (which is a bit late for me and perhaps for you), the population of Africa alone will reach 5 billion.
By 2100, the population of the planet will stabilize at 11 billion, and 50% of the world will live in just these 10 countries highlighted below, only one of which is in what we now consider the developed West.
Over the same period, the population of the West will actually drop, due to declining birthrates. So, it makes sense to target people as your next customers in countries where the population is growing.
But it’s not only a question of head counts. Many of the developing economies are growing extraordinarily fast, with a rapidly expanding middle class that has increasing disposable income. Let’s examine some of those countries now, concentrating for the moment on Asia.
China has 1.4 billion people. Its economy saw 6.6% growth11 in gross domestic product (GDP). I don’t know the GDP growth of your country, but I’d imagine that your politicians would love to have 6.6% GDP growth.
So much money changes hands in China. For comparison, in 2014, on Black Friday and Cyber Monday combined, $2.9 billion changed hands in the US. In the same year in China, on Singles’ Day (November 11th), $9.2 billion changed hands. It is predicted that, by 2019, e-commerce will be worth $1 trillion a year12 in China.
Indonesia has 258 million people and GDP growth of 4.9%. 75% of mobile phone subscribers are on 2G or EDGE networks, and half of all smartphone users say they experience network problems daily13. This is very much tied to geography: Indonesia consists of thousands of islands. In 2015, GBG Indonesia wrote14:
Indonesia is still predominantly a 2G market, and leapfrogging from there to 4G is a huge task that will require substantial investment in towers and equipment.
Nevertheless, for the Indonesian website BliBli, more than one third of its 2.5 million customers live in rural areas15, and Indonesia is the social media capital of the world16, being third most talkative on Twitter and fourth most on Facebook.
Southeast Asia is the fastest-growing Internet market in the world, and Indonesia is the fastest-growing country. The Internet economy in Southeast Asia will reach $200 billion by 2025 — 6.5 times what it is now, as estimated by Google and Temasek17 in 2016.
Myanmar has 57 million people and 8.1% GDP growth, largely fuelled by the government’s democratic reforms (or, perhaps more accurately, reforms designed to appear democratic). One of the reasons for this growth is that five years ago a SIM card cost $200018 in Myanmar; last August it went down to $1.50, which, of course, is fuelling growth in mobile phones.
As I write this, I’m sitting in a coffee shop in Kochi, Kerala State, India. The country has a population of 1.3 billion people, with a GDP growth of 7.6%. Boston Consulting Group estimates19 that the number of Internet users will double from 190 to 400 million by 2018 and that the web will contribute $200 billion to India’s GDP by 2020. Indian (and Indonesian) smartphone users are particularly sensitive about data consumption; 36% of Asia-Pacific20 smartphone users block advertisements, whereas two thirds do in India and Indonesia.
Apart from China (because of its now-abandoned policy of one child per family), the populations of these nations are young. Of course, young people are always on their smartphones, looking for Pokémon, taking selfies, Instagramming their coffee: A young population is an Internet-savvy population.
56% of people in emerging economies see themselves first and foremost as global citizens, rather than national citizens, the BBC reported21 last year. This is particularly pronounced in Nigeria, China, Peru and India.
And, of course, the people coming to the web are coming on smartphones. According to MIT22, of the 690 million Internet users in China, 620 million go online with a mobile device.
There is a more profound commonality as well. Below are the top-10 domains that Opera Mini users in the US visited in September 2016. (These figures are from Opera’s internal reporting tools; I was Deputy CTO of Opera until November 2016. Now I have no relationship with Opera.)
The top-10 handsets used to view those websites were:
The top-10 domains visited in Indonesia during the same period were:
Note the commonalities — keeping in touch with friends and family; search; video; uncensored news and information (Wikipedia) — as well as the local variations.
The top-10 handsets in Indonesia are lower-end than those used in the US:
In Nigeria last month, almost the same kinds of websites were viewed — again, with local variations; Nigeria is football-crazy, hence goal.com.
But the top-10 handsets in Nigeria are lower-end than in Indonesia.
This suggests that across the world, regardless of disposable income, regardless of hardware or network speed, people want to consume the same kinds of goods and services. And if your websites are made for the whole world, not just the wealthy Western world, then the next 4 billion people might consume the stuff that your organization makes.
In Browserland and Web Standards World (not theme parks — yet — but wouldn’t they be great ones?), we are trying to make better standards and better browsers to make using the web a better experience for the next 4 billion people.
Let’s take a quick tour of some of the stuff we’ve been working on. My goal isn’t to give you a tutorial on these technologies (plenty of those are available elsewhere), but to explain why we’ve developed these standards, and to show that the use cases they address are not just nice-to-haves for Horatio and his Western colleagues, but that they address important needs for the rest of the world, too.
We know that end users love to install apps to the home screen, each app with its own icon that they can tickle to life with a digit. But native apps work only on single platforms; they are generally only available from a walled-garden app store (with a 30% fee going to the gatekeeper); and they’re often heavy downloads. Facebook found23 that a typical 20 MB Android application package (APK) takes more than 30 minutes to download over a 2G connection, and that download often fails because of flaky networks.
Most installed apps are not used. According to Google24, the average smartphone user has 36 apps on their device. One in four are used daily, and one in four are never used. But we know that people in emerging markets use cheaper phones, and cheaper phones have less storage. Even now, 25% of all new Android shipments go out with only 512 MB of RAM and maybe only 8 GB of storage.
The World Bank asked people across 30 nations in Africa what they use their phone for.
Unsurprisingly, phone calls and text messages were the primary use case, followed by missed calls. Across Africa and Asia, businesses encourage potential customers to send them a “missed call” — that is, to dial their number and then hang up. The business then phones the customer back, so that the cost of the contact is borne by the business, not the customer.
Here’s an example I photographed today in Kochi, India:
The next most popular uses of mobile phones in Africa are games, music and transferring airtime. (In many countries, carrying cash can be a little risky, and many people don’t have access to banks, so people pay for goods and services by transferring airtime from their phone to the vendor’s phone.)
Then you have photos and videos, etc. Like everybody else, they are unlikely to delete video of their family or their favourite MP3s to make room for your e-commerce app. Birdly29, in a blog post explaining why you shouldn’t bother creating a mobile app, said, “We didn’t stand a chance as we were fighting with both our competitors and other apps for a few more MB of room inside people’s phone.”
Wouldn’t it be super and gorgeous if we could offer the user experience of native apps with the reach of the web? Well, dear reader, now we can!
Progressive web apps (PWAs) allow users to “install” your app to their home screen (on supporting devices and browsers). Your PWA can launch in full-screen, portrait or landscape mode, just like a native app. But, crucially, your app lives on the web — it’s fully part of the web, and like the web, it’s based on the principles of progressive enhancement.
Recently, my ex-Opera colleague Andreas Bovens and I interviewed a Nigerian and a Kenyan developer who made some of the earliest progressive web apps. Constance Okoghenun said30:
Nigerians are extremely data sensitive. People side-load apps and other content from third parties [or via] Xender. With PWAs […], without the download overhead of native apps […] developers in Nigeria can now give a great and up-to-date experience to their users.
Kenyan developer Eugene Mutai said:
[PWAs] may solve problems that make the daily usage of native mobile applications in Africa a challenge; for example, the size of apps and the requirement of new downloads every time they are updated, among many others.
We are seeing the best PWAs come out of India, Nigeria, Kenya and Indonesia. Let’s look briefly at why PWAs are particularly well suited to emerging economies.
With a PWA, all the user downloads is a manifest file, which is a small text file with JSON information. You link to the manifest file from the head element in your HTML document, and browsers that don’t understand it just ignore it and show a normal website. This is because HTML is fault-tolerant. The vital point here is that everybody gets something, and nobody gets a worse experience.
(Making a manifest file is easy, and a lot of the information required is probably already in your head elements in proprietary meta tags. So, Stuart Langridge and I wrote a manifest generator31: Give it a URL, and it will spider your website and write a manifest file for you to download or copy and paste.)
The manifest just gives the browser the information it needs to install the PWA (an icon for the home screen, the name of the app and the URL to go to when it launches) and is, therefore, very small. The actual app lives on your server. This means there is no lag with distributing updates. Usually, users receive notifications saying that new versions of their native apps have been released, but weeks might go by before they go to a coffee shop with free Wi-Fi to install the updates, or they might never download the updates at all — disastrous if one of the updates corrects a security flaw. But because PWAs are web apps, when you make an update, the next time the user starts the app on their device, they will automatically get the newest version.
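To make this concrete, here is a minimal sketch of the two pieces involved; the file name, app name and icon path are illustrative assumptions, not prescriptions. The link element goes in your HTML head:

<link rel="manifest" href="/manifest.json">

And manifest.json itself could be as small as this:

{
  "name": "Example Shop",
  "short_name": "Shop",
  "start_url": "/",
  "display": "standalone",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}

Browsers that support PWAs read this and can offer to add the app to the home screen; everything else simply ignores it.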
Crucially, a PWA is just a normal website on Safari, Windows phones and Opera Mini. Nobody is locked out — that’s why they are called progressive web apps; they are progressively enhanced websites.
Flipkart is a major e-commerce website in India (competing with Amazon). A couple of years ago, they decided to abandon their mobile website and redirect users to the app stores to download native apps. Only 4%32 of people who actually took the trouble to type the website’s URL (and, therefore, presumably were actively shopping) ever downloaded the app. With 96% of users failing to download the apps, Flipkart reversed its policy and replaced its website with a progressive web app, called Flipkart Lite. Since its launch, Flipkart reports 40% returning visitors week over week, 63% increased conversions from home-screen visits, and a tripling of the time that visitors browse the website.
Flipkart’s commitment to PWAs was expressed by Amar Nagaram, of Flipkart engineering, at its PWA summit in Bangalore, where I spoke:
We want Flipkart Lite available on every phone over every flaky network in India.
One great thing about a PWA is that, like any other secure website, it works offline, using the magic of service workers33. This further closes the gap between native and web apps; an offline experience for the web is (I hate to use the phrase) a “paradigm shift.” Until now, when your web browser is disconnected from the Internet, you get a boring browser-derived “Sorry” message. Now, with service workers sitting between a page and the network, you can give visitors a meaningful offline experience. For example, when the user goes to your website for the first time, you can download images of the 10 most popular products to the cache, and upon subsequent offline visits, you could say, “I’m sorry. You are offline, but you can browse our top products and press ‘Buy,’ and we will background sync later.” The offline experience you provide will obviously depend on what your app does, but service workers give you all the flexibility you need.
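To make that flow concrete, here is a heavily simplified service worker sketch; the cache name, cached URLs and offline page are illustrative assumptions, not anything prescribed by the specification. A page would opt in with navigator.serviceWorker.register('/sw.js').

// sw.js
var CACHE = 'store-v1';

self.addEventListener('install', function (event) {
  // On first visit, pre-cache an offline page and a few popular product images
  event.waitUntil(
    caches.open(CACHE).then(function (cache) {
      return cache.addAll([
        '/offline.html',
        '/img/top-product-1.jpg',
        '/img/top-product-2.jpg'
      ]);
    })
  );
});

self.addEventListener('fetch', function (event) {
  // Answer from the cache when possible, fall back to the network,
  // and show the offline page if both fail
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request).catch(function () {
        return caches.match('/offline.html');
      });
    })
  );
});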
Additionally, service workers give you further capabilities, such as background sync and push notifications.
Currently, PWAs are supported on Chrome for Android, Microsoft Edge and Opera for Android. (Opera may have a small market share where you are, but it’s long been a significant player in the developing world.) Mozilla has signalled that it’s implementing PWAs on Firefox for Android. Safari for iOS has a non-standard mechanism for adding websites to the home screen but as of yet doesn’t support service workers.
To recap, the advantages of a PWA are installability to the home screen without an app store, a tiny initial download, automatic updates, offline support via service workers, and progressive enhancement, so that nobody is locked out.
If you want to see some real PWAs, check out the community-curated website (itself a PWA) PWA.Rocks34.
Around 2011, at any conference I went to, everybody would tell me about the responsive images problem: How can we send “Retina-quality” images (much bigger in file size) to devices that can display them properly and send smaller images to non-Retina devices? At the time, we couldn’t; the venerable img element can point to only one source image, and that’s the only one that could be sent to all devices.
But solving this problem is vital if we want to save bandwidth for consumers whose devices aren’t Retina, and also to save battery life; sending unnecessarily large images and asking the browser to resize them with the conventional img {max-width:100%} trick requires a lot of CPU cycles, which causes delays and drains the battery. As Tim Kadlec wrote35:
On the test page with 6x images (not unusual at the moment on many responsive sites), the combination of resizes and decodes added an additional 278ms in Chrome and 95.17ms in IE (perhaps more …) to the time it took to display those 10 images.
In many parts of the world, battery life is a considerable problem. If you have a two-hour commute across Lagos or Nairobi to get to work, and a two-hour commute back, you wouldn’t be able to recharge your device, which you’d need to do if you wanted to make phone calls.
For instance, power is in short supply in India. According to the Federation of Indian Chambers of Commerce & Industry36, in 2012 (the last reliable figures I could find):
A third of Indian citizens, especially in the rural parts of the country, remain without power, as do 6% of the urban population. During peak hours, the shortage was 9.8%.
Battery life is so important that in India it has become a secondary industry unto itself. Alok Gupta, managing director and chief executive of The Mobile Store, India’s largest mobile phone retailer, recalls in October 201537:
Nearly 30 per cent of our annual smartphone unit sales have power banks bundled in. Two years ago, less than 1 per cent of our annual smartphone sales had power banks bundled in.
So (spurred on by a slight post-conference-season hangover), in December 2011, I wrote a blog post38 with a straw man suggestion for a new HTML picture element to solve the problem. My idea wasn’t fully thought out and wouldn’t have worked properly in its initial incarnation (damn hangovers), but cleverer people than me — Yoav Weiss (now at Akamai), Mat Marquis of Bocoup, Tab Atkins of Google, Marcos Cáceres of Mozilla, Simon Pieters of Opera — saw the utility in it and worked to make a proper specification. It was implemented, and now it is in every modern browser — even Safari.
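By way of a taste (the file names, widths and breakpoint below are illustrative, not from any particular site), the srcset and sizes attributes let the browser pick the most appropriate file for each device:

<img src="product-400.jpg"
     srcset="product-400.jpg 400w,
             product-800.jpg 800w,
             product-1600.jpg 1600w"
     sizes="(min-width: 60em) 33vw, 100vw"
     alt="Product photo">

A narrow, non-Retina screen can download the 400-pixel-wide file, while a high-density desktop display gets the 1600-pixel one, with no CPU-hungry resizing in between.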
This isn’t the place to talk about the nuts and bolts of HTML responsive images39, but if you use them, you’ll get significant savings on your images.
I did a talk about responsive images in Bristol last June, and the next day a developer in the audience named Mike Babb used the techniques and reduced his web page’s weight by 70%40. This is important because the average web page (page, not full app or website) is 2.3 MB, of which 1.6 MB are images41. If you can save data, your website will be faster.
Mike saved 70%, and that 70% matters, because not everybody is like us and has a big data plan. In Germany, buying an entry-level mobile data plan of 500 MB per month takes one hour of work at minimum wage. In the US, it takes six hours, and in Brazil, it takes 34 hours of work42.
If your bloated images are eating up people’s data plan, then you are literally making them work more hours — and that is hugely discourteous. As well as being rude, it’s bad business: They simply won’t go back to your website. (If you’d like to know more about the cost of accessing your website, check Tim Kadlec’s utility What Does My Site Cost?45)
In this article, we’ve explored where the next 4 billion connected people will come from, as well as some of the innovations that the standards community has made to better serve them. In the next part, we’ll look at some of the demand-side problems that prevent people from accessing the web easily and what can be done to overcome them.
The population projections in this article are originally from the United Nations, but I got them from the excellent, humane documentary named Don’t Panic: The Facts About Population46 by Hans Rosling, a hero of mine who died while I was writing this article. Thanks to Clara at Damcho Studio47 for helping to prepare this article.
(vf, al, il)
A modern, extensible file manager for power users. For Windows, OS X and Linux.
It’s almost time to leave winter behind us here in the Northern Hemisphere. Most of the time, the weather can’t quite make up its mind, and so the days pass by with half of the sky sunny while the other half gray. Nature usually tends to have a strong impact on my mood, and so these days I feel like I’m literally in a gray zone — between winter and spring.
I’m not sure about you, but with springtime lurking around the corner, my need for extra inspiration is even bigger. So, I hope that this month’s set will give you just that spark you need to cheer you up and boost your creativity.
The style and the little details such as the bottom band on his pants, his shirts and his handbag are the cherries on the cake.
Exciting times ahead for cycling fans. This nice design, however, was to celebrate Chris Froome winning Le Tour for the third time. Love how the body is arched over.
Illustration for Lonely Planet’s book. An illustration style reminiscent of mid-century hand-painted billboards.
Probably not your average color combo but it sure works.
Part of a murals series for a company that is leading the way in smart home technology. They turned out wonderful. See them all14.
Cool idea to visualize a campaign to ‘buy local’ this way. Creative! Also digging those grunge effects.
An editorial illustration that accompanies an article about the history of the British Pound.
I have seen this technique of combining real products with simple shape paper-cut elements a couple of times already and the result is really beautiful.
In order to celebrate the 50th Anniversary of the original Star Trek television series on CBS, the Philadelphian design studio ‘The Heads Of State’ created these stamps. Large view here23.
An illustration for Umami, a chain of restaurants based in Zagreb, Croatia. It was made as a part of a poster series which takes on curiosity and exploration of different tastes and flavors.
The hair, the skirt but especially her stockings are what makes this illustration what it is. Such class! Look at that shadow.
These vegetables are beautiful. They have a certain edge. They are items for new packaging.
Wonderful color choices. Love this special 2D approach where things are viewed from 2 different angles, top and front. Makes you hardly wait for summer to arrive!
Artwork that shows us a beautiful way of how the illustrator’s imagination thinks and works.
I need my weekly dose to keep me sane. Interesting bright colors and cool pattern details.
Talk about making the most of a limited set of colors. The neon lightbulbs are so well done. Incredible piece of work. Be sure to watch the process video38!
Great textures and atmosphere. Just by looking at it you can almost feel the cold.
I have been enjoying Tycho’s latest album and wanted to go see Scott live in Brussels but it’s sold out already. This is the world tour poster.
Sometimes you don’t need much to get an interesting result. Look up! In his series, Alexander Missen captured many fundamental symbols of America.
Love the illustration style. Inspiring color choices, too.
So simple, yet so amazing! Love deconstructing this. So much to learn.
Great personal project by Peter Tarka. Such great details and attention to colors and backdrops in this series. There are a few more51 to check out.
Very nice how the objects are placed so you are really looking inside.
Pictures like this make you forget the cold. Some nice light and colors shot in Ringerike, Norway.
A poster for a bicycle Retro Cruise in Kyiv. Original style! Patterns on the socks and jacket are simply fabulous.
Quite busy, I admit, but I admire how everything flows into each other. Well done, especially since it’s not that easy using only a limited amount of colors.
This cover for Genome is just absolutely brilliant! So fitting, and just look at those colors. So yummy!
I’ve heard so many good things about Iceland and by looking at this picture they all must be true.
Big neon sign for Teddy’s Nacho Royale, a burrito joint on the campus of a large social media company. Had no clue that they could create such attractive neon signs.
The vintage atmosphere is lovely. Makes me think of a Jules Verne novel.
The artist’s style is obviously inspired by 1940s comic book art in this piece, as well as the Russian avant-garde movement and printed materials from the 1950s/60s.
The work of Christopher DeLorenzo is mostly black and white, and consists of lines with only a few filled areas.
An illustration of Eustace Tilley, the iconic dandy of New York.
Lovely colors and admiring the simplicity of the characters.
Cover illustration of ski resort Val Thorens for the EasyJet magazine Traveller.
The witty style of Jean-Michel Tixier is one of my top favorites. It was inspired by traditional French cartoons.
The geometric style of Matthew Lyons. I first discovered his work way back in 201082 thanks to Scott Hansen.
A nice combination of colors, don’t you think?
I always admired illustrators that are working for the fashion industry. The faces are so good! Those are difficult to get right.
A second one from Caroline. Just look at that hair! Those are some mad skills! The blush, the eyes, the lips…
Stunning capture! It’s like a dream.
Memories from my childhood. The arcade park! Tasteful colors.
Wonderful to see the design system evolve with each location ISO50 visits while he’s on tour.
Such a special style. Love how the ‘hair’ is done with all the hidden elements.
Lovely example that shows what can be done when photographs are combined with custom lettering.
Julien Pacaud is a French illustrator with a very particular style that you could describe as retro-futuristic.
The shapes of the figures are super cute.
Happy from just looking at this. A lot of items, but still all very balanced. Illustration for Swissmiss’s Creative Mornings website.
It’s quite impressive when you realize that most of this is made out of rectangles.
A bit of motivation can’t hurt. Love the type used here and how the buildings were made.
As web developers, we all approach our work very differently. And even when you take a look at yourself, you’ll notice that the way you do your work does vary all the time. I, for example, have not reported a single bug to a browser vendor in the past year, despite having stumbled over a couple. I was just too lazy to write them up, report them, write a test case and care about follow-up comments.
This week, however, when integrating the Internationalization API for dates and times, I noticed a couple of inconsistencies and specification violations in several browsers, and I reported them. It took me one hour, but now browser vendors can at least fix these bugs. Today, I filed two new issues, because I’ve become more aware again of things that work in one browser but not in others. I think it’s important to change the way we work from time to time. It’s as easy as caring more about the issues we face and reporting them back.
Why write two different selectors when a single :not(:last-of-type) selector can do the job? Timothy B. Smith explains how this little trick works15.
And with that, I’ll close for this week. If you like what I write each week, please support me with a donation21 or share this resource with other people. You can learn more about the costs of the project here22. It’s available via email, RSS and online.
— Anselm
Playing with CSS Grids * Numscrubber.js * Scrimba * Animista * Service Worker Toolchain * hyperHTML * colorfonts.wtf * 3…
Douglas Crockford famously declared browsers to be “the most hostile software engineering environment imaginable,” and that wasn’t hyperbole. Ensuring that our websites work across a myriad of different devices, screen sizes and browsers our users depend on to access the web is a tall order, but it’s necessary. If our websites don’t enable users to accomplish the key tasks they come to do, we’ve failed them.
We should do everything in our power to ensure our websites function under even the harshest of scenarios, but at the same time, we can’t expect our users to have the exact same experience in every browser, on every device. Yahoo realized this more than a decade ago and made it a central concept in its “Graded Browser Support1” strategy:
Support does not mean that everybody gets the same thing. Expecting two users using different browser software to have an identical experience fails to embrace or acknowledge the heterogeneous essence of the Web. In fact, requiring the same experience for all users creates an artificial barrier to participation. Availability and accessibility of content should be our key priority.
And that was a few years before the iPhone was introduced!
Providing alternate experience pathways for our core functionality should be a no-brainer, but when it comes to implementing stuff we’d rather not think about, we often reach for the simplest drop-in solution, despite the potential negative impact it could have on our business.
Consider the EU’s “cookie law.”2 If you’re unfamiliar, this somewhat contentious law3 is privacy legislation that requires websites to obtain consent from visitors before storing or retrieving information from their device. We call it the cookie law, but the legislation also applies to web storage4, IndexedDB5 and other client-side data storage and retrieval APIs.
Compliance with this law is achieved by informing visitors about the cookies (or other client-side data) your website uses and obtaining their consent before storing or retrieving that data.
If you operate a website aimed at folks living in the EU and fail to do this, you could be subject to a substantial fine. You could even open yourself up to a lawsuit.
If you’ve had to deal with the EU cookie law before, you’re probably keenly aware that a ton of “solutions” are available to provide compliance. Those quotation marks are fully intentional because nearly every one I found — including the one provided by the EU6 itself — is a drop-in JavaScript file that enables compliance. If we’re talking about the letter of the law, however, they don’t actually comply. The problem is that, as awesome and comprehensive as some of these solutions are, we can never be guaranteed that our JavaScript programs will actually run7. In order to truly comply with the letter of the law, we should provide a fallback version of the utility — just in case. Most people will never see it, but at least we know we’re covered if something goes wrong.
I stumbled into this morass while building the 10k Apart contest website8. We weren’t using cookies for much on the website — mainly analytics and vote-tracking — but we were using the Web Storage API9 to speed up the performance of the website and to save form data temporarily while folks were filling out the form. Because the contest was open to folks who live in the EU, we needed to abide by the cookie law. None of the solutions I found actually complied with the law in either spirit or reality. (The notable exception is WordPress’ EU Cookie Law10 plugin, which works both with and without JavaScript, but the contest website wasn’t built in WordPress or even PHP.) So, I opted to roll my own robust solution.
I’m a big fan of using interface experience (IX) maps11 to diagram functionality. I find their simple nature easy to understand and to tweak as I increase the fidelity of an experience. For this feature, I started with a (relatively) simple IX map that diagrammed what would happen when a user requests a page on the website.
This IX map outlines several potential experiences that vary based on the user’s choice and feature availability. I’ll walk through the ideal scenario first:
1. The user requests a page, and, seeing no consent cookie, the server injects the cookie banner markup into the response.
2. The user clicks the banner’s button; JavaScript sets the approves_cookies cookie and closes the banner.
3. On subsequent requests, the server sees the approves_cookies cookie and does not inject the banner code. JavaScript sees the cookie and enables the cookie and web storage code.
For the vast majority of users, this is the experience they’ll get, and that’s awesome. That said, however, we can never be 100% guaranteed our client-side JavaScript code will run, so we need a backup plan. Here’s the fallback experience:
1. The user requests a page, and, seeing no consent cookie, the server injects the cookie banner markup into the response.
2. The user submits the banner’s form; the server sets the approves_cookies cookie before redirecting the user back to the page they were on.
3. On subsequent requests, the server sees the approves_cookies cookie and does not inject the banner code.
Not bad. There’s an extra roundtrip to the server, but it’s a quick one, and, more importantly, it provides a foolproof fallback in the absence of our preferred JavaScript-driven option. True, it could fall victim to a networking issue, but there’s not much we can do to mitigate that without JavaScript in play.
Speaking of mitigating networking issues, the 10k Apart contest website uses a service worker14 to do some pretty aggressive caching; the service worker intercepts any page request and supplies a cached version if one exists. That could result in users getting a copy of the page with the banner still in it, even if they’ve already agreed to allow cookies. Time to update the IX map.
This is one of the reasons I like IX maps so much: They are really easy to generate and simple to update when you want to add features or handle more scenarios. With a few adjustments in place, I can account for the scenario in which a stale page includes the banner unnecessarily and have JavaScript remove it.
With this plan in place, it was time to implement it.
10k Apart’s back end is written in Node.js17 and uses Express18. I’m not going to get into the nitty-gritty of our installation and configuration, but I do want to talk about how I implemented this feature. First off, I opted to use Express’ cookie-parser19 middleware to let me get and set the cookie.
// enable cookie-parser for Express
var cookieParser = require('cookie-parser');
app.use(cookieParser());
Once that was set up, I created my own custom Express middleware20 that would intercept requests and check for the approves_cookies cookie:
var checkCookie = function(req, res, next) {
  res.locals.approves_cookies = ( req.cookies['approves_cookies'] === 'yes' );
  res.locals.current_url = req.url || '/';
  next();
};
This code establishes a middleware function named checkCookie(). All Express middleware gets access to the request (req), the response (res) and the next middleware function (next), so you’ll see those accounted for as the three arguments to that function. Then, within the function, I am modifying the response object to include two local variables (res.locals) to capture whether the cookie has already been set (res.locals.approves_cookies) and the currently requested URL (res.locals.current_url). Then, I call the next middleware function.
With that written, I can include this middleware in Express:
app.use(checkCookie);
All of the templates for the website are Mustache21 files, and Express automatically pipes res.locals into those templates. Knowing that, I created a Mustache partial22 to handle the banner:
{{^approves_cookies}}
<div role="alert">
  <form action="/cookies-ok" method="post">
    <input type="hidden" name="redirect_to" value="{{current_url}}">
    <p>This site uses cookies for analytics and to track voting. If you're interested, more details can be found in <a href="{{privacy_url}}#maincookiessimilartechnologiesmodule">our cookie policy</a>.</p>
    <button type="submit">I'm cool with that</button>
  </form>
</div>
{{/approves_cookies}}
This template uses an inverted section23 that only renders the div when approves_cookies is false. Within that markup, you can also see the current_url getting piped into a hidden input to indicate where a user should be redirected if the form method of setting the cookie is used. You remembered: the fallback.
Speaking of the fallback, since we have one, we also need to handle that on the server side. Here’s the Node.js code for that:
var affirmCookies = function (req, res) {
  if ( ! req.cookies['approves_cookies'] ) {
    res.cookie('approves_cookies', 'yes', {
      secure: true,
      maxAge: ( 365 * 24 * 60 * 60 * 1000 ) // 1 year (Express expects milliseconds)
    });
  }
  res.redirect(req.body.redirect_to);
};

app.post('/cookies-ok', affirmCookies);
This ensures that if the form is submitted, Express will respond by setting the approves_cookies cookie (if it’s not already set) and then redirecting the user to the page they were on. Taken altogether, this gives us a solid baseline experience for every user.
Now, it’s worth noting that none of this code is going to be useful to you if your projects don’t involve the specific stack I was working with on this project (Node.js, Express, Mustache). That said, the logic I’ve outlined here and in the IX map is portable to pretty much any language or framework you happen to know and love.
OK, let’s switch gears and work some magic on the front end.
When JavaScript is available and running properly, we’ll want to take full advantage of it, but it doesn’t make sense to run any code against the banner if it doesn’t exist, so first things first: I should check to see whether the banner is even in the page.
var $cookie_banner = document.getElementById('cookie-banner');
if ( $cookie_banner ) {
  // actual code will go here
}
In order to streamline the application logic, I’m going to add another conditional within to check for the approves_cookies cookie. I know from my second pass on the IX map there’s an outside chance that the banner might be served up by my service worker even if the approves_cookies cookie exists, so checking for the cookie early lets me run only the bit of JavaScript that removes the banner. But before I jump into all of that, I’ll create a function I can call in any of my code to let me know whether the user has agreed to let me cookie them:
function cookiesApproved(){
  return document.cookie.indexOf('approves_cookies') > -1;
}
I need this check in multiple places throughout my JavaScript, so it makes sense to break it out into a separate function. Now, let’s revisit my banner-handling logic:
var $cookie_banner = document.getElementById('cookie-banner');
if ( $cookie_banner ) {
  // banner exists but cookie is set
  if ( cookiesApproved() ) {
    // hide the banner immediately!
  }
  // cookie has not been set
  else {
    // add the logic to set the cookie
    // and close the banner
  }
}
Setting cookies in JavaScript is a little convoluted because you need to set it as a string, but it’s not too ghastly. I broke out the process into its own function so that I could set it as an event handler on the form:
function approveCookies( e ) {
  // prevent the form from submitting
  e.preventDefault();

  var cookie,               // placeholder for the cookie
      expires = new Date(); // start building expiry date

  // expire in one year
  expires.setFullYear( expires.getFullYear() + 1 );

  // build the cookie
  cookie = [
    'approves_cookies=yes',
    'expires=' + expires.toUTCString(),
    'domain=' + window.location.hostname,
    window.location.protocol == 'https:' ? 'secure' : ''
  ];

  // set it
  document.cookie = cookie.join('; ');

  // close up the banner
  closeCookieBanner();

  // return
  return false;
};

// find the form inside the banner
var $form = $cookie_banner.getElementsByTagName('form')[0];
// hijack the submit event
$form.addEventListener( 'submit', approveCookies, false );
The comments in the code should make it pretty clear, but just in case, here’s what I’m doing:
1. Grab the event object (e) and cancel the form’s default action using e.preventDefault().
2. Use a Date object to construct a date one year out.
3. Assemble the cookie from the approves_cookies value, the expiry date, the domain the cookie is bound to, and whether the cookie should be secure (so I can test locally).
4. Set document.cookie equal to the assembled cookie string.
5. Call a function — closeCookieBanner() — to close the banner (which I will cover in a moment).
With that in place, I can define closeCookieBanner() to handle, well, closing up the banner. There are actually two instances in which I need this functionality: after setting the cookie (as we just saw) and if the service worker serves up a stale page that still has the banner in it. Even though each requires roughly the same functionality, I want to make the stale-page cleanup version a little more aggressive. Here’s the code:
function closeCookieBanner( immediate ) {
  // How fast to close? Animation takes .5s
  var close_speed = immediate ? 0 : 600;

  // remove
  window.setTimeout(function(){
    $cookie_banner.parentNode.removeChild( $cookie_banner );
    // remove the DOM reference
    $cookie_banner = null;
  }, close_speed);

  // animate closed
  if ( ! immediate ) {
    $cookie_banner.className = 'closing';
  }
}
This function takes a single optional argument. If true (or anything “truthy”24) is passed in, the banner is immediately removed from the page (and its reference is deleted). If no argument is passed in, that doesn’t happen for 0.6 seconds, which is 0.1 seconds after the animation finishes up (we’ll get to the animation momentarily). The class change triggers that animation.
You already saw one instance of this function referenced in the previous code block. Here it is in the cached template branch of the conditional you saw earlier:
…
// banner exists but cookie is set
if ( cookiesApproved() ) {
  // close immediately
  closeCookieBanner( true );
}
…
Because I brought up animations, I’ll discuss the CSS I’m using for the cookie banner component, too. Like most implementations of cookie notices, I opted for a visual full-width banner. On small screens, I wanted the banner to appear above the content and push it down the page. On larger screens I opted to affix it to the top of the viewport because it would not obstruct reading to nearly the same degree as it would on a small screen. Accomplishing this involved very little code:
#cookie-banner {
  background: #000;
  color: #fff;
  font-size: .875rem;
  text-align: center;
}
@media (min-width: 60em) {
  #cookie-banner {
    position: fixed;
    top: 0;
    left: 0;
    right: 0;
    z-index: 1000;
  }
}
Using the browser’s default styles, the cookie banner already displays block, so I didn’t really need to do much apart from set some basic text styles and colors. For the large screen (the “full-screen” version comes in at 60 ems), I affix it to the top of the screen using position: fixed, with a top offset of 0. Setting its left and right offsets to 0 ensures it will always take up the full width of the viewport. I also set the z-index quite high so it sits on top of everything else in the stack.
Here’s the result:
Once the basic design was there, I took another pass to spice it up a bit. I decided to have the banner animate in and out using CSS. First things first: I created two animations. Initially, I tried to run a single animation in two directions for each state (opening and closing) but ran into problems triggering the reversal — you might be better at CSS animations than I am, so feel free to give it a shot. In the end, I also decided to tweak the two animations to be slightly different, so I’m fine with having two of them:
@keyframes cookie-banner {
  0% { max-height: 0; }
  100% { max-height: 20em; }
}
@keyframes cookie-banner-reverse {
  0% { max-height: 20em; }
  100% { max-height: 0; display: none; }
}
Not knowing how tall the banner would be (this is responsive design, after all), I needed it to animate to and from a height of auto. Thankfully, Nikita Vasilyev25 published a fantastic overview of how to transition values to and from auto26 a few years back. In short, animate max-height instead. The only thing to keep in mind is that the non-zero max-height value you are transitioning to and from needs to be larger than the element will ever get, and it will also directly affect the speed of the animation. I found 20 ems to be more than adequate for this use case, but your project may require a different value.
It’s also worth noting that I used display: none at the conclusion of my cookie-banner-reverse animation (the closing one) to ensure the banner becomes unreachable to users of assistive technology such as screen readers. It’s probably unnecessary, but I did it as a failsafe just in case something happens and JavaScript doesn’t remove the banner from the DOM.
Wiring it up required only a few minor tweaks to the CSS:
#cookie-banner {
  …
  box-sizing: border-box;
  overflow: hidden;
  animation: cookie-banner 1s 1s linear forwards;
}
#cookie-banner.closing {
  animation: cookie-banner-reverse .5s linear forwards;
}
This assigned the two animations to the two different banner states: The opening and resting state, cookie-banner, runs for one second after a one-second delay; the closing state, cookie-banner-reverse, runs for only half a second with no delay. I am using a class of closing, set via the JavaScript I showed earlier, to trigger the state change. Just for completeness, I’ll note that this code also stabilizes the dimensions of the banner with box-sizing: border-box and keeps the contents from spilling out of the banner using overflow: hidden.
One last bit of CSS tweaking and we’re done. On small screens, I’m leaving a margin between the cookie notice (#cookie-banner) and the page header (.banner). I want that to go away when the banner collapses, even if the cookie notice is not removed from the DOM. I can accomplish that with an adjacent-sibling selector:
#cookie-banner + .banner {
  transition: margin-top .5s;
}
#cookie-banner.closing + .banner {
  margin-top: 0;
}
It’s worth noting that I am setting the top margin on every element but the first one, using Heydon Pickering’s clever “lobotomized owl27” selector. So, the transition of margin-top on .banner will be from a specific value (in my case, 1.375rem) to 0. With this code in place, the top margin will collapse over the same duration as the one used for the closing animation of the cookie banner and will be triggered by the very same class addition.
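For reference, the lobotomized owl selector looks like this; the margin value mirrors the one mentioned above and is only illustrative:

* + * {
  margin-top: 1.375rem;
}

It matches any element that directly follows another element, so every child except the first of its container receives the top margin.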
What I like about this approach is that it is fairly simple. It took only about an hour or two to research and implement, and it checks all of the compliance boxes with respect to the EU law. It has minimal dependencies, offers several fallback options, cleans up after itself and is a relatively back-end-agnostic pattern.
When tasked with adding features we may not like — and, yes, I’d count a persistent nagging banner as one of those features — it’s often tempting to throw some code at it to get it done and over with. JavaScript is often a handy tool to accomplish that, especially because the logic can often be self-contained in an external script, configured and forgotten. But there’s a risk in that approach: JavaScript is never guaranteed28. If the feature is “nice to have,” you might be able to get away with it, but it’s probably not a good idea to play fast and loose with a legal mandate like this. Taking a few minutes to step back and explore how the feature can be implemented with minimal effort on all fronts will pay dividends down the road. Believe me29.
(rb, vf, il, al)
Front page image credit: Pexels30.
Poxi is a modern, hackable pixel art editor for the browser. Made by Felix Maier.
Social media is one of the dominant forms of interactions on the Internet. Leading platforms such as Facebook and Twitter count hundreds of millions of users each month. In this article, I will show you how social media is a rich vein of data for user researchers. I will argue that it would be an oversight for an organization to treat social media as nothing more than an opportunity for customer service enquiries, help requests and brand advocacy.
In the commercial sector, social media is a source of data about users that often gets ignored in favor of other more controlled user research activities, such as interviews and user testing. (Though, it is often used to recruit participants for these traditional methods.) Conversely, in the academic world, social media was immediately recognized as an interesting primary source of data. But it has been typically addressed with quantitative research methods, such as visualizing information flows between network members and graphing peaks of activity, which are not so relevant in most typical user research projects.
In recent years, a range of commercially available monitoring software tools have emerged to make it relatively easy to track a range of keywords and capture a wealth of tweets, posts and mentions on topics of interest. However, these tools are also principally set up to do sentiment analysis, such as whether brand mentions are broadly positive or negative. But these high-level insights come at the expense of the nuanced details that reside in the individual tweets, posts and mentions. So, how can we do useful user research with social media?
Social media platforms enable social listening. We can tap into the recent or “in the moment” experience of real issues in context, rather than asking people, for example, to recall experiences in a face-to-face interview that takes place a week afterwards. It is particularly well suited to researching instances of mundane, everyday activities (such as smartphone habits) that would otherwise be poorly remembered and inaccessible to the researcher in the lab or to popular services that have already been launched. And when we tap in, we get data in the users’ language, not the language of the researcher. This amounts to research gold, and all we need is to get a pan and jump into the river. (There are some things to be aware of, of course, which I will describe later.)
While working as a user researcher for Highways England, I created the “user research with social media” technique that I describe here. This was part of a wide-ranging user research project to make wholesale improvements to the Dart Charge service1. This is a highly used GOV.UK service that enables over 5 million drivers to pay remotely to use a part of the UK’s M25 motorway network in outer London, called the Dartford Crossing, where the motorway crosses the River Thames. A key research challenge was to understand user needs around paying before or after using the Dartford Crossing. Many other typical user research methods were used on this project, including user testing sessions and interview studies, but the service team welcomed new ideas for gaining research insights, particularly those relating to routine activities such as driving, because it was felt these were not well captured by the other types of research being done.
The sources chosen were Facebook and Twitter on account of their popularity in the UK among the 5 million users of the Dart Charge service. Salesforce’s Radian 6 (now part of Social Studio) was chosen because it supports tracking of multiple keywords across multiple social networks. We selected the month of August 2015, mainly because it was the most recent full month; it yielded around a thousand mentions, which was also felt to be the maximum amount of data the team could analyze in the time available.
Ultimately, this resulted in 39 insights, which were added to the product backlog at the time of the research. The four steps in this technique were:
This first step is about setting up the search query, or queries, that are going to be used. These could be user accounts, hash tags and phrases. A good place to start is to gather the project’s team together and collate all of the ways they think real people might refer to the product or situation of interest.
In the case of Highways England, we identified 10 target phrases, hash tags and accounts. Notably, some of these were official terms for the service, such as Dart Charge and Dartford Crossing, and some were unofficial but widely used, such as Dartford Tunnel and Dartford Toll. Examples of the phrases, hash tags and accounts we used are in the table below:
Phrases | Hash tags | Accounts |
---|---|---|
“Dart charge,” “Dartford Tunnel,” “Dartford Toll,” “Dartford Crossing” | #dartfordcrossing, #dartcharge, #dartfordbridge | @dartcharge, @dartfordtoll, @dartfordtraffic |
A variety of tools are available to gather social media data, including free tools and some very expensive ones. A good place to start is with the search facilities within the social media networks themselves, because they provide the opportunity to gather data at no cost. These search tools have their drawbacks, though. For example, Facebook’s groups and privacy settings make it more difficult to search than Twitter, which is much more open. So, choose a social media network that your users, or prospective users, are using, and then start gathering data.
Once you are familiar with this research technique, you can come back to the decision of whether to spend money on other tools. Some, such as Hootsuite, Sprout Social, DiscoverText and IBM’s Watson Analytics, charge a monthly fee. Others, such as Salesforce’s Social Studio, Sysomos and Oracle’s Social Cloud, don’t even show the price. You have to email them for a demo and a quote. (It is a bit like being in a shop where the expensive clothes don’t have price tags on them!)
Once you have chosen a tool, gathering data is relatively straightforward. Let’s start with using the free search tool within Twitter. It is a process of running a search and saving the search results. But a few tips are worth bearing in mind.
First, you don’t even need a Twitter account. You can just head over to the Advanced Search tool2 and enter your keywords directly. Facebook is not so easy to access.
Secondly, defining the date range covered by your data sample is always smart (in case you, or others, want to repeat it). You can set a date range using the “since” and “until” commands in Twitter.
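For instance, a query covering the August 2015 sample used in this project might look like the sketch below, built with Twitter’s standard since: and until: search operators (the keyword shown is illustrative):

```
"dart charge" since:2015-08-01 until:2015-08-31
```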
Thirdly, you can save a PDF of the search results via the print menu in your browser. This is worth doing because deleted tweets will disappear from results (when the user deletes them), so you would lose them otherwise. You will also need to manually expose conversations, because these are hidden by default (and will not be shown in your PDF).
There are many ways to analyze the qualitative data we have just gathered. In a nutshell, what we are doing at this stage is sense-making. We are asking: What does this piece of data really mean? It is a case of taking stock of each individual piece of data (for example, a tweet or post) and annotating it with additional words (sometimes called tags or codes) that encapsulate what is meant. If there are items in the data sample that are hard to understand, incomplete or irrelevant, skipping over them is OK. (Likewise, skip any that appear to be from fake accounts or are the work of trolls.) Typically, only around 1 in 10 items in the data sample might actually be annotated with tags. Practically, this can be done by printing out the PDF data set, using a highlighter pen to mark up interesting pieces of data and writing the tags beside the data.
Once all of the interesting items in the data sample have been tagged, it is effective to pull out only the tagged items and group any related items together. This is a process of affinity-sorting. Again, practically, this can be done by cutting up the print-outs and spatially rearranging the affinity groups. Seeing different perspectives on the same issue helps to form a rounded insight, as we can see in these examples from our Highways England project. Some example tweets and how they were tagged are shown in the images below:
Taking stock of just these four tagged data items, we can get to the following insights about user needs and values:
The best way to do this sort of analysis is to have a fellow team member involved in the tagging, affinity-sorting and insight generation. If you each see different things in the same piece of data, that is good, not bad, because a richer and more nuanced interpretation emerges. (Afterwards, long-term management of the analysis can be supported by transferring the data items and their associated tags into digital tools, such as NVivo or Reframer.)
The technique of doing user research with social media is particularly well suited to certain project situations:
It is not so well suited to other common project situations:
Social media research offers many benefits to the user researcher, but there are some things to be aware of. Getting up and running is quick, and you don’t have to wait to recruit participants or need any prior hypotheses. Indeed, because the users are expressing themselves in public, they are not participants, so there are no data-protection issues to concern anyone. Nor are there any demand characteristics biases that would mean the participants are politely trying to please the researchers.
But these advantages do need to be balanced with the disadvantages. Is the data clear enough to understand? Are we interpreting the data to fit the questions we ask? Are we just sampling social media users and not everyone else? Are we getting only the most positive and negative voices (and none from the middle)? Are we seeing self-reported behavior, not real behavior? Are we prone to other cognitive biases in the data, such as researchers just seeing the things that are easiest to see (the availability bias) or participants overemphasizing the most intense or most recent aspects of their experience (the peak-end effect)? The answer is, of course, “Yes, but…!”
Ultimately, it is poor practice to rely on one research technique to answer any given question and lay yourself open to criticism of the shortcomings of that technique. It is always best to have multiple research techniques that address the same objective from different angles. This enables a team to notice the different biases at play in each research technique and to come to a rounded view of users’ needs and the best course of action in the design stage.
Social media provides a rich source of data for user researchers. It allows researchers to tap into the recent experience of people without the formality of interviewing or user testing. And while it is not without its disadvantages, it is illuminating, and you can get started for free. It helped Highways England realize the importance of the issue of forgetting to pay. So, why not add user research with social media to your toolbox and see what you find?
(cc, al, il)
Front page image credit: Pexels17.
Sometimes all we need is a little kick of inspiration to get our creative juices flowing. Maybe your secret is to go for a short walk, have a little chat with a colleague, or scroll through your favorite eye-candy resources. Whatever it might be that helps you get new ideas, we, too, have something for you that could work just as well: desktop wallpapers.
To bring you a regular dose of unique and inspiring wallpapers, we embarked on our monthly wallpapers mission1 eight years ago. Each month, artists and designers from across the globe diligently contribute their works to it. And well, it wasn’t any different this time around. This post features their artwork for March 2017. The wallpapers all come in versions with and without a calendar. Time to freshen up your desktop!
Please note that:
Designed by Nathalie Ouederni6 from France.
“A day, even a whole month aren’t enough to show how much a woman should be appreciated. Dear ladies, any day or month are yours if you decide so.” — Designed by Ana Masnikosa21 from Belgrade, Serbia.
“Early spring in March is, for me, the time when the snow melts and nothing is very colorful yet. This is what I wanted to show. Everything comes to life slowly, like this bear. Flowers are banal, so instead of a purple crocus we have a purple bird-harbinger.” — Designed by Marek Kedzierski64 from Poland.
“I like to draw and want people to see my illustrations.” — Designed by Hushlenko Antonina89 from Ukraine.
“We’ve had several unseasonably warm days in Chicago and I’m ready for some spring blooms!” — Designed by Denise Johnson134 from Chicago.
“March the 2nd marks the birthday of the most creative and extraordinary author ever, Dr. Seuss! I have included an inspirational quote about learning to encourage everyone to continue learning new things every day.” — Designed by Safia Begum149 from the United Kingdom.
“Baby lambs are a sign of spring.” — Designed by Lucas Debelder168 from Belgium.
“Don’t get me wrong, I like winter and snow, but in Austria where I live there was not much of it this year. It is still cold and moody, and I think by now I’m ready for spring!” — Designed by Izabela Grzegorczyk199 from Poland.
“Instead of focusing on St. Patrick’s Day, I decided to feature my favorite March pun in textured handlettering. This month – especially on the Fourth – is a great time to grab life by the shamrocks and make it count. Now go march forth!” — Designed by Phil Scroggs212 from Seattle, WA, USA.
“Jīngzhé is the third of the 24 solar terms in the traditional East Asian calendars. The word 驚蟄 means ‘the awakening of hibernating insects’. 驚 is ‘to start’ and 蟄 means ‘hibernating insects’. Traditional Chinese folklore says that during Jingzhe, thunderstorms will wake up the hibernating insects, which implies that the weather is getting warmer.” — Designed by Sunny Hong255 from Taiwan.
“In the U.S., March is National Umbrella Month. Let this be a reminder to keep an umbrella handy with rain on the way.” — Designed by Karen Frolo274 from the United States.
“When we celebrate the achievements of women, of how far they have come, we’re actually celebrating the fact that no power on earth can rein in a woman who can dream. Here’s to the women whose lives show the next generation of boys and girls what it means to be a woman: that femininity is not just about looking pretty, but also about being bold, courageous and strong-willed. Happy Women’s Day.” — Designed by Acodez IT Solutions305 from India.
“Freedom should always be peaceful and respectful. Present your opinion peacefully through your art and let people see it and respect it.” — Designed by Hatim M. M. Yousif Al-Sharif348 from the United Arab Emirates.
“Who needs an excuse to look at pizza all month?” — Designed by James Mitchell391 from the United Kingdom.
Designed by Dan Di412 from Italy.
“The year is passing fast and so it is time to get back to our resolutions and find a new best friend.” — Designed by Maria Keller459 from Mexico.
“Just two weeks ago the world lost a wonderful artist and my beloved brother/sister, Ken/Kat – also known as ‘Psychedelic Rainbow KaTgirl Superstar DJ’. This wallpaper was created inspired by her designs – and for all those who miss her – that may she live on.” — Designed by Katherine Appleby512 from Australia.
Designed by Elise Vanoorbeek555 from Belgium.
“We must enjoy life and its many wonders whenever the opportunity presents itself and then accept whatever consequences may come with an equal amount of vigor and enthusiasm.” — Designed by Roxi Nastase584 from Romania.
“Since the month of March derives its name from the planet Mars, I decided to make a wallpaper with the planet Mars in the universe. And when I heard that several new planets where there could be life were discovered in our galaxy, I chose to put a green Martian on the planet. Maybe one day we will all live on another planet.” — Designed by Melissa Bogemans627 from Belgium.
“In some parts of the world there is the beauty of nature. This is one of the best beaches in the world: Ponta Negra, in the northeast of Brazil.” — Designed by Elisa de Castro Guerra668 from France.
“It’s autumn in the southern hemisphere!” — Designed by Tazi Design713 from Australia.
Designed by Rucha Shreyas Gosavi from Dubai.
Please note that we respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists full freedom to explore their creativity and express their emotions and experiences in their works. This is also why the themes of the wallpapers weren’t influenced by us in any way, but rather were designed from scratch by the artists themselves.
A big thank you to all designers for their participation. Join in next month756!
What’s your favorite theme or wallpaper for this month? Please let us know in the comment section below.
Typography is a primary element of composition. Being a designer, I pay a lot of attention to its quality. Operating Photoshop is easy for me; however, to level up my skills, I am always learning to work with letters, using my hands, without any computer programs.
The first time I took a calligraphy course was about a year ago, and the decision was quite hard. I was sure that it would be painstaking and that I would need excellent handwriting to learn this art. How mistaken I was!
Type is saying things to us all the time. Typefaces express a mood, an atmosphere. They give words a certain coloring.
– Rick Poynor (“Helvetica”, 2007)
Typefaces are always telling us something. We receive information through typography. Type influences us, adds coloring to words, sets a mood and atmosphere, assists, teaches, scares us, brings us joy and inspires us.
Typography is, foremost, an information medium. At the same time, it fulfils social functions and acts as an indicator of the age it belongs to. The contemporary world has its own rhythm, aesthetic and philosophy; while we are changing, everything is changing around us. In studying historical lettering in calligraphy, we can understand the character and potential of a writing instrument, and, as a result, we can manage its expressive means.
When I joined the calligraphy course, I heard students talking amongst themselves: “I’ll never manage to do it this way!” “I can’t write in such a beautiful way!”
To tell the truth, I felt the same way. But that was nonsense! And I say that as a master of Photoshop who couldn’t handwrite plain lines only a year ago.
Type is a visual language, which connects the writer and the reader.
– Erik Spiekermann
Our first lesson was to write simple strokes, the basis of all letters, with a flat paintbrush.
Tip: A lot of useful resources and online courses are on the Internet. However, I recommend starting by learning from professionals (in workshops, at calligraphy schools). A professional will help you to develop proper technique, answer your questions and prompt you in the nuances of the craft. Even something as seemingly simple as one’s posture and pen-holding technique will substantially influence the result.
Studying in a course had a positive outcome. Writing with different instruments and trying different techniques, I could figure out which instrument suits me best.
I learned the history of calligraphy, I learned how to customize my workplace, and I learned how to choose an instrument. I practiced Cyrillic ornamental script, textura quadrata, italic, English roundhand, modern calligraphy, brush pen lettering and chalk lettering. I also learned how to make my own calligraphy instruments.
Calligraphy is the most intimate, personal, spontaneous form of expression. Like a fingerprint or a voice, it is unique for each person.
– Hermann Zapf
Tip: I recommend devoting your initial lessons to writing with a flat paintbrush. Get accustomed to the instrument, and study the “skeleton” of letters (graphemes). Soon after that, practice Cyrillic ornamental script, textura quadrata and italic.
Write the alphabet, then start with words and continue on to sentences. Next, you could proceed to study the pointed nib and the typefaces that rely on it: English roundhand, modern calligraphy script, flourishing, Spencerian and other Copperplate styles.
With the development of an international exchange of information, there is a need for universal fonts. Today, Textura and other Gothic fonts are only used as a reminder of a bygone era, in particular in newspaper logos.
– Erik Spiekermann
Each lesson was a meditation. Soon after a lesson, I felt relaxed, energetic and inspired. And I got a good result on paper! The craft is a remedy and exercise for the mind and soul.
Having fallen in love with calligraphy, I came to prefer a sketchbook to a camera while on vacation. At a conference in St. Petersburg this spring, I got inspired by various graphic designers’ presentations and by the talk by renowned calligrapher Pokras Lampas38. I wanted to put everything aside and write something. In such an inspired state, I signed a card to say hello to my friends from that wonderful city. Thus, a simple card began my project “Hello From.” The idea was to show the essence of a place through lettering; I would take a photo of the card with the city in the background.
Other photos from the project can be found below. You can stay up to date on my Instagram account43! There will be many interesting countries and medieval cities soon!
As in sports and music, in calligraphy it is important to train every day, to be patient and to feel inspired.
Here are some tips based on my experience:
Inspiration. From real life. I open my eyes and I travel and I look. And I read everything.
– Erik Spiekermann
It is hard to create something without experience. Therefore, I recommend collecting ideas. However, at the beginning, after looking through hundreds of beautiful pictures, I sometimes lose confidence and think, “I can’t do that!” Calm down. Before you panic, do the following:
Then let’s start! Let’s look at the tools you will need for the first lesson.
Sure, you don’t have to buy everything in this photo! Consider your abilities and preferences. Below is a detailed list to give you a general idea of the tools you’ll need for different styles of writing:
In the beginning, ordinary notebooks, copy books, office paper and even old wallpaper will be enough for practice. Try to get paper with a smooth surface and a higher density than office paper; otherwise, the ink will not flow well and the nib will catch on the paper. Rhodia and Fabriano paper are quite good, but try different variants to find the best one for you.
Unused wallpaper and draft work is perfectly suited to writing with brushes and brush pens. At a more advanced level, you could use texture paper and handmade paper, which is great for making postcards and wedding invitations.
This is mandatory: It is impossible to write letters at a proper height or write a line of text without positioning and marking the sheet of paper. The handiest solution is to put a printed handwriting worksheet under the sheet of paper you’re writing on. The worksheet will show through the paper, guiding you on the height and incline of elements. A ruler and pencil might also help, but ruling lines by hand takes time.
You can download handwriting worksheets67 or make the required adjustments yourself68 (second link in Russian).
Samples of alphabets will show you how to draw letters correctly. Print them out and put them under your sheet of paper as a guideline. Examples can be found and downloaded on Pinterest69.
I recommend writing each letter on a separate sheet of paper, to better remember the motion of letters and to train your hand. This will surely take more time, but after you’ve written a lot of drafts, your hand will move confidently without trembling, and you will remember how letters are drawn by heart. Let’s start!
Stores offer a great selection of calligraphy ink. Choose whatever you want — experiment! For an entry level, ordinary watercolor paint is quite enough.
Chinese ink is perfect for this work. But pay attention to the expiration date. Buy fresh ink; otherwise, you risk getting clumps, which will impede the flow of ink from the nib.
Dr. Ph. Martin’s ink is one of my favorites. The selection of colors and variants is quite extensive, but it is quite expensive.
Pearl ink looks beautiful on dark and high-contrast surfaces. I like Finetec’s dry golden palettes. Work done with it looks exquisite.
As mentioned, I first learned to write with a flat synthetic paintbrush. It’s a great choice for learning letter graphemes, and it is the most economical choice.
Brush pens are a good tool to learn brush calligraphy and lettering. They come with and without cartridges. Brushes have different quality levels, densities, sizes and shapes. Find one you are comfortable writing with.
Water brushes are handy because you can fill them with ink or watercolor yourself. The disadvantage is that, if used improperly, they can dry out or get dirty. I prefer to dip my water brushes in ink or paint rather than filling them. This way, they last longer.
Pen holders can be straight or oblique. A straight pen holder is good for square-cut nibs and for writing different typefaces (for example, rustic capitals, square capitals, uncials and artificial uncials, textura quadrata, italics, etc.).
At the same time, an oblique pen holder with a pointed pen better suits cursive writing. Due to its initial incline, you will not have to bend your hand so much. It can be adjusted for different pens or just one particular pen.
Oblique pen holders have a flange at the end of the handle — the metal part of the holder where the pen is put in. This helps to regulate the angle of incline.
I also have a straight holder that looks like a feather. It is more decorative and adds some atmosphere as I’m working, but it is not as comfortable as other holders. I use it mainly for photos.
Nibs are square cut or pointed. As suggested earlier, you’d better learn typefaces with a square-cut nib. These nibs are quite firm, which makes the work easier and will train you for a pointed nib.
Tip: If you are left-handed, you just need to find a nib that bends from right to left.
Pointed nibs are specially designed for cursive. They come in different sizes and can be used for different line thicknesses and different writing styles. After trying several of them, you will find a favorite.
Tip: Take care of your writing tools. Wash and wipe dry your tools after each exercise.
These are wonderful pens with a square-cut nib! They are very firm and comfortable to use. Though they work with the original cartridges (which are quite expensive), the empty ones can be refilled with a syringe.
Mmm, books! You will find a lot of useful information and tips in books. I recommend beginning with these wonderful ones:
In these books, you will learn the history of calligraphy, find descriptions of diverse alphabets (written using the elements of handwriting worksheets), learn about tools, read tips on how to adjust your workspace, tutorials and more.
Nowadays calligraphy is in fashion, which only makes me happier. In comparison to digital text, handwriting is a distinct art form, and its uniqueness is being valued more and more highly.
The art of beautiful handwriting shouldn’t be forgotten, and I thank everybody who supports and promotes it today.
I hope that I’ve managed to convince you that anyone can learn the art of calligraphy! All you need is daily practice, inspiration and belief in yourself. And I believe in you. Good luck!
To consolidate your knowledge, I suggest you draw a birthday card. Grab a brush, ink or paint and some cartridge paper. Line the paper, and write your text in the middle of the paper with a pencil. Feel free to add some decorative elements around the lettering according to your taste (balloons, flowers, confetti, etc.).
Make sure that the final composition is aligned and symmetrical. Now you can trace around the letters in ink. Not that difficult, right?
Attach your result in the comments. Can’t wait to see them!
Here is mine:
If you have any questions, please feel free to contact me via Twitter111, email112 or Instagram113.
(vf, il, al)
Playing with CSS Grids * Numscrubber.js * Scrimba * Animista * Service Worker Toolchain * hyperHTML * colorfonts.wtf * 3…
Phew, what a week! Due to an HTML-parsing bug, Cloudflare experienced a major data leak, and the first practical collision for SHA-1 was revealed as well. We should take these events as an occasion to reconsider whether a centralized front-end load balancer that modifies your traffic is a good idea after all. And it’s definitely time to upgrade your TLS certificate if you still serve SHA-1, too.
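If you are unsure whether a certificate chain still uses SHA-1, a quick command-line check might look like this rough sketch (example.com is a placeholder for your own host):

```
echo | openssl s_client -connect example.com:443 2>/dev/null \
  | openssl x509 -noout -text | grep "Signature Algorithm"
```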
Here’s what else happened this week:

- <link preload> is now available as an experimental feature. Furthermore, they implemented the dynamic JavaScript import operator and suspended SVG animations on hidden pages.
- You can implement same-site cookies by appending SameSite to your existing Set-Cookie header. Of course, you should know how same-site cookies differ from “normal” cookies, but for most sites this should be easy to implement.
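As a rough sketch, a response header that opts a cookie into same-site enforcement could look like this (the cookie name and value are placeholders):

```
Set-Cookie: session=abc123; SameSite=Lax; Secure; HttpOnly
```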
And with that, I’ll close for this week. If you like what I write each week, please support me with a donation25 or share this resource with other people. You can learn more about the costs of the project here26. It’s available via email, RSS and online.
— Anselm
Besides the user’s needs, what’s another vital aspect of an app? Your first thought might be its design. That’s important, correct, but before you can even think about the design, you need to get something else right: the data. Data should be the cornerstone of everything you create. Not only does it help you to make more informed decisions, but it also makes it easier to account for edge cases, or things you might not have thought of otherwise.
The easiest way to work with real data in Sketch is with the Craft plugin from InVision. It provides a wealth of predefined content, such as names, dates and addresses; lets you scour a website for the required information; and enables you to feed in a JSON file and work with the provided data. That’s exactly what we will do with our made-up Movie Finder app. You can use it to search for movies based on different terms, such as name, director and year. This data will be provided by a JSON file, an open-standard format that allows data to be stored as key-value pairs, such as "category": "Dramas".
Before we can start to pull in some data, we need to take care of the layout of the app. If you are just interested in the “content” part (i.e. how to populate the design with real data), you can download the Sketch file of the template3 and continue reading from the “From One to Many4” section onwards. Otherwise, follow along and let me show you how to create the entire app from A to Z. I won’t explain every bit in detail; you’ll need some basic knowledge of Sketch5. The finished design, filled with movie data, can be found on Dropbox6. In either case, you will need the free font Clear Sans87 from Intel.
I wanted to keep the app as universal as possible, without tying it to a certain platform, so I chose an artboard size of 360 × 640 pixels, and I renamed it to “Movie Finder.” This is a common Android size, but you can easily go to iPhone sizes from there. Select the checkbox “Background Color” in the Inspector panel to give it a white background. Now, press R (to select the Rectangle tool) and create a rectangle at the top for the header; it should be the full width of the artboard. Be sure to remove the default border by pressing B on the keyboard, and save this style for future shapes with “Edit” → “Set Style as Default” from the menu bar. The height of the rectangle doesn’t matter at the moment, but the layer name does, so please set it to “BG.” To simplify the process of laying out elements and deciding on their sizes, set up an 8-pixel grid from “View” → “Canvas” → “Grid Settings” in the menu bar. For the “Grid block size,” enter “8px”; for “Thick lines every,” use “0.” Everything else can be left alone.
The first time we will use this grid is for the height of the header. Drag the rectangle we just created to be 32 pixels high. Choose a color of your liking that has enough contrast with white text; I went for #D06002, which I also saved to the “Document Colors” in the color dialog with a click on the “+” button, for later reference. For the title, “Movie Finder,” create a new text layer (press T) with a size of 16 pixels and the color white, and center it in both dimensions to the background. My font of choice is Clear Sans87 by Intel due to its clean look, but also the good selection of weights. Choose the “Regular” weight for the title. Complete the header by moving all of the current elements into a “Header” group.
The next task is the search field. Add another rectangle with dimensions of 344 × 32 pixels, and assign it rounded corners of “3,” a white background and a gray border (#B4B4B4). Rename it to “Field.” Move it 1 grid unit away from the header, and center it to the artboard (using the fourth icon at the top of the Inspector, or a right-click and then “Align Horizontally”). The placeholder consists of an icon and some text. For the former, I have used the plugin Icon Font11, which enables you to easily pull in different icons. It requires you to install the font bundle12, a package of the most popular fonts. In case you need some assistance with this multi-step process, have a look at the screencast13. Now, go to “Plugins” → “Icon Font” → “Grid Insert” → “Ionicons” in the menu bar and enter “search.” Click on the first icon to add it to the artboard, but change its font size to 16 pixels. Drag it over to the search field.
For the placeholder text, add a new text layer with T, set to the same font size, a “Regular” weight and the content “Movie name, director, year.” Also, make sure the “Clear Sans” font is used. Move it 3 pixels away from the icon, select both elements, and center them vertically with a right-click and “Align Vertically.” Set the color of both to #4A4A4A. Because this will be our default text color from now on, add it to the “Document Colors.” Create a new group from these elements (named “Placeholder”), which you can tone down to 50% opacity with 5 on the keyboard. Move it a little up afterwards with the arrow key for correct optical alignment. Select this new group together with the field itself, and center them in both dimensions (right-click → “Align Horizontally” and “Align Vertically”), and create a “Search field” group. In the layers list, it should be below the header group; move it there with Cmd + Alt + Ctrl + down arrow.
Now, duplicate the placeholder text for the search term below with Cmd + D, but move it down until its text box has a spacing of about 3 pixels from the border of the input field. This new layer doesn’t need to sit on a grid line (you can break this rule — not everything has to align perfectly). Use “You have searched for ‘ridley scott’” as the content. Also, drag it out of the groups in the layers list, below the “Search field” group, and center it to the artboard.
Right below the text layer, add a line to clearly distinguish the search results. This can be created with either a thin rectangle (1 pixel high) or a line (1 pixel thick). I prefer the former because it’s a tad easier to handle. Create it with the same width and spacing as the search field, and name it “Line.” Set the fill to #D4D4D4, and align it on top of a grid line (which should give it a spacing of about 7 pixels from the text layer above). Move it to the bottom of the layers list, together with the text layer.
Now we can finally turn to the search results. Each result consists of the poster of the movie, the name, the director, a short description, the year of release and the running time. It also shows the user rating at a glance. But instead of adding all of the information by hand, we will just create placeholders that will be filled with the actual content later!
Let’s start with the poster. Add a rectangle of 72 × 104 pixels at the left edge, with a spacing of 2 grid units from the artboard’s edge and the line above. Name it “Poster.” A black shadow with the properties “0/4/6/0” (X/Y/blur/spread) and 30% opacity will give it a slightly raised appearance.
Right next to it, with another horizontal spacing of 2 grid units, add a text layer for the “Title” (use exactly that as the content). The font size should already be at 16 pixels. For the color, choose the same as the header’s background (get it from the “Document Colors” in the color dialog). Make it bold with Cmd + B, and move it so that the top of the text (not the text box) is at the same height as that of the poster. Use the arrow keys to fine-tune this position. Duplicate it for the “Director” (as above, use this as the content), move it down and align its baseline to a grid line. Once you have lowered the font size to 14 pixels, this should give it a spacing of about 2 pixels from the previous text layer. For the weight, use “Regular” again, and for the color, the black color we saved to the “Document Colors” earlier.
Continue with the description in the same fashion: Duplicate the previous text layer with Cmd + D, and move it down so that there’s spacing of about 2 grid units in between, after you have aligned its baseline to the grid. You just need to make sure that the filler text has two lines: Use the well-known placeholder “Lorem ipsum dolor sit amet, consectetuer adipiscing elit” as the content for now, but set the width of the text layer to 230 pixels in the Inspector panel. This will create a fixed text layer that automatically breaks at the right, creating two lines of text. Tighten the line spacing to 16 pixels in the Inspector panel, which will align both lines to the grid.
Because these will be just the first two lines of the description, we will add a so-called “disclosure triangle” that indicates more text. Create a triangle (from “Insert” → “Shape” → “Triangle” in the toolbar) with dimensions of 8 × 6 pixels, and flip it vertically with a right-click and “Transform” → “Flip Vertical.” In case you have difficulty getting these measurements, switch off the grid temporarily with Ctrl + G and zoom in a bit with Cmd + +. Assign this triangle the same color as the text, and center it to the second line of the description (you can switch on the grid again now). To make it independent of the text length, move it to the right edge of the artboard, with a spacing of 2 grid units. Finally, rename it to “Disclosure,” and make sure that it is above the adjacent text layer in the layers list.
For the remaining two text layers — the year and running time — we can take the text layer of the “Director” as the base again. Duplicate it, and move it down so that there is another spacing of about 2 grid units from the description, but change the content to something like “2000” (so that we have a good indication of how long a typical year will be). As before, its baseline should align to the grid. Hold Alt, and drag it to the right with the mouse to create another copy, and change this one to “|” to separate it from the year. You may also want to press Shift while dragging to keep it on the same line. Add the last layer in the same fashion, with “Running time” as the content. These text layers should have a horizontal spacing of about 4 pixels from each other.
The only thing we have to do before we can start pulling in some real content is the rating. First, it contains a circle with a diameter of 28 pixels (switch off the grid again) and the same fill color as the title of the movie. The second element is a white text layer (“9.9,” for example), with a font size of 14 pixels, a bold weight and center alignment (press Cmd + |). Changing the character spacing to “–0.8” will give it a tighter feel. Align these two layers to each other, but trust your own eyes for the optical alignment instead of Sketch’s functionality, because that will produce a better result.
After you have combined these into a “Rating” group, move it to the bottom right of the poster so that it sticks out about 10 pixels horizontally and 4 pixels vertically. Make sure that it is above the poster in the layer hierarchy. One last step: Duplicate the line from above so that it acts as a separator between the other search results we are going to create. Move it down until it has a spacing of 2 grid units from the poster and sits on top of a grid line.
The design is ready. There are numerous ways in which we can fill it with real content (with the help of some plugins), but the best and easiest is Craft27. Apart from pulling in data, it also enables us to duplicate elements and vary their contents automatically. After you have installed the plugin, a handy panel next to the Inspector panel will appear (if not, open it with “Craft” → “Toggle Panel” from the menu bar), which will provide more functionality in Sketch. For us, the “data” section (third icon from the top) is of most interest and, in particular, the “JSON” tab.
We will pull our data from the Netflix Roulette30 website, which provides an application programming interface (or an API, which is a way to access different parts of a service) for all shows on Netflix. Because we would like to get all movies from director Ridley Scott, we will use the “director” keyword with his name, which leads to the URL http://netflixroulette.net/api/api.php?director=Ridley%20Scott. Click on it to see the JSON file. The file might seem like a mess at first, but it’s just an unordered collection of the key-value pairs — i.e. the properties — of the movies, such as show_title, category and runtime. We will bring some order to this list shortly.
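To give you an idea of the shape of that data, here is a hypothetical excerpt of a single entry (the values are invented for illustration, but the keys match the ones we will work with below):

```json
{
  "show_title": "Gladiator",
  "director": "Ridley Scott",
  "summary": "A wronged Roman general fights his way back as a gladiator.",
  "release_year": "2000",
  "runtime": "155 min",
  "rating": "8.5",
  "category": "Action & Adventure",
  "poster": "http://example.com/posters/gladiator.jpg"
}
```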
Take this exact URL and paste it in the input field of the “JSON” tab in the Craft panel that says “Type URL…”. Clicking on “Import” will bring up a list of (currently) seven entries, which you can extend with the arrow on the left.
Start at 0, which should be the one for the movie Gladiator; a look at show_title confirms that. With a simple click on the key, you can assign its value (or any other from the JSON file) to layers on your artboard (for example, after selecting the “Title” text layer). Do the same for the “Director” text layer and the director key, as well as “Description” and summary. Unfortunately, the content from Netflix Roulette is much longer than what we have arranged for the text layer. Fix that by dragging the height of the text layer back to 34 pixels again (using the Inspector panel won’t work here).
Now, continue with the remaining text layers. Select “2000” on the artboard, and assign the value from release_year, as well as “Running time” on the canvas and runtime from Craft. For the rating in the orange circle, use (you guessed it!) the rating field. Instead of using a URL for the JSON file, you can employ your own data: Delete the current source with the “x” icon next to the input field, and drag your JSON data to the appropriate field below (or simply click there and select it from the computer).
Note: A list with numerous JSON files for you to play with can be found on GitHub34.
You can use Craft to fill not only text layers but also image layers. Unfortunately, selecting the “Poster” layer and clicking on the poster field in Craft won’t give us the desired result: It seems that the image path isn’t valid, so nothing more than the three dots (which usually act as a loading indicator) will be shown. Luckily, Craft can do even more: You can use the “Web” tab — which is basically just a browser — to navigate to a website and grab the poster from, say, IMDb. On the page for Gladiator35, scroll down and click on the movie poster, which will assign the correct image to the placeholder on the artboard (make sure that it is selected first). In case you want to follow a link, you need to hold Cmd before clicking.
Now we have a search result that is fully filled with content about the Gladiator movie, without having typed a single value by hand. Magic!
But it doesn’t stop there. Getting to the other search results, all based on the same JSON file, is just a matter of a few clicks. As preparation, combine all of the layers of the search result into an “Item” group (including the line), and move it to the bottom of the layers list (you can use Ctrl + Alt + Cmd + down arrow). Now, click on “Duplicate content” in the Craft panel, which will bring you to the “Duplicate” section. This will let you lay out an element in both directions, with a certain count and spacing in between. We want “4” items in total, with a gutter of “10,” aligned vertically. Press “Duplicate Content” and watch the magic unfold.
We’ve gotten three new search results, all filled with more movie data. Craft was clever to use the other entries from the JSON file here. Alongside these additional entries, another new layer was created in the layers list: “Duplicate control.” This one’s really powerful: In case the new entries don’t align to the grid, you can use it to change the spacing on the fly. Just select the “Duplicate control” layer in the layers list, and drag the slider below the “Gutter” field in the Craft panel.
But it does even more. If you need more (or fewer) entries later on, you can resize the “Duplicate control” layer on the canvas, and the plugin will automatically adapt the number of search results! Just mind that plugins sometimes break with each new version of Sketch, so if something doesn’t work as expected, a fix might already be on the way.
The only thing that doesn’t work perfectly is the posters. They seem to be broken in general, so we need to use the “Web” tab again: Navigate to the respective detail pages on IMDb, and select the appropriate poster images. But that’s a small price to pay.
Basically, we are done now, but I want to show you one last trick. Though we have four similar elements, they aren’t tied to each other in any way. Changing one of the entries won’t affect the others. A symbol would help here.
Start by deleting all of the item groups except the first; remove the outer “Group” that Craft created with Shift + Cmd + G, as well as the “Duplicate control” layer, and select the remaining item. Now, click on “Create Symbol” in the toolbar, but don’t send it to the Symbols page. This will place the symbol next to the artboard, which will make modifications easier later because you can instantly see how all instances will be affected.
In contrast, if you select “Send Symbol to ‘Symbols’ Page,” the symbol will be created on a separate page38. While this is better for organization, it becomes much harder to see the direct correlation between the master symbol and its instances when you change it.
For the rest of the items, you can proceed in the same way as before. Select the first item, the instance of the symbol we just changed (not the master symbol next to the artboard), go to the “Duplicate” panel in Craft (the last icon), enter “4” for the count and “10” for the gutter, and you are done. Because all entries are tied to the same symbol now, you can try to change the size of the title, for example, and see how it adapts in every search result.
The poster problem needs a slightly different approach, however. Selecting the poster of each search result isn’t possible anymore when using a symbol. Instead, click on the small thumbnail next to the “Choose Image” button in the “Overrides” section of the Inspector panel. Navigate to the appropriate IMDb page from the “Web” tab in the Craft panel again, and drag the poster to this thumbnail. This will apply it to the respective instance of the symbol. Goal achieved!
I hope you’ve enjoyed this tutorial, in which I’ve shown how you can stop worrying about dummy content and start using real data with the help of the Craft plugin43. This could not only speed up your design process but also make you think more about edge cases, or how certain parts of a design can interact with each other.
Please feel free to post your questions or point out a different approach to a certain part of the tutorial. You can also contact me on Twitter (@SketchTips44) or visit my little side project (SketchTips45), where I provide more tips about Sketch. For the full package, have a look at The Sketch Handbook46 from Smashing Magazine, which will tell you everything you ever wanted to know about designing with Sketch.
(mb, al, il)
When you examine the most successful interaction designs of recent years, the clear winners are those that provide excellent functionality. While the functional aspect of a design is key to product success, aesthetics and visual details are equally important — particularly how they can improve those functional elements.

In today’s article, I’ll explain how visual elements, such as shadows and blur effects, can improve the functional elements of a design. If you’d like to try adding these elements to your designs, you can download and test Adobe XD1 for free and get started right away.
There’s a reason GUI designers incorporate shadows into their designs: They help create visual cues in the interface that tell human brains which user interface elements they’re looking at.
Since the early days of graphical user interfaces, screens have employed shadows to help users understand how to use an interface. Images and elements with shadows seem to pop off of a page, and it gives users the impression that they can physically interact with the element. Even though visual cues vary from app to app, users can usually rely on two assumptions:
You can see how the use of shadows and highlights help users understand which elements are interactive in this Windows 2000 dialog box:
Modern interfaces are layered and take full advantage of the z-axis. The positions of objects along the z-axis act as important cues to the user.
Shadows help indicate the hierarchy of elements by differentiating between two objects. Also, in some cases, shadows help users understand that one object is above another.
Why is it so important to visualize the position of an element within three-dimensional space? The answer is simple — laws of physics.
Everything in the physical world is dimensional, and elements interact in three-dimensional space with each other: they can be stacked or affixed to one another, but cannot pass through each other. Objects also cast shadows and reflect light. The understanding of these interactions is the basis for our understanding of the graphical interface.
Let’s have a look at Google’s Material Design for a moment. A lot of people still call it flat design, but the key feature is that it has dimension — the use of consistent metaphors and principles borrowed from physics help users make sense of interfaces and interpret visual hierarchies in context.
One very important thing about shadows is that they work in tandem with elevation. The elevation is the relative depth, or distance, between two surfaces along the z-axis. Measured from the front of one surface to another, an element’s elevation indicates the distance between surfaces and the depth of its shadow. As you can see from the image below, the shadow gets bigger and blurrier the greater the distance between object and ground.
Some elements like buttons have dynamic elevation, meaning they change elevation in response to user input (e.g., normal, focused, and pressed). Shadows provide useful clues about an object’s direction of movement and whether the distance between surfaces is increasing or decreasing. For users to feel confident that something is clickable or tappable, they need immediate reassurance after clicking and tapping, which elevation provides through visual cues:
When Apple introduced iOS 8, it raised the bar for app design, especially when it came to on-screen effects. One of the most significant changes was the use of blur throughout, most notably in Control Center; when you swipe up from the bottom edge of a screen you reveal the Control Center, and the background is blurred. This blur occurs in an interactive fashion, as you control it completely with the movement of your finger.
Apple moved further in this direction with the latest version of iOS, which uses 3D Touch for the flashlight, camera, calculator and timer icons. When a user’s hand presses on those icons, a real-time blur effect takes place.
Make User Flow Obvious
Blur effects allow for a certain amount of play within the layers and hierarchy of an interface, especially for mobile apps. It’s a very efficient solution when working with layered UI since it gives the user a clear understanding of a mobile app’s user flow.
The Yahoo Weather20 app for iOS displays a photo of each weather location, and the basic weather data you need is immediately visible, with more detailed data only a single tap away. Rather than cover the photo with another UI layer, the app keeps you in context after you tap — the detailed information is easily revealed, and the photo remains in the background.
Direct the User’s Attention
Humans have a tendency to pay attention to objects that are in focus and to ignore objects that aren’t. It’s a natural consequence of how our eyes work, known as the accommodation reflex23. App designers can use this to blur unimportant items on the screen in an effort to direct a user’s attention to the valuable content or critical controls. The Tweetbot24 app uses blur to draw users’ attention to what needs to be focused on; the background is barely recognizable, while the focus is on information about accounts and call-to-action buttons.
Make Overlaid Text Legible
The purpose of text in your app is to establish a clear connection between the app and user, as well as to help your users accomplish their goals. Typography plays a vital role in this process, as good typography makes the act of reading effortless, while poor typography turns users off.
In order to maximize the readability of text, you need to create a proper contrast27 between the text and background. Blur gives designers a perfect opportunity to make overlaid text legible — they can simply blur a part of the underlying image. In the example below, you can see a restaurant feed which features the closest restaurants to the user. Immediately, your attention goes to the restaurant images as they feature a darkened blur with text overlay.
A blur effect can also blend seamlessly into a website’s design.
Decorative Background
Together with full-screen photo backgrounds, frequently used for website decorations, blur backgrounds have found their niche in modern website design. This decorative effect also has a practical value: by blurring one object, it brings focus to another. Thus, if you want to emphasize your subject and leave the background out of focus, the blurring technique is the best solution.
The website for Trellis Farm uses an iconic image of a farm to give visitors a sense of place for its website. For added interest, the photo is layered with a great typeface to grab a visitor’s attention. The blur is nice because it helps the visitor focus on the text and the next actions to take on the screen.
Progressive Image Loading
As modern web pages load more and more images, it’s good to think about the loading process, since it affects performance and user experience. Using a blur effect, you can create a progressive image-loading experience. One good example is Medium.com, which blurs the post’s cover image, as well as images within the post content, until the image is fully loaded. First, it loads a small blurry image (a thumbnail) and then transitions to the large image.
This technique has two benefits:
If you want to reproduce this effect on your site, see the Resources and Tutorials section.
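In the meantime, here is a rough sketch of the idea in plain JavaScript (not Medium’s actual implementation; the data-full attribute and the inline blur are assumptions for illustration):

```js
// Each image starts with a tiny thumbnail in `src`, the full-size URL in
// `data-full` and a CSS blur, e.g.:
// <img src="thumb.jpg" data-full="large.jpg" style="filter: blur(10px)">
var images = document.querySelectorAll('img[data-full]');
Array.prototype.forEach.call(images, function (img) {
  var full = new Image();
  full.onload = function () {
    img.src = full.src;          // swap in the sharp, full-size image
    img.style.filter = 'none';   // remove the blur once it has loaded
  };
  full.src = img.getAttribute('data-full'); // start loading the large version
});
```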
Testing Websites’ Visual Hierarchy
A blur effect can be used not only as a visual design technique but also as a good way to test a page’s visual hierarchy.
A blur test is a quick technique to help you determine whether your user’s eye is truly going where you want it to go. All you need to do is take a screenshot of your site and add a 5 to 10-pixel Gaussian blur in Photoshop. Look at the blurred version of your page (like the Mailchimp example below) and see which elements stand out. If you don’t like what’s projecting, you need to go back and make some revisions.
Mailchimp’s homepage passes the blur test because the prominent items are the sign-up button and text copy which states the benefits of using the product.
Overuse of Blurs in Mobile Apps
A blur effect isn’t exactly free. It costs something: graphics performance and battery usage. Because blurring is a memory-bandwidth- and power-intensive effect, it can affect system performance and battery life. Overused blurs result in slower apps with largely degraded user experiences.

We all want to create a beautiful design, but at the same time, we can’t make users suffer from long loading times or an empty battery. Blur effects should be used wisely and sparingly — you need to find a balance between great appearance and resource utilization. Thus, when using blur effects, always check the CPU, GPU, memory and power usage of your app (see the Resources and Tutorials section for more information).
Blur Effect and Text Readability Issues
Another factor to remember is that a blurred background is not dynamic. If your image ever changes, make sure the text always sits over the blurry parts. In the example below, you can see what happens when you forget this.
Blur Effect and Content-Heavy Pages
A blurred background can cause problems when used on screens filled with a lot of content. Compare the two examples below: The screen on the left, which uses a blur effect, looks dirty, and the text seems unreadable. The screen without the blur effect is much clearer.
The following resources can help you implement blur effect into your design:
Shadows and blur effects provide visual cues that allow users to better and more easily understand what is occurring. In particular, they allow the designer to inform users on objects’ relationships with each other, as well as potential interactions with these objects. When carefully applied, such elements can (and should) improve a functional aspect of design.
This article is part of the UX design series sponsored by Adobe. The newly introduced Experience Design app42 is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype and share — all in one app.
You can check out more inspiring projects created with Adobe XD on Behance43, and also visit the Adobe XD blog44 to stay updated and informed. Adobe XD is being updated with new features frequently, and since it’s in public Beta, you can download and test it for free45.
(il, aa)
JavaScript module bundling has been around for a while. RequireJS had its first commits in 2009, then Browserify made its debut, and since then several other bundlers have spawned across the Internet. Among that group, webpack has jumped out as one of the best. If you’re not familiar with it, I hope this article will get you started with this powerful tool.
In most programming languages (including ECMAScript 2015+, which is one of the most recent versions of the standard for JavaScript, but isn’t fully supported across all browsers yet), you can separate your code into multiple files and import those files into your application to use the functionality contained in them. This wasn’t built into browsers, so module bundlers were built to bring this capability in a couple of forms: by asynchronously loading modules and running them when they have finished loading, or by combining all of the necessary files into a single JavaScript file that would be loaded via a <script> tag in the HTML.
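For instance, with ES2015 module syntax, splitting code across files might look like this minimal sketch (the file and function names are invented):

```js
// math.js: exports a small helper
export function square(n) {
  return n * n;
}

// main.js: imports and uses it
import { square } from './math.js';
console.log(square(6)); // 36
```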
Without the module loaders and bundlers, you could always combine your files manually or load your HTML with countless <script> tags, but that has several disadvantages; among them, multiple <script> tags mean multiple calls to the server to load all of your code, which is worse for performance.

Most module bundlers also integrate directly with npm or Bower to easily allow you to add third-party dependencies to your application. Just install them and throw in a line of code to import them into your application. Then, run your module bundler, and you’ll have your third-party code combined with your application code, or, if you configure it correctly, you can have all of your third-party code in a separate file, so that when you update the application code, users don’t need to download the vendor code again when they update their cached copy of your application code.
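To illustrate that separate vendor file, a webpack 2 configuration using the CommonsChunkPlugin might look like the sketch below (it is not part of this tutorial’s setup, just an example of the idea):

```js
// webpack.config.js: split application and vendor code into two bundles
var webpack = require('webpack');

module.exports = {
  entry: {
    app: './src/main.js',
    vendor: ['lodash'] // third-party dependencies go here
  },
  output: {
    filename: '[name].bundle.js',
    path: __dirname + '/dist'
  },
  plugins: [
    // pulls modules shared with the 'vendor' entry out of the app bundle
    new webpack.optimize.CommonsChunkPlugin({ name: 'vendor' })
  ]
};
```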
Now that you have basic knowledge of the purpose of webpack, why should you choose webpack over the competition? There are a few reasons:
I’ve seen only a few other module bundlers and build tools that can say the same thing, but webpack seems to have one thing over those: a large community that can help when you get stuck. Browserify’s community is probably just as big, if not larger, but it lacks a few of the potentially essential features that come with webpack. With all the praise I’ve given webpack, I’m sure you’re just waiting for me to move on and show some code, right? Let’s do that, then.
Before we can use webpack, we need to install it. To do that, we’re going to need Node.js and npm, both of which I’m just going to assume you have. If you don’t have them installed, then the Node.js website1 is a great place to start.
Now, there are two ways to install webpack (or any other CLI package, for that matter): globally or locally. If you install it globally, you can use it no matter what directory you’re in, but then it won’t be included as a dependency for your project, and you can’t switch between versions of webpack for different projects (some projects might need more work to upgrade to a later version, so they might have to wait). So, I prefer to install CLI packages locally and either use relative paths or npm scripts2 to run the package. If you’re not used to installing CLI packages locally, you can read about it in a post I wrote about getting rid of global npm packages3.
We’re going to be using npm scripts for our examples anyway, so let’s just forge ahead with installing it locally. First things first: Create a directory for the project where we can experiment and learn about webpack. I have a repository on GitHub4 that you can clone and whose branches you can switch between to follow along, or you can start a new project from scratch and maybe use my GitHub repository for comparison.
Once you’re inside the project directory via your console of choice, you’ll want to initialize the project with npm init
. The information you provide really isn’t that important, though, unless you plan on publishing this project on npm.
Now that you have a package.json
file all set up (npm init
created it), you can save your dependencies in there. So, let’s use npm to install webpack as a dependency with npm install webpack -D
. (-D
saves it in package.json
as a development dependency; you could also use --save-dev
.)
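Just as a quick sanity check, your package.json should now list webpack under devDependencies, something along these lines (the name and version fields are whatever you gave npm init, and the webpack version will simply be whatever was current when you installed):

{
  "name": "webpack-example",
  "version": "1.0.0",
  "devDependencies": {
    "webpack": "^2.2.0"
  }
}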
Before we can use webpack, we should have a simple application to use it on. When I say simple, I mean it. First, let’s install Lodash5 just so that we have a dependency to load into our simple app: npm install lodash -S
(-S
is the same as --save
). Then, we’ll create a directory named src
, and in there we’ll create a file named main.js
with the following contents:
var map = require('lodash/map');

function square(n) {
  return n*n;
}

console.log(map([1,2,3,4,5,6], square));
Pretty simple, right? We’re just creating a small array with the integers 1 through 6, then using Lodash’s map
to create a new array by squaring the numbers from the original array. Finally, we’re outputting the new array to the console. This file can even be run by Node.js, which you can see by running node src/main.js
, which should show this output: [ 1, 4, 9, 16, 25, 36 ]
.
But we want to bundle up this tiny script with the Lodash code that we need and make it ready for browsers, which is where webpack comes in. How do we do that?
The easiest way to get started with using webpack without wasting time on a configuration file is just to run it from the command line. The simplest version of the command for webpack without using a configuration file takes an input file path and an output file path. Webpack will read from that input file, tracing through its dependency tree, combining all of the files together into a single file and outputting the file at the location you’ve specified as the output path. For this example, our input path is src/main.js
, and we want to output the bundled file to dist/bundle.js
. So, let’s create an npm script to do that (we don’t have webpack installed globally, so we can’t run it directly from the command line). In package.json
, edit the "scripts"
section to look like the following:
… "scripts": { "build": "webpack src/main.js dist/bundle.js", } …
Now, if you run npm run build
, webpack should get to work. When it’s done, which shouldn’t take long, there should be a new dist/bundle.js
file. Now you can run that file with Node.js (node dist/bundle.js
) or run it in the browser with a simple HTML page and see the same result in the console.
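If you want to try the browser route right away, a bare-bones page like the following would do the trick (just a sketch; save it next to the bundle in the dist directory, open it in a browser and check the console):

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>webpack test page</title>
</head>
<body>
  <!-- load the bundle that webpack just produced -->
  <script src="bundle.js"></script>
</body>
</html>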
Before exploring webpack some more, let’s make our build scripts a little more professional by deleting the dist
directory and its contents before rebuilding, and also adding some scripts to execute our bundle. The first thing we need to do is install del-cli
so that we can delete directories without upsetting the people who don’t use the same operating system as us (don’t hate me because I use Windows); npm install del-cli -D
should do the trick. Then, we’ll update our npm scripts to the following:
… "scripts": { "prebuild": "del-cli dist -f", "build": "webpack src/main.js dist/bundle.js", "execute": "node dist/bundle.js", "start": "npm run build -s && npm run execute -s" } …
We kept "build"
the same as before, but now we have "prebuild"
to do some cleanup, which will run prior to "build"
every time "build"
is told to run. We also have "execute"
, which uses Node.js to execute the bundled script, and we can use "start"
to do it all with one command (the -s
bit just makes it so that the npm scripts don’t output as much useless stuff to the console). Go ahead and run npm start
. You should see webpack’s output, quickly followed by our squared array, show up in your console. Congratulations! You’ve just finished everything in the example1
branch of the repository I mentioned earlier.
As fun as it is to use the webpack command line to get started, once you start using more of webpack’s features, you’re going to want to move away from passing in all of your options via the command line and instead use a configuration file, which will have more capability and also be more readable because it’s written in JavaScript.
So, let’s create that configuration file. Create a new file named webpack.config.js
in your project’s root directory. This is the file name that webpack will look for by default, but you can pass the --config [filename]
option to webpack if you want to name your configuration file something else or to put it in a different directory.
For this tutorial, we’ll just use the standard file name, and for now we’ll try to get it working the same way that we had it working with just the command line. To do that, we need to add the following code to the config file:
module.exports = {
  entry: './src/main.js',
  output: {
    path: './dist',
    filename: 'bundle.js'
  }
};
We’re specifying the input file and the output file, just like we did with the command line before. This is a JavaScript file, not a JSON file, so we need to export the configuration object — hence, the module.exports
. It doesn’t exactly look nicer than specifying these options through the command line yet, but by the end of the article, you’ll be glad to have it all in here.
Now we can remove those options that we were passing to webpack from the scripts in our package.json
file. Your scripts should look like this now:
… "scripts": { "prebuild": "del-cli dist -f", "build": "webpack", "execute": "node dist/bundle.js", "start": "npm run build -s && npm run execute -s" } …
You can npm start
like you did before, and it should look very familiar! That’s all we needed for the example2
branch.
We have two primary ways to add to webpack’s capabilities: loaders and plugins. We’ll discuss plugins later. Right now we’ll focus on loaders, which are used to apply transformations or perform operations on files of a given type. You can chain multiple loaders together to handle a single file type. For example, you can specify that files with the .js
extension will all be run through ESLint6 and then will be compiled from ES2015 down to ES5 by Babel7. If ESLint comes across a warning, it’ll be outputted to the console, and if it encounters any errors, it’ll prevent webpack from continuing.
For our little application, we won’t be setting up any linting, but we will be setting up Babel to compile our code down to ES5. Of course, we should have some ES2015 code first, right? Let’s convert the code from our main.js
file to the following:
import { map } from 'lodash';

console.log(map([1,2,3,4,5,6], n => n*n));
This code is doing essentially the same exact thing, but (1) we’re using an arrow function instead of the named square
function, and (2) we’re loading map
from 'lodash'
using ES2015’s import
. This will actually load a larger Lodash file into our bundle because we’re asking for all of Lodash, instead of just asking for the code associated with map
by requesting 'lodash/map'
. You can change that first line to import map from 'lodash/map'
if you prefer, but I switched it to this for a few reasons:
(Note: These two ways of loading work with Lodash because the developers have explicitly created it to work that way. Not all libraries are set up to work this way.)
Anyway, now that we have some ES2015, we need to compile it down to ES5 so that we can use it in decrepit browsers (ES2015 support8 is actually looking pretty good in the latest browsers!). For this, we’ll need Babel and all of the pieces it needs to run with webpack. At a minimum, we’ll need babel-core9 (Babel’s core functionality, which does most of the work), babel-loader10 (the webpack loader that interfaces with babel-core) and babel-preset-es201511 (which contains the rules that tell Babel to compile from ES2015 to ES5). We’ll also get babel-plugin-transform-runtime12 and babel-polyfill13, both of which change the way Babel adds polyfills and helper functions to your code base, although each does it a bit differently, so they’re suited to different kinds of projects. Using both of them wouldn’t make much sense, and you might not want to use either of them, but I’m adding both of them here so that no matter which you choose, you’ll see how to do it. If you want to know more about them, you can read the documentation pages for the polyfill14 and runtime transform15.
Anyway, let’s install all of that: npm i -D babel-core babel-loader babel-preset-es2015 babel-plugin-transform-runtime babel-polyfill
. And now let’s configure webpack to use it. First, we’ll need a section to add loaders. So, update webpack.config.js
to this:
module.exports = {
  entry: './src/main.js',
  output: {
    path: './dist',
    filename: 'bundle.js'
  },
  module: {
    rules: [
      …
    ]
  }
};
We’ve added a property named module
, and within that is the rules
property, which is an array that holds the configuration for each loader you use. This is where we’ll be adding babel-loader. For each loader, we need to set a minimum of these two options: test
and loader
. test
is usually a regular expression that is tested against the absolute path of each file. These regular expressions usually just test for the file’s extension; for example, /\.js$/
tests whether the file name ends with .js
. For ours, we’ll be setting this to /\.jsx?$/
, which will match .js
and .jsx
, just in case you want to use React in your app. Now we’ll need to specify loader
, which specifies which loaders to use on files that pass the test
.
This can be specified by passing in a string with the loaders’ names, separated by an exclamation mark, such as 'babel-loader!eslint-loader'
. webpack reads these from right to left, so eslint-loader
will be run before babel-loader
. If a loader has specific options that you want to specify, you can use query string syntax. For example, to set the fakeoption
option to true
for Babel, we’d change that previous example to 'babel-loader?fakeoption=true!eslint-loader'
. You can also use the use
option instead of the loader
option, which allows you to pass in an array of loaders if you think that’d be easier to read and maintain. For example, the last example would be changed to use: ['babel-loader?fakeoption=true', 'eslint-loader']
, which can always be changed to multiple lines if you think it would be more readable.
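To make that concrete, here is a sketch of what the Babel-plus-ESLint rule could look like in the use form; fakeoption is still our made-up stand-in, not a real Babel option:

…
rules: [
  {
    test: /\.jsx?$/,
    use: [
      // listed first, but runs second: loaders in a 'use' array
      // are applied from the bottom up
      { loader: 'babel-loader', options: { fakeoption: true } },
      'eslint-loader'
    ]
  }
]
…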
Because Babel is the only loader we’ll be using, this is what our loader configuration looks like so far:
…
rules: [
  {
    test: /\.jsx?$/,
    loader: 'babel-loader'
  }
]
…
If you’re using only one loader, as we are, then there is an alternative way to specify options for the loader, rather than using the query strings: by using the options
object, which will just be a map of key-value pairs. So, for the fakeoption
example, our config would look like this:
…
rules: [
  {
    test: /\.jsx?$/,
    loader: 'babel-loader',
    options: {
      fakeoption: true
    }
  }
]
…
We will be using this syntax to set a few options for Babel:
…
rules: [
  {
    test: /\.jsx?$/,
    loader: 'babel-loader',
    options: {
      plugins: ['transform-runtime'],
      presets: ['es2015']
    }
  }
]
…
We need to set the presets so that all of the ES2015 features will be transformed into ES5, and we’re also setting it up to use the transform-runtime plugin that we installed. As mentioned, this plugin isn’t necessary, but it’s there to show you how to do it. An alternative would be to use the .babelrc
file to set these options, but then I wouldn’t be able to show you how to do it in webpack. In general, I would recommend using .babelrc
, but we’ll keep the configuration in here for this project.
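For reference, the .babelrc version of those same options would be a small JSON file in the project root that looks something like this:

{
  "presets": ["es2015"],
  "plugins": ["transform-runtime"]
}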
There’s just one more thing we need to add for this loader. We need to tell Babel not to process files in the node_modules
folder, which should speed up the bundling process. We can do this by adding the exclude
property to the loader to specify not to do anything to files in that folder. The value for exclude
should be a regular expression, so we’ll set it to /node_modules/
.
…
rules: [
  {
    test: /\.jsx?$/,
    loader: 'babel-loader',
    exclude: /node_modules/,
    options: {
      plugins: ['transform-runtime'],
      presets: ['es2015']
    }
  }
]
…
Alternatively, we could have used the include
property and specified that we should only use the src
directory, but I think we’ll leave it as it is. With that, you should be able to run npm start
again and get working ES5 code for the browser as a result. If you decide that you’d rather use the polyfill instead of the transform-runtime plugin, then you’ll have a change or two to make. First, you can delete the line that contains plugins: ['transform-runtime'],
(you can also uninstall the plugin via npm if you’re not going to use it). Then, you need to edit the entry
section of the webpack configuration so that it looks like this:
entry: [
  'babel-polyfill',
  './src/main.js'
],
Instead of using a string to specify a single entry point, we use an array to specify multiple entry files, the new one being the polyfill. We specify the polyfill first so that it’ll show up in the bundled file first, which is necessary to ensure that the polyfills exist before we try to use them in our code.
Instead of using webpack’s configuration, we could have added a line at the top of src/main.js
, import 'babel-polyfill';
, which would accomplish the exact same thing in this case. We used the webpack entry configuration instead because we’ll need it to be there for our last example, and because it’s a good example to show how to combine multiple entries into a single bundle. Anyway, that’s it for the example3
branch of the repository. Once again, you can run npm start
to verify that it’s working.
Let’s add another loader in there: Handlebars. The Handlebars loader will compile a Handlebars template into a function, which is what will be imported into the JavaScript when you import a Handlebars file. This is the sort of thing that I love about loaders: you can import non-JavaScript files, and when it’s all bundled, what is imported will be something useable by JavaScript. Another example would be to use a loader that allows you to import an image file and that transforms the image into a base64-encoded URL string that can be used in the JavaScript to add an image inline to the page. If you chain multiple loaders, one of the loaders might even optimize the image to be a smaller file size.
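As a rough sketch of that image example, url-loader is one loader that behaves this way; the limit value below is an arbitrary choice of mine, not a recommendation:

…
rules: [
  {
    test: /\.(png|jpg)$/,
    loader: 'url-loader',
    // inline files smaller than ~10kB as base64 data URLs;
    // anything larger is emitted as a separate file instead
    options: { limit: 10240 }
  }
]
…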
As usual, the first thing we need to do is install the loader with npm install -D handlebars-loader
. If you try to use it, though, you’ll find that it also needs Handlebars itself: npm install -D handlebars
. This is so that you have control over which version of Handlebars to use without needing to sync your version with the loader version. They can evolve independently.
Now that we have both of these installed, we have a Handlebars template to use. Create a file named numberlist.hbs
in the src
directory with the following contents:
<ul>
  {{#each numbers as |number i|}}
    <li>{{number}}</li>
  {{/each}}
</ul>
This template expects an array (of numbers, judging by the variable names, though it should work even if they aren’t numbers) and creates an unordered list from its contents.
Now, let’s adjust our JavaScript file to use that template to output a list created from the template, rather than just logging out the array itself. Your main.js
file should now look like this:
import { map } from 'lodash';
import template from './numberlist.hbs';

let numbers = map([1,2,3,4,5,6], n => n*n);

console.log(template({numbers}));
Sadly, this won’t work right now because webpack doesn’t know how to import numberlist.hbs
, because it’s not JavaScript. If we want to, we could add a bit to the import
statement that informs webpack to use the Handlebars loader:
import { map } from 'lodash';
import template from 'handlebars-loader!./numberlist.hbs';

let numbers = map([1,2,3,4,5,6], n => n*n);

console.log(template({numbers}));
By prefixing the path with the name of a loader and separating the loader’s name from the file path with an exclamation point, we tell webpack to use that loader for that file. With this, we don’t have to add anything to the configuration file. However, in a large project, you’ll likely be loading in several templates, so it would make more sense to tell webpack in the configuration file that we should use Handlebars so that we don’t need to add handlebars-loader!
to the path for every single import of a template. Let’s update the configuration:
…
rules: [
  {/* babel loader config… */},
  {
    test: /\.hbs$/,
    loader: 'handlebars-loader'
  }
]
…
This one was simple. All we needed to do was specify that we wanted handlebars-loader to handle all files with the .hbs
extension. That’s it! We’re done with Handlebars and the example4
branch of the repository. Now when you run npm start
, you’ll see the webpack bundling output, as well as this:
<ul>
  <li>1</li>
  <li>4</li>
  <li>9</li>
  <li>16</li>
  <li>25</li>
  <li>36</li>
</ul>
Plugins are the way, other than loaders, to install custom functionality into webpack. You have much more freedom to add them to the webpack workflow because they aren’t limited to being used only while loading specific file types; they can be injected practically anywhere and are, therefore, able to do much more. It’s hard to give an impression of how much plugins can do, so I’ll just send you to the list of npm packages that have “webpack-plugin”16 in the name, which should be a pretty good representation.
We’ll only be touching on two plugins for this tutorial (one of which we’ll see later). We’ve already gone quite long with this post, so why be excessive with even more plugin examples, right? The first plugin we’ll use is HTML Webpack Plugin17, which simply generates an HTML file for us — we can finally start using the web!
Before using the plugin, let’s update our scripts so that we can run a simple web server to test our application. First, we need to install a server: npm i -D http-server
. Then, we’ll change our execute
script to the server
script and update the start
script accordingly:
… "scripts": { "prebuild": "del-cli dist -f", "build": "webpack", "server": "http-server ./dist", "start": "npm run build -s && npm run server -s" }, …
After the webpack build is done, npm start
will also start up a web server, and you can navigate to localhost:8080
to view your page. Of course, we still need to create that page with the plugin, so let’s move on to that. First, we need to install the plugin: npm i -D html-webpack-plugin
.
When that’s done, we need to hop into webpack.config.js
and make it look like this:
var HtmlwebpackPlugin = require('html-webpack-plugin');

module.exports = {
  entry: [
    'babel-polyfill',
    './src/main.js'
  ],
  output: {
    path: './dist',
    filename: 'bundle.js'
  },
  module: {
    rules: [
      {
        test: /\.jsx?$/,
        loader: 'babel-loader',
        exclude: /node_modules/,
        options: {
          plugins: ['transform-runtime'],
          presets: ['es2015']
        }
      },
      {
        test: /\.hbs$/,
        loader: 'handlebars-loader'
      }
    ]
  },
  plugins: [
    new HtmlwebpackPlugin()
  ]
};
The two changes we made were to import the newly installed plugin at the top of the file and then add a plugins
section at the end of the configuration object, where we passed in a new instance of our plugin.
At this point, we aren’t passing in any options to the plugin, so it’s using its standard template, which doesn’t include much, but it does include our bundled script. If you run npm start
and then visit the URL in the browser, you’ll see a blank page, but you should see that HTML being outputted to the console if you open your developer’s tools.
We should probably have our own template and get that HTML spit out onto the page rather than into the console, so that a “normal” person could actually get something from this page. First, let’s make our template by creating an index.html
file in the src
directory. By default, it’ll use EJS for the templating; however, you can configure the plugin to use any template language18 available to webpack. We’ll use the default EJS because it doesn’t make much difference. Here are the contents of that file:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title><%= htmlWebpackPlugin.options.title %></title>
</head>
<body>
  <h2>This is my Index.html Template</h2>
  <div id="app-container"></div>
</body>
</html>
You’ll notice a few things:

- The title is filled in through an EJS placeholder, using a value we’ll pass in from the plugin’s options.
- We don’t add a script tag ourselves; the plugin injects the bundled script at the end of the body tag by default.
- There’s a div with an id in there. We’ll be using this now.

We now have the template we want; so, at the very least, we won’t have a blank page. Let’s update main.js
so that it appends that HTML to that div
, instead of putting it into the console. To do this, just update the last line of main.js
to document.getElementById("app-container").innerHTML = template({numbers});
.
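For reference, the whole main.js file should now read roughly like this:

import { map } from 'lodash';
import template from './numberlist.hbs';

let numbers = map([1,2,3,4,5,6], n => n*n);

// render the template's HTML into the div from our index.html template
document.getElementById("app-container").innerHTML = template({numbers});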
We also need to update our webpack configuration to pass in a couple options to the plugin. Your config file should now look like this:
var HtmlwebpackPlugin = require('html-webpack-plugin');

module.exports = {
  entry: [
    'babel-polyfill',
    './src/main.js'
  ],
  output: {
    path: './dist',
    filename: 'bundle.js'
  },
  module: {
    rules: [
      {
        test: /\.jsx?$/,
        loader: 'babel-loader',
        exclude: /node_modules/,
        options: {
          plugins: ['transform-runtime'],
          presets: ['es2015']
        }
      },
      {
        test: /\.hbs$/,
        loader: 'handlebars-loader'
      }
    ]
  },
  plugins: [
    new HtmlwebpackPlugin({
      title: 'Intro to webpack',
      template: 'src/index.html'
    })
  ]
};
The template
option specifies where to find our template, and the title
option is passed into the template. Now, if you run npm start
, you should see our template’s heading, followed by the list of squared numbers, in your browser.
That brings us to the end of the example5
branch of the repository, in case you’re following along in there. Each plugin will likely have very different options and configurations of its own, because there are so many of them and they can do a wide variety of things, but in the end, they’re practically all added to the plugins
array in webpack.config.js
. There are also many other ways to handle how the HTML page is generated and populated with file names, which can be handy once you start adding cache-busting hashes to the end of the bundle file names.
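I won’t set that up here, but as a taste, the usual approach is a hash placeholder in the output file name, something like the following (the exact placeholder names vary between webpack versions, so check the docs for yours):

output: {
  path: './dist',
  // [chunkhash] changes only when the chunk's content changes,
  // so browsers re-download the file only when it's actually different
  filename: 'bundle.[chunkhash].js'
}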
If you look at the example project’s repository, you’ll see an example6
20 branch where I added JavaScript minification via a plugin, but that isn’t necessary unless you want to make some changes to the configuration of UglifyJS. If you don’t like the default settings of UglifyJS, check out the repository (you should only need to look at webpack.config.js
) to figure out how to use the plugin and configure it. But if you’re good with the default settings, then all you need to do is pass the -p
argument when you run webpack
on the command line. That argument is the “production” shortcut, which is equivalent to using --optimize-minimize
and --optimize-occurence-order
arguments, the first of which minifies the JavaScript and the second of which optimizes the order in which the modules are included in the bundled script, making for a slightly smaller file size and slightly faster execution. The repository has been done for a while, and I learned about the -p
option later, so I decided to keep the plugin example for UglifyJS in there, while informing you of an easier way. Another shortcut you can use is -d
, which will show more debugging information from the webpack output, and which will generate source maps without any extra configuration. You can use plenty more command line shortcuts21 if that’s easier for you.
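If the defaults suit you, then, production minification can be as simple as one more npm script; the build:prod name here is just my choice:

…
"scripts": {
  …
  "build:prod": "webpack -p"
}
…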
One thing that I really enjoyed with RequireJS and couldn’t quite get to work with Browserify (though it may be possible) is lazy-loading modules. One massive JavaScript file will help by limiting the number of HTTP requests required, but it practically guarantees that code will be downloaded that won’t necessarily be used by the visitor in that session.
Webpack has a way of splitting a bundle into chunks that can be lazy-loaded, and it doesn’t even require any configuration. All you need to do is write your code in one of two ways, and webpack will handle the rest. Webpack gives you two methods to do this, one based on CommonJS and the other based on AMD. To lazy-load a module using CommonJS, you’d write something like this:
require.ensure(["module-a", "module-b"], function(require) { var a = require("module-a"); var b = require("module-b"); // … });
You use require.ensure
, passing in an array of module names and then a callback; it makes sure the modules are available (but doesn’t execute them). To actually use a module within that callback, you’ll need to require
it explicitly in there using the argument passed to your callback.
Personally, this feels verbose to me, so let’s look at the AMD version:
require(["module-a", "module-b"], function(a, b) { // … });
With AMD, you use require
, pass in an array of module dependencies, then pass a callback. The arguments for the callback are references to each of the dependencies in the same order that they appear in the array.
Webpack 2 also supports System.import
, which uses promises rather than callbacks. I think this will be a useful improvement, although wrapping this in a promise shouldn’t be hard if you really want them now. Note, however, that System.import
is already deprecated in favor of the newer specification for import()
. The caveat here, though, is that Babel (and TypeScript) will throw syntax errors if you use it. You can use babel-plugin-dynamic-import-webpack22, but that will convert it to require.ensure
rather than just helping Babel see the new import
function as legal and leaving it alone so webpack can handle it. I don’t see AMD or require.ensure
going away any time soon, and System.import
will be supported until version 3, which should be decently far in the future, so just use whichever one you fancy the best.
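For completeness, here is roughly what the promise-based style looks like, using the same hypothetical modules as above; treat it as a syntax sketch, given the Babel caveat just mentioned:

System.import("module-a")
  .then(a => {
    // module-a has been loaded and can be used here
  })
  .catch(err => {
    // the chunk failed to load (a network error, for example)
  });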
Let’s augment our code to wait for a couple seconds, then lazy-load in the Handlebars template and output the list to the screen. To do that, we’ll remove the import
of the template near the top and wrap the last line in a setTimeout
and an AMD version of require
for the template:
import { map } from 'lodash';

let numbers = map([1,2,3,4,5,6], n => n*n);

setTimeout( () => {
  require(['./numberlist.hbs'], template => {
    document.getElementById("app-container").innerHTML = template({numbers});
  })
}, 2000);
Now, if you run npm start
, you’ll see that another asset is generated, which should be named 1.bundle.js
. If you open up the page in your browser and open your development tools to watch the network traffic, you’ll see that after a 2-second delay, the new file is finally loaded and executed. This, my friend, isn’t all that difficult to implement, but it can be huge for saving on file size and can make the user’s experience so much better.
Note that these sub-bundles, or chunks, contain all of their dependencies, except for the ones that are included in each of their parent chunks. (You can have multiple entries that each lazy-load this chunk and that, therefore, have different dependencies loaded into each parent.)
Let’s talk about one more optimization that can be made: vendor chunks. You can define a separate bundle to be built that will store “common” or third-party code that is unlikely to change. This allows visitors to cache your libraries in a separate file from your application code, so that the libraries won’t need to be downloaded again when you update the application.
To do this, we’ll use a plugin that comes with webpack, called CommonsChunkPlugin
. Because it’s included, we don’t need to install anything; all we need to do is make some edits to webpack.config.js
:
var HtmlwebpackPlugin = require('html-webpack-plugin');
var UglifyJsPlugin = require('webpack/lib/optimize/UglifyJsPlugin');
var CommonsChunkPlugin = require('webpack/lib/optimize/CommonsChunkPlugin');

module.exports = {
  entry: {
    vendor: ['babel-polyfill', 'lodash'],
    main: './src/main.js'
  },
  output: {
    path: './dist',
    filename: 'bundle.js'
  },
  module: {
    rules: [
      {
        test: /\.jsx?$/,
        loader: 'babel-loader',
        exclude: /node_modules/,
        options: {
          plugins: ['transform-runtime'],
          presets: ['es2015']
        }
      },
      {
        test: /\.hbs$/,
        loader: 'handlebars-loader'
      }
    ]
  },
  plugins: [
    new HtmlwebpackPlugin({
      title: 'Intro to webpack',
      template: 'src/index.html'
    }),
    new UglifyJsPlugin({
      beautify: false,
      mangle: { screw_ie8: true },
      compress: { screw_ie8: true, warnings: false },
      comments: false
    }),
    new CommonsChunkPlugin({
      name: "vendor",
      filename: "vendor.bundle.js"
    })
  ]
};
Line 3 is where we import the plugin. Then, in the entry
section, we use a different setup, an object literal, to specify multiple entry points. The vendor
entry marks what will be included in the vendor chunk — which includes the polyfill as well as Lodash — and we put our main entry file into the main
entry. Then, we simply need to add the CommonsChunkPlugin
to the plugins
section, specifying the “vendor” chunk as the chunk to base it on and specifying that the vendor code will be stored in a file named vendor.bundle.js
.
By specifying the “vendor” chunk, this plugin will pull all of the dependencies specified by that chunk out of the other entry files and only place them in this vendor chunk. If you do not specify a chunk name here, it’ll create a separate file based on the dependencies that are shared between the entries.
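As a sketch of that second behavior, you could point the plugin at a chunk name that isn’t one of your entries and give it a threshold; the minChunks value here is my own choice, not a recommendation:

new CommonsChunkPlugin({
  name: 'common',
  filename: 'common.bundle.js',
  // pull a module into this chunk once at least two entry chunks use it
  minChunks: 2
})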
When you run webpack, you should see three JavaScript files now: bundle.js
, 1.bundle.js
and vendor.bundle.js
. You can run npm start
and view the result in the browser if you’d like. It seems that webpack will even put the majority of its own code for handling the loading of different modules into the vendor chunk, which is definitely useful.
And that concludes the example8
branch, as well as the tutorial. I have touched on quite a bit, but it only gives you a tiny taste of what is possible with webpack. Webpack enables easy CSS modules23, cache-busting hashes, image optimization and much, much more — so much that even if I wrote a massive book on the subject, I couldn’t show you everything, and by the time I finished writing that book, most (if not all) of it would be outdated! So, give webpack a try today, and let me know if it improves your workflow. God bless and happy coding!
Front page image credit: webpack24 (official site)
(rb, al, il)