Bots and Artificial Intelligence are probably the most hyped concepts right now. And while some people praise the existing technologies, others claim they don’t fear AI at all, citing examples where it fails horribly. Examples of Facebook or Amazon advertising (both claim to use machine learning) that don’t match our interests at all are quite common today.
But what happens if we look at autonomous cars, trains or planes that have the very same machine learning technologies in place? How about the military using AI for its actions? While we’re still experimenting with these capable technologies, we also need to consider the possible consequences, the responsibilities that we have as developers and how all of this might affect the people the technology is being served to.
This week, Firefox 53 rolled out to end users, shipping performance improvements, positioned CSS Masks, and the new display: flow-root value that effectively replaces our common clearfix methods. The update also comes with a revamped media player design. Finally, this is the first Firefox version without Windows XP and Vista support, so if you rely on one of these operating systems, consider switching to the ESR version of Firefox and upgrading to a newer system as soon as possible (Microsoft no longer supports these OSes).
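To put the new value in context, here is a minimal sketch of how display: flow-root can replace a classic clearfix (the class name is illustrative):

/* Old approach: the clearfix hack */
.container::after {
  content: "";
  display: table;
  clear: both;
}

/* New approach: the container establishes its own block formatting context */
.container {
  display: flow-root;
}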
Chrome 58 comes with support for IndexedDB 2.0, fullscreen support for progressive web apps, and improvements for sandboxed iframes. Alongside Firefox 53, the new Chrome is the second browser to support display: flow-root, the new clearfix replacement. There’s also PointerEvent.getCoalescedEvents(), a new method that gives you access to all input events that took place since the last time a PointerEvent was delivered — useful for drawing applications but also quite risky from a privacy and user-tracking perspective.
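A rough sketch of how a drawing app might consume the coalesced events (the canvas element and the drawPoint() helper are assumptions of mine):

canvas.addEventListener('pointermove', (event) => {
  // Fall back to the delivered event if the method isn't available.
  const events = event.getCoalescedEvents ? event.getCoalescedEvents() : [event];
  for (const e of events) {
    drawPoint(e.clientX, e.clientY); // hypothetical drawing helper
  }
});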
Mozilla finally simplified the developer experience and got rid of the Firefox Developer Edition. If you still use it, switch to Firefox Nightly. While there’s still a beta channel, I recommend Nightly: it’s relatively free of bugs that affect day-to-day use, and it supports the latest features, deprecations and development tools weeks or even months ahead of the public release. This gives you more time to adjust code on live sites when something breaks in the Nightly channel. I use WebKit Nightly and Chrome Canary similarly.
Peter O’Shaughnessy challenges us to estimate which web browsers have the most users. As you can probably guess, our existing idea of Chrome, Firefox, Safari, and IE leading the field isn’t up to date anymore. Instead, we need to acknowledge that UC Browser has an impressive market share, Opera Mini still does, too, Yandex is strong in certain regions, and Samsung Internet usage is growing fast as more devices ship with it. And Google Analytics isn’t telling us the whole truth anyway — big parts of “Chrome” might actually be Samsung Internet.
Google Chrome can now be run in headless mode, replacing PhantomJS or SlimerJS. Jim Cummins explains how to set it up on macOS. For Windows and Linux it should be similar, using bash and a few adaptations to the local commands.
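For reference, invoking headless Chrome from the command line looks roughly like this (the binary name and exact flags can vary by platform and version):

# Dump a page's DOM or take a screenshot without opening a window
chrome --headless --disable-gpu --dump-dom https://example.com/
chrome --headless --disable-gpu --screenshot https://example.com/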
Jeremy Thomas experimented with browsers and tried to disable cookies entirely. Read about how successful he was with it and what challenges he faced with modern web applications.
Stefan Judis started a discussion about whether it’s time to rethink bundling as support for ES6 modules is now landing in browsers (currently in Safari, and behind a flag in Firefox and Edge, while Chrome has it in development).
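Native module loading in supporting browsers boils down to something like this (the file names are illustrative):

<!-- Loaded as an ES6 module in supporting browsers -->
<script type="module" src="app.js"></script>
<!-- Fallback bundle for browsers without module support -->
<script nomodule src="bundle.js"></script>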
Jekyll is gaining popularity as a lightweight alternative to WordPress. It often gets pigeonholed as a tool developers use to build their personal blog. That’s just the tip of the iceberg — it’s capable of so much more!
In this article, we’ll take on the role of a web developer building a website for a fictional law firm. WordPress is an obvious choice for a website like this, but is it the only tool we should consider? Let’s look at a completely different way of building a website, using Jekyll.
Jekyll is a static website generator. Instead of software and a database being installed on our server, a Jekyll website is simply a directory of files on our computer. When we run Jekyll on that directory, it generates a static website, which we upload to a hosting provider.
A Jekyll website is essentially a static website with a templating language. It has fewer components to create and maintain. On the server, we only need a web server capable of serving files.
Speed
When visitors view pages on Jekyll sites, the server returns existing files without any extra processing. This is much faster than WordPress, which generates pages dynamically at request time. Note: WordPress Caching plugins can eliminate this performance gap.
Stability
WordPress has more components working together to generate pages for visitors. If a component fails, visitors may not be able to view the website. Much less can go wrong when a web server is serving only files.
Security
WordPress does a lot to mitigate security risks such as CSRF, XSS or SQL injection attacks; however, it relies on you always having the latest updates installed. Static sites eliminate this problem because there’s no dynamic data storage for a hacker to exploit.
Source-controlled
A Jekyll website is a directory of files, so we can store the entire website in a Git repository. Working with a repository gives us many benefits (although VersionPress is in development and enables this workflow for WordPress).
A client can sign up to WordPress.com, choose a theme and set up a basic website by themselves. Jekyll is a command-line tool, which overwhelms most non-technical users. There are third-party GUIs for Jekyll, including CloudCannon (disclaimer: I’m the cofounder), Forestry, Jekyll Admin, Netlify CMS, Prose and Siteleaf. However, these need to be set up by the developer before being handed off to the client.
Build time
In our situation, this isn’t a problem because the website will build in under a second. However, a larger website with 10,000 to 100,000 posts could take minutes to build. This is frustrating when we’re developing because we have to wait for the website to build before previewing it in the browser.
Themes
Jekyll has some themes available, but it’s nothing compared to the thousands of themes available for WordPress.
Extensibility
If we need to add custom functionality to our WordPress website, we can write our own PHP. We can create custom Ruby plugins for Jekyll; however, these run at build time rather than at request time.
Support
WordPress has a huge community of experts and other resources to help. Jekyll has similar resources but on a smaller scale.
Jekyll is a great tool for largely informational websites, like this project. If the project is more of an application, we could add dynamic elements using JavaScript, but at some point we would probably need a back end like WordPress’.
A typical development environment for WordPress requires installation of Apache or NGINX, PHP and MySQL. Then, we would install WordPress and configure the web server.
For Jekyll, we need to make sure we have Ruby installed (sometimes this is harder than it sounds). Then we install the Jekyll gem:
gem install jekyll
If you’re on macOS, make sure you have Xcode’s command-line developer tools installed first.
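With the gem installed, scaffolding and previewing a site locally looks like this (the project name is illustrative):

jekyll new justice-law
cd justice-law
jekyll serve    # builds the site and serves it at http://localhost:4000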
It’s time to create our first page. Let’s start with the home page. Pages are for standalone content without an associated date. WordPress stores page content in the database.
In Jekyll, pages are HTML files. We’ll start with pure HTML and then add Jekyll features as they’re needed. Here’s index.html in its current state:
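The original file isn’t reproduced here; a stripped-down sketch of what it might contain (based on the snippets later in this article) looks like this:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Justice Law</title>
  </head>
  <body>
    <nav>
      <!-- site navigation -->
    </nav>

    <p>Justice Law is professional representation. Practicing for over 50 years, our team have the knowledge and skills to get you results.</p>

    <blockquote>
      <p>Justice Law are the best of the best. Being local, they care about people and have strong ties to the community.</p>
      <p>
        <img src="/images/peter.jpeg" alt="Photo of Peter Rottenburg">
        Peter Rottenburg
      </p>
    </blockquote>

    <footer>
      <!-- footer content -->
    </footer>
  </body>
</html>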
In WordPress, we can write PHP to do almost anything. Jekyll takes a different approach. Instead of providing a full programming language, it uses a templating language named Liquid. (WordPress has templating languages, too, such as Timber.)
The footer of index.html contains a copyright notice with a year:
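One way to render it (a sketch using Liquid’s date filter and Jekyll’s build-time site.time variable; the markup is illustrative):

<footer>
  <p>&copy; {{ site.time | date: "%Y" }} Justice Law</p>
</footer>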
We’re building a static website in Jekyll, so this date won’t change until we rebuild the website. If we wanted the date to change without having to rebuild the website, we could use JavaScript.
The bulk of the HTML in index.html is for setting up the overall layout and won’t change between pages. This repetition will lead to a lot of maintenance, so let’s reduce it.
Includes were one of the first things I learned in PHP. Using includes, we can put the header and footer content in different files, then include the same content on multiple pages.
Jekyll has exactly the same feature. Includes are stored in a folder named _includes. We use Liquid to include them in index.html:
{% include header.html %}

<p>Justice Law is professional representation. Practicing for over 50 years, our team have the knowledge and skills to get you results.</p>

<blockquote>
  <p>Justice Law are the best of the best. Being local, they care about people and have strong ties to the community.</p>
  <p>
    <img src="/images/peter.jpeg" alt="Photo of Peter Rottenburg">
    Peter Rottenburg
  </p>
</blockquote>

{% include footer.html %}
Includes reduce some of the repetition, but we still have them on each page. WordPress solves this problem with template files that separate a website’s structure from its content.
The Jekyll equivalent to template files is layouts. Layouts are HTML files with a placeholder for content. They are stored in the _layouts directory. We’ll create _layouts/default.html to contain a basic HTML layout:
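A minimal sketch of _layouts/default.html, where the {{ content }} tag marks where each page’s content is injected:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Justice Law</title>
  </head>
  <body>
    {% include header.html %}
    {{ content }}
    {% include footer.html %}
  </body>
</html>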
Then, replace the includes in index.html by specifying the layout. We specify the layout using front matter, which is a snippet of YAML that sits between two triple-dashed lines at the top of a file (more on this soon).
---
layout: default
---
<p>Justice Law is professional representation. Practicing for over 50 years, our team have the knowledge and skills to get you results.</p>

<blockquote>
  <p>Justice Law are the best of the best. Being local, they care about people and have strong ties to the community.</p>
  <p>
    <img src="/images/peter.jpeg" alt="Photo of Peter Rottenburg">
    Peter Rottenburg
  </p>
</blockquote>
Now we can have the same layout on all of our pages.
In WordPress, custom fields allow us to set meta data on a post. We can use them to set SEO tags or to show and hide sections of a page for a particular post.
This concept is called front matter in Jekyll. Earlier, we used front matter to set the layout for index.html. We can now set our own variables and access them using Liquid. This further reduces repetition on our website.
Let’s add multiple testimonials to index.html. We could copy and paste the HTML, but once again, that leads to increased maintenance. Instead, let’s add the testimonials in front matter and iterate over them with Liquid:
---
layout: default
testimonials:
  - message: We use Justice Law in all our endeavours. They offer an unparalleled service when it comes to running a business.
    image: "/images/joice.jpeg"
    name: Joice Carmold
  - message: Justice Law are the best of the best. Being local, they care about people and have strong ties to the community.
    image: "/images/peter.jpeg"
    name: Peter Rottenburg
  - message: Justice Law were everything we could have hoped for when buying our first home. Highly recommended to all.
    image: "/images/gibblesto.jpeg"
    name: D. and G. Gibbleston
---
<p>Justice Law is professional representation. Practicing for over 50 years, our team have the knowledge and skills to get you results.</p>

<div>
  {% for testimonial in page.testimonials %}
    <blockquote>
      <p>{{ testimonial.message }}</p>
      <p>
        <img src="{{ testimonial.image }}" alt="Photo of {{ testimonial.name }}">
        {{ testimonial.name }}
      </p>
    </blockquote>
  {% endfor %}
</div>
WordPress stores the HTML content, date and other meta data for posts in the database.
In Jekyll, each post is a static file stored in a _posts directory. The file name has the publication date and title for the post — for example, _posts/2016-11-11-real-estate-flipping.md. The source code for a blog post takes this structure:
---
layout: post
categories:
  - Property
---
Flipping is a term used primarily in the US to describe purchasing a revenue-generating asset and quickly reselling it for profit.

![House](/images/house.jpeg)
We can also use front matter to set categories and tags.
Below the front matter is the body of the post, written in Markdown. Markdown is a simpler alternative to HTML.
Jekyll allows us to create layouts that inherit from other layouts. You might have noticed this post has a layout of post. The post layout inherits from the default layout and adds a date and title:
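A sketch of what _layouts/post.html could look like; it declares the default layout in its own front matter and wraps the post content:

---
layout: default
---
<article>
  <h2>{{ page.title }}</h2>
  <p>{{ page.date | date: "%B %-d, %Y" }}</p>
  {{ content }}
</article>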
In WordPress, custom post types are useful for managing groups of content. For example, you might use custom post types for testimonials, products or real-estate listings.
The about.html page shows profiles of staff members. We could define the meta data for the staff (name, image, phone number, bio) in the front matter, but then we could only reference it on that page. In the future, we want to use the same data to display information about authors on blog posts. A collection enables us to refer to staff members anywhere on the website.
Configuration of our website lives in _config.yml. Here, we set a new collection:
collections:
  staff_members:
    output: false
Now we add our staff members. Each staff member is represented in a Markdown file stored in a folder with the collection name; for example, _staff_members/jane-doe.md.
We add the meta data in the front matter and the blurb in the body:
---
name: Jane Doe
image: "/images/jane.jpeg"
phone: "1234567"
---
Jane has 19 years of experience in law, and specialises in property and business.
Similar to posts, we can iterate over the collection in about.html to display each staff member:
---
layout: default
---
<ul>
  {% for member in site.staff_members %}
    <li>
      <div><img src="{{ member.image }}" alt="Staff photo for {{ member.name }}"></div>
      <p>{{ member.name }} - {{ member.phone }}</p>
      <p>{{ member.content | markdownify }}</p>
    </li>
  {% endfor %}
</ul>
Some WordPress plugins can be emulated with core Jekyll. Here’s a photo gallery using front matter and Liquid:
---
layout: default
images:
  - image_path: /images/bill.jpg
    title: Bill
  - image_path: /images/burt.jpg
    title: Burt
  - image_path: /images/gary.jpg
    title: Gary
  - image_path: /images/tina.jpg
    title: Tina
  - image_path: /images/suzy.jpg
    title: Suzy
---
<ul>
  {% for image in page.images %}
    <li><img src="{{ image.image_path }}" alt="{{ image.title }}"/></li>
  {% endfor %}
</ul>
We just need to add our own JavaScript and CSS to complete it.
Jekyll plugins can emulate the functionality of other WordPress plugins. Keep in mind that Jekyll plugins only run while the website is being generated — they don’t add real-time functionality:
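For illustration (the plugin choices here are mine, not the article’s), a few commonly used plugins are enabled in _config.yml and run at build time:

# _config.yml (Jekyll 3.x uses "gems:"; newer versions use "plugins:")
gems:
  - jekyll-sitemap   # generates sitemap.xml when the site is built
  - jekyll-feed      # generates an Atom feed for posts
  - jekyll-seo-tag   # outputs SEO meta tags via {% seo %} in a layout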
One of the major benefits of using a static site generator like Jekyll is the entire site and content can live in Git. At a basic level, Git gives you a history of all the changes on the site. For teams, it opens up all sorts of workflows and approval processes.
That covers the nuts and bolts of creating the website. If you’re curious to see how an entire Jekyll website fits together, have a look at the Justice template. It’s a free MIT-licensed template for Jekyll. The snippets above are based on this template.
The WordPress CMS is built into the platform, so we would need to set up an account for the client.
With our Jekyll website, we’d link our Git repository to one of the Jekyll GUIs mentioned earlier. One of the nice things about this workflow is that clients’ changes are committed back to the repository. As developers, we can continue to use local workflows even with non-developers updating the website.
Some Jekyll GUIs offer hosting, while others have a way to output to an Amazon S3 bucket or to GitHub Pages.
At this point, our Jekyll website is live and editable by the client. If we need to make any changes to the website, we simply push to the repository and it will automatically deploy live.
Now it’s your turn. Plenty of resources are available to help you build your first Jekyll website:
The official Jekyll website is a great place to start, with in-depth documentation on all of Jekyll’s features.
Jekyll.tips has a video tutorial series covering core Jekyll topics.
Have a look at Jekyll templates on GitHub to see how they’re put together: Frisco for marketing websites, Scholar for documentation and Urban for digital agencies.
If you’re migrating, Jekyll has tools to import posts from WordPress and WordPress.com websites. After importing, you’ll need to manually migrate or create the layouts, pages, CSS, JavaScript and other assets for the website.
The beauty of Jekyll is in its simplicity. While WordPress can match many of the features of Jekyll, it often comes at the cost of complexity through extra plugins or infrastructure.
Ultimately, it’s about finding the tool that works best for you. I’ve found Jekyll to be a fast and efficient way to build websites. I encourage you to try it out and post your experience in the comments.
No matter whether you are designing a whole design system or just a couple of screens, symbols in Sketch will help you keep your file organized and will save you a lot of time in the long run. In this article, I’ll share with you a few best practices and tricks to help you unleash symbols’ full potential.
But first, a bit of a backstory. I started using Sketch a few years ago, as a replacement for my favorite design software back then, Fireworks, which had been discontinued by Adobe — leaving a whole generation of designers broken-hearted. Since my first days of using Sketch, I was very surprised by how easy and straightforward it is to use. I had, once again, found an application focused on user interface (and icon) design — and nothing else.
The apparent lack of features in Sketch, compared to the alternatives full of menus and stacked panels that I was used to, was in fact one of its major advantages and helped me to design faster. Among those few features, symbols were the thing that I used very frequently, and still do, practically every day (yes, even on Sundays… you know, a freelancer’s life).
What are symbols? In a nutshell, symbols enable you to use and reuse an element across a project, keeping a master symbol that automatically updates other instances of the symbol when changes are made to it.
This concept is not exactly new (nor is it exclusive to Sketch, to be honest). However, if you design interfaces, then you’ll find it extremely useful, especially when using components as part of a design system.
In this article, I’ll outline how to make use of symbols in Sketch in order to unleash their full potential, going from the most basic situations to some more advanced use cases. I’ll also include some tips and tricks that I have learned along the way.
Before digging deeper, and in case you are new to Sketch, let me give you a short introduction to how symbols work.
Symbols can be made from almost any elements in Sketch: text objects, shapes, bitmap images, even other symbols (we’ll talk about this later). Inside every symbol (double-click a symbol to enter edit mode), you’ll find one main artboard containing the symbol’s layers. This artboard also defines the symbol’s boundaries.
Usually, symbols are created for those elements in an interface that you expect to reuse later on (such as buttons, list items, tabs, etc.) and that will be spread across different screens, pages and artboards in your designs.
Note: For future reference, keep in mind that “copies” of one symbol are called instances.
The best thing about using symbols (instead of grouped, independent and disconnected objects) is that if at some point you decide to change some property in a particular symbol (for example, the color, shape, text size, dimensions or whatever else you want), you’ll just need to edit the symbol’s master once, and this change will be automatically replicated to all of the master’s instances, wherever they are. I don’t know about you, but I find this super-convenient!
Just like in life itself, it’s fundamental to keep everything in order. Always design as if someone else will later need to open and work with your design file and understand it without your help! This also applies to the way you name symbols — naming should meet certain criteria.
One recommendation is to use a slash (/) in the symbol’s name. Sketch will automatically create a category from the part before the slash, and will name and place the symbol inside it using the part of the name following the slash. For example, if you have two symbols named “Button/Primary” and “Button/Secondary,” here is how they will look when you try to insert them from the toolbar:
You can repeat this many times to have several symbols under the same root, grouped by similar logic, making them easier to find. And if your “tree” grows too big, take a moment to reconsider your naming system and see if there’s any possible way to optimize it and make it more manageable.
There are many different conventions for how symbols should be named, perhaps one convention for every designer out there. Personally, I prefer not to use names that refer to the visual properties of the elements — for example, “Red Button” would be a bad choice in my opinion because if the color of the button changes later on for some reason, the name of the symbol will become incorrect. Instead, I try to differentiate the symbol’s function and state (such as “Primary/Disabled”).
In any case, just be consistent and find something that works for both you and your team, then stick to it; don’t switch the naming system according to the case! This also applies to layers inside symbols: some designers even use emojis to mark which of them are meant to be editable (for example, by adding a pencil emoji to the name). To do this, press Control + Command + Space to open a dialog to select emojis.
Note: Regarding symbols’ names, bear in mind that instances will take their names from the master symbol, but you can change them afterwards to whatever you want. This way, instances of the same symbol can have different names from each other.
When you create a symbol, Sketch asks whether you want to send it to the symbols page. My advice is to check this box, even if after a while (and a few symbols later) this dedicated page turns into a mess. (Sketch places one symbol next to the other as they are being created, and when you delete a symbol, you’ll notice the blank space left in its spot.)
Instead, what I do to sort this out is to create my own symbols page (which is just a regular page, which I would usually name “Symbols”) where I can arrange symbol instances in the order I want and, thus, ignore the official symbols page.
This way, I can create artboards that follow categories (such as lists, buttons, inputs and so on) and place symbols in a way that I find convenient and that makes sense to me. You’ll still need to invest some time to update this page from time to time, but once it is created, it will make everything much easier and you’ll be able to build a new screen in no time.
Note: If you prefer to use the symbols page instead, there’s the Symbol Organizer plugin, which could help you keep everything arranged.
Replacing an existing symbol with another is easy. Just select the symbol and choose “Replace with” from the contextual menu that appears when you right-click over the symbol instance. Then, select the new symbol that you want to use. Keep in mind that the new symbol will keep the same size and position as its predecessor; you can fix this by selecting “Set to original size” from the same contextual menu.
Once you’ve made a symbol, you can detach it to recover the elements that form it as a group. To do this, just select “Detach from symbol” in the same contextual menu that I mentioned earlier.
Symbols, like other elements, can also be exported as bitmap images. To do this, you’ll need to mark elements as exportable. (Select the symbol instance, and then choose “Make Exportable” at the bottom of the Inspector.)
The problem that I found during this process is that if the symbol has some padding (for example, if the shapes inside are smaller than the symbol’s total size), when doing the export, Sketch will omit the blank space and will just create an image with the visible content only.
One way to work around this is by using a slice. When creating the slice, place it over the instance and make sure it matches the size of the instance’s boundaries (width and height); then, select the slice and use the exporting options as needed.
Side note: This same trick also applies to other tools, such as Zeplin.
In this world full of screens with multiple sizes and aspect ratios, it’s important to make sure your design adapts to many different scenarios. This is easier to accomplish if you don’t have to design everything from scratch every time, by reusing elements (or symbols, as I’m sure you’ve already guessed).
This is where the resizing options in symbols come in handy, helping you to use the same element with different widths and heights with no hassle: If you resize just one instance by selecting it, this won’t affect the other instances. (But remember that resizing options are applied to individual layers inside the master symbol, not to the instance itself. So, even while you can adjust sizes individually from instance to instance, elements inside will always maintain the same behavior.)
Note: The options outlined below apply not only to symbols, but to groups as well. Behaviors are not always predictable, so chances are that you’ll have to play around and explore a bit before finding what you need, combining one or two different settings in most cases.
When the Stretch option is used, a shape that has specified, let’s say, 50% of the symbol’s total width will keep this same relationship when the instance is extended vertically or horizontally. This is the default behavior.
“Pin to Corner” will (as the name suggests) pin an element to the nearest corner, and the element will not resize, keeping the same distance to this corner. Keep in mind that if the object is centered (with equal spacing from both sides), it won’t know which one is the nearest corner, so it’ll stay in the middle.
If you have resized your symbol but aren’t satisfied with the result, you can always go back to the beginning by choosing “Set to original size” from the contextual menu.
Keep in mind that symbols have dedicated artboards, and these will define the symbols’ boundaries (even when shapes inside overflow on them). You can make the symbol’s artboard the same size as of its contents by selecting it and choosing “Resize to fit” from the Inspector.
In the width and height input fields in the Inspector, you can use operators to change values. For instance, you can use 100*2 to set an element’s dimensions to 200 pixels. Other operators are + (add), - (subtract) and / (divide).
Besides mathematical operators, in the same input fields you can also use L to scale an object from the left (this is the default), R to scale it from the right, T to scale it from the top (this is the default), B to scale it from the bottom, and C and M to scale it from the center or middle.
For example, if you have a shape that has a width of 200 pixels and want to resize it so that it scales from the right to the left side, you can use something like 300r in the width input field.
What could be better than one symbol? Perhaps a symbol with another one inside it!
This feature is kind of new in Sketch, and it gives you a lot of possibilities when combining symbols together. You can place one symbol on top of another, select both, and then create a new symbol that contains the two instances. You can repeat this as much as you’d like. Be moderate, though, or else you’ll find yourself digging into levels and levels of nested symbols, one inside another. This could make maintenance much harder and could also be a symptom of bigger organizational problems.
Nesting symbols can be especially useful when you need to create variations of one symbol. For example, you could follow a process like this:
Pick one symbol that will serve as a base. (This symbol will remain the same in all cases.)
Overlap it with other symbols (such as icons or badges), which could be there or not, depending on the case.
Finally, create another symbol with the resulting design.
In the image below, you can see that all rows share the same characteristics (they have the same size, text properties and amount of padding on the left), so I created a base symbol that contains only these elements (i.e. elements that will be shared with the other symbols). Using this symbol as a starting point, I then created some overlapping elements that are different, saving the result in each case as a different symbol; so, all of the symbols under “Variations” are actually different symbols.
But you don’t — necessarily — need to create a new symbol for every state of the row. There may be a simpler way: using overrides.
If you had to create a lot of different symbols just because one part of their content changes, you’d probably go nuts. One of the main purposes of symbols is precisely to have to design as little as possible and to have fewer elements — and, therefore, more control over them. Enter nested overrides!
One practical example of this workflow could be designing a tab bar with different states. In this case, the main symbol with the inactive tabs would act as the base, and then there would be a different symbol for each one of the highlighted tabs. Just choose the one that you want from the “Overrides” options in the Inspector.
Note: For this technique to work, keep in mind that the inactive tabs inside the main symbol (the navigation bar) need to be symbols as well. Also, be sure that all symbols (both inactive and active ones) have the exact same dimensions (width, height). Otherwise, they won’t appear as available options in the “Overrides” dropdown menu.
Let’s look at another use case. If you have multiple buttons in a design but with different text labels on them, then the Overrides option will enable you to change the text value (not the font family or font size — you have to modify those inside the symbol itself, when editing the symbol master), without having to create a new symbol each time. This is as easy to do as selecting the instance and changing the text content in the Inspector.
Overrides apply not only to text; you can also use them for bitmap images and even for other symbols, as mentioned before. This way, you can have several instances of a symbol, with a different image in each one of them — and all of this without having to modify the symbol’s master.
There are cases when I don’t want to have any particular image as part of a symbol’s master. So, what I usually do is to create an empty PNG file with no visible content, create a shape, and use this image as a pattern fill (you can find this option in the “Fill Options” when selecting a shape). Then, when doing the symbol overriding, I just replace this transparent image with the one that I want in each case!
To get the most out of this practice, I also use a layering system with an icon or element that acts as a placeholder underneath the image and that will be visible only if I keep the original transparent bitmap. One benefit of doing this is that I can simulate this empty state that will appear when images are loading in the finished product, something that I consider necessary to design anyway.
One of the reasons why being organized is a good idea is because the way you name and order layers will affect the way they are displayed in the “Overrides” panel. The labels to the left of the input fields in the Inspector will respect the name and order you’ve previously defined inside the symbol itself, so you’d better pay attention to this order if you want to have a more efficient workflow.
You can replace a nested symbol with another symbol only if the new symbol has the exact same width and height as the current element.
Tip 3: Displacing Elements Depending on Text Length
When changing the text’s value in the Overrides options, you can make an element move as needed when the one to its left is longer (see the following illustration).
The secondary text or shape necessarily needs to be to the right of the text for this to work. Also, both elements should have no more than 20 pixels of distance between them (see the “Further Reading” below).
A symbol can look a bit messy because of the options in the Overrides section. If you don’t want an element inside it to be able to be overridden, just lock or hide this layer and it won’t appear in the list.
There’s one way to quickly make a text element disappear in an instance, by using overrides. To do this, just set the text value to a blank space, pressing the space bar and the return key in the Overrides options.
If you have bitmap images inside a symbol, they can be changed by others using the options in the Overrides section. It’s also possible to recover the original image (the one that forms part of the editable symbol) by choosing “Remove image override” — just right-click over the image box next to “Choose Image” in the Inspector.
“Hacking the Button in Sketch,” by Aleksandr Pasevin, presents a simple hack to keep an icon to the left of the text (instead of to the right, which is the normal behavior), in just a couple of simple steps.
One good thing about Sketch is that when it falls short of a feature, there’s usually a plugin to make up for it. And some of them work especially well with symbols, making them even more powerful! Some of these plugins have been mentioned, but in case you missed any of them, here’s a list with some additions.
Among its many other features, the Sketch Runner plugin will help you easily insert symbols in a document using just a combination of keys. The “go to” option is great for jumping right to a particular symbol — especially useful if your project has a lot of them and it’s difficult to find symbols by other means.
If you are working with a team, InVision Craft Library will make it easy to create a shared library with assets that everybody can use, allowing you to sync changes when you need to update a symbol, so that you are always sure you’re using the symbol’s latest version.
Automate is very powerful and will likely make your work more efficient. Options for managing symbols include ones to remove unused symbols, to select all instances of a symbol, and much more.
With Symbol Organizer, you can organize your symbols page alphabetically (including the layers list) and into separate groups determined by your symbol names.
Auto Layout integrates seamlessly into Sketch and enables defining and viewing different iPhone and iPad sizes, including portrait and landscape. It also supports more advanced features, such as stacks (a special type of group that defines the layout of its child layers), and presets for both Android and iOS. Look at their “Examples” page for more information.
Note: These are only some of the plugins that I think might be most helpful to you, but there are many others. To learn more, just visit Sketch’s official plugin page or the Sketch App Sources website regularly.
Sketch symbols are constantly evolving, so we can expect further improvements that will make them even more valuable and relevant. However, if I had to name just one thing that I would like them to have, it would be the possibility of shared symbol libraries, something like what Figma is doing. This could be extremely useful, especially for teamwork, when several designers working on the same project need to pick elements from a primary, always up-to-date document stored in the cloud.
(Note: Regarding this feature, I’m aware that Sketch’s team is working on it, so hopefully we’ll see it soon. The more open format in version 43 is probably laying the groundwork for it. In any case, I’m looking forward to it, because this could be a game-changer in many designers’ workflows.)
Truth be told, there are currently some plugins that help you accomplish more or less the same behavior mentioned above, but I always find it more reliable when they are made a part of Sketch’s core functionality — which ensures that the feature will keep working when the software is updated to the next version.
I’m aware that there are many more techniques and tricks. The way one works tends to be kind of personal sometimes, and there’s no single right way to do something. Here, I’ve shared the techniques that I think are reliable, interesting and don’t require much hacking. That’s why some techniques were left out of this article.
I hope this was a useful read! If it was, then symbols will probably become the backbone of your designs, and you’ll use them quite often. Feel free to share your thoughts and other tips and tricks in the comments below. You can also always reach me on Twitter63 if you need help!
Today, CSS preprocessors are a standard for web development. One of the main advantages of preprocessors is that they enable you to use variables. This helps you to avoid copying and pasting code, and it simplifies development and refactoring.
We use preprocessors to store colors, font preferences, layout details — almost everything we use in CSS.
But preprocessor variables have some limitations:
You cannot change them dynamically.
They are not aware of the DOM’s structure.
They cannot be read or changed from JavaScript.
As a silver bullet for these and other problems, the community invented CSS custom properties. Essentially, these look and work like CSS variables, and the way they work is reflected in their name.
Custom properties are opening new horizons for web development.
The usual problem when you start with a new preprocessor or framework is that you have to learn a new syntax.
Each preprocessor requires a different way of declaring variables. Usually, it starts with a reserved symbol — for example, $ in Sass and @ in LESS.
CSS custom properties have gone the same way and use -- to introduce a declaration. But the good thing here is that you can learn this syntax once and reuse it across browsers!
You may ask, “Why not reuse an existing syntax?”
There is a reason. In short, it’s to provide a way for custom properties to be used with any preprocessor. This way, we can provide and use custom properties, and our preprocessor will not compile them, so the properties will go directly to the output CSS. And you can reuse preprocessor variables in the native ones, but I will describe that later.
(Regarding the name: Because their ideas and purposes are very similar, custom properties are sometimes called CSS variables, although the correct name is CSS custom properties, and reading further, you will understand why this name describes them best.)
So, to declare a variable instead of a usual CSS property such as color or padding, just provide a custom-named property that starts with --:
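For example (the property names are mine):

:root {
  --main-color: #4d4e53;
  --main-bg: #fff;
}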
In case you are not sure what :root matches, in HTML it’s the same as html but with a higher specificity.
As with other CSS properties, custom ones cascade in the same way and are dynamic. This means they can be changed at any moment and the change is processed accordingly by the browser.
To use a variable, you have to use the var() CSS function and provide the name of the property inside:
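Continuing the sketch above:

.box {
  color: var(--main-color);
  background: var(--main-bg);
}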
The var() function is a handy way to provide a default value. You might do this if you are not sure whether a custom property has been defined and want to provide a value to be used as a fallback. This can be done easily by passing the second parameter to the function:
.box {
  --box-color: #4d4e53;
  --box-padding: 0 10px;

  /* 10px is used because --box-margin is not defined. */
  margin: var(--box-margin, 10px);
}
As you might expect, you can reuse other variables to declare new ones:
.box {
  /* The --main-padding variable is used if --box-padding is not defined. */
  padding: var(--box-padding, var(--main-padding));

  --box-text: 'This is my box';

  /* Equal to --box-highlight-text: 'This is my box with highlight'; */
  --box-highlight-text: var(--box-text) ' with highlight';
}
As we got accustomed to with preprocessors and other languages, we want to be able to use basic operators when working with variables. For this, CSS provides a calc() function, which makes the browser recalculate an expression after any change has been made to the value of a custom property:
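A small sketch (the variable name is illustrative):

.box {
  --indent-size: 10px;

  /* recalculated by the browser whenever --indent-size changes */
  padding: calc(var(--indent-size) * 2) var(--indent-size);
}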
Before talking about CSS custom property scopes, let’s recall JavaScript and preprocessor scopes, to better understand the differences.
We know that with JavaScript variables declared with var, for example, the scope is limited to the function.
We have a similar situation with let and const, but they are block-scoped local variables.
A closure in JavaScript is a function that has access to the outer (enclosing) function’s variables — the scope chain. The closure has three scope chains, and it has access to the following:
its own scope (i.e. variables defined between its braces),
the outer function’s variables,
the global variables.
The story with preprocessors is similar. Let’s use Sass as an example because it’s probably the most popular preprocessor today.
With Sass, we have two types of variables: local and global.
A global variable can be declared outside of any selector or construction (for example, as a mixin). Otherwise, the variable would be local.
Any nested blocks of code can access the enclosing variables (as in JavaScript).
This means that, in Sass, the variable’s scopes fully depend on the code’s structure.
However, CSS custom properties are inherited by default, and like other CSS properties, they cascade.
You also cannot have a global variable that declares a custom property outside of a selector — that’s not valid CSS. The global scope for CSS custom properties is actually the :root scope, whereupon the property is available globally.
Let’s use our syntax knowledge and adapt the Sass example to HTML and CSS. We’ll create a demo using native CSS custom properties. First, the HTML:
global
<div>
  enclosing
  <div>
    closure
  </div>
</div>
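And a possible CSS counterpart, with one custom property defined per level (the names and values are illustrative):

:root {
  --globalVar: 10px;
}

div {
  --enclosingVar: 20px;
}

div div {
  --localVar: 30px;

  /* all three values are visible here, thanks to inheritance */
  padding: calc(var(--globalVar) + var(--enclosingVar) + var(--localVar));
}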
That’s the first huge difference: If you reassign a custom property’s value, the browser will recalculate all variables and calc() expressions where it’s applied.
Preprocessors Are Not Aware of the DOM’s Structure
Suppose we wanted to use the default font-size for the block, except where the highlighted class is present.
.highlighted {
  --highlighted-size: 30px;
}

.default {
  --default-size: 10px;

  /* Use default-size, except when highlighted-size is provided. */
  font-size: var(--highlighted-size, var(--default-size));
}
Because the second HTML element with the default class carries the highlighted class, properties from the highlighted class will be applied to that element.
In this case, it means that --highlighted-size: 30px; will be applied, which in turn means the font-size assignment will use --highlighted-size.
This happens because all Sass calculations and processing happen at compilation time, and of course, it doesn’t know anything about the DOM’s structure, relying fully on the code’s structure.
As you can see, custom properties have the advantages of variables scoping and add the usual cascading of CSS properties, being aware of the DOM’s structure and following the same rules as other CSS properties.
The second takeaway is that CSS custom properties are aware of the DOM’s structure and are dynamic.
CSS custom properties are subject to the same rules as usual CSS properties. This means you can assign any of the common CSS keywords to them:
inherit
This CSS keyword applies the value of the element’s parent.
initial
This applies the initial value as defined in the CSS specification (an empty value, or nothing in some cases of CSS custom properties).
unset
This applies the inherited value if a property is normally inherited (as in the case of custom properties) or the initial value if the property is normally not inherited.
revert
This resets the property to the default value established by the user agent’s style sheet (an empty value in the case of CSS custom properties).
Let’s consider another case. Suppose you want to build a component and want to be sure that no other styles or custom properties are applied to it inadvertently (a modular CSS solution would usually be used for styles in such a case).
But now there is another way: to use the all CSS property. This shorthand resets all CSS properties.
Together with CSS keywords, we can do the following:
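For instance, a component could reset itself like this (a minimal sketch; note that, per the specification, the all shorthand does not reset custom properties themselves):

.custom-component {
  /* reset the regular CSS properties to their initial values */
  all: initial;
}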
These CSS variables are named “custom properties,” so why not use them to emulate non-existent properties?
There are many of them: translateX/Y/Z, background-repeat-x/y (still not cross-browser compatible), box-shadow-color.
Let’s try to make the last one work. In our example, let’s change the box-shadow’s color on hover. We just want to follow the DRY rule (don’t repeat yourself), so instead of repeating box-shadow’s entire value in the :hover section, we’ll just change its color. Custom properties to the rescue:
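A sketch of the idea: the shadow is declared once, and only the color variable changes on hover.

.box {
  --box-shadow-color: yellow;
  box-shadow: 0 0 30px var(--box-shadow-color);
}

.box:hover {
  --box-shadow-color: orange;
  /* no need to repeat the full box-shadow value here */
}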
One of the most common use cases of custom properties is for color themes in applications. Custom properties were created to solve just this kind of problem. So, let’s provide a simple color theme for a component (the same steps could be followed for an application).
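A minimal sketch (the variable names and values are illustrative):

:root {
  --text-color: #333;
  --background-color: #fff;
}

.component {
  color: var(--text-color);
  background: var(--background-color);
}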
This has everything we need. With it, we can override the color variables to the inverted values and apply them when needed. We could, for example, add the global inverted HTML class (to, say, the body element) and change the colors when it’s applied:
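For example:

.inverted {
  --text-color: #fff;
  --background-color: #333;
}

Because custom properties cascade and inherit, every component inside the element carrying the inverted class picks up the new values automatically.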
This behavior cannot be achieved in a CSS preprocessor without the overhead of duplicating code. With a preprocessor, you would always need to override the actual values and rules, which always results in additional CSS.
With CSS custom properties, the solution is as clean as possible, and copying and pasting is avoided, because only the values of the variables are redefined.
Previously, to send data from CSS to JavaScript, we often had to resort to tricks, writing CSS values via plain JSON in the CSS output and then reading it from the JavaScript.
Now, we can easily interact with CSS variables from JavaScript, reading and writing to them using the well-known .getPropertyValue() and .setProperty() methods, which are used for the usual CSS properties:
/**
 * Gives a CSS custom property value applied at the element
 * element {Element}
 * varName {String} without '--'
 *
 * For example:
 * readCssVar(document.querySelector('.box'), 'color');
 */
function readCssVar(element, varName) {
  const elementStyles = getComputedStyle(element);
  return elementStyles.getPropertyValue(`--${varName}`).trim();
}

/**
 * Writes a CSS custom property value at the element
 * element {Element}
 * varName {String} without '--'
 *
 * For example:
 * writeCssVar(document.querySelector('.box'), 'color', 'white');
 */
function writeCssVar(element, varName, value) {
  return element.style.setProperty(`--${varName}`, value);
}
Let’s assume we have a list of media-query values:
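For example (the breakpoint names and values here are assumptions of mine), they could live on :root:

:root {
  --phone-breakpoint: 480px;
  --tablet-breakpoint: 800px;
}

JavaScript could then read them with the readCssVar() helper above and pass them to window.matchMedia(), so that the CSS and JavaScript sides share a single source of truth for breakpoints.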
To show how to assign custom properties from JavaScript, I’ve created an interactive 3D CSS cube demo that responds to user actions.
It’s not very hard. We just need to add a simple background, and then place five cube faces with the relevant values for the transform property: translateZ(), translateY(), rotateX() and rotateY().
To provide the right perspective, I added the following to the page wrapper:
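Something along these lines (the values are illustrative):

.wrapper {
  perspective: 1000px;
  perspective-origin: 50% 50%;
}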
The only thing missing is the interactivity. The demo should change the X and Y viewing angles (--rotateX and --rotateY) when the mouse moves and should zoom in and out when the mouse scrolls (--translateZ).
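A rough sketch of that wiring (the .scene selector and the scaling factors are my assumptions; the original demo differs in its details):

const scene = document.querySelector('.scene'); // hypothetical wrapper element

document.addEventListener('mousemove', (e) => {
  const rotateY = (e.clientX / window.innerWidth - 0.5) * 360;
  const rotateX = (0.5 - e.clientY / window.innerHeight) * 360;
  scene.style.setProperty('--rotateY', rotateY + 'deg');
  scene.style.setProperty('--rotateX', rotateX + 'deg');
});

document.addEventListener('wheel', (e) => {
  const current = parseFloat(
    getComputedStyle(scene).getPropertyValue('--translateZ')
  ) || 0;
  scene.style.setProperty('--translateZ', (current - e.deltaY) + 'px');
});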
Essentially, we’ve just changed the CSS custom properties’ values. Everything else (the rotating and zooming in and out) is done by CSS.
Tip: One of the easiest ways to debug a CSS custom property value is just to show its contents in CSS generated content (which works in simple cases, such as with strings), so that the browser will automatically show the current applied value:
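For example (the variable name is illustrative):

body:after {
  content: '--screen-category: ' var(--screen-category);
}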
You can check it in the plain CSS demo (no HTML or JavaScript). (Resize the window to see the browser reflect the changed CSS custom property value automatically.)
This means that you can start using them natively.
If you need to support older browsers, you can learn the syntax and usage examples and consider possible ways of switching or using CSS and preprocessor variables in parallel.
Of course, we need to be able to detect support in both CSS and JavaScript in order to provide fallbacks or enhancements.
This is quite easy. For CSS, you can use a @supports condition with a dummy feature query:
@supports ((--a: 0)) {
  /* supported */
}

@supports (not (--a: 0)) {
  /* not supported */
}
In JavaScript, you can use the same dummy custom property with the CSS.supports() static method:
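A check along these lines works:

const isSupported = window.CSS &&
  window.CSS.supports &&
  window.CSS.supports('--a', 0);

if (isSupported) {
  /* supported */
} else {
  /* not supported */
}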
As we saw, CSS custom properties are still not available in every browser. Knowing this, you can progressively enhance your application by checking if they are supported.
For instance, you could generate two main CSS files: one with CSS custom properties and a second without them, in which the properties are inlined (we will discuss ways to do this shortly).
Load the second one by default. Then, just do a check in JavaScript and switch to the enhanced version if custom properties are supported:
<!-- HTML --> <link href="without-css-custom-properties.css" rel="stylesheet" type="text/css" media="all" />
// JavaScript
if (isSupported) {
  removeCss('without-css-custom-properties.css');
  loadCss('css-custom-properties.css');
  // + conditionally apply some application enhancements
  // using the custom properties
}
This is just an example. As you’ll see below, there are better options.
One advantage of this method of manually checking in the code whether custom properties are supported is that it works and we can do it right now (don’t forget that we have switched to Sass):
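One possible shape of such a manual check (an illustrative sketch, not the article’s exact code): keep the value in a Sass variable, output it as a static fallback, and add the custom-property version behind an @supports guard.

$box-color: #4d4e53;

.box {
  /* static fallback, compiled from the Sass variable */
  color: $box-color;

  @supports (--a: 0) {
    --box-color: #{$box-color};
    color: var(--box-color);
  }
}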
This method does have many cons, not least of which is that the code gets complicated, and the copying and pasting becomes quite hard to maintain.
2. Use a Plugin That Automatically Processes the Resulting CSS
The PostCSS ecosystem provides dozens of plugins today. A couple of them process custom properties (inlining their values) in the resulting CSS output and make them work, assuming you provide only global variables (i.e. you only declare or change CSS custom properties inside the :root selector(s)), so their values can easily be inlined.
This plugin offers several pros: It makes the syntax work; it is compatible with all of PostCSS’ infrastructure; and it doesn’t require much configuration.
There are cons, however. The plugin requires you to use CSS custom properties, so you don’t have a path to prepare your project for a switch from Sass variables. Also, you won’t have much control over the transformation, because it’s done after the Sass is compiled to CSS. Finally, the plugin doesn’t provide much debugging information.
This gives you a way to control all of the CSS output from one place (from Sass) and start getting familiar with the syntax. Plus, you can reuse Sass variables and logic with the mixin.
When all of the browsers you want to support work with CSS variables, then all you have to do is add this:
$css-vars-use-native: true;
Instead of inlining the variable values in the resulting CSS, the mixin will start registering custom properties, and the var() instances will go to the resulting CSS without any transformations. This means you’ll have fully switched to CSS custom properties and will have all of the advantages we discussed.
If you want to turn on the useful debugging information, add the following:
$css-vars-debug-log: true;
This will give you:
a log when a variable was not assigned but was used;
a log when a variable is reassigned;
information when a variable is not defined but a default value gets passed that is used instead.
Now you know more about CSS custom properties, including their syntax, their advantages, good usage examples and how to interact with them from JavaScript.
You have learned how to detect whether they are supported, how they are different from CSS preprocessor variables, and how to start using native CSS variables until they are supported across browsers.
This is the right time to start using CSS custom properties and to prepare for their native support in browsers.
Looking at recent discussions, I feel that more and more people are starting to think about ethically and morally correct work. Many of us keep asking ourselves whether our work is meaningful or whether it matters at all. But in a well-functioning society, we need a variety of things to live a good life. The people writing novels that delight us are just as important as those who fight for our civil rights.
It’s important that we have people building services that ease other people’s lives, and it’s time to set our sense of urgency right again. Once we start to value other people’s work, the view we have on our own work will start to change, too. As we rely on book authors, for example, other people rely on us to be able to buy the books via a nice, fast and reliable web service.
Good news if you’re using PostgreSQL: The upcoming PostgreSQL 10 offers some great new features. It’ll support logical replication in addition to the already existing logical decoding, up to 4x faster parallel queries, SCRAM authentication, and a lot of other useful things.
Alexis Deveria maintains the amazing project caniuse.com, a site that we all use a lot. Now he accepts donations, and in return, you can also get an ad-free experience on the site. If you rely on caniuse.com for your work, consider showing your appreciation for the hard work that the author puts into it by giving something back.
With the new Windows 10 Creators Update, Edge 15 went live. It comes with a new tab management interface, a book reader mode, and better energy efficiency. But even more interesting for us are the implementations of the Web Payment Request API, CSS custom properties, Brotli, WebRTC, async/await, and Intersection Observer.
André Staltz shares his most valuable piece of advice for becoming a better programmer: “Gain a deeper understanding of the system,” and he makes strong points in his article that reinforce this.
There’s a new DNS resource record: CAA. The Certificate Authority Authorization record lets you specify which certificate authorities are allowed to issue certificates for your domain. From September 2017 on, CAs are required to check against these records, so you should consider adding a CAA record to your DNS as soon as possible.
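A CAA record in a DNS zone file looks roughly like this (the CA named here is just an example):

example.com.  IN  CAA  0 issue "letsencrypt.org"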
Cassie Marketos shares her concerns and discoveries about “What Makes Work Meaningful.” In search of an answer to the common question of whether what we do matters at all, this article reveals some thoughts that set our sense of urgency right again.
Josh Clark on why the smart algorithmic systems that power Google, Siri, Alexa and other “intelligent” AI services should know when they’re not smart enough and indicate that to users.
Microplastics are everywhere. They’re used in most creams, shower gels and a lot of other products we use every day. Scientists have now found microplastics in commercial salts from several countries, indicating how badly our seas are polluted with these particles. As we eat salt, this has a direct effect on our health, and we can only stop it by keeping microplastic particles out of everyday products.
Last but not least, if you’re in Europe or Germany, how about joining the awesome CSSconf EU in Berlin on May 5th? There are still tickets available. I’ll be around at the sold-out beyondtellerrand in Düsseldorf again, and I’d love to meet you there. If you don’t have a ticket, maybe join one of the side events? Or consider the Material conference, which will take place on August 17th in Iceland, a lovely island, and I’m sure the event will be great as well.
For the past few months, I’ve been building a software-as-a-service (SaaS) application, and throughout the development process I’ve realized what a powerful tool Slack (or team chat in general) can be to monitor user and application behavior. After a bit of integration, it’s provided a real-time view into our application that previously didn’t exist, and it’s been so invaluable that I couldn’t help but write up this show-and-tell.
It all started with a visit to a small startup in Denver, Colorado. During my visit, I started hearing a subtle and enchanting “ding” in the corner of the office every few minutes. When I went to investigate this strange noise, I found a service bell hooked up to a Raspberry Pi, with a tiny metal hammer connected to the circuit board. As it turned out, the Pi was receiving messages from the team’s server, and it swung that little hammer at the bell every time a new customer signed up.
I always thought that was a great team motivator, and it got me thinking of how I could use team chat to achieve a similar experience.
Because we were already using Slack for team chat, and because it has a beautifully documented API1, it was an obvious choice for the experiment.
First, we had to obtain a “webhook URL” from Slack in order to programmatically post messages to our Slack channel.
Follow the steps above to obtain a webhook URL from Slack.
Now that we had a webhook URL, it was time to integrate Slack messages into our Node.js application. To do this, I found a handy Node.js module named node-slack8.
First, we installed the Node.js module:
npm install node-slack --save
Now, we could send Slack messages to our channel of choice with a few lines of code.
// dependency setup
var Slack = require('node-slack');
var hook_url = 'hook_url_goes_here';
var slack = new Slack(hook_url);

// send a test Slack message
slack.send({
  text: ':rocket: Nice job, I\'m all set up!',
  channel: '#test',
  username: 'MyApp Bot'
});
(You can find similar Slack integration packages for Ruby9, Python10 and just about any other language.)
When executed, this code produced the following message in our #test Slack channel:
The code above is minimal, but it’s specific to the Slack API and the node-slack module. I didn’t want to be locked into any particular messaging service, so I created a generic Node.js module function to execute the service-specific code:
// Messenger.js

// dependency setup
var hook_url = my_hook_url;
var Slack = require('node-slack');
var slack = new Slack(hook_url);

module.exports = {
  sendMessage: function(message, channel, username) {
    if (!message) {
      console.log('Error: No message sent. You must define a message.');
    } else {
      // set defaults if username or channel is not passed in
      var channel = (typeof channel !== 'undefined') ? channel : '#general';
      var username = (typeof username !== 'undefined') ? username : 'MyApp';

      // send the Slack message
      slack.send({
        text: message,
        channel: channel,
        username: username
      });
      return;
    }
  }
};
Now we can use this module anywhere in the application with two lines of code, and if we ever decide to send messages to another service in the future, we can easily swap that out in Messenger.js.
var messenger = require('./utilities/messenger');
messenger.sendMessage(':rocket: Nice job, I\'m all set up!', '#test');
Now that we had the basics set up, we were ready to start firing off messages from within the application.
The first order of business was to achieve service-bell parity. I located the success callback of the user registration function, and I added this code:
messenger.sendMessage('New user registration! ' + user.email);
Now, when someone registered, we’d get this message:
As my curiosity grew with each ding, I began to wonder things like, What if there was a failure to create a new user? What if a user registered, logged in but didn’t complete the onboarding process? What is the result of our scheduled tasks? Now that the groundwork was in place, answering these questions was a piece of cake.
Monitor Exceptions and Critical Errors on Back End Link
One of the most important errors we wanted to know about was if there was a failure to create a new user. All we had to do was find the error callback in the user registration function, and add this code:
messenger.sendMessage(':x: Error while adding new user ' + formData.email + ' to the DB. Registration aborted! ' + error.code + ' ' + error.message);
Now we knew instantly when registrations failed, why they failed and, more importantly, who they failed for:
There were all kinds of interesting places where we could send messages (pretty much anywhere with an error callback). One of those places was this generic catch-all error function:
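A minimal sketch of such a handler, assuming an Express-style error-handling middleware and the Messenger module from earlier (the app object, route and channel here are illustrative, not the exact code from our application):

// generic catch-all error handler
// assumes an existing Express `app` and the Messenger module shown earlier
var messenger = require('./utilities/messenger');

app.use(function(err, req, res, next) {
  // describe the request that triggered the unhandled exception
  messenger.sendMessage(
    ':x: Unhandled exception: ' + err.message +
    ' (' + req.method + ' ' + req.originalUrl + ')',
    '#test'
  );
  res.status(500).send('Something went wrong.');
});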
A handler like this helped us to uncover what requests look like for unhandled exceptions. By looking at the requests that triggered these errors, we could track down the root causes and fix them until there were no more generic errors.
With all of these error notifications in place, we now had comfort in knowing that if something major failed in the app, we would know about it instantly.
Next, I wanted to send a notification when a financial event happens in the application. Because our SaaS product integrates with Stripe, we created a webhook endpoint that gets pinged from Stripe when people upgrade their plan, downgrade their plan, add payment info, change payment info and many other events related to subscription payments, all of which are sent to Slack:
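As a rough sketch of that pattern, the relay from Stripe to Slack can be as small as a route that reads the event type and passes it along. The route, channel and setup below are illustrative rather than our exact implementation, and a production endpoint should also verify that incoming events really come from Stripe:

// hypothetical Stripe webhook endpoint that relays billing events to Slack
// assumes an existing Express `app`, body-parser and the Messenger module from earlier
var bodyParser = require('body-parser');
var messenger = require('./utilities/messenger');

app.post('/stripe-webhook', bodyParser.json(), function(req, res) {
  var event = req.body; // Stripe sends the event object as JSON
  messenger.sendMessage(':moneybag: Stripe event received: ' + event.type, '#billing', 'MyApp Bot');
  // acknowledge receipt so that Stripe doesn't keep retrying
  res.sendStatus(200);
});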
There were a few cases on the front end where we wanted to understand user behavior in ways that the back end couldn’t provide, so we created an endpoint to send Slack messages directly from the front end. Because our Slack webhook URL is protected behind a POST endpoint, it was a minimal risk to expose sending Slack messages to our team via an endpoint.
With the endpoint in place, we could now fire off Slack messages with a simple AngularJS $http.post call:
// send Slack notification from the front end
var message = ":warning: Slack disconnected by " + $scope.user.username;
$http.post('/endpoint', message);
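The back-end side of that call can stay tiny. A minimal sketch, assuming Express with body-parser’s text middleware (the /endpoint route mirrors the call above; everything else is illustrative):

// hypothetical back-end counterpart to the front-end call above
// assumes an existing Express `app` and the Messenger module from earlier
var bodyParser = require('body-parser');
var messenger = require('./utilities/messenger');

app.post('/endpoint', bodyParser.text({ type: '*/*' }), function(req, res) {
  // req.body is the raw message string sent from the front end
  messenger.sendMessage(req.body, '#test');
  res.sendStatus(200);
});

Because the webhook URL never leaves the server, the front end can only post messages to the team; it can’t read or reconfigure anything.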
This front-end hook helps us to answer important questions about the business: Are people registering and adding a domain name? Are they not? If someone is, is it a really high-profile domain whose owner we’d want to reach out to personally soon after they’ve added it? We can now tap into this:
At one point, we saw a pattern of people adding a domain, removing it, then readding it within a few minutes, which clued us into an obscure bug that we probably would never have discovered otherwise.
There are also signals that a user is unhappy with the service, and these are valuable to know about. Did someone remove a domain name? Did they disconnect Slack?
One of the most interesting things to see in Slack is the result of scheduled tasks. Our SaaS product runs tasks to notify people about their website’s performance (our core service), to send transactional emails, to clean up the database and a few other things. The firing and results of these tasks sends a message to Slack:
Now we know when a task function fires, what the result of that function is (in this case, it sends out several emails) and whether it fails for any reason.
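To give a flavor of how that wiring can look (again, not our exact code), a scheduled task could report its start and outcome through the same Messenger module. This sketch assumes the node-cron package and a hypothetical sendPerformanceEmails() function that returns a promise:

// report the start and outcome of a scheduled task to Slack
// assumes node-cron and a hypothetical sendPerformanceEmails() returning a promise
var cron = require('node-cron');
var messenger = require('./utilities/messenger');

cron.schedule('0 8 * * *', function() {
  messenger.sendMessage(':alarm_clock: Performance email task started.', '#tasks');

  sendPerformanceEmails()
    .then(function(count) {
      messenger.sendMessage(':email: Sent ' + count + ' performance emails.', '#tasks');
    })
    .catch(function(err) {
      messenger.sendMessage(':x: Performance email task failed: ' + err.message, '#tasks');
    });
});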
The case study above is a practical example of what we did to monitor the GoFaster.io22 application and service. It has worked fantastically for us, but how would this concept scale to large applications that send hundreds, maybe even thousands, of messages per day? As you can imagine, this would quickly turn into a “Slackbot who cried wolf” situation, and the value would get lost in the noise.
Some notifications are more important than others, and importance will vary depending on the employee and their role. For example, software development and IT operations (DevOps) folk might only care about the server messages, whereas customer service folk would care most about what’s going on with users.
Luckily, Slack has a great solution to this problem: channels.
Channels can be created by anyone, made public or private to your organization, and shared with anyone. Once you’ve subscribed to a channel, you can control how that channel’s activities alert you. Does a new message in the channel ding every time? Does it alert your phone, too? Does it only bold the channel? All of this can be controlled for each channel by each team member to suit their needs.
Putting this idea into practice, here’s how a larger organization might organize monitor-based notifications in Slack via channels:
Having built on this idea for a few months and digested the results, we’ve found it to be an invaluable extension of our application. Without it, we would feel out of touch with what is going on with the service, and manually hunting down the same information via the dashboard or database queries would be a chore.
Every application and user base is different, which means that this concept cannot be built into a service and offered to the masses. In order to be valuable, it requires a small up-front investment of time and resources to integrate it deeply into your application. Once it’s up and running, the investment will pay off in the form of your team’s connectedness to your application and its users.
In conclusion, here’s a recap of the benefits of using team chat to monitor your application:
Gain a Fresh Perspective on User and Server Behavior Link
Having a real-time live feed of the metrics that matter most to you and your business will keep you closely connected to what users are doing and how the server is responding.
You will be able to react faster than ever before. You will know about failures at the same time your users do. You can immediately react to that failing endpoint, lost database connection or DDoS attack.
Reach out to that customer who has just disabled their account to offer them a discount, give personal thanks to customers who have upgraded, or just follow up with people to understand their intentions. When you know what users are doing and when they are doing it, you can easily find out why.
Team Connectedness to the Application Will Make You More Efficient Link
When your team is on the same page with the application, collaboration can center on solving problems as they arise, rather than on trying to figure out what happened, where it happened or who it happened to.
Notifications and Channels Can Scale With Your Application Link
As your application and team grow, so will your monitoring needs. Slack does a great job of giving you all of the permission and notification controls necessary to ensure that the right information gets to the right people.
By logging a user name in your Slack messages, you can track every error, success message or event that a user has generated while interacting with your application simply by searching for their user name in Slack. Just know that, with a free Slack account, this is limited to the last 10,000 messages.
I hope you’ve found this concept to be useful, and I’d love to hear other stories of teams that have implemented similar forms of monitoring, or just other interesting ways to use and build on it.
Big news from Google: Within a few months, the infamous search engine will divide its index1 to give users better and fresher content. The long-term plan is to make the mobile search index the primary one. Why does this matter for e-commerce website owners?
Well, it will enable Google to run its ranking algorithm differently for purely mobile content. This means that mobile content won’t be extracted from desktop content to determine mobile rankings. That’s definitely something that retailers can leverage, thanks to AMP. This article outlines how to get started with AMP and how to gain an edge over the competition with your e-commerce website.
So, how do online retailers go about leveraging this big Google announcement? With AMP content! AMP (Accelerated Mobile Pages) just celebrated its one-year anniversary. It is an open-source project supported by Google that aims to reduce page-loading times on mobile. AMP pages are similar to HTML pages, with a few exceptions: Some tags are different, some rules are new, and there are plenty of restrictions on the use of JavaScript and CSS.
AMP pages get their own special carousel in Google mobile search results. No official statement has been made yet about whether these AMP pages will be getting an SEO boost.
While initially geared to blogs and news websites, AMP has introduced components that make it easy to adapt to an e-commerce website. To date, more than 150 million AMP documents are in Google’s index, with over 4 million being added every week. AMP isn’t meant purely for mobile traffic; it renders well on mobile, tablet and desktop. The AMP project’s website9 is actually coded in AMP HTML, in case you are curious to see what AMP looks like on a desktop. eBay was one of the most notable early adopters in the e-commerce realm; by July 2016, it had taken more than 8 million product pages live in AMP format, with plans to go further.
Google is touting a reduction of 15 to 85% in page-loading time on mobile. The main appeal of AMP for retailers is simple: Slow loading times kill conversions. Selling products to people when they want them makes a huge difference to a business’ bottom line. Many shoppers will go to a competitor’s website if yours is too slow to load. Put that in a mobile context, and a slow loading time means losing 40% of visitors — potential customers who will take their dollars elsewhere.
In brick-and-mortar retail, shop fronts are a big deal in attracting customers. It’s the same online, except that your storefront depends on the speed of your customers’ Internet connection and the visibility you get on various channels (such as search engines, social media and email). Visibility is therefore another major element of the AMP equation and another way retailers can leverage the format. This is especially true in countries with limited mobile broadband speed10. And before you think this particular challenge is exclusive to developing nations, keep in mind that the US is not ranked in the top 10 countries for mobile broadband speed.
AMP pages feel like they load blazingly fast. Here’s a comparison:
Non-AMP page loading
AMP page loading
Mobile-Friendly Is A Thing Of The Past For Google Link
User experience is central to most online retailers. A slow website with bloated code, an overwrought UI and plenty of popups is everyone’s nightmare, especially on a mobile device.
The “mobile-friendly” label was introduced by Google in late 2014 as an attempt to encourage websites to ensure a good mobile user experience. After widespread adoption of responsive design, the mobile-friendly label is being retired by Google in favor of the AMP label.
This is how AMP results currently show up in Google mobile search. (Image: Myriam Jessier)
AMP pages can be featured in a carousel and are labelled with a dedicated icon, highlighting them in search results. The search giant has recently stated that AMP would take precedence over other mobile-friendly alternatives such as in-app indexing. However, AMP is still not a ranking signal13, according to Google Webmaster Trends analyst John Mueller.
AMP: Because Mobile-Friendly Doesn’t Cut It Anymore Link
Media queries adapt the presentation of content to the device. However, the content of the page itself isn’t affected. In contrast, AMP helps make mobile web pages truly fast to load, but at a cost. Developers, designers and marketers will have to learn how to create beautiful web pages that convert using a subset of HTML with a few extensions.
The premise of AMP14 is that mobile-optimized content should load instantly anywhere. It’s a very accessible framework for creating fast-loading mobile web pages. However, compatibility with the AMP format is not guaranteed for all types of websites. This is one of the realities of a constantly evolving project such as AMP. The good news is that many of the arguments against AMP for online retailers no longer hold up.
AMP pages are now able to handle e-commerce analytics thanks to the amp-analytics component. With it, statistics are available to analyze an AMP page’s performance in terms of traffic, revenue generated, clickthrough rate and bounce rate. According to the AMP project’s public roadmap15, better mobile payments are planned, after the addition of login-based access, slated for the fourth quarter of 2016.
Product and listing pages are supported in AMP, and they show great potential to add real value to the online customer journey. Keep in mind that 40% of users will abandon a website if it takes longer than 3 seconds to load16. Worse yet, 75% of consumers would rather visit a competitor website than deal with a slow-loading page.
Some of the drawbacks that have been noted are mostly due to the fact that AMP for e-commerce is rather new. There are a few concerns about the quality of the user experience offered by AMP e-commerce pages because some e-commerce functionality is not yet available, such as search bars, faceted search filters, login and cart features. However, frequent updates to the AMP format are planned, so this shouldn’t be a deterrent to those looking to implement it.
There has been some grumbling about the format among marketers. AMP relies on simplified JavaScript and CSS. As a consequence, tracking and advertising on AMP pages is less sophisticated than on traditional HTML pages. That being said, the main drawback is that implementing AMP pages effectively will take time and effort. The format is tightly controlled, heavily restricts JavaScript (arbitrary iframes are not allowed, for example) and even limits CSS (with some properties being outright banned).
How to Develop AMP Pages for an E-Commerce Website Link
To ensure that your website is AMP-compliant20, check the instructions provided in the AMP project’s documentation21. Keep in mind that AMP pages should be responsive22 or mobile-friendly. A best practice would be to test the implementation of AMP pages against your current mobile website using a designated subset of pages. This will give you a sample to determine whether AMP adds value to your business.
You don’t have to make your entire website AMP-compliant. Start upgrading the website progressively: Pick simple static-content pages first, such as product pages, and then move on to other types of content. This way, you can target highly visible pages in SEO results, which will lead to a big payoff for the website without your having to deal with pages that require advanced functionality not yet supported by AMP.
If your website uses a popular CMS, then becoming AMP-compliant could be as easy as installing a plugin.
Magento
The AMP extension by Plum Rocket23 automatically generates AMP versions of your home page, category pages, product pages and blog pages. An interesting feature is that the AMP home page isn’t just “converted”; you can edit it in Magento’s back end.
WordPress
AMP for WP24 is a plugin that lets you create custom AMP designs without having to code. You can customize the logo, header, footer, images and more. It is compatible with WooCommerce and AdSense. The plugin generates AMP versions of your home page, blog articles, WooCommerce shop, products and categories pages.
Shopify
Nothing yet, but it’s under way!
A Step-By-Step Guide To Implementing AMP On Your E-Commerce Website Link
Let’s break down the process according to the customer journey. AMP offers a selection of prebuilt components to help you craft an enjoyable user experience on an e-commerce website (along with some evolving tools to help you collect data in order to improve it). You can implement four major AMP elements along key points in the customer’s purchasing journey, including on the home page, browsing pages, landing pages, product pages and related product widgets:
product descriptions,
reviews,
product shots,
navigation.
The entire purchasing flow can’t be 100% AMP-compliant yet, so you’ll have to plan a gateway to a regular non-AMP page for ordering and completing purchases.
Users will often start their purchasing journey on a website’s home page or a product category page, because these pages are prominent in search engine results. These pages are great candidates for AMP, as eBay has shown25 by making many of its category pages AMP-compliant. Typically, category pages are static and showcase products for sale. The amp-carousel feature26 offers a way to browse other products in a mobile-optimized way. These products can be organized into subcategories that fit the user’s needs. You can view the annotated code to create a product page over on AMP by Example27.
AMP e-commerce home page (Image: AMP by Example)
After browsing to a category page, the next step for our user would be to find an interesting product and click on it. In an AMP-compliant flow, this would lead the user to an AMP product page29.
Showing related products36 benefits the retailer’s bottom line and the user’s experience. The first product that a user browses to isn’t always the one that fits their need. You can show related products in AMP in two ways:
Statically publish a list of related products.
Generate the list on the fly using amp-list37 to fire a CORS request to a JSON endpoint that supplies the list of related products. These related products can be populated in an amp-mustache template on the client. This approach is personalized because the content is dynamically generated server-side for each request.
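A minimal sketch of that second approach, assuming a hypothetical JSON endpoint and illustrative dimensions and field names (the page’s head must also include the amp-list and amp-mustache extension scripts):

<amp-list src="https://www.example.com/related-products.json"
          width="400" height="300" layout="fixed">
  <template type="amp-mustache">
    <div class="related-product">
      <amp-img src="{{image}}" width="100" height="100"></amp-img>
      {{name}} ({{price}})
    </div>
  </template>
</amp-list>

The endpoint has to be served over HTTPS and return its results in an items array, which amp-mustache then renders item by item.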
Personalization is a big deal in e-commerce because it increases conversions. To dip into personalization in the AMP format, you can leverage the amp-access39 component to display different blocks of content according to the user’s status. To make it all work, you have to follow the same method as we did with the amp-list40 component: Fire a request at a JSON endpoint, and then present the data in an amp-mustache template. Keep in mind that personalization doesn’t have a leg to stand on without reliable data. Google has been actively extending the tracking options available in AMP.
Sidenote: In case you see cdn.ampproject.org in your Google Analytics data, this is normal for AMP pages; cdn.ampproject.org is a cache that belongs to Google. No need to worry about this strange newcomer to your Google Analytics data!
AMP now supports some analytics products, such as Adobe’s and Google’s own. The type attribute configures the respective vendor’s product within the code. Here’s an example of type being used for Google Analytics:
<amp-analytics type="googleanalytics">
And here are the types for some of the most common analytics vendors:
Adobe: adobeanalytics
Google Analytics: googleanalytics
Segment: segment
Webtrekk: webtrekk
Yandex Metrica: metrika
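Beyond the type attribute, amp-analytics takes a JSON configuration block for account variables and triggers. A minimal Google Analytics pageview sketch (UA-XXXXX-Y is a placeholder account ID) could look like this:

<amp-analytics type="googleanalytics">
  <script type="application/json">
  {
    "vars": {
      "account": "UA-XXXXX-Y"
    },
    "triggers": {
      "trackPageview": {
        "on": "visible",
        "request": "pageview"
      }
    }
  }
  </script>
</amp-analytics>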
Google Tag Manager44 has taken AMP support one step further with AMP containers. You can now create a container for your AMP pages.
Google Tag Manager’s AMP container (Image: Myriam Jessier)
More than 20 tag types are available out of the box, including third-party vendor tags. Alongside a wider selection of tags, Google has provided built-in variables dedicated to AMP tracking, making it easier for marketers and developers to tag their pages.
AMP tracking options in Google Tag Manager (Image: Myriam Jessier)
If you are not using Google Tag Manager, you can implement your tag management service in one of two ways:
endpoint
This acts as an additional endpoint for amp-analytics and conducts marketing management in the back end.
config
This manages tags via a dynamically generated JSON config file, unique to each publisher.
The endpoint approach is the same as the standard approach. The config approach consists of creating a unique configuration for amp-analytics that is specific to each publisher and that includes all of the publisher’s compatible analytics packages. A publisher would configure using a syntax like this:
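For the config approach, amp-analytics can load a remote, publisher-specific configuration through its config attribute; a representative sketch (the URL is a placeholder) would be along these lines:

<amp-analytics config="https://analytics.example.com/amp-config/PUBLISHER_ID.json">
</amp-analytics>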
Many online retailers rely on advertising or showing related products throughout their website to boost revenue. The AMP format is equipped to show ads through <amp-ad> and <amp-embed>. The documentation is quite clear47 on how to implement ads, and the good news is that a wide variety of networks are already supported. Although iframes are not allowed in AMP, two embed types support ads with <amp-embed>: Taboola and Zergnet. If you plan on using ads in AMP, follow the recommended principles48 in your development work.
The previous step is a tricky one because it entails maintaining a seamless user experience while the user transitions to a full HTML page. The process should be fast and consistent for the user; an experience that isn’t consistent with the preceding AMP journey could hurt conversions. If your website is a progressive web app, then amp-install-serviceworker49 is an ideal way to bridge both types of pages within the customer journey, because it allows your AMP page to install a service worker on your domain, regardless of where the user is viewing the AMP page. This means that content from your progressive web app can be cached preemptively, ensuring that the transition is smooth for the customer because everything needed is already in the cache. An easy way to get a feel for the full AMP e-commerce flow is to head on over to eBay50 and see how the company handles the transition from AMP to an HTML checkout process.
AMP works within a smart caching model that enables platforms that refer traffic to AMP pages to use caching and prerendering in order to load web pages super-fast. Be aware of this when analyzing traffic and engagement, because you might see less traffic to your own origin, where the AMP pages are originally hosted (this is why we referred to cdn.ampproject.org in Google Analytics data). The balance of traffic will most likely show up through proxied versions of your pages served by AMP caches.
AMP encompasses a lot of best practices for building mobile web pages. Incorporate mobile best practices as part of your regular development lifecycle.
Less forking in code
If you follow AMP best practices when building regular pages as well, you can reuse most of the UI components between AMP and non-AMP pages. That means less forking (except for the JavaScript-based components).
Adding AMP’s ecosystem to one’s internal search would be a very interesting prospect for many online retailers.
Mind you, there are some complex parts:
Infrastructure components
Things such as global headers and footers and tracking modules have some JavaScript, which is a no-go for AMP. This adds complexity to development but can be worked around.
Tracking
AMP provides user-activity tracking through its amp-analytics component. The component can be configured in various ways, but it is still not sufficient for the granular tracking needs of most online retailers.
However, once you get past the internal hurdles, the payoff can be great. Check out the examples provided by eBay for camera drones63 and the Sony PlayStation64. (Use a mobile device, of course, otherwise you will be redirected to the desktop version.)
SEO experts are pushing for AMP adoption because some see it as a mobile-visibility asset to be leveraged. Here are some SEO points to ensure you get the most out of AMP:
Host AMP pages on the same domain as other page versions.
The Google AMP cache is a proxy-based content delivery network for delivering all valid AMP documents. It fetches AMP HTML pages, caches them and improves page performance automatically.
An AMP page is served to the user from the Google AMP cache, and it will have a different URL so that duplicate content issues are avoided. If you have both AMP and non-AMP versions of your pages, use the <link rel="canonical" href="[canonical URL]" /> tag on the AMP page and <link rel="amphtml" href="[AMP URL]" /> on the regular page. For a standalone AMP page (one that doesn’t have a non-AMP version), specify it as the canonical version: <link rel="canonical" href="https://www.example.com/url/to/amp-document.html" />.
One of the most common URL structures is to add /amp/ to the path of the URL.
An e-commerce website can’t be 100% compliant with AMP, but there are benefits to adopting the format early on. Online retailers looking for an edge against fierce competition might be wise to turn to this format to grab the attention of mobile customers and nudge open their wallets. More and more websites are converting to the AMP format to increase or maintain their mobile traffic. For an online retailer that has a multi-channel or mobile-first strategy to acquire and retain customers, AMP might be a great way to future-proof their online marketing efforts.
The world is constantly evolving with new technologies, such as the Internet of Things (IoT) and virtual reality (VR). These and many others are opening opportunities to rethink how we approach prototyping: They introduce avenues to marry digital software with the tangible aspects of the overall user engagement.
This two-article series will introduce readers of different backgrounds to prototyping IoT experiences with minimum code knowledge, starting with affordable proof of concept platforms, before moving to costly commercial offerings.
“Part 1: Building the Hardware” will identify the problem, the criteria for selecting hardware and, finally, how to put the different pieces of equipment together.
In “Part 2: Configuring the Software,” we will continue the discussion by writing the code to control the hardware and connect the hardware to the Internet, and we will design custom interfaces to display the collected data.
We will do this by going over a personal experience I had as a user experience designer while learning the basics of an IoT platform named “Adafruit IO”. This will be a nice introductory case study.
The following are some assumptions about you:
You can read code and may have even written some on your own (we’re not going to learn coding basics here).
You have some understanding of circuitry and electronics. For more on this, see the “Resources” section.
You are curious and like to explore and tinker.
Disclaimer: I am not an electronics engineer or a developer. Please always be careful when exploring electricity and hardware. This tutorial is meant to inspire you to do additional research before finding what works for your circumstances!
IoT talk is sometimes unnecessarily complex. To reduce the jargon, I will use some reader-friendly terms, as defined below.
board
You can think of the board as a mini-computer that holds the software in code form (often called firmware) that is responsible for determining how different sensors and the board itself behave in response to inputs from and outputs to their environment. A more technical term is microcontroller unit (MCU).
cloud
There are a ton of definitions and as many philosophical debates regarding this term. For the purpose of this series, let’s define it as any remote servers, plus inbound and outbound connections to and from them.
ecosystem
This is a collection of hardware and software that create a unique overall experience for the end user.
rig
The board, power supply unit and any attached sensors form a cohesive hardware unit.
On a cold winter day, I read an article on smart homes being the future, which immediately inspired me to turn my home into a smart one. This translated into several commercial product purchases, including devices from the Nest family, which only whetted my appetite.
Controlling my air conditioning and furnace and detecting possible carbon monoxide emissions were not enough! I wanted to go further by having monitoring capabilities over my home security. This includes:
tracking doors opening and closing,
verifying the occurrence of a fire or flood,
looking at temperature and humidity,
capturing movement in my garage.
Getting to the point of picking Adafruit IO as the solution was not a simple journey. Before deciding on that platform and the HUZZAH ESP8266 board, I tried several other solutions, with varying success:
I gave this a go before even having selected the cloud platform for it. My experience showed me that programming the board requires additional setup, and pushing code to it is often unreliable. For the rigs that I got up and running, I noticed the Wi-Fi connection dropping completely after only a few days.
This is a beautifully designed package of sensors and central wireless hub. It is part of an ecosystem that comes with a handy mobile app and support for third-party products7. I spent considerable time trying to get the sensor-to-hub communication to work, but unfortunately, even with the help of the support team, it did not work. I attributed this to my home being a black hole for all things wireless.
This ecosystem offers dedicated development tools, various IoT-based boards and robust integration with libraries. I gave it a try and was impressed with the performance and flexibility for the end user. I hit roadblocks when integrating Blynk9, a platform for controlling devices via drag-and-drop interfaces. Though I am putting this approach on hold, I look forward to exploring the Particle-Blynk combination again soon.
My vision was to have multiple sensors that could be viewed and controlled from a computer or mobile device independently at any time. To accomplish this, I needed both a Wi-Fi-enabled hardware board and a software platform that could talk to it and any attached sensors.
I decided to be more strategic in my choices, so I came up with a list of criteria, in order of priority:
low cost
For scalability, I wanted to keep the rig (i.e. board, sensors and power supply) in the $25 to $30 range — 50% less costly than commercial offerings.
small board
I wanted a small size for easy mounting and to have enough space to fit multiple sensors in a custom enclosure.
support community
In case of problems, access to learning resources and a community are key.
wired power
In tests, a 9-volt battery lasted only a day. Solar was out of the question due to poor sun coverage of the home. Thus, the board had to be able to be wired to an outlet.
low learning curve
Though I have some development experience and hardware hacking knowledge, I didn’t want this to be an arduous project.
After exploring the three approaches mentioned further above, I ruled out the following additional equipment, based on the five criteria. Keep in mind that I am giving you the high-level details — a whole article could be written on selecting a board!
Offers both wired and wireless Internet connectivity, expandable RAM and onboard memory. The board has a Linux-based distribution, making it a powerful networked computer.
The price of the controller ($69) and the bulky size proved to be too limiting. Also, I didn’t need something so powerful. I ended up buying one to test out for a garden watering project.
In addition to offering wired and wireless connectivity, it has HDMI and audio ports, Bluetooth integration and support for use of a custom-sized SD card with different operating systems.
While I could load a Linux-based operating system and use the Python language to accomplish anything, that’s not what I needed. I ended up using this platform for other projects. The $40 price tag and the bulky size were also limiting factors.
This $5 board packs a big punch. The powerful CPU and large RAM made it a strong contender, as did the small size and large number of GPIO pins.
Two things nixed this board. It doesn’t have on-board Wi-Fi, and so requires additional equipment. And because it is very popular, finding this board is hard. In the US, it is sold only at Micro Center13, which limits it to one per home per month. (Note: At the time of writing, the v1.3 board with on-board Wi-Fi and Bluetooth was not yet available.)
Side note: For more information on choosing a board for your hardware prototyping project, you can consult the excellent “Makers’ Guide to Boards14.”
Further researching led me to Adafruit’s HUZZAH ESP826615 board, which is but one variation of the ESP8266 chipset; there are others, such as the NodeMCU LUA16. Each has unique capabilities, so choose wisely. Here is why I selected the HUZZAH:
the price ($10, not counting shipping);
12+ digital input and output pins and 1 analog pin;
an on-board reset button;
voltage shifting between 3.3 and 5 volts (but it’s primarily a 3.3-volt board);
a dedicated IoT service (Adafruit IO) with UI building blocks.
Deciding to start small, I wanted to build a sensor that tracks whether a door is open. The rest of this first article will focus on the hardware for this use case, but much of the wiring will scale to other types of sensors.
You will also need an FTDI cable, which is used to push the code from the programming computer to the board. It’s best to get one that supports 3.3 and 5 volts, in case you want to explore rigs using the original ESP8266 ESP-01 board in the future, and it should offer TX and RX LEDs for easier troubleshooting while pushing code to the board.
Before getting to the details of how to put the rig together, let’s talk about what the goal is. By the end of this first article, you should have something similar to what you see below. With this setup, you will have a mini-computer (the board) capable of collecting sensor data from your environment and communicating it to the cloud (Adafruit IO) over Wi-Fi.
You can attach multiple sensors, but be aware of the power draw (i.e. how many milliamps each sensor uses) and the length of wire needed to connect them.
The first step is to assemble the HUZZAH board by soldering on its pin headers, including both the board leg headers and the FTDI header. Adafruit has a step-by-step tutorial30 on this.
When you are soldering the first leg header, ensure that the board is not tilting one way, which would result in the pins being soldered at an angle. A trick I used is to put a bit of putty under the board to even it out as it is being plugged into the breadboard.
Once you have soldered all of the headers, insert the board in the breadboard with the antenna (the wavy gold line) facing outwards.
Insert the supply at the opposite end of the breadboard, with the top and bottom legs fitting in the + and – breadboard rails. This is how power will be passed to the breadboard.
Next, set the yellow jumpers for both rails to 3.3 volts, which is the voltage used by the HUZZAH board.
Note: Depending on your breadboard, the – and + might not match the alignment of the power supply jumpers. That’s fine as long as you remember that the power supply dictates which breadboard rail carries which electrical signal!
Connect the + to the + rail on the breadboard (the power).
Connect the – to the – rail on the breadboard (the ground).
Connect the S to the #5 on the board (digital input pin).
A 9-volt battery is used here to illustrate a power source, but most such batteries might not have adequate power for your rig because they typically carry 400 to 600 milliamps. The power adapter is plugged in here but not powered on. Use different colored wires to distinguish between connections. Notice how each pin of the sensor is labeled.
As a last step, plug the 9-volt adapter into a power outlet, then into the breadboard power supply. Push the white button. If everything is correctly wired, you should see several lights flicker on, including for the power supply (the green one), the board power (red), the Wi-Fi (blue) and the sensor (red if the magnet is touching the sensor).
At this point, you could start writing the code for the rig, but I find this is a good opportunity to test the mounting of the container box. This is not a permanent mounting, but a trial run to gauge the rig’s overall dimensions and the best fit. Before doing that, you need to take a few preparatory steps.
Step 1: Using your soldering iron, melt one hole in the left side of the container for the power adapter plug, and three smaller ones on the right for the individual sensor wires.
Warning: Make sure to do this in a well-ventilated area, so that you don’t breathe in fumes. After that, clean your soldering iron’s tip with a nonabrasive sponge.
A similar box can be bought cheaply in any large store. Carefully size the holes based on your wires’ thickness.
Step 2: Put the entire rig in the container, and pass the cables through the holes.
While this looks well insulated, it is not suited for outdoor use!
Step 3: Close the container, and mount it on the wall with the putty. Tapes of various types won’t work well. Alternatively, you could punch holes in the bottom of the container to mount with screws, but make sure they are insulated with electrical tape to avoid short-circuiting any electronics.
I have also tried using hot glue. I find it messy, but it is not all that much more expensive, and you can pick up a glue gun on the cheap38 if you prefer that method.
Put putty only on the bottom of the container and the sensor. You can stack multiple magnets on the door to match the proper distance from the sensor. Make sure it does not touch the glass tube.
Step 4: Use a combination of LEGO pieces and putty to mount the sensor and the accompanying magnet to the door.
For a firmer fix, you can put a LEGO piece on top of the two blue blocks.
Now that the rig is all wired up, you can connect to it with the FTDI cable and start adding the code that will make the sensor work.
Using a 5-volt 1-amp power adapter (rather than 9 volts and 1 amp) won’t cut it. In my tests, the rig never powered on this way. I attribute this to the voltage conversion in the breadboard’s power supply.
In trying different hall sensors, I was getting readings all over the board. See if they work for you, but my preference remains a reed switch because of its large surface area for interacting with the magnet.
In this first article of our two-part series, we’ve identified the problem (home security), assessed the merit of an IoT setup, and discussed the rationale involved in selecting a particular board. This was followed by a step-by-step guide on how to put together all of the hardware components into a working rig.
In doing so, we’ve learned the basics of electronics. In the second and final article in this series, we will add code to the rig we’ve built here, so that we can start interacting with the environment. Then, we will build custom user interfaces to view the data from anywhere, while discussing at a high level the security implications of the software configuration.
The landscape for the performance-minded developer has changed significantly in the last year or so, with the emergence of HTTP/2 being perhaps the most significant of all. No longer is HTTP/2 a feature we pine for. It has arrived, and with it comes server push!
Aside from solving common HTTP/1 performance problems (e.g., head-of-line blocking and uncompressed headers), HTTP/2 also gives us server push, which allows you to send site assets to the user before they’ve even asked for them. It’s an elegant way to achieve the performance benefits of HTTP/1 optimization practices such as inlining, but without the drawbacks that come with them.
In this article, you’ll learn all about server push, from how it works to the problems it solves. You’ll also learn how to use it, how to tell if it’s working, and its impact on performance. Let’s begin!
Accessing websites has always followed a request and response pattern. The user sends a request to a remote server, and with some delay, the server responds with the requested content.
The initial request to a web server is commonly for an HTML document. In this scenario, the server replies with the requested HTML resource. The HTML is then parsed by the browser, where references to other assets are discovered, such as style sheets, scripts and images. Upon their discovery, the browser makes separate requests for those assets, which are then responded to in kind.
The problem with this mechanism is that the browser can’t discover and retrieve critical assets until after the HTML document has been downloaded and parsed. This delays rendering and increases load times.
With server push, we have a solution to this problem. Server push lets the server preemptively “push” website assets to the client without the user having explicitly asked for them. When used with care, we can send what we know the user is going to need for the page they’re requesting.
Let’s say you have a website where all pages rely on styles defined in an external style sheet named styles.css. When the user requests index.html from the server, we can push styles.css to the user just after we begin sending the response for index.html.
Web server communication with HTTP/2 server push.
Rather than waiting for the server to send index.html and then waiting for the browser to request and receive styles.css, the user only has to wait for the server to respond with both index.html and styles.css on the initial request. This means that the browser can begin rendering the page sooner than if it had to wait.
As you can imagine, this can decrease the rendering time of a page. It also solves some other problems, particularly in front-end development workflows.
While reducing round trips to the server for critical content is one of the problems that server push solves, it’s not the only one. Server push acts as a suitable alternative for a number of HTTP/1-specific optimization anti-patterns, such as inlining CSS and JavaScript directly into HTML, as well as using the data URI scheme5 to embed binary data into CSS and HTML.
These techniques found purchase in HTTP/1 optimization workflows because they decrease what we call the “perceived rendering time” of a page, meaning that while the overall loading time of a page might not be reduced, the page will appear to load faster for the user. It makes sense, after all. If you inline CSS into an HTML document within <style> tags, the browser can begin applying styles immediately to the HTML without waiting to fetch them from an external source. This concept holds true with inlining scripts and inlining binary data with the data URI scheme.
Web server communication with inlined content.
Seems like a good way to tackle the problem, right? Sure — for HTTP/1 workflows, where you have no other choice. The poison pill we swallow when we do this, however, is that the inlined content can’t be efficiently cached. When an asset like a style sheet or JavaScript file remains external and modular, it can be cached much more efficiently. When the user navigates to a subsequent page that requires that asset, it can be pulled from the cache, eliminating the need for additional requests to the server.
When we inline content, however, that content doesn’t have its own caching context. Its caching context is the same as the resource it’s inlined into. Take an HTML document with inlined CSS, for instance. If the caching policy of the HTML document is to always grab a fresh copy of the markup from the server, then the inlined CSS will never be cached on its own. Sure, the document that it’s a part of may be cached, but subsequent pages containing this duplicated CSS will be downloaded repeatedly. Even if the caching policy is more lax, HTML documents typically have limited shelf life. This is a trade-off that we’re willing to make in HTTP/1 optimization workflows, though. It does work, and it’s quite effective for first-time visitors. First impressions are often the most important.
These are the problems that server push addresses. When you push assets, you get the practical benefits that come with inlining, but you also get to keep your assets in external files that retain their own caching policy. There is a caveat to this point, though, and it’s covered toward the end of this article. For now, let’s continue.
I’ve talked enough about why you should consider using server push, as well as the problems that it fixes for both the user and the developer. Now let’s talk about how it’s used.
Using server push usually involves using the Link HTTP header, which takes on this format:
Link: </css/styles.css>; rel=preload; as=style
Note that I said usually. What you see above is actually the preload resource hint9 in action. This is a separate and distinct optimization from server push, but most (not all) HTTP/2 implementations will push an asset specified in a Link header containing a preload resource hint. If either the server or the client opts out of accepting the pushed resource, the client can still initiate an early fetch for the resource indicated.
The as=style portion of the header is not optional. It informs the browser of the pushed asset’s content type. In this case, we use a value of style to indicate that the pushed asset is a style sheet. You can specify other content types10. It’s important to note that omitting the as value can result in the browser downloading the pushed resource twice. So don’t forget it!
Now that you know how a push event is triggered, how do we set the Link header? You can do so through two routes:
your web server configuration (for example, Apache httpd.conf or .htaccess);
a back-end language function (for example, PHP’s header function).
Setting the Link Header in Your Server Configuration Link
Here’s an example of configuring Apache (via httpd.conf or .htaccess) to push a style sheet whenever an HTML file is requested:
<FilesMatch ".html$"> Header set Link "</css/styles.css>; rel=preload; as=style" <FilesMatch>
Here, we use the FilesMatch directive to match requests for files ending in .html. When a request comes along that matches this criteria, we add a Link header to the response that tells the server to push the resource at /css/styles.css.
Side note: Apache’s HTTP/2 module can also initiate a push of resources using the H2PushResource directive. The documentation for this directive states that this method can initiate pushes earlier than if the Link header method is used. Depending on your specific setup, you may not have access to this feature. The performance tests shown later in this article use the Link header method.
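For reference, a minimal H2PushResource configuration might look something like this (the paths are placeholders; check the mod_http2 documentation for what your Apache version supports):

<Location /index.html>
    H2PushResource add /css/styles.css critical
</Location>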
As of now, Nginx doesn’t support HTTP/2 server push, and nothing so far in the software’s changelog11 has indicated that support for it has been added. This may change as Nginx’s HTTP/2 implementation matures.
Another way to set a Link header is through a server-side language. This is useful when you aren’t able to change or override the web server’s configuration. Here’s an example of how to use PHP’s header function to set the Link header:
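A single call, made before any output is sent, is enough; the style sheet path here simply mirrors the earlier examples:

<?php
// send the Link header before any response body output
header('Link: </css/styles.css>; rel=preload; as=style');
?>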
If your application resides in a shared hosting environment where modifying the server’s configuration isn’t an option, then this method might be all you’ve got to go on. You should be able to set this header in any server-side language. Just be sure to do so before you begin sending the response body, to avoid potential runtime errors.
All of our examples so far only illustrate how to push one asset. What if you want to push more than one? Doing that would make sense, right? After all, the web is made up of more than just style sheets. Here’s how to push multiple assets:
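The paths below are only illustrative, but the pattern is the same for any assets:

Link: </css/styles.css>; rel=preload; as=style, </js/scripts.js>; rel=preload; as=script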
When you want to push multiple resources, just separate each push directive with a comma. Because resource hints are added via the Link tag, this syntax is how you can mix in other resource hints with your push directives. Here’s an example of mixing a push directive with a preconnect resource hint:
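Using a third-party font host as an example of something worth preconnecting to, that could look like this:

Link: </css/styles.css>; rel=preload; as=style, <https://fonts.gstatic.com>; rel=preconnect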
Multiple Link headers are also valid. Here’s how you can configure Apache to set multiple Link headers for requests to HTML documents:
<FilesMatch ".html$"> Header add Link "</css/styles.css>; rel=preload; as=style" Header add Link "</js/scripts.js>; rel=preload; as=script" <FilesMatch>
This syntax is more convenient than stringing together a bunch of comma-separated values, and it works just the same. The only downside is that it’s not quite as compact, but the convenience is worth the few extra bytes sent over the wire.
Now that you know how to push assets, let’s see how to tell whether it’s working.
So, you’ve added the Link header to tell the server to push some stuff. The question that remains is, how do you know if it’s even working?
This varies by browser. Recent versions of Chrome will reveal a pushed asset in the initiator column of the network utility in the developer tools.
Chrome indicating that an asset has been pushed by the server.
Furthermore, if we hover over the asset in the network request waterfall, we’ll get detailed timing information on the asset’s push:
Chrome showing detailed timing information of the pushed asset.
Firefox is less obvious in identifying pushed assets. If an asset has been pushed, its status in the browser’s network utility in the developer tools will show up with a gray dot.
Firefox indicating that an asset has been pushed by the server.
If you’re looking for a definitive way to tell whether an asset has been pushed by the server, you can use the nghttp command-line client18 to examine a response from an HTTP/2 server, like so:
nghttp -ans https://jeremywagner.me
This command will show a summary of the assets involved in the transaction. Pushed assets will have an asterisk next to them in the program output, like so:
Here, I’ve used nghttp on my own website19, which (at least at the time of writing) pushes five assets. The pushed assets are marked with an asterisk on the left side of the requestStart column.
Now that we can identify when assets are pushed, let’s see how server push actually affects the performance of a real website.
Measuring the effect of any performance enhancement requires a good testing tool. Sitespeed.io20 is an excellent tool available via npm21; it automates page testing and gathers valuable performance metrics. With the appropriate tool chosen, let’s quickly go over the testing methodology.
I wanted to measure the impact of server push on website performance in a meaningful way. In order for the results to be meaningful, I needed to establish points of comparison across six separate scenarios. These scenarios are split across two facets: whether HTTP/2 or HTTP/1 is used. On HTTP/2 servers, we want to measure the effect of server push on a number of metrics. On HTTP/1 servers, we want to see how asset inlining affects performance in the same metrics, because inlining is supposed to be roughly analogous to the benefits that server push provides. Specifically, these scenarios are the following:
HTTP/2 without server push
In this state, the website runs on the HTTP/2 protocol, but nothing whatsoever is pushed. The website runs “stock,” so to speak.
HTTP/2 pushing only CSS
Server push is used, but only for the website’s CSS. The CSS for the website is quite small, weighing in at a little over 2 KB with Brotli compression22 applied.
Pushing the kitchen sink
All assets in use on all pages across the website are pushed. This includes the CSS, as well as 1.4 KB of JavaScript spread across six assets, and 5.9 KB of SVG images spread across five assets. All quoted file sizes are, again, after Brotli compression has been applied.
HTTP/1 with no assets inlined
The website runs on HTTP/1, and no assets are inlined to reduce the number of requests or increase rendering speed.
Inlining only CSS
Only the website’s CSS is inlined.
Inlining the kitchen sink
All assets in use on all pages across the website are inlined. CSS and scripts are inlined, but SVG images are base64-encoded and embedded directly into the markup. It should be noted that base64-encoded data is roughly 1.37 times larger23 than its unencoded equivalent.
In each scenario, I initiated testing with the following command:
If you want to know the ins and outs of what this command does, you can check out the documentation24. The short of it is that this command tests my website’s home page at https://jeremywagner.me25 with the following conditions:
The links in the page are not crawled. Only the specified page is tested.
The page is tested 25 times.
A “cable-like” network throttling profile is used. This translates to a round trip time of 28 milliseconds, a downstream speed of 5,000 kilobits per second and an upstream speed of 1,000 kilobits per second.
The test is run using Google Chrome.
Three metrics were collected and graphed from each test:
first paint time
This is the point in time at which the page can first be seen in the browser. When we strive to make a page “feel” as though it is loading quickly, this is the metric we want to reduce as much as possible.
DOMContentLoaded time
This is the time at which the HTML document has completely loaded and has been parsed. Synchronous JavaScript code will block the parser and cause this figure to increase. Using the async attribute on <script> tags can help to prevent parser blocking.
page-loading time
This is the time it takes for the page and its assets to fully load.
With the parameters of the test determined, let’s see the results!
Tests were run across the six scenarios specified earlier, with the results graphed. Let’s start by looking at how first paint time is affected in each scenario:
Let’s first talk a bit about how the graph is set up. The portion of the graph in blue represents the average first paint time. The orange portion is the 90th percentile. The grey portion represents the maximum first paint time.
Now let’s talk about what we see. The slowest scenarios are both the HTTP/2- and HTTP/1-driven websites with no enhancements at all. We do see that using server push for CSS helps to render the page about 8% faster on average than if server push is not used at all, and even about 5% faster than inlining CSS on an HTTP/1 server.
When we push all assets that we possibly can, however, the picture changes somewhat. First paint times increase slightly. In HTTP/1 workflows where we inline everything we possibly can, we achieve performance similar to pushing, albeit slightly worse.
The verdict here is clear: With server push, we can achieve results that are slightly better than what we can achieve on HTTP/1 with inlining. When we push or inline many assets, however, we observe diminishing returns.
It’s worth noting that either using server push or inlining is better than no enhancement at all for first-time visitors. It’s also worth noting that these tests and experiments are being run on a website with small assets, so this test case may not reflect what’s achievable for your website.
Let’s examine the performance impacts of each scenario on DOMContentLoaded time:
The trends here aren’t much different from what we saw in the previous graph, except for one notable departure: The instance in which we inline as many assets as practical on an HTTP/1 connection yields a very low DOMContentLoaded time. This is presumably because inlining reduces the number of assets that need to be downloaded, which allows the parser to go about its business without interruption.
Now, let’s look at how page-loading times are affected in each scenario:
The established trends from earlier measurements generally persist here as well. I found that pushing only the CSS realized the greatest benefit to loading time. Pushing too many assets could, on some occasions, make the web server a bit sluggish, but it was still better than not pushing anything at all. Server push also yielded better overall loading times than inlining did.
Before we conclude this article, let’s talk about a few caveats you should be aware of when it comes to server push.
In one of the scenarios above, I am pushing a lot of assets, but all of them altogether represent a small portion of the overall data. Pushing a lot of very large assets at once could actually delay your page from painting or being interactive sooner, because the browser needs to download not only the HTML, but all of the other assets that are being pushed alongside of it. Your best bet is to be selective in what you push. Style sheets are a good place to start (so long as they aren’t massive). Then evaluate what else makes sense to push.
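To make the mechanics a little more concrete: many HTTP/2 servers and CDNs (Apache’s mod_http2, H2O, and nginx with http2_push_preload enabled, among others) will initiate a push for any preload Link header the application emits, so a back end can opt a single stylesheet into push. Here is a minimal sketch, assuming a Node/Express handler; the file paths are illustrative:

const express = require('express');
const path = require('path');
const app = express();

app.get('/', function (req, res) {
  // The HTTP/2 server in front of this app sees the preload hint and
  // pushes /css/styles.css alongside the HTML response.
  res.set('Link', '</css/styles.css>; rel=preload; as=style');
  res.sendFile(path.join(__dirname, 'index.html'));
});

app.listen(8080);

Appending "; nopush" to that header tells push-capable servers to treat it as a plain preload hint without pushing anything.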
You Can Push Something That’s Not on the Page
This is not necessarily a bad thing if you have visitor analytics to back up this strategy. A good example of this may be a multi-page registration form, where you push assets for the next page in the sign-up process. Let’s be crystal clear, though: If you don’t know whether you should force the user to preemptively load assets for a page they haven’t seen yet, then don’t do it. Some users might be on restricted data plans, and you could be costing them real money.
Some servers give you a lot of server push-related configuration options. Apache’s mod_http2 has some options for configuring how assets are pushed. The H2PushPriority setting32 should be of particular interest, although in the case of my server, I left it at the default setting. Some experimentation could yield additional performance benefits. Every web server has a whole different set of switches and dials for you to experiment with, so read the manual for yours and find out what’s available!
There has been some gnashing of teeth over whether server push could hurt performance in that returning visitors may have assets needlessly pushed to them again. Some servers do their best to mitigate this. Apache’s mod_http2 uses the H2PushDiarySize setting33 to optimize this somewhat. H2O Server has a feature called Cache Aware server push34 that uses a cookie mechanism to remember pushed assets.
If you don’t use H2O Server, you can achieve the same thing on your web server or in server-side code by only pushing assets in the absence of a cookie. If you’re interested in learning how to do this, then check out a post I wrote about it on CSS-Tricks35. It’s also worth mentioning that browsers can send an RST_STREAM frame to signal to a server that a pushed asset is not needed. As time goes on, this scenario will be handled much more gracefully.
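To make the cookie approach concrete, here is a minimal sketch using Node’s built-in http2 module rather than the Apache setup used for the tests in this article; the cookie name, file paths and certificates are illustrative, and error handling is omitted:

const http2 = require('http2');
const fs = require('fs');

const css = fs.readFileSync('./css/styles.css');
const html = fs.readFileSync('./index.html');

const server = http2.createSecureServer({
  key: fs.readFileSync('./server.key'),
  cert: fs.readFileSync('./server.crt')
});

server.on('stream', function (stream, headers) {
  if (headers[':path'] !== '/') {
    // For brevity, this sketch only serves the home page.
    stream.respond({ ':status': 404 });
    return stream.end();
  }

  // Only push the stylesheet to visitors who don't carry our cookie yet.
  const alreadyPushed = /assets-pushed=1/.test(headers['cookie'] || '');

  if (!alreadyPushed && stream.pushAllowed) {
    stream.pushStream({ ':path': '/css/styles.css' }, function (err, pushStream) {
      if (err) return;
      pushStream.respond({ ':status': 200, 'content-type': 'text/css' });
      pushStream.end(css);
    });
  }

  stream.respond({
    ':status': 200,
    'content-type': 'text/html',
    'set-cookie': 'assets-pushed=1; Max-Age=604800'
  });
  stream.end(html);
});

server.listen(443);

Keep in mind that the cookie only approximates the state of the browser’s cache: if a visitor clears their cache but keeps their cookies, the asset won’t be pushed even though it would have been useful.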
As sad as it may seem, we’re nearing the end of our time together. Let’s wrap things up and talk a bit about what we’ve learned.
If you’ve already migrated your website to HTTP/2, you have little reason not to use server push. If you have a highly complex website with many assets, start small. A good rule of thumb is to consider pushing anything that you were once comfortable inlining. A good starting point is to push your site’s CSS. If you’re feeling more adventurous after that, then consider pushing other stuff. Always test changes to see how they affect performance. You’ll likely realize some benefit from this feature if you tinker with it enough.
If you’re not using a cache-aware server push mechanism like H2O Server’s, consider tracking your users with a cookie and only pushing assets to them in the absence of that cookie. This will minimize unnecessary pushes to known users, while improving performance for unknown users. This not only is good for performance, but also shows respect to your users with restricted data plans.
All that’s left for you now is to try out server push for yourself. So get out there and see what this feature can do for you and your users! If you want to know more about server push, check out the following resources:
“Server Push36,” “Hypertext Transfer Protocol Version 2 (HTTP/2),” Internet Engineering Task Force
Thanks to Yoav Weiss39 for clarifying that the as attribute is required (and not optional as the original article stated), as well as a couple of other minor technical issues. Additional thanks goes to Jake Archibald40 for pointing out that the preload resource hint is an optimization distinct from server push.
This article is about an HTTP/2 feature named server push. This and many other topics are covered in Jeremy’s book Web Performance in Action41. You can get it or any other Manning Publications42 book for 42% off with the coupon code sswagner!
The Material Design color tool helps you create, share, and apply color palettes to your UI, as well as measure the accessibility level of any color combination.
From time to time, we need to take some time off, and actually, I’m glad that this reading list is a bit shorter than the ones you’re used to, because one thing that really stuck with me this week was Eric Karjaluoto’s article.
In his article, he states that, “Taking pride in how busy we are is one of the worst ideas we ever had.” So, how about reading just a few articles this week for a change and then take a complete weekend off to recharge your battery?
I bet many people here have a Samsung TV. And while there are currently not many phones out there from this vendor that run on Tizen, a lot of TVs do. Now, security researchers have found some quite interesting things11 about Tizen’s security12, including vulnerabilities that can give full access to the device and ways to embed malicious code into the system through the app store.
In the past, we tended to teach people to use a VPN if they wanted to stay secure. But now we know that not all VPN services have good intentions16. Some even inject advertising or privacy-leaking trackers into your network requests, or sell your history to third parties.
Eric Karjaluoto shares why we need to slow down18, and why taking pride in how busy we are is one of the worst ideas we ever had. What if the best thing you can do for your career — and life — is to press pause, set your number one priority, and then rethink your way of working?
Did you know that bandwidth overage charges are (still) a problem and most users prefer not to rely on a developer? Well, I talked to 917 (real-life) users and created a guide to help others find the e-commerce software that suits them best.
I compiled this guide by searching for websites built with e-commerce software (you can verify by looking at the source code — certain code strings are unique to the software). Once I found a website, I (or one of my virtual assistants) would email the owner and ask if they’d recommend a particular software. Typically, they’d reply and I’d record their response in a spreadsheet (and personally thank them). Occasionally, I would even get on the phone to speak with them directly (although I quickly found out that this took too much time).
I calculated customer satisfaction by finding the percentage of active users who recommend the software:
E-commerce software (recommendation %):
Shopify: 98%
Squarespace: 94%
Big Cartel: 91%
WooCommerce: 90%
OpenCart: 88%
Jumpseller: 86%
GoDaddy: 83%
CoreCommerce: 80%
BigCommerce: 79%
Ubercart: 78%
Wix: 76%
Magento: 74%
Weebly: 74%
3dcart: 72%
PrestaShop: 70%
Goodsie: 65%
Spark Pay: 65%
Volusion: 51%
Shopify is the pretty clear winner, with Squarespace close behind — but both companies are actually complementary. Shopify is a complete, robust solution that works for both small and large stores, while Squarespace is a simple, approachable platform that works well for stores just starting out. (Worth noting: I’ve done similar surveys for portfolio builders5 and landing-page builders6, and Shopify is the only company I’ve seen score higher than 95% in customer satisfaction.)
But looking only at customer satisfaction is not enough. After all, e-commerce platforms have different strengths. So, I also asked users what they like and dislike about their software and found some important insights about each company.
“The best thing is that you don’t need a developer to add features… there’s a ton of apps available.” | “Their partner ecosystem is best.” | “Shopify has any feature under the sun — if you think you need it, someone already created an app.” | “Access to Shopify Apps is great.” | “There’s heaps of third-party apps you can integrate easily that I believe are essential to growing a business.” | “So many third-party apps, templates that other platforms aren’t popular enough to have.” | “There are many apps that can help with customization issues.” | “There are a ton of great third-party apps for extended functionality.”
Ease of use
“Easy to set up without having specific skills.” | “Intuitive user interface.” | “Simple to use.” | “It is very easy to start selling online.” | “Easy UI, pretty intuitive.” | “The interface is excellent for managing e-commerce.” | “It’s really clean and easy to manage.” | “Shopify provides a very straightforward way to add products, edit options and to apply different themes.” | “More than anything, very simple.” | “It’s simple and intuitive.” | “Very user-friendly.” | “Super user-friendly for non-computer guys like myself.” | “The back end is exceptional.”
“It’s very easy to use.” | “The e-commerce is so easy to use.” | “It’s easy to configure, simple to add, delete and modify our inventory, and most importantly it allows us to easily keep track of our ins and outs with helpful metrics and sales graphs.” | “It’s very easy to set up.” | “The user interface is easy to use.” | “Commerce is really nice and easy to set up.” | “Love the interface, very easy to work with.” | “I find it easy to use.” | “It was pretty easy to set up and has been a snap to maintain.” | “It’s all pretty smooth and easy.” | “It’s super-easy.” | “I’ve tried Drupal, WordPress… the interface and creative ability of Squarespace is much superior.”
Templates
“Has some great templates for a good-looking website.” | “Squarespace is an easy way to get a great looking site.” | “The sites are beautiful.” | “The templates and editing features on the blog and site are super-easy.” | “The thing I like most are the beautiful and easy templates.”
“The only thing I would say they need to improve is allowing more than one currency on the e-commerce site, which currently is not available.” | “It works pretty good for basic sales of items.” | “There are some limitations in terms of customizing, but they are minor.” | “If you are using it as is and just need the limited feature set that it comes with, it’s a great option.” | “Overall, it’s great for putting a few simple products up, but if you need anything beyond their default cart options, get a proper Squarespace developer or someone to set up a Shopify site for you.” | “It is really a great place to start, but unfortunately a place that is easily transitioned out of once the business begins to grow.”
Shipping
“My partners have had some concerns with the shipping aspect, though.” | “Yes, I would recommend it, but Squarespace needs to have calculated shipping for all the plans.” | “The shipping is still something I wish was a little easier.” | “The only thing I would say is that, for me, the shipping options are more limited than I would like.” | “There are some features I wish were better implemented in the base package (like shipping integration for international orders), but I’d recommend it.”
“I would recommend Big Cartel for smaller shops.” | “I would recommend it, especially startup users.” | “It’s a great place to start out!” | “We’d recommend it for similar businesses, especially those just getting started.” | “It is a great platform for something really simple and was very easy to set up.” | “Big Cartel is great for beginning stages of a store. We’re actually entertaining moving to a new platform right now.” | “It’s quite good for a small company or startup, for sure.” | “I’m finding that in the early stages of the business, it’s extremely handy for stock listing and very straightforward to use.”
Ease of use
“It’s very easy to use.” | “It’s very easy to use, navigate and customize the shopfront.” | “I am particularly fond of the back end and the admin tools. They make maintaining and shipping products a breeze.” | “It’s super-simple and really user-friendly.” | “I’m not savvy, so it works well for my skill level.” | “Easy to set up… and easy to control and set inventory.” | “They make it so easy to have a beautiful website.” | “For just a few items, Big Cartel totally gets the job done and is user-friendly.”
Price
“I only have to pay $9.99 a month for Big Cartel. That’s a huge perk for me.” | “Low price point and easy to use.” | “The rates are the lowest considering all the things you’re able to do.” | “I have found the cost is a lot better than my Etsy store.” | “You get a great platform for a great price.” | “Compared to Etsy, the fees are ridiculously cheap!” | “One fee a month, no item fees per listing… There is an option to open a store for free with five listings. This is an amazing feature.” | “Their prices are also very reasonable.”
“Lacking in features.” | “It is limited in terms of themes… You always know when you’re on a Big Cartel site.” | “It does most of what I expect of it, but also has limitations.” | “The one problem I have is that the only options for receiving payments are PayPal and Stripe.” | “If you want more of an interactive site with blogs and videos and whatnot, I think there are better options out there.” | “We are currently moving over to Shopify because we have maxed out Big Cartel’s limited 300-item store capacity. That is the only downside of Big Cartel.” | “You are limited by what Big Cartel allows you to do. For example, there are certain promotions that I would like to do, but currently Big Cartel has no way of allowing it.”
Big Cartel is simple, which makes it easy to use and perfect for stores just starting out.
“Many useful plugins for it.” | “So many features.” | “There are plenty of add-ons with it to customize shop as we need.” | “Fully customizable.” | “The plugin architecture is great.” | “It also has a lot of plugins.” | “It’s very good if you are looking for something that can do anything… there are extensions available, and coders who can write plugins.” | “I’m a fan of the plugins because it allows for a lot of customization.”
Ecosystem
“The ecosystem is well supported.” | “Great support with a whole online community dedicated to it.” | “I’m always able to find the answer to any question I have, either through the official WooCommerce knowledge base or in the community forums.”
“Custom modifications do require somewhat advanced developer knowledge.” | “WooCommerce does require knowledge in website building… At one point, it became extremely slow, and I couldn’t figure out where the problem was.” | “What should be native often requires plugins or coding.” | “Very customizable with some code editing.” | “WooCommerce definitely requires a solid knowledge of the inner workings.” | “There definitely is a learning curve, but it is not too hard to master.” | “It had to be highly customized for us by our website developers.”
WooCommerce users love the huge selection of extensions they can add to their stores.
“There are plenty of extensions (free and for purchase).” | “Tons of extensions to make it really awesome.” | “OpenCart extensions… have been very valuable and reliable.” | “Customization does need IT capabilities, though.” | “The software is only as good as its implementers.”
“It took some PHP programming to get it completely as we wish, but now it works fine and suits my goals well.” | “If you do not have someone capable of working behind the scenes, it would be difficult to manage.” | “I’d recommend it if and only if you have at least some knowledge of web programming (PHP, JavaScript, XML, MySQL, etc.).” | “Not recommended for anyone without some web programming knowledge.” | “With the right technical staff, yes I would.” | “If you would be a serious user, I can recommend OpenCart, but also I would recommend hiring a developer to make all custom improvements.” | “Yes, I would recommend it as a good platform with cheap extensions.” | “There is also a large amount of high-quality extensions.” | “Tons of plugins, both free to paid.”
Extensions can create bugs
“When you modify it, it does amazing things but is super-finicky.” | “Buying and installing extensions is a bad idea… It’s not a plug-and-play procedure.” | “As we grew bigger, there have been headaches, mostly to do with third-party extensions clashing with each other.”
OpenCart offers a large marketplace of extensions, which users love.
“The Jumpseller team is also very helpful… They’ll walk you through the process of making website [changes], so you can really understand.” | “Technical support is great, always helpful and fast.” | “The best thing is its excellent service, very fast and efficient.” | “Support has worked well so far. When we’ve submitted a query, we’ve gotten quick feedback.” | “Fast and good email support.” | “The customer service is very responsive and helpful.” | “The email response time is super-fast. If I have one question or doubt regarding anything, from design to DNS configuration, they’ll reply in less than 15 minutes!”
Using it for Chilean and international stores
“Our store is based in Chile, and another feature we appreciated is that it had full integration with local payment systems.” | “Has local credit-card options (in our country).” | “Recently, they integrated the price list of one of the shipping companies most used in our country.” | “The good thing is the translation tool.” | “I can tell you that we have selected Jumpseller because we are selling in Chile, and the store was very well integrated with the most popular payment methods, couriers, etc.”
Users recommend Jumpseller for managing languages and international stores.
“It is easy to set up.” | “Easy to maintain.” | “Fairly user-friendly.” | “They really made everything so simple to make extremely intuitive changes quickly.” | “It’s easy to work with.” | “I would recommend it for a new user because of the ease of use in building a store.” | “Easy to use and have had no issues.”
“There are design limitations, though.” | “It is lacking in several business customization respects.” | “I wish there was a little more customization allowed.” | “There are some design limitations unless you know HTML.” | “Product is good but has many limitations.” | “I like it, but it does have limitations.” | “It has some limitations, but I have been able to work around them.” | “It does have its limitations on customizing, though.”
Credit-card processor options
“It would be better if it allowed shoppers to use a credit card to place an order, even if we don’t use their approved credit-card processor.” | “We were happy with them for years, and then out of the blue, the payment processors affiliated with GoDaddy dropped us.” | “We will be switching all of our stores from GoDaddy in the near future because it does not allow you to use the merchant service of your choice. You are forced to use Stripe.”
“Tech support has always been responsive and friendly.” | “Good customer support.” | “I have been able to live chat or call with questions without issue.” | “The support is excellent.” | “Very quick responses to any of our requests.” | “Their support is very good.” | “Their customer service is absolutely the very best.” | “You can always call them 24/7 if you need any kind of support, and it doesn’t cost any extra money.” | “Their tech support is awesome.” | “CoreCommerce’s service is good. It has a mom and pop feel to it.”
Price
“Price for the features and benefits given is exceptional, and no one we’ve spoken with can come close to the value.” | “It is a very cost-effective solution.” | “It is also very affordable.” | “I have yet to find another platform that offers the same value as CoreCommerce (at least for our particular business).” | “Prices are good.”
“Technologies are old, and they are very slow to update it.” | “It feels like the year 2003.” | “Outdated and uninspiring admin panel.” | “They’ve been a bit behind the times with integrations (still no Bitcoin, for example).” | “They are using an antiquated system, which doesn’t bode well for tie-in structures for the future.”
Difficult to use
“I do find the GUI to be somewhat frustrating and unintuitive.” | “It is annoying when you [have] to update each thing in multiple areas.” | “It is not intuitive or user-friendly.” | “The product was flaky. Flexible but badly designed in lots of areas.” | “Control panel sucks.”
“I emailed the president [of BigCommerce] at 1:00 am requesting help… Within 10 minutes, [he] was on it with compassion and ready to help. They have bent over backward for me.” | “They provide excellent customer support.” | “If nothing else, they seem to have great customer service.” | “More than anything, we care about customer service, and BigCommerce provides excellent customer service.” | “Technical support has been great.” | “Great support.” | “Their tech support is 24/7 and is very responsive to our questions.” | “Customer service… is very helpful.”
“Their pricing structure is punitive for successful businesses… This is surely a recurring theme if you’ve reached out to many B2C website users who have grown their site.” | “A bit pricey when your sales hit over $300,000 a year.” | “[Recently,] my monthly payments increased from $25 to $250 due to my business exceeding the annual sales of their intermediate plan.” | “Because of our sales volume, BigCommerce frequently increases our monthly fees based on increasing sales. This has become very expensive.” | “A bit pricey.” | “We feel it is overpriced these days.” | “Their pricing structure makes no sense, but I’ve been with them for seven years.” | “I would recommend BigCommerce. Pricing is a bit high, though.” | “I personally think the pricing is a little steep.”
“Not as friendly for a non-developer or an individual who just wants to set up shop on their own and doesn’t have a technical background.” | “Ubercart works well as long as you have an experienced programmer.” | “Please note that it would require a developer who knows Drupal, because many aspects needed customization.” | “[I would recommend it] if you’re comfortable with Drupal.”
Difficult to use
“Ubercart is OK, but it is hard to customize.” | “The learning curve is quite steep.” | “It can be a bit tricky to get your store looking just the way you want.” | “Ubercart isn’t the easiest to set up or work with.” | “The only disadvantage of Ubercart is the complex configuration of the store system.” | “It’s not as plug-and-play as Shopify.”
Users found that Ubercart works best if you have a developer on your team.
“The e-commerce site is beyond simple to use.” | “I would recommend it on one level: It’s easy to use. I can do all the building and updating myself, and so that’s good.” | “Easy to use.” | “Easy to build and maintain.” | “It is user-friendly, easy to set up and modify.” | “It’s super-easy to use, and it seems like everyone who’s ordered from me has also done so with ease.” | “If you want a simple storefront, it’s pretty straightforward, easy and cheap.” | “It is easy to set up.” | “It’s easy to use and user-friendly.” | “It was pretty intuitive to set up.”
“It is basic.” | “There are some limitations with shipping and accounting (sending to QuickBooks, etc.).” | “A little limited in some options.” | “I have not been able to make it work in the way I need.” | “I cannot update the inventory amount.” | “We had so many different options, which the configuration of the store and products did not allow us to do.” | “We also wanted to be able to get customers reviews and could not do it.” | “My main complaint is the lack of customization options — for example, not being able to display a price per pound.” | “If you want a variety of options and a wide range of modifications, it is not ideal.”
“We have complete control over our Magento store and have customized it extensively to meet our needs. That’s what I like most about it.” | “The amount of customizations and extensions available are endless.” | “It has an unparalleled level of customization and freedom.” | “It has a lot of great customization features.” | “It’s pretty powerful.”
“Probably the steepest learning curve.” | “It’s very expensive to get changes made.” | “Magento is overkill for what I need to do on my site.” | “User interface is not as easy as it could be.” | “It can be a real pain sometimes.” | “Complicated to set up.” | “It’s got a steep learning curve.” | “Magento has a huge learning curve.” | “It breaks for no reasons, and it breaks if you add anything to the site.” | “Always something going wrong for no apparent reason.”
Often requires professional help
“You will need a good PHP programmer if you intend to add anything to it beyond the default installation.” | “If one wants to really change Magento, one needs an expert.” | “Needs a good specialist to partner with to get the best out of it.” | “I would recommend it as long as you have a true Magento-certified developer to hold your hand the entire way and to create your site and work with you.” | “Magento is good if you’re a web developer and have coding skills.”
“It is very easy to use.” | “It was easy to use without web design experience.” | “It is basic and easy to use.” | “I have enjoyed the ease of Weebly and what you can accomplish with the tools.” | “It is extremely easy for me to use.” | “I’d recommend it because it is so easy to set up and track inventory.” | “This is one of the easiest [e-commerce platforms] I have used.” | “I do like the online store with Weebly because of the ease of use.” | “Weebly is really easy to use.”
“[Weebly] is offered through MacHighway, which I use for my hosting, so there were some glitches in the beginning that probably wouldn’t have been there if I’d gone straight through Weebly.” | “Just make sure you buy the Weebly subscription directly through weebly.com and not through a reseller, because I lost a whole website that way.” | “I would recommend it but only through the Weebly host.” | “The B.S. part is that since day one, iPower (a third-party Weebly host) claimed I was getting an ultra-premium package but was only paying for basic. I would go to edit a product and nothing worked. I’d call customer support and they’d tell me I need to upgrade. This has happened to me twice in three years with them. I’m hoping they get stuck with a class-action suit for fraud.” | “iPage is my host for Weebly. Because of this, I don’t have access to all of the features Weebly offers.” | “… Full access to all of the Weebly features would sort that at once, but iPage (maybe I should change) wants to lock me in for three years and pay the full amount up front!”
Limited features
“If you want a more customizable tool, then this might not work for you.” | “Weebly is missing some of the critical things that we want from an online store.” | “I am hoping that they have, or will come up with, an automatic shipping calculation.” | “The only hiccup is when I need to change my prices. I have a lot of inventory, and I have found that the easiest way (relatively speaking) to do this is to change each one individually.” | “You can’t do everything design-wise on it.” | “It was perfect for me at first, but I have grown out of it very quickly [because of limited features].” | “The Weebly platform is not scalable. There is no element to customize your cart.” | “The shipping is a problem because it can’t be adjusted for lighter, heavier or multiple items.”
“If you read the forums, one problem that continually arises, and one that I have, is bandwidth. It seems that I’m always going over my bandwidth, even though I have relatively few products and dump files regularly.” | “3dcart charges for bandwidth, so serving lots of digital products from your server might not be a great idea depending on your budget.” | “They charge you for data, and it adds up.” | “It tends to use a lot of bandwidth. My store doesn’t have a huge amount of traffic (yet!), but I still go over my plan just about every month.”
Customer support
“Their comments are snarky, and their help is judgemental in that they always place blame on the customer, and it can take up to a week for them to solve a problem.” | “Customer support has a laissez-faire attitude.” | “I have to really keep on them when I open a ticket, or I may not get a response for days.” | “I would say the biggest con has been customer service.” | “I would characterize them as almost disrespectful.” | “Their lack of support [was surprising].” | “The tech support also cannot help with even the most basic HTML questions.” | “Technical support online isn’t the best.” | “The help line is not very helpful. If there is a problem, such as the system stops taking orders or accepting credit cards, they assume it’s a problem on your end.” | “Their live support sucks.”
Difficult to use
“I feel the product is terribly cumbersome.” | “The admin interface makes it very difficult to find what settings I’m looking for.” | “It is awkward and not very user-friendly.” | “My website is with 3dcart, but it is overwhelming.” | “It is a little quirky in the back end.” | “I personally find it difficult to make even simple changes to.” | “Some of it is not very intuitive, so you have to keep clicking around until you remember where everything is.”
Users found 3dcart difficult to use and were frustrated with the bandwidth overage charges.
“It has quite a lot of modules.” | “It has loads of modules.” | “Lots of additional modules and functionalities to add.” | “A lot of modules.” | “They have a lot of free and already installed modules.” | “There are a lot of free modules.” | “Large offer of modules.”
“You need to be quite a good geek to understand everything.” | “We’ve encountered and still are encountering lots of problems with PrestaShop.” | “PrestaShop isn’t as user-friendly as others are nowadays.” | “The admin panel is not user-friendly.” | “I don’t recommend it for a beginner or if you don’t have much technical skill.”
Buggy
“I hate it… It’s buggy and impossible to upgrade easily to newer versions.” | “It’s kind of an unstable, slow system for me, but I think in the near future it will be more stable and fast.” | “We have lots of problems with PrestaShop.” | “No, I would not recommend it. Buggy as hell.” | “No, I would not recommend it. Too heavy and too slow.” | “The back-end pages sometimes take an age to load — even for simple stuff.”
“Quick and easy. I think its simplicity best suits the light or new user.” | “Great for people with no knowledge [of how to build a store].” | “For someone with zero experience building a website, I found their product to be so easy to navigate.” | “I highly recommend it for beginners.”
“Way too expensive.” | “There are cheaper options out there that do the same thing.” | “I liked Goodsie when I started with them five or six years ago, but their prices keep going up.” | “Prices were hiked above what they should be, so I am about to change.” | “The price went from $15 to $30 per month not too long ago.”
Users suggest Goodsie’s simplicity makes it good for beginners, although it is expensive.
“Whenever we’ve needed support, their help systems are very responsive.” | “Spark Pay’s technical support is excellent.” | “Very responsive for help.” | “They have been responsive to any needs I’ve had.” | “I find their customer service to be quite responsive.” | “Tech support is very responsive via phone or email.” | “They have been very responsive to helping out with general website questions and problems.”
“The main thing I don’t like are the extra bandwidth charges.” | “Nailed with huge bandwidth charges.” | “There are little hidden fees for going over your bandwidth, account file storage and product count if you don’t keep an eye on them.”
Difficult to use
“Spark Pay is not simple!” | “They have a ton of features built in — most of them are half-baked and don’t function 100%, which has led to frustration.” | “[Needs to] reduce the bloat in their software.” | “Unless you have a designer and/or developer on staff, or at the very least a very computer-savvy non-techie, it’s virtually impossible to understand Spark Pay.” | “Their web editor is clumsy.” | “Their platform is buggy.” | “It is crazy complicated to make even some of the most mundane changes.” | “Their system bogs down so much that only the most minor of changes are doable.” | “Clunky UI, way too much complexity. Just a nightmare to deal with.”
While prompt, customer support can be disappointing
“Their service desk really isn’t one. They have no formal (or competent) escalation process.” | “They are not nearly as responsive to fixing significant issues as they should be.” | “I feel like the platform has a lot of tools to offer, but few resources to teach you how to use them.” | “Technical support is rather lacking. When you do finally get someone to answer the tickets, they do a very minimal amount of work and effort to correct the problem.”
Users found Spark Pay difficult to use and were frustrated by bandwidth overage charges.
“Their technical support department people are top-notch… I’m extremely impressed with them.” | “The [support] team at Volusion is knowledgeable, and that is highly important.” | “Their customer support is excellent.” | “Their support is superb.” | “Support is second to none.” | “Their technical support team is also very good in helping to fix any issues that we might have had.”
“The one thing I can’t stand is the amount of bandwidth they provide you with. [It] will easily be gone in a week if you have a lot of visitors.” | “They don’t have adequate bandwidth plans, and their billing for bandwidth overages is highly irritating.” | “Site traffic is pricey.” | “I originally used very large images for my products and received some rather stiff hosting fines for going over the stupidly low bandwidth level.” | “The way they charge for bandwidth caused us to have obscene overage charged for months.”
Expensive
“It is particularly expensive, and the costs weren’t clear [when we started].” | “Once the site is built, they nickel and dime you for every little thing imaginable.” | “I also used the Volusion SEO team and that was a joke. $1600 a month!” | “Not the least expensive around.” | “I would caution new users to be aware of hidden costs. Email addresses are extra. An SSL certificate is extra. A service to check the reliability of each credit card is extra. SEO and design services are phenomenally expensive.” | “Going by the prices they charge for SEO packages, they’re aiming at companies far larger than mine.” | “If you want anything besides barebone offerings, everything else is available… for a price.” | “I just wish it was a little cheaper.” | “Volusion keeps [the initial setup and customization] complicated, hoping that you will pay them to do it for you.”
Difficult to use
“The back end is not user-friendly.” | “The UX is confusing and bloated, but I’m used to it.” | “There is a learning curve, so it takes a while to get going. And if you want customization, be prepared to learn it yourself or pay some hefty fees.” | “It’s not straightforward and is prone to errors.” | “If you change a font size within the text, you then lose all other formatting — nothing major, but annoying and time-consuming.” | “It’s quite clunky to manage content and design.” | “There are random glitches throughout the site that have probably cost me thousands in abandoned carts.” | “One thing that is hard for me is manipulating website elements. GoDaddy was easier for me.”
It’s worth noting that this is not a list of all e-commerce software currently available in the world. Instead, I’ve only included software for which I was able to talk to a minimum of 30 users (and I was not able to find 30 users for several companies).
But this is a fairly comprehensive list of the most popular e-commerce platforms. Furthermore, these are the thoughts of real, verified users. I hope it’s helpful in your search for the right e-commerce software!
This article is based on the e-commerce software guide originally published here43.
In today’s article, we’ll create a JavaScript extension that works in all major modern browsers, using the very same code base. Indeed, the Chrome extension model based on HTML, CSS and JavaScript is now available almost everywhere, and there is even a Browser Extension Community Group1 working on a standard.
I’ll explain how to install this extension in every browser that supports the web extension model (i.e. Edge, Chrome, Firefox, Opera, Brave and Vivaldi), and I’ll share some simple tips on how to keep a single code base for all of them, as well as how to debug in each browser.
So, if you’ve never built an extension before or don’t know how it works, have a quick look at those resources. Don’t worry: Building one is simple and straightforward.
Let’s build a proof of concept — an extension that uses artificial intelligence (AI) and computer vision to help the blind analyze images on a web page.
We’ll see that, with a few lines of code, we can create some powerful features in the browser. In my case, I’m concerned with accessibility on the web and I’ve already spent some time thinking about how to make a breakout game accessible using web audio and SVG14, for instance.
Still, I’ve been looking for something that would help blind people in a more general way. I was recently inspired while listening to a great talk by Chris Heilmann15 in Lisbon: “Pixels and Hidden Meaning in Pixels16.”
Indeed, using today’s AI algorithms in the cloud, as well as text-to-speech technologies, exposed in the browser with the Web Speech API17 or using a remote cloud service, we can very easily build an extension that analyzes web page images with missing or improperly filled alt text properties.
My little proof of concept simply extracts images from a web page (the one in the active tab) and displays the thumbnails in a list. When you click on one of the images, the extension queries the Computer Vision API to get some descriptive text for the image and then uses either the Web Speech API or Bing Speech API to share it with the visitor.
The video below demonstrates it in Edge, Chrome, Firefox, Opera and Brave.
You’ll notice that, even when the Computer Vision API is analyzing some CGI images, it’s very accurate! I’m really impressed by the progress the industry has made on this in recent months.
The Computer Vision API is free to use19 (with a quota). You’ll need to generate a free key; replace the TODO section in the code with your key to make this extension work on your machine. To get an idea of what this API can do, play around with it20.
The Bing Speech API is also free to use23 (with a quota, too). You’ll need to generate a free key for it as well. We’ll also use a small library24 that I wrote recently to call this API from JavaScript. If you don’t have a Bing key, the extension will always fall back to the Web Speech API, which is supported by all recent browsers.
You can find the code for this small browser extension on my GitHub page27. Feel free to modify the code for other products you want to test.
Tip To Make Your Code Compatible With All Browsers
Most of the code and tutorials you’ll find use the namespace chrome.xxx for the Extension API (chrome.tabs, for instance).
But, as I’ve said, the Extension API model is currently being standardized to browser.xxx, and some browsers are defining their own namespaces in the meantime (for example, Edge is using msBrowser).
Fortunately, most of the API remains the same across these namespaces. So, it’s very simple to create a little trick to support all browsers and namespace definitions, thanks to the beauty of JavaScript:
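A minimal shim along these lines does the trick; each candidate namespace simply falls back to the next one the browser actually defines:

// Resolve whichever extension API namespace this browser exposes:
// Edge's msBrowser, the standardized browser object, or Blink's chrome object.
window.browser = (function () {
  return window.msBrowser ||
         window.browser ||
         window.chrome;
})();

After this runs, the rest of the code can call browser.tabs, browser.runtime and friends without caring which browser it’s in.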
The manifest file and its associated JSON properties are the minimum you’ll need to load an extension in all browsers (not counting the code of the extension itself, of course). Please check the source34 in my GitHub account, and start from there to be sure that your extension is compatible with all browsers.
For instance, you must specify an author property to load it in Edge; otherwise, it will throw an error. You’ll also need to use the same structure for the icons. The default_title property is also important because it’s used by screen readers in some browsers.
Here are links to the documentation to help you build a manifest file that is compatible everywhere:
The sample extension used in this article is mainly based on the concept of the content script38. This is a script living in the context of the page that we’d like to inspect. Because it has access to the DOM, it will help us to retrieve the images contained in the web page. If you’d like to know more about what a content script is, Opera39, Mozilla40 and Google41 have documentation on it.
console.log("Dare Angel content script started");

browser.runtime.onMessage.addListener(function (request, sender, sendResponse) {
    if (request.command == "requestImages") {
        // Collect every image in the page that is bigger than 64 × 64 pixels.
        var images = document.getElementsByTagName('img');
        var imagesList = [];
        for (var i = 0; i < images.length; i++) {
            if ((images[i].src) && (images[i].width > 64 && images[i].height > 64)) {
                imagesList.push({ url: images[i].src, alt: images[i].alt });
            }
        }
        // Send the list back to the UI page as a JSON string.
        sendResponse(JSON.stringify(imagesList));
    }
});
This first logs a message to the console so that you can check that the extension has loaded properly. Check it via your browser’s developer tools, accessible with F12, Control + Shift + I or ⌘ + ⌥ + I.
It then waits for a message from the UI page with a requestImages command to get all of the images available in the current DOM, and then it returns a list of their URLs if they’re bigger than 64 × 64 pixels (to avoid all of the pixel-tracking junk and low-resolution images).
The popup UI page47 we’re using is very simple and will display the list of images returned by the content script inside a flexbox container48. It loads the start.js script, which immediately creates an instance of dareangel.dashboard.js49 to send a message to the content script to get the URLs of the images in the currently visible tab.
Here’s the code that lives in the UI page, requesting the URLs to the content script:
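A minimal sketch of that request, shown here with the callback-style chrome.* flavour of the API for brevity, uses the same requestImages command the content script above listens for (the thumbnail-building part is reduced to a comment):

// Ask the content script in the active tab for its list of images.
chrome.tabs.query({ active: true, currentWindow: true }, function (tabs) {
  chrome.tabs.sendMessage(tabs[0].id, { command: 'requestImages' }, function (response) {
    var imagesList = JSON.parse(response); // the content script replies with a JSON string
    imagesList.forEach(function (image) {
      // ...append a clickable thumbnail for image.url to the flexbox container...
      console.log(image.url, image.alt);
    });
  });
});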
The API reference shows you, via an interactive console in a web page, how to call the REST API with the proper JSON properties, as well as the JSON object you’ll get in return. It’s useful for understanding how the API works and how you will call it.
In our case, we’re using the describe feature of the API. You’ll also notice in the callback that we will try to use either the Web Speech API or the Bing Text-to-Speech service, based on your options.
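As a rough sketch of how those two steps chain together, assuming the v1.0 describe endpoint in the West US region, a placeholder key and the Web Speech API for the read-out:

// Send an image URL to the Computer Vision "describe" feature and speak the caption.
var visionEndpoint = 'https://westus.api.cognitive.microsoft.com/vision/v1.0/describe';

function describeAndSpeak(imageUrl) {
  fetch(visionEndpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Ocp-Apim-Subscription-Key': 'YOUR_COMPUTER_VISION_KEY' // the TODO value mentioned above
    },
    body: JSON.stringify({ url: imageUrl })
  })
    .then(function (response) { return response.json(); })
    .then(function (result) {
      var captions = result.description.captions;
      var text = captions.length ? captions[0].text : 'No description available.';
      speechSynthesis.speak(new SpeechSynthesisUtterance(text));
    });
}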
Here, then, is the global workflow of this little extension:
Download or clone my small extension54 from GitHub somewhere to your hard drive.
Also, modify dareangel.dashboard.js to add at least a Computer Vision API key. Otherwise, the extension will only be able to display the images extracted from the web page.
Click on “…” in Edge’s navigation bar, then “Extensions,” and then “Load extension,” and select the folder where you’ve cloned my GitHub repository. You’ll get this:
Note the “Reload extension” button, which is useful while you’re developing your extension. You won’t be forced to remove or reinstall it during the development process; just click the button to refresh the extension.
Navigate to BabylonJS626158, and click on the Dare Angel (DA) button to follow the same demo as shown in the video.
In Chrome, navigate to chrome://extensions. In Opera, navigate to opera://extensions. And in Vivaldi, navigate to vivaldi://extensions. Then, enable “Developer mode.”
Click on “Load unpacked extension,” and choose the folder where you’ve extracted my extension.
You’ve got two options here. The first is to temporarily load your extension, which is as easy as it is in Edge and Chrome.
Open Firefox, navigate to about:debugging and click “Load Temporary Add-on.” Then, navigate to the folder of the extension, and select the manifest.json file. That’s it! Now go to BabylonJS626158 to test the extension.
The only problem with this solution is that every time you close the browser, you’ll have to reload the extension. The second option would be to use the XPI packaging. You can learn more about this in “Extension Packaging65” on the Mozilla Developer Network.
The public version of Brave doesn’t have a “developer mode” embedded in it to let you load an unsigned extension. You’ll need to build your own version of it by following the steps in “Loading Chrome Extensions in Brave66.”
As explained in that article, once you’ve cloned Brave, you’ll need to open the extensions.js file in a text editor. Locate the lines below, and insert the registration code for your extension. In my case, I’ve just added the two last lines:
// Manually install the braveExtension and torrentExtension
extensionInfo.setState(config.braveExtensionId, extensionStates.REGISTERED)
loadExtension(config.braveExtensionId, getExtensionsPath('brave'), generateBraveManifest(), 'component')
extensionInfo.setState('DareAngel', extensionStates.REGISTERED)
loadExtension('DareAngel', getExtensionsPath('DareAngel/'))
Copy the extension to the app/extensions folder. Open two command prompts in the browser-laptop folder. In the first one, launch npm run watch, and wait for webpack to finish building Brave’s Electron app. It should say, “webpack: bundle is now VALID.” Otherwise, you’ll run into some issues.
Tip for all browsers: Using console.log(), simply log some data from the flow of your extension. Most of the time, using the browser’s developer tools, you’ll be able to click on the JavaScript file that has logged it to open it and debug it.
To debug the client script part, living in the context of the page, you just need to open F12. Then, click on the “Debugger” tab and find your extension’s folder.
Open the script file that you’d like to debug — dareangel.client.js, in my case — and debug your code as usual, setting up breakpoints, etc.
If your extension creates a separate tab to do its job (like the Page Analyzer73, which our Vorlon.js74 team published in the store), simply press F12 on that tab to debug it.
If you’d like to debug the popup page, you’ll first need to get the ID of your extension. To do that, simply go into the property of the extension and you’ll find an ID property:
Then, you’ll need to type in the address bar something like ms-browser-extension://ID_of_your_extension/yourpage.html. In our case, it would be ms-browser-extension://DareAngel_vdbyzyarbfgh8/dashboard.html. Then, simply use F12 on this page:
Because Chrome and Opera rely on the same Blink code base, they share the same debugging process. Even though Brave and Vivaldi are forks of Chromium, they also share the same debugging process most of the time.
To debug the client script part, open the browser’s developer tools on the page that you’d like to debug (pressing F12, Control + Shift + I or ⌘ + ⌥ + I, depending on the browser or platform you’re using).
Then, click on the “Content scripts” tab and find your extension’s folder. Open the script file that you’d like to debug, and debug your code just as you would do with any JavaScript code.
For Chrome and Opera, to debug the popup page, right-click on the button of your extension next to the address bar and choose “Inspect popup,” or open the HTML pane of the popup and right-click inside it to “Inspect.” Vivaldi only supports right-click and then “Inspect” inside the HTML pane once opened.
And then, in a separate tab, open the page you’d like to debug (in my case, chrome-extension://bodaahkboijjjodkbmmddgjldpifcjap/dashboard.html) and open the developer tools.
For the layout, you have a bit of help using Shift + F8, which will let you inspect the complete frame of Brave. And you’ll discover that Brave is an Electron app using React!
Note: I had to slightly modify the CSS of the extension for Brave because it currently displays popups with a transparent background by default, and I also had some issues with the height of my images collection. I’ve limited it to four elements in Brave.
For the client script part, it’s the same as in Edge, Chrome, Opera and Brave. Simply open the developer tools in the tab you’d like to debug, and you’ll find a moz-extension://guid section with your code to debug:
Each vendor has detailed documentation on the process to follow to publish your extension in its store. They all take similar approaches. You need to package the extension in a particular file format — most of the time, a ZIP-like container. Then, you have to submit it in a dedicated portal, choose a pricing model and wait for the review process to complete. If accepted, your extension will be downloadable in the browser itself by any user who visits the extensions store.
Please note that submitting a Microsoft Edge extension to the Windows Store is currently a restricted capability. Reach out to the Microsoft Edge team103 with your request to be a part of the Windows Store, and they’ll consider you for a future update.
Some developers remember the pain of working through various implementations to build their extension — whether it meant using different build directories, or working with slightly different extension APIs, or following totally different approaches, such as Firefox’s XUL extensions or Internet Explorer’s BHOs and ActiveX.
It’s awesome to see that, today, using our regular JavaScript, CSS and HTML skills, we can build great extensions using the very same code base and across all browsers!
On days when things don’t seem to go as you’d like them to and inspiration is at its lowest, it’s good to take a short break and go outside to try and empty your mind. That always seems to be the best remedy for me, especially whenever I jump on my bike and go for a short ride.
Now the time has come to enjoy these moments even more as the spring season finally starts to show up in nature. We’re starting to see green leaves on the trees again, and every morning I wake up to the sounds of the birds chirping. I really enjoy these small joys of spring — who doesn’t? Hopefully this new batch of illustrations will feed your creativity tank with extra vitamins to make sure those inspiration levels are up and running at their best.
A sneak peek of a new print the crew at DKNG is working on. Looks like Austin to me. Love the effect of the letters used as masks. How the few colors are applied is just sublime!
One entry of ten finalists that capture the theme of “through young eyes” in this young photographers’ competition that aims to engage youth around the world in wildlife conservation. Check out the other nine submissions29, too.
Beautiful cover for Fabric’s spring issue. Sam’s work usually has a futuristic element to it, but this one is great too, especially the plants and colors. Those lines and details in each leaf are just fantastically well executed. Perfect light and shadow effects too.
Nice identity for The Digital Arts Expo, an annual showcase of student and faculty projects integrating engineering, computer science, and the visual and performing arts.
Illustrations for Eurostar’s Metropolitan magazine to accompany an article about what to see and do in Brussels. The butcher chasing the cow is such a nice detail.
This image goes along with an article on how Tim Tebow is making a drastic switch from being a football player to a baseball player. Love this vertical stripe collage blend effect. So well done!