Design is one of the most controversial things in our industry. There is hardly a redesign that isn’t discussed heavily in the community. Changing a design that works well is even harder, as people tend to dislike anything new; but if we give them a bit of time, they might start to see things from a different perspective.
Instead of following what everyone else is doing, we shouldn’t hesitate to leave the beaten track when designing — by creating a contrast-rich, deep design without using drop shadows, for example. Whatever you do, be sure to explore new options whenever you can.
Chrome 60 beta is available and brings us the Paint Timing API, CSS’ font-display property, the Web Budget API, and Credential Management API improvements.
Firefox 54 is out — and, thanks to a new multi-process architecture, it comes with huge performance improvements. The update also brings us the new Web Extension APIs as well as CSS clip-path improvements with basic shapes.
A new version of Tor Browser was released, too: Tor Browser 7.0. The new version is based on Firefox 52 ESR and, thus, has multi-process mode and sandboxing enabled on Linux and macOS (Windows support coming soon).
Patrick Wardle found evidence of macOS ransomware spreading and wrote up the key facts about it. This shows that as the Mac platform grows, attackers are starting to target its users as well. Better be safe and have a valid, usable backup.
Industries often experience evolution less as slow and steady progress than as revolutionary shifts in modality that change best practices and methodologies seemingly overnight. This is most definitely true for front-end web development. Our industry thrives on constant, aggressive development, and new technologies emerge on a regular basis that change the way we do things in fundamental ways.
Today, we are in the early stages of such a revolutionary shift, brought about by CSS Grid Layout. Much of what we know of the possibilities, limitations and best practices surrounding web layouts is effectively rendered obsolete by this new layout module, and in their place we find new possibilities, limitations and best practices that will take us into the next era of web design.
Much has already been said about the technical aspects of CSS grid and its one-dimensional cousin flexbox by people smarter than me. In this article, I want to bring these concepts into practical use. What you’ll get is a starting point for exploring what new layout opportunities and challenges CSS grid brings, what old problems it solves and how to start using the module in production websites today.
My examples and focus will be on WordPress themes, but the practical approaches and code examples are agnostic and can be used in any web project, regardless of CMS, technology stack or tool.
Think of how you would create the layout below using CSS: a responsive two-column layout with a full-bleed header and footer, the main area center-aligned with a maximum width of 60 ems. The content takes up two thirds of the available space, the sidebar one third, and the two sections below half the space each.
To solve this layout challenge using traditional CSS layout methods, you would group the content, sidebar and two equal-sized sections together in a container with max-width set to 60em; center that container by setting margin-right and margin-left to auto; and place the different sections next to one another by setting the main container to display: flex and giving each element a flex value (or, in the days before flexbox, by floating them left and right).
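A minimal sketch of that traditional approach (the class names and fraction values are illustrative, not taken from any particular code base):

/* Header and footer sit outside the wrapper, so they remain full bleed. */
.wrapper {
  max-width: 60em;
  margin-right: auto;
  margin-left: auto;   /* center the constrained container */
  display: flex;
  flex-wrap: wrap;     /* let the two lower sections wrap onto a new row */
}
.content { flex: 0 0 66.666%; }  /* two thirds of the available space */
.sidebar { flex: 0 0 33.333%; }  /* one third */
.half    { flex: 0 0 50%; }      /* each of the two lower sections */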
Now, put away your web design hat for a moment and think of this as a layout you are asked to create in your favorite design application. What’s the first thing you do? You create a grid: four rows and eight columns.
In a print environment, you don’t use margins to center-align content or apply flex sections to match the height of sections — you just place them where they belong on the grid: The header, content and sidebar area, and footer each take up one row, the header and footer take up all eight columns, while the content occupies columns 2 to 5, and the sidebar columns 6 and 7.
CSS grid takes the best parts of print-based grid layouts and enhances them with the fluidity of the browser’s viewport: The six middle columns would have a maximum width, thanks to CSS’ minmax() function, and the first and last column would be set using the fr unit, to share a specified fraction of the remaining available space. What we’re left with is a center-aligned responsive grid layout, in the browser. And because this is all powered by CSS, we can use media queries to change the layout for different viewport widths.
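Sketched out in CSS, such a grid could look roughly like this (the class names are illustrative):

.site {
  display: grid;
  /* 1 flexible gutter, 6 content columns capped at 10em each (60em in total), 1 flexible gutter */
  grid-template-columns: 1fr repeat(6, minmax(auto, 10em)) 1fr;
}
.site-header,
.site-footer { grid-column: 1 / -1; }  /* full bleed across all eight columns */
.content     { grid-column: 2 / 6; }   /* columns 2 to 5 */
.sidebar     { grid-column: 6 / 8; }   /* columns 6 and 7 */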
Like flexbox, grid honors the text direction of the document, so if the dir attribute is set to rtl, the layout automatically mirrors, placing the sidebar on the left.
As this example shows, to start working with CSS grid, you first need to set aside the habits, assumptions and practices that have enabled you to create advanced CSS-based layouts. In many cases, these established approaches are hacks developed to work around the limitations of CSS as a layout tool. CSS grid is, in my opinion, the first true layout module in CSS. It gives you two dimensions to work with and the ability to place any element in any cell or combination of cells. That requires a whole new mindset: Rather than asking, “How do I make this vertical stack of content behave as if it were laid out in a grid,” you simply define the grid and place each piece of content within it.
Let’s look at one more example to see just how different this is from how we used to do things.
Consider this single-column layout, containing both width-constrained, center-aligned content and full-bleed images and backgrounds:
The grid layout (right) has 8 rows and 4 columns. Using CSS grid, you should be able to position elements on that grid in any way you like. However, a grid container (an element set to display: grid) treats only its direct descendants as grid items. This means you can’t create a single layout grid here. Instead, you have to create grids at the component level, effectively creating multiple grids with the same properties.
Side note: The grid specification contained a proposal for subgrids, which would allow a descendant element to inherit the parent grid and apply it to its children. The subgrid feature has been moved to level 2 of the specification, meaning it will not be available in browsers for the foreseeable future. Rachel Andrew is following this development closely and has published extensive information about where subgrid currently stands and what comes next for the feature. Until then, the most practical workaround is to nest grids within grids, effectively defining grid descendants as their own grids.
To build the layout above in the browser, you would use CSS grid in concert with other tools and practices, as we’ll see.
If you don’t do anything, each section will be full width out of the box. For each section with center-aligned content sections, you apply the .grid class and create CSS rules that set up a four-column grid and position the content in the two center columns. The layout of three buckets in the third section is achieved using flexbox. The last section, with two full halves, comprises two individual grid items, taking up two columns each. This is achieved using grid-column: span 2; (meaning the grid items span two columns) and allowing grid auto-placement to handle the rest.
/* Four-column layout. Two center columns have a total max-width of 50em: */
.grid {
  display: grid;
  grid-template-columns: 1fr repeat(2, minmax(auto, 25em)) 1fr;
}

/* Center items by placing them in the two center columns: */
.splash-content,
.more-content,
.buckets ul {
  grid-column: 2/4;
}

/* Use automatic grid placement + span to let each item span two columns: */
.twin,
.colophon aside {
  grid-column: span 2;
}
This example shows both how CSS grid can be used to solve common layout challenges like full-bleed sections, and how there are still limitations to what we can do with the module. As explained, ideally, you’d want to draw a single grid and use subgrid to handle all of the sections and subsections. However, for now, only first-level descendants of a container whose display property is set to grid can be positioned on the grid, and subgrid has not been implemented in browsers, so we are forced to come up with workarounds like the ones you’ve seen here. The good news is that this is still better than older methods, and it produces clean and readable CSS.
Now that you have an idea of why CSS grid is so important and how to adopt a CSS grid mindset, let’s look at how to use this module in production.
More than a layout module, CSS grid is an invitation to reaffirm our original intent with web design and development: to create accessible, extensible solutions that bring content to those interested in the best way possible.
At the core of any front-end web project is a simple principle: First make it accessible, then make it fancy, and make sure the fancy doesn’t break accessibility.
We also need to consider responsiveness, cross-browser and cross-platform compatibility, and progressive enhancement. Putting all this together, we end up with an outline for a practical process of development:
Create accessible HTML with a sound hierarchical and semantic structure.
Create a responsive single-column layout for all screen widths.
Apply advanced layouts and functionality (grid, flex, JavaScript, etc.).
This should result in an accessible solution that works in all browsers on all platforms, while taking advantage of the latest technologies.
Before we continue, let’s quickly address the giant neon-orange elephant in the room: CSS grid is bleeding-edge technology, and while it has support in modern browsers, it has zero backwards compatibility. This could be considered an automatic non-starter, but I would argue it’s an opportunity to reframe the way we think about backwards compatibility:
CSS grid is a layout module; it allows us to change the layout of a document without interfering with its source order. In other words, CSS grid is a purely visual tool, and used correctly, it should have no effect on the communication of the contents of the document. From this follows a simple but surprising truth: The lack of support for CSS grid in an old browser should not degrade the experience of the visitor; it should simply make the experience different.
If you agree with this (and you should), there is no reason you can’t use CSS grid today!
Here’s how that approach could work in practice. Rather than using fallbacks and shims to ensure a design and layout look the same across all browsers, we’d provide the mobile vertical single-column layout to all browsers and then serve up advanced functionality to those browsers and viewport widths that can take advantage of them. It might sound like progressive enhancement, but it’s actually more of an accessibility-centric approach enabled by a mental shift.
Before applying any CSS, we start with a properly marked-up HTML document, laid out in a single column. The document is navigable with any input device and has a hierarchical structure that ensures that reading it from top to bottom conveys the message in the best possible way. This is what makes it accessible. Based on what we know about the web-using public, we can assume that a significant portion of the visiting audience will come to the website using a smartphone. Using mobile-first design, we use CSS to style the single column of the document to fit the smallest mobile screen. This gives us the baseline of the design.
Now comes the mental shift: If the message of the document comes across in a single column on a narrow screen, then that same layout will work on a wide screen as well! All we have to do is make it responsive to fit the given viewport. This gives us an accessible, cross-browser, backwards-compatible baseline on which we can build advanced layouts and functionality using modern web technologies like CSS grid and advanced JavaScript.
In short, we would use CSS grid for visual and layout enhancements, not for meaningful changes to the content. The former leads to better design, the latter to broken accessibility.
CSS grid simplifies age-old approaches and opens the door to dynamic new web layouts for all sorts of applications. To see how this works in the wild, let’s use CSS grid in a distributed WordPress theme via the process outlined earlier. Here, I’ll use the _s (or Underscores) starter theme as my starting point. And as I stated in the introduction, these approaches are agnostic and can be used with any starter theme or theme framework or even outside of WordPress.
If you want to get a firm handle on how CSS grid works in the browser, I recommend using Firefox and its Grid Inspector tool. That way, you’ll have a visual reference of where your grids are and what items they are acting on.
To start off the project, I’ve created structural layout draft sketches for the main views of the theme. These will serve as guides as the design and layout process takes place in the browser. You’ll notice these sketches do not contain sizes or other specifications; they only outline the general layout and how the elements relate to the grid.
For narrow screens, such as mobile devices and for browsers lacking support for CSS grid, the layout would be a single center-aligned column with the content following the document’s hierarchy.
For modern browsers, there are two main layouts:
Index and archive views show published posts in a grid pattern, where featured posts take up twice as much space as other posts.
A single post and page view with grids that change the layout and displayed order of content depending on the viewport’s width.
1. Create Accessible HTML With A Sound Hierarchical And Semantic Structure
One of the major advantages of using a content management system such as WordPress is that, once you’ve created accessible, semantically sound and valid templates, you don’t have to worry about the structural components of the generated HTML documents themselves. Using _s as a starting point, we already have an accessible platform to build from, with only a bare minimum of interactive elements, like a small JavaScript file for adding keyboard navigation to the main menu.
Due to the historical limitations of CSS, most frameworks like _s heavily nest containers to group elements together in an effort to make it easier for developers to do things like position the sidebar next to the main content. The simplified schematic below shows how the .site-content div wraps .content-area and .widget-area to allow them to float left and right:
A grid container treats its first-level children as individual grid items, and CSS grid eliminates the need for floated and cleared containers. That means we can reduce the level of nesting and simplify the overall markup of the templates and the final output.
Out of the box, making .page a grid container would make .site-header, .site-content and .site-footer grid items, but .site-main and .widget-area are nested inside these containers and are, therefore, unavailable for placement on the grid.
Drawing out the layout on a piece of paper can help us to identify what level of container nesting is necessary and where to change the markup’s structure to allow for grid layouts. While doing this, it’s important to remember you can nest grids within grids and that this often allows for better layout control.
As explained earlier, _s defines containers whose sole purpose is to allow for wrapping and for layouts to use old CSS methods. You can usually spot these containers because they are generic divs, thus the name “divitis.”
Now that we’re using CSS grid for layout, we can remove some of these containers by flattening the HTML and bringing it back to a cleaner semantic hierarchy:
In practical terms, this means removing <div> from header.php (where it opens) and footer.php (where it ends) and removing <div> from archive.php, index.php, page.php, search.php and single.php. This leaves us with a logical content hierarchy that we can position in our grid:
2. Create A Responsive Single-Column Layout For All Screen Widths
Before applying new styles, it is necessary to clean up the existing CSS that ships with the theme out of the box. The _s theme assumes we will float items to create side-by-side layouts, and it takes a brute-force approach to resolving container-clearing issues by applying a clearfix at the root of most elements. This approach introduces invisible pseudo-elements into the DOM, and these interfere with both flexbox and grid: if the rules are left in place, flexbox and grid will treat the pseudo-elements as flex and grid items, and the layout will include what appear to be empty cells. The quickest way to resolve this is to remove these CSS rules altogether by deleting the entire clearing section, or to wrap them in feature or media queries so that they are applied only when flexbox and grid are not in use. If you need to clearfix specific elements within the layout, there should be a dedicated .clearfix class and CSS rule for this, to be applied when required. Clearfix is a hack to get around issues with floats and clears; with flex and grid available, it is now graceful degradation and should be treated as such.
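If you go the dedicated-class route, a minimal, opt-in clearfix rule could look like this, applied only to elements that actually contain floats:

.clearfix::after {
  content: "";
  display: table;
  clear: both;   /* contain floated children without affecting grid or flex containers */
}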
To provide a great user experience for all visitors, apply style rules to all elements in the document, starting with the smallest agreed-upon viewport width, usually 320 pixels.
Increase the viewport width incrementally, and add media queries to keep the visual experience pleasing, while taking advantage of the available space. The result should be a responsive single-column layout across all possible screen widths.
Animation showing the center-aligned, single-column, mobile-first layout displayed across all viewport widths.
At this stage, refrain from making layout changes such as horizontal positioning: You are still working on the fallback for browsers without grid support, and we will be handling all layouts beyond the single column using grid in the next step.
From here, we’ll start creating advanced layouts using CSS grid and other modern tools. You may feel compelled to try to make your grid layouts backwards compatible using a tool like Autoprefixer, but you’ll find this will have limited success. Instead, the best way forward is progressive enhancement: Give all browsers a well-functioning experience, and allow browsers with grid support to use grid for layouts.
How you approach this will depend on your overall fallback strategy: If your design calls for fallbacks to kick in where grid is not supported, you might be able to resolve the fallback by simply declaring another display property higher up in the cascade. A grid declaration overrides other display declarations, while browsers without grid support ignore it and use those earlier declarations instead. Rachel Andrew has a detailed description with examples of this approach. You can also omit entire CSS rules from browsers without grid support using feature queries. In the current example, fallback is handled by serving up the mobile layout to all browsers. This means we can use feature queries to apply what I call “scorched-earth progressive enhancement”: Give all browsers the basic experience we’ve built above, and serve grid layouts only to browsers that can handle them.
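As a sketch of the first strategy, resolving the fallback in the cascade itself can be as simple as two declarations in one rule: browsers without grid support drop the declaration they don’t understand and keep the earlier one.

.site-content {
  display: flex;   /* used by browsers without grid support */
  display: grid;   /* overrides the line above wherever grid is supported */
}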
We do this using feature queries in the form of @supports rules. As with media queries, CSS nested inside an @supports rule will be applied only if the browser supports the specified property or feature. This allows us to progressively enhance the CSS based on browser support. It’s also worth noting that feature queries test for declared feature support in a browser; they do not guarantee that the browser actually supports a feature or honors the specification.
In its most basic form, a feature query for grid support looks like this: @supports (display: grid) {}.
The challenge is Microsoft Edge, the only modern browser that, as of this writing, still uses the old grid specification. It returns true for @supports (display: grid) {}, even though its grid support is spotty and non-standard. Most relevant for our example is the lack of support for the grid-template-areas and grid-area properties and for the minmax() function. As a result, a feature query targeting display: grid would not exclude Microsoft Edge, and users of this browser would be served a broken layout. To resolve this issue and serve up grid layouts to Microsoft Edge only when the browser gets full support for the specification, target one of the unsupported features instead:
@supports (grid-area: auto) {}
This returns false in current versions of Microsoft Edge, and the browser would not try to render CSS it does not understand.
From here, we continue the mobile-first approach for all rules:
Start with the smallest viewport.
Increase the width until things look strange.
Add a breakpoint.
Go back to step 2.
In most cases, you would not change the single-column layout until the viewport’s width is fairly wide, so the first grid rule would usually appear in a media query. You’ll also find that most layouts require several grids — typically, one for the structural site-wide layout, and others for individual content models like archives, single posts and so on.
In our example, the structural layout calls for a simple two-column grid on medium-width screens and a three-column grid on wide screens, to allow space on the right for the sidebar.
Playing around with the width of the viewport, you’ll find an ideal breakpoint for the medium-width grid. Inside this media query, declare the .site element as the grid container:
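A sketch of what that could look like; the breakpoint and track sizes are illustrative, and the rule sits inside the @supports (grid-area: auto) feature query described above:

@media screen and (min-width: 48em) {
  .site {
    display: grid;
    grid-template-columns: 1fr 3fr;   /* narrow header column, wider content column */
  }
}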
To make it easier to understand what is going on within the grid, we’ll use grid-template-areas to name different sections of the grid. The grid-template-areas property allows us to draw a map of rectangular, named areas within the grid, each spanning one or more cells. Once a grid template area has been defined, a grid item can be assigned to that area using the grid-area property and the area name. In the example below, the grid-template-areas property is used to “draw” a two-column layout by defining the entire first column as the header area and dividing the second column between main, sidebar and footer:
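Added to the same .site rule, the template map might look roughly like this (the area names match the placements in the next step):

.site {
  grid-template-areas:
    "header main"
    "header sidebar"
    "header footer";   /* first column is all header; second column splits into main, sidebar and footer */
}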
Now, we can position the header, main content, sidebar and footer in the named template areas. This makes it easy for anyone reading the CSS to understand what is happening:
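A sketch of those placements, using the _s container class names:

.site-header { grid-area: header; }
.site-main   { grid-area: main; }
.widget-area { grid-area: sidebar; }
.site-footer { grid-area: footer; }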
Extra-wide viewports get their own media query. Here, the sidebar takes up a new column on the right side of the grid. This is accomplished by changing the grid template to place the sidebar area in a new column. Because the sidebar is already positioned in the sidebar template area, we only need to change the .site rule:
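A sketch of that change, with an illustrative breakpoint:

@media screen and (min-width: 64em) {
  .site {
    grid-template-columns: 1fr 3fr 1fr;
    grid-template-areas:
      "header main sidebar"
      "header footer footer";
  }
}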
Using @supports (grid-area: auto) { }, you can create additional CSS rules elsewhere in your style sheet that kick in only when grid layouts are supported by the browser.
In WordPress, archive pages include the blog index; the category, tag, author and date archives; and the search results view. In _s, they are displayed using the index.php (main blog index), archive.php (category, tag, author, date and other archives) and search.php (search results) template files. Other archive templates can be added using the template hierarchy.
To minimize the need for dedicated CSS rules for each possible individual archive, we can use a filter in the theme to apply a custom class to the body element of any archive view. The best place for this filter is inc/extras.php, alongside other similar filters. Rather than using conditional statements to target every possible archive type, we can use a bit of reverse psychology to say, “If the current view is not a singular element — so, not a single post or page — then apply the .archive-view class to the body element.” There is already a condition for this in _s, so all we’ll do is add the additional class to it:
function mytheme_body_classes( $classes ) {
	// Adds classes hfeed and archive-view to non-singular pages.
	if ( ! is_singular() ) {
		$classes[] = 'hfeed archive-view';
	}
	return $classes;
}
add_filter( 'body_class', 'mytheme_body_classes' );
Now, we can use the .archive-view class to restrict CSS rules to just archive views.
In archive views, the template should display each post (green) in a separate grid cell (blue grid). If we provide the browser with a uniform grid — with two, three, four or five columns, depending on the viewport’s width — and let it figure out how many rows are required, it will automatically place each post in a cell and generate the implicit row lines required. Because each column takes up an equal fraction, we can use the fr unit to define the columns’ width. Take note that the fr unit distributes the available space into fractions. It does not constrain the minimum width of the content. This means that if very long words or other wide content appear inside an element sized with the fr unit, then the element will retain the minimum size of the content even if that means it will be wider than the fraction specified. To avoid this, make sure words break or wrap and that content such as images do not have fixed minimum sizes.
The posts are individual article elements nested in a section with the class .site-main. That gives us the following SCSS:
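A sketch of what that SCSS could look like; the breakpoints and column counts are illustrative, and the rules sit inside the @supports feature query described earlier:

@supports (grid-area: auto) {
  .archive-view .site-main {
    display: grid;
    grid-template-columns: repeat(2, 1fr);    // two equal columns by default

    @media screen and (min-width: 50em) {
      grid-template-columns: repeat(3, 1fr);  // three columns
    }

    @media screen and (min-width: 70em) {
      grid-template-columns: repeat(4, 1fr);  // four columns
    }

    @media screen and (min-width: 90em) {
      grid-template-columns: repeat(5, 1fr);  // five columns on very wide screens
    }
  }
}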
If you feel compelled to add a thin 1-pixel line between each of the grid cells, you’ll quickly discover you can’t use border or outline because the lines will overlap. To get around this problem, you can set the grid-gap property to 1px, adding a 1px gap between each cell horizontally and vertically, and then set the background-color of the grid container to the line’s color and the background-color of each cell to white:
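A sketch of that technique (the colors are placeholders):

.archive-view .site-main {
  grid-gap: 1px;              /* hairline gaps between cells, horizontally and vertically */
  background-color: #b9b9b9;  /* shows through the gaps as the "line" */
}

.archive-view .site-main article {
  background-color: #fff;     /* each cell covers the container background */
}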
To make the grid a bit more dynamic, the layout calls for featured items to take up two columns. To make this happen, we’ll apply a category named “Featured” to each featured post. WordPress automatically applies category classes to each post’s article element using the formula category-[name], meaning we can target these posts using .archive-view .category-featured as our selector. We are not explicitly defining where on the grid any item will appear, so we’ll use the span keyword to specify how many rows or columns the item should span and let the browser figure out where that space is available. Because we are making the item take up two columns, we’ll do this only for layouts that have two or more columns:
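Inside the same @supports block, the rule itself can stay small (a sketch):

.archive-view .category-featured {
  grid-column: span 2;   /* auto-placement finds room for the double-width item */
}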
For single posts and pages, we’ll use two CSS grids: one for the overall layout structure of the post, and one to change the visual layout and order of blocks of content when appropriate.
First, we’ll define a grid for the .site-main container, which will hold the post’s article, the post’s navigation and the comments. This is primarily to allow grid to center-align the content within the container in wide viewports:
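A sketch of one way to do that; the selectors, breakpoint and the 40em cap are illustrative:

@supports (grid-area: auto) {
  @media screen and (min-width: 48em) {
    .single .site-main,
    .page .site-main {
      display: grid;
      grid-template-columns: 1fr minmax(auto, 40em) 1fr;  /* flexible gutters around a capped content column */
    }

    .single .site-main > *,
    .page .site-main > * {
      grid-column: 2;  /* article, post navigation and comments all sit in the center column */
    }
  }
}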
Now we can use CSS grid to create a dynamic layout that changes the visual order and presentation of the header and meta content depending on the viewport’s width:
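A sketch of such a post grid using named template areas; the class names assume the flattened _s markup described further below (with the meta section moved out of the entry header), and the breakpoint is illustrative:

@media screen and (min-width: 48em) {
  .post {
    display: grid;
    grid-template-columns: 1fr 3fr;
    grid-template-areas:
      "header  header"
      "featimg featimg"
      "meta    meta"
      "content content";   /* the header now appears above the featured image */
  }

  .post .entry-header   { grid-area: header; }
  .post .post-thumbnail { grid-area: featimg; }
  .post .entry-meta     { grid-area: meta; }
  .post .entry-content  { grid-area: content; }
}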
This is done by changing the visual order of elements without changing the HTML markup or interfering with the content’s hierarchy. Exercise caution here, because it can be tempting to disconnect the visual order from the document’s hierarchy and content order, causing unexpected behavior. For this reason, keep the content hierarchy intact both visually and structurally, and only use the repositioning abilities of CSS grid to change the layout as is done here.
Rule of thumb: Never change the meaning of content by repositioning it.
Out of the box, a post with a featured image has the following hierarchy:
featured image
post header: categories + title + meta data (i.e. author, date, number of comments)
content
Based on the layout in the figure above, in wider viewports we want a visual structure that looks more like this:
post header: categories + title
featured image
meta data (author, date, number of comments)
content
To achieve this, we need to break apart some more of the nesting that comes with _s. Like before, we can get more control over elements by flattening the HTML’s structure. In this case, we just want to move the meta section out of the header to make it available as its own grid item. This is done in header.php:
In our case, posts with no featured image don’t need separate rules: the featimg template area will simply collapse because it is empty. However, if grid-gap is applied to rows, we’ll need to create a separate .post rule with grid-template-areas that do not include featimg.
On wider screens, the post’s meta area should shift down, below the featured image, to appear to the left of the main content. With our flattened markup and the grid in place, this is solved by making a small change to the grid-template-areas property in a separate media query:
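Only the template map needs to change; a sketch:

@media screen and (min-width: 64em) {
  .post {
    grid-template-areas:
      "header  header"
      "featimg featimg"
      "meta    content";   /* meta sits below the featured image, to the left of the content */
  }
}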
By now, you’re probably wondering what this would look like if it were implemented on a real website. Wonder no more. As part of writing this article, I have built a free WordPress theme named Kuhn, incorporating the layouts explained here and more. It is live on my own website, and you can explore the code, use the theme on your own WordPress website and contribute to make it better on GitHub.
With this article, I set out to demonstrate both how CSS grid allows you to create dynamic, responsive, modern layouts using pure CSS and how clean and easily managed this code is compared to traditional layout methods. On a more zoomed-out level, I also wanted to give you a new framework for thinking about web layouts. Using CSS grid, much of what we know about placing content within the viewport has changed, and even for simple layouts like what I’ve demonstrated here, the approach is both fundamentally different and much more intuitive. Gone are the CSS alchemy and the counterintuitive tricks and hacks, replaced by structured grids and declarative statements.
Now it’s time for you to take CSS grid for a spin in your own projects. You can take my hardline approach and pave the cowpaths with this new module, or you can combine it with backwards-compatible code to serve up the same or similar experiences to all visitors. It’s entirely up to you and your development philosophy.
Even if you are working on projects that require backwards compatibility and think CSS grid is too bleeding-edge, start incorporating it in your projects anyway. Using @supports and the approach I’ve outlined in this article enables you to progressively enhance existing websites without causing problems in legacy browsers. CSS grid requires a new mindset, and we all need to start that transition today. So, go forth and gridify the web!
Accomplished musicians often talk about how, at certain moments in their careers, they had to unlearn old habits in order to progress. This process often causes them to regress in performance while they adjust to an ultimately better method. Once the new approach is integrated, they are able to reach new heights that would not have been possible with their previous techniques.
Like musicians, all professionals should frequently question their methodologies and see what other options exist. If one approach was previously the best, that does not mean it remains the best. Then again, many established techniques have been the best for decades and might never be surpassed. The important thing is that one is willing to consider alternative approaches and is not too heavily biased towards the one they are most familiar with. This analysis is often more difficult in software development because new frameworks and technologies emerge almost as quickly as they die off.
This article will apply this analysis to hybrid mobile apps and present why I sincerely believe that React Native is in many ways a superior solution for apps developed in 2017, even if it introduces some temporary pains while you’re getting acclimated. To do this, we will revisit why hybrid apps were created initially and explore how we got to this point. Then, within this context, we’ll discuss how React Native stacks up and explain why it is the better approach in most cases.
It’s 2010. Your company has a pretty awesome web application that uses jQuery (or, if you’re hip, some sort of AngularJS and React precursor like Mustache). You have a team of developers competent in HTML, CSS and JavaScript. All of a sudden, mobile apps are taking over. Everyone has one. Mobile apps are the new Tickle Me Elmo! You frantically research how to make your own mobile app and immediately run into a host of issues. Your team is ill-equipped for the task. You don’t have Java or Objective-C developers. You can’t afford to develop, test and deploy two separate apps!
Not to worry. The hybrid mobile app is your silver bullet. This shiny new technology allows you to quickly and (in theory) easily reuse what you have (code and developers) for your lustrous new mobile app. So, you pick a framework (Cordova, PhoneGap, etc.) and get to work building or porting your first mobile app!
For many companies and developers, their problems were solved. They could now make their very own mobile apps.
Ever since 2010, developer forums, blogs and message boards have been full of arguments about the efficacy of hybrid apps. Despite the great promise and flexibility described in the previous paragraphs, hybrid apps have had and continue to face a very real series of challenges and shortcomings. Here are a few of the most notable problems:
Over the past couple of years, the bar for UX in mobile apps has risen dramatically. Most smartphone owners spend the majority of their time using only a handful of premier apps. They, perhaps unfairly, expect any new app they try to be as polished as Facebook, MLB TV, YouTube and Uber.
With this very high bar, it is quite difficult for hybrid apps to measure up. Issues such as sluggish or limited animations, keyboard misbehavior and frequent lack of platform-specific gesture recognition all add up to a clunkier experience, which makes hybrid apps second-class citizens. Compounding this issue is hybrid apps’ reliance on the open-source community to write wrappers for native functionality. Here is a screenshot from an app that highlights all of these issues. This app was selected from Ionic’s showcase and was created by Morgan Stanley.
A few things should be immediately apparent. This app has a very low rating (2.5 stars). It does not look like a mobile app and is clearly a port of a mobile web app. Clear giveaways are the non-native segmented control, font size, text density and non-native tab bar. The app does not support features that are more easily implemented when building natively. Most importantly, customers are noticing all of these issues and are summarizing their feelings as “feels outdated.”
The majority of users are very quick to uninstall or forget new apps. It is crucial that your app makes a great first impression and is easily understood by users. A large part of this is about looking sharp and being familiar. Hybrid apps can look great, but they do tend to be more platform-agnostic in their UI (if they look like a web app) or foreign (if they look like an iOS app on Android or vice versa).
Before even installing an app, many would-be customers will review images in the app store. If those screenshots are unappealing or off-putting, the app might not be downloaded at all. Here is an example app found on the Ionic showcase. This app was created by Nationwide, and, as you can tell, both the iOS and Android versions look just like a mobile-friendly website rather than a mobile app.
Screenshots of the Nationwide app on iOS and Android.
It is clear from the app store reviews (3 stars on both platforms) that this app has several issues, but it is unlikely that any app with this UI would attract new customers. It is clearly only used by existing customers who think they might as well try it out.
The most common complaints about hybrid apps cite poor performance, bugs and crashes. Of course, any app can have these issues, but performance issues have long plagued hybrid apps. Additionally, hybrid apps often have less offline support, can take longer to open and perform worse in poor network conditions. Every developer has heard one of the above called a “bug” and has had their app publicly penalized as a result.
All of these issues have added up to the vast majority of premier apps being written natively. A quick look at both PhoneGap’s and Ionic’s showcases demonstrates a noticeable shortage of premier apps. One of the most highly touted hybrid apps is Untappd, which, despite being a pretty great platform, has fewer than 5 million downloads. This might seem like a large number, but it puts it quite far down the list of most used apps.
Additionally, there is a long list of apps that have migrated from hybrid to native. That list includes Facebook, TripAdvisor, Uber, Instagram and many others.
It would be quite challenging to find a list of high-end apps that have moved from native to hybrid.
The point of this section is not to be overly critical of hybrid apps, but to show that there is room for an alternative approach. Hybrid apps have been a very important technology and have been used successfully in many cases. Returning to the Ionic showcase, there are several apps that look better than the ones above. Baskin Robbins, Pacifica and Sworkit are three recent examples.
For the past four years, hybrid app developers and frameworks have been working hard to improve their apps, and they have done an admirable job. However, underlying issues and foundational shortcomings remain, and ultimately better options can be found if you’re building a new app in 2017.
Although it is clear that hybrid apps do not quite stack up against native apps, their advantages and success can’t be ignored. They help solve very real resource, time and capabilities problems. If there was another approach that solved these same problems, while also eliminating the shortcomings of hybrid apps, that would be extremely appealing. React Native might just be the answer.
React Native is a cross-platform mobile application development framework that builds on the popular React web development framework. Like React, React Native is an open-source project maintained largely by developers at Facebook and Instagram.
This framework is used to create Android and iOS applications with a shared JavaScript code base. When creating React Native apps, all of your business logic, API calls and state management live in JavaScript. The UI elements and their styling are genericized in your code but are rendered as the native views. This allows you to get a high degree of code reuse and still have a UI that follows each platform’s style guide and best practices.
React Native also allows you to write platform-specific code, logic and styling as needed. This could be as simple as having platform-specific React components or as advanced as using a platform-specific C library in your React Native app.
Like hybrid app frameworks, React Native enables true cross-platform development. Instagram has shared that it is seeing between 85 and 99% code reuse for its React Native projects. Additionally, React Native is built using technologies (JavaScript and React) that many web developers will be familiar with. In the event that a developer is not familiar with React, it is dramatically easier to learn if they are familiar with AngularJS, jQuery or vanilla JavaScript than it would be to learn Objective-C or Java.
Additionally, debugging React Native apps should also be a familiar process for web developers, because it is exceptionally easy to use Chrome’s debugging tools to monitor the code’s behavior. Chrome tools can be used when viewing apps in an emulator or on actual devices. As an added bonus, developers can also use the native debuggers as needed.
Unlike hybrid apps, React Native apps run natively, instead of within a web view. This means they are not restricted to web-based UI elements, which can be sluggish when paired with a poor JavaScript interpreter. Because React Native renders native UI elements, apps immediately feel more at home on the platform and make the user more comfortable on first use. Additionally, developer quality of life can be improved with React Native through more complete use of native tooling and profiling utilities.
Below are two screenshots of a recently released React Native app. These images highlight the platform-specific interface that can be achieved using this framework. As you can see, each app uses its native map and has callouts that follow each platform’s design guidelines. On Android, the callout is a card that rises from the bottom of the map. On iOS, the callout connects to the selected element on the map. The same actions can be performed in both apps, and most of the code is shared, but that extra bit of platform-specific polish really helps with overall usability.
Screenshots of the Vett Local app on iOS and Android.
Below is a sample React Native component. It demonstrates some common elements that make up React Native apps and highlights the areas that web developers should already be familiar with. Following the code snippet is a description of what each section is doing.
Much of the code above should be familiar to most web developers. The vast majority of the code is just JavaScript. Much of the rendering logic will be new, but the migration from HTML to the React Native views is pretty straightforward. Additionally, the style attributes are quite similar to CSS. Let’s walk through some of this code:
state
State is an object that contains many of the values that our component, MovieList, needs to function. When state properties are changed (using this.setState()), the entire component is re-rendered to reflect those changes.
componentWillMount
componentWillMount is a lifecycle function that is called before the component is rendered. Initial network requests often belong in this function.
_fetchMovies
This function makes a network request that returns an array of movie objects. After it successfully completes, it updates state with the list and sets loading to false. Note that it also sets the initial filteredMovies to the returned list.
_applyFilter
This function is called by our imported SearchBar component. For simplicity’s sake, assume that this function is called (likely with some debounce) whenever the value typed into the SearchBar component is changed. This function just contains some JavaScript that filters the filteredMovies list to the relevant titles.
_renderTitleRow
This function outputs the view for a single movie. It contains some logic to make sure our output is uniform and renders a basic text component.
render()
This function outputs the view for the component. It conditionally renders the list of movies or a loading animation, depending on the loading value stored in the state object.
When deciding how to build your own application, it is important to learn from industry leaders. Other companies and developers might have wasted years and millions of dollars building applications, and in minutes you can learn from their mistakes and experiences. Here is a quick list of some large companies that are using React Native in their apps: Facebook, Instagram, Airbnb, Baidu, Discord, Tencent, Uber and Twitter.
Many of these apps were originally written using other approaches but have transitioned fully to React Native or are now using React Native to augment their existing native applications.
There is a notable trend of many premier apps being moved to React Native as a cross-platform solution, whereas, previously, most technology shifts among this class of apps were from cross-platform to platform-specific. This change simply can’t be ignored.
Just like the musician who has to rethink their approach to progress, so too must mobile app developers reconsider their technologies. It is critical that we make decisions based on the best options available and not rely solely on our familiarities. Even if the transition is uncomfortable initially, our industry and the app marketplace are highly competitive and demand that we continue to progress.
React Native is a highly attractive technology that combines the reusability and cost-effectiveness of hybrid apps with the polish and performance of native apps. It is seeing rapid adoption and should be considered as an alternative approach for any upcoming would-be hybrid apps.
We have all been there. That dreaded moment when after weeks of work we have to present our progress to key stakeholders, and they mercilessly tear it apart. It feels inevitable, but usually, we can avoid these situations.
Wouldn’t life be so much easier if we didn’t need to get other people to buy in to our work? Unfortunately, it doesn’t work that way, especially in digital. What we do involves so many different disciplines working together. We have to get the support of colleagues, stakeholders and management. But achieving that can be painful, to say the least.
That said, there are things you can do to make life easier. We begin by planning ahead.
Often we win or lose the battle for getting approval before we ever present our work. That is why it is so important to start readying the ground as early as possible. That consists of four steps:
We tend to presume that stakeholders and clients know what we expect from them. But that is not necessarily the case, especially if they haven’t worked on a similar project before.
Therefore, you will find decision making a lot easier if everybody is clear about their roles as early on in a project as possible.
Focus the client on identifying problems. Problems with meeting the needs of users or fulfilling business goals. Meanwhile, explain it is your job to find solutions. That will discourage comments such as, “Change the blue to pink” and instead encourage comments like, “I am concerned that the pre-teen girl audience won’t respond well to the blue.” That puts you in control of the solution but still makes them feel involved. That leads to a more healthy working relationship.
By defining roles upfront, you can reduce the chances of conversations like this.
If you want a client to sign off on your digital direction, you will need to engage them in the process of creation. If somebody is involved in making something, they feel a sense of ownership over the creation. That makes them more likely to approve it and in turn, defend it to others.
Engaging clients in the creation process offers another advantage too. It provides you with an opportunity to educate the stakeholder about the decisions you are making. The better they understand the reasoning behind decisions, the more likely they are to approve them.
Best of all, there are lots of opportunities for engaging clients. You might want to consider a design sprint, run a customer journey mapping workshop or do some collaborative wireframing exercises. In fact, a site like Gamestorming has hundreds of activities you could run with clients to get them engaged.
Design sprints are a great way of educating and engaging stakeholders.
Before presenting anything to stakeholders, you need to collect evidence to support your approach. Typically, you should look for two types of evidence: qualitative and quantitative. In other words, try to gather both hard data and stories of user experiences.
For quantitative data, dive into your analytics, run surveys and carry out unfacilitated usability testing. Take that information, draw out key lessons and display it in an easy-to-digest way. Do not expect stakeholders to grasp it intuitively.
For qualitative data, interview users, carry out in-field studies and run facilitated usability sessions. Make sure you record these sessions and produce an edited highlight video showing the key lessons you want stakeholders to take away. That might be feature requests, frustrations or obstacles the user expressed.
One reason for this mix is that you will find different stakeholders respond to varying types of evidence. Some react emotionally and so will tend to be influenced by stories of users struggling. Others respond more rationally and will be more affected by the numbers.
Make sure you have sufficient evidence to support your approach by doing some testing beforehand.
If you will be presenting to many stakeholders, it is worth speaking to them individually beforehand. That gives you a chance to get them on board informally before the presentation and avoid some of the challenges around group dynamics.
Often one senior figure can dominate a group discussion. But if you have won over other attendees beforehand, they are more likely to speak up in the meeting and support you.
Also by meeting with people individually, you can tailor how you present your work to the different stakeholders. For example, you could focus on the benefits to that person’s team or even them personally. That means they will go into the meeting much more likely to support your proposal.
But how you present your ideas on the day will still make or break success, no matter how much preparation you do beforehand.
Let’s be honest; things can go horribly wrong when you ask for approval. Even if you follow the advice here, there is no guarantee. So when possible, put off asking for sign off for as long as you can. It is amazing just how much you can get done without going to stakeholders for their formal agreement, especially if you have involved them informally along the way.
But eventually, the time will come when you need their official “OK”. When that moment comes, there are three areas you should always cover:
We have a tendency to start by providing the background that justifies our approach. But in my experience, a presentation is more compelling when you start by outlining your vision. That is where you show your work. But do so by painting a picture of what that work will ultimately deliver.
For example, if you are presenting a new design or even project plan, don’t focus too much on the thing itself, but instead concentrate on the benefit the thing will bring when you implement it.
Where possible, show stakeholders what that future will look like if they proceed. When presenting a design, that is easy. But in other situations, consider a customer journey map or a prototype. People are much more excited when they can see something tangible.
Even something as simple as a customer journey map of a future improved experience can be enough to excite stakeholders.
Once you have outlined your vision, the next step is to justify it. Do this in two ways.
First, focus on the value of the goal.
What does your vision of the future provide to the business that it needs? For example, if you are proposing that the organisation buy a customer relationship management system with the end goal of better managing customers, you could relate that to the company’s desire to increase repeat business.
Second, provide evidence of how your suggested approach will get the organisation to the desired outcome.
For example, how do you know buying a customer relationship management system will help you manage customers better?
With your vision laid out and justified, the next step is to explain how you will get to the final result. That might be how you will implement the project or build the design. The aim is to give stakeholders an overview of the steps required to make the vision a reality. The steps that they have to approve.
But be careful. Overwhelming stakeholders at this point is easy. If that roadmap includes significant changes or investment, you may find they react negatively. Equally, if it is complex and they don’t fully understand it, then they will be reluctant to commit.
That is why the best approach, once you have laid out the roadmap, is to focus in on the next few steps. Instead of asking them to commit to a significant investment, outline the first action and ask them to agree to that. That might be a discovery phase, building a prototype or running a pilot.
By reducing the commitment, you are reducing the risk. That will make stakeholders much more open to agreeing. It is better to go back to stakeholders seeking approval for small steps than trying to get them to agree to the entire undertaking in one go.
But however well you present, stakeholders will always have questions and feedback. How you deal with these often dictates whether you get final approval or not.
Avoid The Destructive Pitfalls Of Negative Feedback
We now reach the part where preparing the ground pays dividends. Hopefully, they will focus on issues rather than dictating the solution to you. However, sometimes the group will start coming up with ‘fixes’ for issues they identify. If that happens, suggest the issue needs more consideration and offer to go away, research solutions, and then feedback. That puts you back in control.
Also, you will already have answered many of the questions in your one-to-one discussions. And they will understand your approach, because at least some of them were involved in its creation.
But you will still get some feedback. The trick is to manage that effectively, and that means structuring the feedback session.
Things can go wrong if you just open up the floor at the end of your presentation for a Q&A time. Instead, I tend to finish a presentation with some questions of my own. I ask whether they feel the project will achieve its desired goals, meet the needs of users and fulfil organisational objectives. That sets the tone for the discussion and focuses everybody on what matters, rather than personal opinion and agendas.
If you ask people what they think, you won’t get very useful feedback.
Where possible, keep the Q&A time short; suggest that the stakeholders might want to take their time formulating their feedback and provide it to you individually. That not only stops on-the-fly fixes, it also means you become the person in control. Because all the feedback comes directly to you, rather than to the whole group, you are the only one in possession of all the information. That allows you to pick and choose what you listen to and what you ignore.
Success or failure in getting approval for your ideas is about a lot more than presenting them well. It is about how stakeholders perceive you, what their personal agendas are, the culture of the company and a whole lot besides.
In this post, I’ve shared a fraction of the advice which I’ll be offering in my workshop “How To Convince Clients And Colleagues The Right Way” this coming October at SmashingConf Barcelona. But how you present is a huge factor, and taking the time to think through your approach will really make a big difference.
HTTPS is a must for every website nowadays: Users are looking for the padlock when providing their details; Chrome and Firefox explicitly mark websites that provide forms on pages without HTTPS as being non-secure; it is an SEO ranking factor; and it has a serious impact on privacy in general. Additionally, there is now more than one option to get an HTTPS certificate for free, so switching to HTTPS is only a matter of will.
Setting up HTTPS can be a bit intimidating for the inexperienced user — it takes many steps with different parties, it requires specific knowledge of encryption and server configuration, and it sounds complicated in general.
In this guide, I will explain the individual components and steps and will clearly cover the individual stages of the setup. Your experience should be easy, especially if your hosting provider also supplies HTTPS certificates — chances are you will be able to perform everything from your control panel quickly and easily.
I have included detailed instructions for owners of shared hosting plans on cPanel, administrators of Apache HTTP servers and of nginx on Linux and Unix, as well as Internet Information Server on Windows.
HTTP Vs. HTTPS Vs. HTTP/2 Vs. SSL Vs. TLS: What’s What?
A lot of acronyms are used to describe the processes of communication between a client and a server. These are often mixed up by people who are not familiar with the internals.
The Hypertext Transfer Protocol (HTTP) is the basic communication protocol that both clients and servers must implement in order to be able to communicate. It covers things such as requests and responses, sessions, caching, authentication and more. Work on the protocol, as well as on the Hypertext Markup Language (HTML), was started in 1989 by Sir Tim Berners-Lee9 and his team at CERN. The first official version of the protocol (HTTP 1.0) was released in 1996, shortly followed by the currently widely adopted version (HTTP 1.1) in 1997.
The protocol transfers information between the browser and the server in clear text, allowing the network, through which the information passes, to see the information transmitted. This is a security concern, so HTTP Secure (HTTPS) was introduced, allowing the client and the server to first establish an encrypted communication channel, and then pass the clear text HTTP messages through it, effectively protecting them from eavesdropping.
The encrypted channel is created using the Transport Layer Security (TLS) protocol, previously called Secure Socket Layer (SSL). The terms SSL and TLS are often used interchangeably, with SSL 3.0 being replaced by TLS 1.0. SSL was a Netscape-developed protocol, while TLS is an IETF standard. At the time of writing, all versions of SSL (1.0, 2.0, 3.0) are deprecated due to various security problems and will produce warnings in current browsers, and the TLS versions (1.0, 1.1, 1.2) are in use, with 1.3 currently a draft.
So, sometime around 1996 and 1997, we got the current stable version of the Internet (HTTP 1.1, with or without SSL and TLS), which still powers the majority of websites today. Previously, HTTP was used for non-sensitive traffic (for example, reading the news), and HTTPS was used for sensitive traffic (for example, authentication and e-commerce); however, increased focus on privacy means that web browsers such as Google Chrome now mark HTTP websites as “not private” and will introduce warnings for HTTP in future.
The next upgrade of the HTTP protocol — HTTP/2 — which is being adopted by a growing number of websites, adds new features (compression, multiplexing, prioritization) in order to reduce latency and increase performance and security.
In HTTP version 1.1, the secure connection is optional (you may have HTTP and/or HTTPS independent of each other), while in HTTP/2 it is practically mandatory — even though the standard defines HTTP/2 with or without TLS, most browser vendors have stated that they will only implement support for HTTP/2 over TLS10.
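If you want to see this in practice, a reasonably recent curl built with HTTP/2 support (an assumption about your local install) can report which protocol version a given site actually negotiated:

curl -sI -o /dev/null -w '%{http_version}\n' https://example.com

A response of 2 means the server negotiated HTTP/2 over TLS; 1.1 means it fell back to HTTP 1.1 over the encrypted channel.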
Why bother with HTTPS in the first place? It is used for three main reasons:
Confidentiality
This protects the communication between two parties from others within a public medium such as the Internet. For example, without HTTPS, someone running a Wi-Fi access point could see private information such as credit cards when someone using the access point purchases something online.
Integrity
This makes sure information reaches its destined party in full and unaltered. For example, our Wi-Fi friend could add extra advertisements to our website, reduce the quality of our images to save bandwidth or change the content of articles we read. HTTPS ensures that the content can’t be modified in transit without detection.
Authentication
This ensures that the website is actually what it claims to be. For example, that same person running the Wi-Fi access point could send browsers to a fake website. HTTPS ensures that a website that says it’s example.com is actually example.com. Some certificates even check the legal identity behind that website, so that you know yourbank.com is YourBank, Inc.
Confidentiality, integrity and authentication aren’t HTTPS-specific: They’re the core concepts of cryptography. Let’s look a little more closely at them.
Confidentiality is privacy — that is, it protects information from being read by an unauthorized third party. The process usually involves turning a readable (i.e. audible and visible) form of the information, called plaintext, into a scrambled, unreadable version, called ciphertext. This process is called encryption. The reverse process — turning the unreadable ciphertext back into readable plaintext — is called decryption. There are many methods — cipher functions (or algorithms) — to encrypt and decrypt information.
In order for two parties to be able to communicate, they should agree on two things:
which algorithm (cipher function) they will use in their communication;
which parameters, password or rules (i.e. secret) will be used with the method selected.
There are two main types of encryption methods:
symmetric
Both parties share a common secret key.
asymmetric
One of the parties has a pair of a secret and a public key, the foundation of public key infrastructure (PKI).
The symmetric class of methods relies on both parties having a shared secret, which the sender uses to encrypt the information and which the receiver then uses, with the same method and key, to decrypt it. The problem with these methods is how the two parties negotiate (i.e. exchange) the secret without physically meeting each other — they need a secure communication channel of some sort.
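As a rough illustration of the symmetric case, here is how two parties that already share a passphrase could encrypt and decrypt a file with OpenSSL (a sketch; the file names and the passphrase are made up, and the -pbkdf2 option requires OpenSSL 1.1.1 or newer):

# Sender: encrypt message.txt with a shared secret
openssl enc -aes-256-cbc -pbkdf2 -in message.txt -out message.enc -pass pass:OurSharedSecret
# Receiver: decrypt with the very same secret
openssl enc -d -aes-256-cbc -pbkdf2 -in message.enc -out message.decrypted.txt -pass pass:OurSharedSecret

The hard part, as noted above, is getting OurSharedSecret to the other side without anyone else seeing it.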
The asymmetric methods come to solve this kind of problem — they are based on the notion of public and private keys. The plaintext is encrypted using one of the keys and can only be decrypted using the other complementary key.
So, how does it work? Let’s assume we have two parties who are willing to communicate with each other securely — Alice and Bob (these are always the names of the fictional characters in every tutorial, security manual and the like, so we’ll honor the tradition here as well). Both of them have a pair of keys: a private key and a public one. Private keys are known only to their respective owner; public keys are available to anyone.
If Alice wants to send a message to Bob, she would obtain his public key, encrypt the plaintext and send him the ciphertext. He would then use his own private key to decrypt it.
If Bob would like to send a reply to Alice, he would obtain her public key, encrypt the plaintext and send her the ciphertext. She would then use her own private key to decrypt it.
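The same exchange can be sketched with OpenSSL (the file names are illustrative; the commands themselves are standard OpenSSL usage):

# Bob creates a key pair and publishes only the public half
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out bob.key
openssl pkey -in bob.key -pubout -out bob.pub
# Alice encrypts her message with Bob's public key...
openssl pkeyutl -encrypt -pubin -inkey bob.pub -in message.txt -out message.enc
# ...and only Bob, holding the private key, can decrypt it
openssl pkeyutl -decrypt -inkey bob.key -in message.enc -out message.decrypted.txt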
When do we use symmetric and when do we use asymmetric encryption? Asymmetric encryption is used to exchange the secret between the client and the server. In real life, we usually do not need two-way asymmetric communication — it is sufficient if one of the parties (we’ll just call it a server, for the sake of simplicity) has a key pair, so it can receive an encrypted message. This protects the information in only one direction — from the client to the server — because information encrypted with the public key can only be decrypted using the private key; hence, only the server can decrypt it. The other direction is not protected: information encrypted with the server’s private key can be decrypted with its public key by anyone.

The other party (we’ll similarly call it a client) begins the communication by encrypting a randomly generated session secret with the server’s public key, then sends the ciphertext to the server, which, in turn, decrypts it using its own private key, now having the secret.
Symmetric encryption is then used to protect the actual data in transit, since it’s much faster than asymmetric encryption. The two parties (the client and the server), with the previously exchanged secret, are the only ones able to encrypt and decrypt the information.
That’s why the first asymmetric part of the handshake is also known (and referred to) as key exchange and why the actual encrypted communication uses algorithms known (and referred to) as cipher methods.
Another concern, solved with HTTPS, is data integrity: (1) whether the entire information arrived successfully, and (2) whether it was modified by someone in transit. In order to ensure that information is transmitted intact, message digest algorithms are used. Computing message authentication codes (MACs) for each message exchanged is a cryptographic hashing process. Obtaining a MAC (sometimes called a tag) uses a method that ensures that it is practically impossible (the term commonly used is infeasible) to:
change the message without affecting the tag,
generate the same tag from two different messages,
reverse the process and obtain the original message from the tag.
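As an aside, you can compute such a tag yourself with OpenSSL; for example, an HMAC over a file using SHA-256 and a shared key (the key and file name here are placeholders):

openssl dgst -sha256 -hmac "OurSharedSecret" message.txt

Change a single byte of message.txt and the tag changes completely, which is exactly the property that protects integrity.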
What about authentication? The problem with the real-life application of the public key infrastructure is that neither party has any way of knowing who the other party really is — they are physically separate. In order to prove the identity of the other party, a mutually trusted third party — a certificate authority (CA) — is involved. A CA issues a certificate, stating that the domain name example.com (a unique identifier) is associated with the public key XXX. In some cases (with EV and OV certificates — see below), the CA will also check that a particular company controls that domain. This information is guaranteed by (i.e. certified by) the certificate authority X, and this guarantee is valid no earlier than (i.e. begins on) date Y and no later than (i.e. expires on) date Z. All of this information goes into a single document, called an HTTPS certificate. To present an easily understandable analogy, it is like a country government (a third party trusted by all) issuing an ID or a passport (a certificate) to a person — every party that trusts the government would also accept the identity of the ID holder (assuming the ID is not fake, of course, but that’s outside the scope of this example).
Certification authorities (CAs) are organizations trusted to sign certificates. Operating systems, such as Windows, macOS, iOS and Android, as well as the Firefox browser, have a list of trusted certificates.
You can check which CAs are trusted by your browser:
On macOS, open “Applications” → “Utilities” → “Keychain Access.” Under “Category,” pick “Certificates.”
All certificates are then checked and trusted — by the operating system or browser if directly trusted, or by a trusted entity if verified. This mechanism of transitive trust is known as a chain of trust15.
You can add other unlisted CAs, which is useful when working with self-signed certificates (which we’ll discuss later).
In most common situations, only the server needs to be known to the client — for example, an e-commerce website to its customers — so, only the website needs a certificate. In other situations, such as e-government systems, both the server and the client, requesting a service, should have their identity proven. This means that both parties should be using certificates to authenticate to the other party. This setup is also outside the scope of this article.
Domain validated (DV)
The most common type of certificate, a DV certificate verifies that the domain matches a particular public key. The browser establishes a secure connection with the server and displays the closed padlock sign. Clicking the sign will show “This website does not supply ownership information.” There are no special requirements other than having a domain — a DV certificate simply ensures that this is the correct public key for that domain. The browser does not show a legal entity. DV certificates are often cheap (10 USD per year) or free — see the sections on Let’s Encrypt2318 and Cloudflare2419 below.
Extended validation (EV)
EV certificates verify the legal organization behind a website. This is the most trustworthy type of certificate, which is obtained after a CA checks the legal entity that controls the domain. The legal entity is checked with a combination of:
control of the domain (such as a DV certificate);
government business records, to make sure the company is registered and active;
independent business directories, such as Dun & Bradstreet, Salesforce’s connect.data.com, Yellow Pages, etc.;
a verification phone call;
inspection of all domain names in the certificate (wildcards are explicitly forbidden for EV certificates).
As well as the closed padlock sign, EV HTTPS certificates display the name of the validated legal entity — typically a registered company — before the URL. Some devices, such as iOS Safari, will only show the validated legal entity, ignoring the URL completely. Clicking the sign will show details about the organization, such as the name and street address. The cost is between 150 and 300 USD per year.
Organization validated (OV)
Like EV, OV certificates verify the legal organization behind a website. However, unlike EV, OV HTTPS certificates do not display the verified legal name in the UI. As a result, OV certificates are less popular, because they have high validation requirements, without the benefits of these being shown to users. Prices are in the 40 to 100 USD per year range.
Once upon a time, HTTPS certificates generally contained a single domain in the CN field. Later, the “subject alternative name” (SAN) field was added to allow additional domains to be covered by a single certificate. These days, all HTTPS certificates are created equal: Even a single-domain certificate will have a SAN for that single domain (and a second SAN for the www version of that domain). However, many certificate vendors still sell single- and multi-domain HTTPS certificates for historical reasons.
Single domain
This is the most common type of certificate, valid for the domain names example.com and www.example.com.
Multiple domains (UCC/SAN)
This type of certificate, also known as Unified Communications Certificate (UCC) or Subject Alternative Names (SAN) certificate, can cover a list of domains (up to a certain limit). It is not limited to a single domain — you can mix different domains and subdomains. The price usually includes a set number of domains (three to five), with the option to include more (up to the limit) for an additional fee. Using it with related websites is advised, because the client inspecting the certificate of any of the websites will see the main domain, as well as all additional ones.
Wildcard
This type of certificate covers the main domain as well as an unlimited number of subdomains (*.example.com) — for example, example.com, www.example.com, mail.example.com, ftp.example.com, etc. The limitation is that it covers only subdomains of the main domain.
The variety of HTTPS certificates available is summarized in the table below:
Certificate type | Domain validated (DV) | Organization validated (OV) | Extended validation (EV)
What is verified | HTTPS | HTTPS; verified legal owner | HTTPS; verified legal owner; owner info displayed in browser
Single domain | example.com, www.example.com | example.com, www.example.com | example.com, www.example.com
Multiple domains | example.com, www.example.com, mail.example.com, example.net, example.org, etc. — a predefined list, up to a certain limit (usually 100) | same | same
Wildcard | *.example.com — matches any subdomain | *.example.com — matches any subdomain | N/A — all names must be included explicitly in the certificate and inspected by the CA
To recap, four components of HTTPS require encryption:
The initial key exchange
This uses asymmetric (private and public key) algorithms.
The identity certification (the HTTPS certificate, issued by a certification authority)
This uses asymmetric (private and public key) algorithms.
The actual message encryption
This uses symmetric (pre-shared secret) algorithms.
The message digesting
This uses cryptographic hashing algorithms.
Each of these components has a set of algorithms in use (some of them already deprecated) that work with different key sizes. Part of the handshake involves the client and the server agreeing on which combination of methods they will use — selecting one out of about a dozen public key (key exchange) algorithms, one out of about a dozen symmetric key (cipher) algorithms and one out of three (two of them deprecated) message-digesting (hashing) algorithms, which gives us hundreds of combinations.
For example, the setting ECDHE-RSA-AES256-GCM-SHA384 means that the key will be exchanged using the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) key exchange algorithm; the CA signed the certificate using the Rivest-Shamir-Adleman (RSA) algorithm; the symmetric message encryption will use the Advanced Encryption Standard (AES) cipher, with a 256-bit key and GCM mode of operation; and message integrity will be verified using the SHA secure hashing algorithm, using 384-bit digests. (A comprehensive list of algorithm combinations20 is available.)
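If you are curious what a given suite provides, OpenSSL can decode it for you — the -v output lists the key exchange (Kx), authentication (Au), encryption (Enc) and MAC for each matching suite:

openssl ciphers -v 'ECDHE-RSA-AES256-GCM-SHA384'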
So, there are some configuration choices to be made.
Deciding the cipher suites to use is a balance between compatibility and security:
Compatibility with older browsers needs the server to support older cipher suites.
However, many older cipher suites are no longer considered secure.
OpenSSL lists the supported combinations (see above) in order of cryptographic strength, with the most secure at the top and the weakest at the bottom. It is designed in this way because, during the initial handshake between the client and the server, the combination to be used is negotiated until a match is found that is supported by both parties. It makes sense to first try the most secure combinations and gradually resort to weaker security only if there is no other way.
A very useful and highly recommended resource, advising on what cryptographic methods to enable on the server, is the Mozilla SSL Configuration Generator, which we’ll use later on with actual server configurations.
Elliptic Curve Cryptography (ECC) certificates are faster and use less CPU than RSA certificates, which is particularly important for mobile clients. However, some services, such as Amazon CloudFront and Heroku, don’t yet, at the time of writing, support ECC certificates.
A 256-bit ECC key is considered sufficient.
Rivest Shamir Adleman (RSA) certificates are slower but compatible with a wider variety of older servers. RSA keys are larger, so a 2048-bit RSA key is considered minimal. RSA certificates of 4096 and above may hurt performance — they’re also likely to be signed by a 2048-bit intermediary, undermining much of the additional security!
You might have noticed the fluidity of the statements above and the lack of any numbers — it is because what is a heavy load on one server is not on another. The best way to determine the impact on performance is to monitor the load on your server, with your real website(s) and your real visitors. And even that will change over time.
To obtain an HTTPS certificate, perform the following steps:
Create a private and public key pair, and prepare a Certificate Signing Request (CSR), including information about the organization and the public key.
Contact a certification authority and request an HTTPS certificate, based on the CSR.
Obtain the signed HTTPS certificate and install it on your web server.
There exists a set of files, containing different components of the public key infrastructure (PKI): the private and public keys, the CSR and the signed HTTPS certificate. To make things even more complicated, different parties use different names (and file extensions) to identify one and the same thing.
To start, there are two popular formats for storing the information — DER and PEM. The first one (DER) is binary, and the second (PEM) is a base64-encoded (text) DER file. By default, Windows uses the DER format directly, and the open-source world (Linux and UNIX) uses the PEM-format. There are tools (OpenSSL) to convert between one and the other.
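For example, converting a certificate between the two formats with OpenSSL looks roughly like this (example.com.der is a hypothetical DER-encoded file; PEM is the default output format):

# DER to PEM
openssl x509 -inform der -in example.com.der -out example.com.crt
# PEM to DER
openssl x509 -outform der -in example.com.crt -out example.com.der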
The files we’ll be using as examples in the process are the following:
example.com.key
This PEM-formatted file contains the private key. The extension .key is not a standard, so some might use it and others might not. It is to be protected and accessible only by the system super-user.
example.com.pub
This PEM-formatted file contains the public key. You do not actually need this file (and it’s never explicitly present), because it can be generated from the private key. It is only included here for illustration purposes.
example.com.csr
This is the certificate signing request: a PEM-formatted file containing the organizational information as well as the server’s public key. It should be sent to the certification authority that will issue the HTTPS certificate.
example.com.crt
This HTTPS certificate is signed by the certification authority. It is a PEM-formatted file, including the server’s public key, organizational information, the CA signature, validity and expiry dates, etc. The extension .crt is not a standard; other common extensions include .cert and .cer.
File names (and extensions) are not standard; they can be anything you like. I have chosen this naming convention because I think it is illustrative and makes more obvious which component has what function. You can use whatever naming convention makes sense to you, as long as you refer to the appropriate key-certificate files in the commands and server configuration files throughout the process.
The private key is a randomly generated string of a certain length (we’ll use 2048-bit), which looks like the following:
This particular CSR contains the server’s public key and details about the organization ACME Inc., based in London, UK, and which owns the domain name example.com.
Finally, the signed HTTPS certificate looks like the following:
All parts are connected and should match each other. The final certificate was generated for illustration purposes only — it is the so-called self-signed certificate, because it was not signed by a recognized certification authority.
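A quick way to confirm that the private key, the CSR and the certificate really do belong together is to extract the public key from each and compare the results (a sketch for a Linux shell; on other systems, substitute your preferred hashing tool for sha256sum):

openssl pkey -in example.com.key -pubout | sha256sum
openssl req -in example.com.csr -pubkey -noout | sha256sum
openssl x509 -in example.com.crt -pubkey -noout | sha256sum

All three hashes should be identical.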
The process will be illustrated with actual steps for cPanel, Linux, FreeBSD and Windows. This is a universal process, valid for all kinds of certificates. If you are interested in getting a free DV certificate, there are other procedures to follow in the sections on Let’s Encrypt2318 and Cloudflare2419.
Step 1: Create A Private Key And A Certificate Signing Request (CSR)
In the following examples, we’ll use 2048-bit RSA certificates, for their wide compatibility. If your server provider supports it (for example, if you don’t use Heroku or AWS), you might prefer to use ECC instead.
Scroll down to the “Security” section and click “SSL/TLS.”
You are now in the “SSL/TLS Manager” home. Click “Private Keys (KEY)” to create a new private key.
You will be redirected to a page to “Generate, Paste or Upload a new Private Key.” Select “2048-bit” in the “Key Size” dropdown, and click “Generate.”
The new private key will be generated, and you will get a confirmation screen.
If you go back to the “Private Keys” home, you will see your new key listed.
Go back to the “SSL/TLS Manager” home. Click “Certificate Signing Requests (CSR)” to create a new certificate request.
You will now be presented with the “Generate Service Request” form. Select the previously created private key and fill in the fields. Answer all of the questions correctly (they will be public in your signed certificate!), paying special attention to the “Domains” section, which should exactly match the domain name for which you are requesting the HTTPS certificate. Include the top-level domain only (example.com); the CA will usually add the www subdomain as well (i.e. www.example.com). When finished, click the “Generate” button.
The new CSR will be generated, and you will get a confirmation screen.
If you go back to the “Certificate Signing Request” home, you will see your new CSR listed.
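If you prefer the command line to cPanel (or are following the Linux, FreeBSD or other OpenSSL-based route), the same private key and CSR can be produced in a single step — a sketch using the file-naming convention from this guide:

openssl req -new -newkey rsa:2048 -nodes -keyout example.com.key -out example.com.csr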
The private key will be generated, and you will be asked for some information for the CSR:
Generating a 2048 bit RSA private key
........................+++
................................................................+++
writing new private key to 'example.com.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Answer all questions correctly (they will be public in your signed certificate!), paying special attention to the “Common Name” section (for example, server FQDN or YOUR name), which should exactly match the domain name for which you are requesting the HTTPS certificate. Include the top-level domain only (example.com); the CA will usually add the www subdomain as well (i.e. www.example.com):
Country Name (2 letter code) [AU]:GB
State or Province Name (full name) [Some-State]:London
Locality Name (eg, city) []:London
Organization Name (eg, company) [Internet Widgits Pty Ltd]:ACME Inc.
Organizational Unit Name (eg, section) []:IT
Common Name (e.g. server FQDN or YOUR name) []:example.com
Email Address []:admin@example.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
Open “Start” → “Administrative Tools” → “Internet Information Services (IIS) Manager.” Click the server name. Double-click “Server Certificates” in the middle column.
Click “Create Certificate Request” in the right column.
Enter your organization’s details, paying special attention to “Common Name,” which should match your domain name. Click “Next.”
Leave the default “Cryptographic Service Provider.” Set the “Bit length” to 2048. Click “Next.”
Browse for a place to save the generated CSR and click “Finish.”
In order to get your website certificate, first purchase an HTTPS certificate credit of a chosen type (DV, OV, EV, single site, multisite, wildcard — see above) from an HTTPS certificate provider. Once the process is complete, you will have to provide the certificate signing request, which will spend the purchased credit for your chosen domain. You’ll be asked to provide (i.e. to paste in a field or to upload) the whole CSR text, including the -----BEGIN CERTIFICATE REQUEST----- and -----END CERTIFICATE REQUEST----- lines. If you would like to have an EV or OV certificate, you’ll need to provide the legal entity for which you’re requesting the certificate — you might also be asked to provide additional documents to confirm that you represent this company. The certificate registrar will then verify your request (and any supporting documents) and issue the signed HTTPS certificate.
Your hosting provider or HTTPS registrar might have a different product and registration procedure, but the general logic should be similar.
Find an HTTPS certificate vendor.
Select a type of certificate (DV, OV, EV, single site, multisite, wildcard), and click “Add to cart.” Specify your preferred payment method and complete the payment.
Activate the new HTTPS certificate for your domain. You can either paste or upload the certificate signing request. The system will extract the certificate details from the CSR.
You will be asked to select a method of “Domain Control Validation” — whether by email, uploading an HTML file (HTTP-based) or adding a TXT record to your domain zone file (DNS-based). Follow the instructions for your DCV method of choice to validate.
Wait several minutes until the validation is performed and the HTTPS certificate is issued. Download the signed HTTPS certificate.
It is also possible to sign a certificate yourself, rather than have a certificate authority do it. This is good for testing purposes because it will be cryptographically as good as any other certificate, but it will not be trusted by browsers, which will fire a security warning — you can claim you are anything you want, but it wouldn’t be verified by a trusted third party. If the user trusts the website, they could add an exception in their browser, which would store the certificate and trust it for future visits.
The example certificate above is a self-signed one — you can use it for the domain example.com, and it will work within its validity period.
You can create a self-signed certificate on any platform that has OpenSSL available:
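For example, the following one-liner produces a key and a matching self-signed certificate valid for one year (the subject fields mirror the fictional ACME Inc. used throughout this guide — replace them with your own details):

openssl req -x509 -newkey rsa:2048 -nodes -keyout example.com.key -out example.com.crt -days 365 -subj "/C=GB/ST=London/L=London/O=ACME Inc./CN=example.com"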
Once the certificate is available, you will have to install it on your server. If you are using hosting and HTTPS registration services from the same provider (many hosting providers also sell HTTPS certificates), there might be an automated procedure to install and enable your newly obtained HTTPS certificate for the website. If you are hosting your website elsewhere, you will need to download the certificate and configure your server to use it.
Step 3: Installing The HTTPS Certificate For Your Website
Go back to the “SSL/TLS Manager” home. Click “Certificates (CRT)” to import the new certificate.
You will be redirected to a page to “Paste, Upload or Generate” a new “Certificate.” Paste the contents of the certificate file received from the HTTPS registrar or upload it using the “Browse” button.
When you paste the contents of the HTTPS certificate, it will be parsed, and plain text values will be presented to you for confirmation. Review the contents and click the “Save Certificate” button.
The new HTTPS certificate will be saved, and you will get a confirmation screen.
If you go back to the “Certificates (CRT)” home, you will see your new HTTPS certificate listed.
Go back to the “SSL/TLS Manager” home. Click “Install and Manage SSL for your website (HTTPS)” to assign the new certificate to your existing website.
You will be presented with the “Install an SSL Website” form. Click the “Browse Certificates” button and select your HTTPS certificate. Select your website domain from the dropdown list (if it’s not automatically selected), and verify that the fields for “Certificate” and “Private Key” are populated.
Test to see that you can access your website at the address https://www.example.com. If all works OK, you will most probably want to permanently redirect your HTTP traffic to HTTPS. To do so, you’ll have to add a few lines of code to an .htaccess file (if you’re using an Apache web server) in your website’s root folder:
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
If the .htaccess file already exists, then paste the RewriteCond and RewriteRule lines only, right after the existing RewriteEngine On directive.
Place the generated private key (example.com.key), Certificate Signing Request (example.com.csr) and the valid HTTPS certificate (example.com.crt) in the appropriate locations:
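The exact locations depend on your distribution and server configuration; purely as an illustration, on a Debian-style system this might look like the following (an assumption, not a requirement — any paths referenced consistently from the server configuration will do):

sudo cp example.com.crt /etc/ssl/certs/
sudo cp example.com.key /etc/ssl/private/
sudo chmod 600 /etc/ssl/private/example.com.key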
To enable the HTTPS version of your website, you should:
make sure that mod_ssl is installed on your server,
upload the received HTTPS certificate (.crt) file to your server,
edit the Apache server configuration files.
Start by checking mod_ssl. Depending on your operating system, either one should work:
apache2 -M | grep ssl
or
httpd -M | grep ssl
If mod_ssl is installed, you should get either this…
ssl_module (shared)
Syntax OK
… or something similar.
If it’s not present or not enabled, then try this:
Debian, Ubuntu and clones
sudo a2enmod ssl
sudo service apache2 restart
Red Hat, CentOS and clones
sudo yum install mod_ssl
sudo service httpd restart
FreeBSD (select the SSL option)
make -C /usr/ports/www/apache24 config install clean
apachectl restart
Edit the Apache configuration file (httpd.conf):
Debian, Ubuntu
/etc/apache2/apache2.conf
Red Hat, CentOS
/etc/httpd/conf/httpd.conf
FreeBSD
/usr/local/etc/apache2x/httpd.conf
Listen 80
Listen 443

<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    Redirect 301 / https://www.example.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName example.com
    Redirect 301 / https://www.example.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName www.example.com
    ...
    SSLEngine on
    SSLCertificateFile /path/to/signed_certificate_followed_by_intermediate_certs
    SSLCertificateKeyFile /path/to/private/key

    # Uncomment the following directive when using client certificate authentication
    #SSLCACertificateFile /path/to/ca_certs_for_client_authentication

    # HSTS (mod_headers is required) (15768000 seconds = 6 months)
    Header always set Strict-Transport-Security "max-age=15768000"
    ...
</VirtualHost>

# intermediate configuration, tweak to your needs
SSLProtocol all -SSLv3
SSLCipherSuite ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
SSLHonorCipherOrder on
SSLCompression off
SSLSessionTickets off

# OCSP Stapling, only in httpd 2.3.3 and later
SSLUseStapling on
SSLStaplingResponderTimeout 5
SSLStaplingReturnResponderErrors off
SSLStaplingCache shmcb:/var/run/ocsp(128000)
This configuration was generated using the Mozilla SSL Configuration Generator, mentioned earlier. Check with it for an up-to-date configuration. Make sure to edit the paths to the certificate and private key. The configuration provided was generated using the intermediate setting — read the limitations and supported browser configurations for each setting to decide which one suits you best.
Some modifications were made to the generated code to handle redirects from HTTP to HTTPS, as well as from the non-www to the www domain (useful for SEO purposes).
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # Redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
    ssl_certificate /path/to/signed_cert_plus_intermediates;
    ssl_certificate_key /path/to/private_key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
    ssl_dhparam /path/to/dhparam.pem;

    # intermediate configuration. tweak to your needs.
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
    ssl_prefer_server_ciphers on;

    # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
    add_header Strict-Transport-Security max-age=15768000;

    # OCSP Stapling ---
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;

    ## verify chain of trust of OCSP response using Root CA and Intermediate certs
    ssl_trusted_certificate /path/to/root_CA_cert_plus_intermediates;

    resolver <IP DNS resolver>;

    ....
}
As with the Apache example, this configuration was generated using the Mozilla SSL Configuration Generator, mentioned earlier. Check with it for an up-to-date configuration. Make sure to edit the paths to the certificate and private key. The configuration provided was generated using the intermediate setting — read the limitations and supported browser configurations for each setting to decide which one suits you best.
The generator automatically generates code for handling redirects from HTTP to HTTPS, and it enables HTTP/2 out of the box!
Open “Start” → “Administrative Tools” → “Internet Information Services (IIS) Manager.” Click the server name. Double-click “Server Certificates” in the middle column.
Click “Complete Certificate Request” in the right column.
Select the signed certificate file (example.com.crt) that you obtained from the CA. Enter some name in the “Friendly name” field to be able to distinguish the certificate later. Place the new certificate in the “Personal” certificate store (IIS 8+). Click “OK.”
If the process went OK, you should see the certificate listed under “Server Certificates.”
Expand the server name. Under “Sites,” select the website to which you want to assign the HTTPS certificate. Click “Bindings” in the right column.
In the “Site bindings” window, click the “Add” button.
In the new window, select:
“Type”: “https”
“IP address”: “All Unassigned”
“Port”: “443”
In the “SSL Certificate” field, select the installed HTTPS certificate by its friendly name. Click “OK.”
You should now have both HTTP and HTTPS installed for this website.
You might get a warning sign next to the address bar and a message such as “Connection is not secure! Parts of this page are not secure (such as images).” This does not mean that your installation is wrong; just make sure that all links to resources (images, style sheets, scripts, etc.), whether local or from remote servers, do not start with http://. All resources should be pointed to with paths relative to the root (/images/image.png, /styles/style.css, etc.) or relative to the current document (../images/image.png), or they should be full URLs beginning with https://, such as <script src="https://code.jquery.com/jquery-3.1.0.min.js"></script>.
These tips should eliminate the mixed-content warnings, and your browser should display the closed padlock without an exclamation mark.
After you have configured your server and have the website up and running on HTTPS, I strongly recommend checking its security configuration using the Qualys SSL Server Test85. This performs a scan of your website, including a comprehensive evaluation of its configuration, possible weaknesses and recommendations. Follow the advice there to further improve your server’s security configuration.
Your certificate is valid for a set period — typically, a year. Don’t wait to renew it at the last moment — your registrar will start sending you emails as the renewal date approaches. Do issue a new certificate as soon as you get your first reminder. The procedure is pretty much the same: Create a new certificate signing request, get a new HTTPS certificate, and install it on your server. The certificate’s validity will start running at the time of signing, while the expiration will be set one year after your current certificate expires. Thus, there will be a time when both your old and new certificates will be valid, and then a full new year after the expiration of the old certificate. During the overlap, you will be able to make sure that the new certificate is working OK, before the old one expires, allowing for uninterrupted service of your website.
If your server is compromised or if you think someone might have access to your private key, you should immediately revoke your current HTTPS certificate. Different registrars have different procedures, but it generally boils down to marking the compromised certificate as inactive in a special database of your registrar, and then issuing a new HTTPS certificate. Of course, revoke the current certificate as soon as possible, so that nobody can impersonate you, and get the new certificate only after you have investigated and fixed the cause of the security breach. Please ask your registrar for assistance.
Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit. Let’s Encrypt is a service provided by the Internet Security Research Group (ISRG)87.
The key principles behind Let’s Encrypt are:
Free
Anyone who owns a domain name can use Let’s Encrypt to obtain a trusted certificate at zero cost.
Automatic
Software running on a web server can interact with Let’s Encrypt to painlessly obtain a certificate, securely configure it for use, and automatically take care of renewal.
Secure
Let’s Encrypt will serve as a platform for advancing TLS security best practices, both on the CA side and by helping website operators properly secure their servers.
Transparent
All certificates issued or revoked will be publicly recorded and available for anyone to inspect.
Open
The automatic issuance and renewal protocol will be published as an open standard that others can adopt.
Cooperative
Much like the underlying Internet protocols themselves, Let’s Encrypt is a joint effort to benefit the community, beyond the control of any one organization.
To take advantage of Let’s Encrypt, set up your hosting account or server properly. Let’s Encrypt offers short-term certificates that need to be renewed regularly in order to keep an HTTPS website operational.
There are some substantial differences in the mode of operation between Let’s Encrypt and the other CAs. Following the first three points above, here are the main ones:
Free
The Let’s Encrypt HTTPS certificates are completely free for the whole lifespan of your website.
Automatic
The Let’s Encrypt HTTPS certificates are issued for 90 days88, unlike regular HTTPS certificates, which are valid for one year. People are encouraged to automate their certificate renewal; for example, the administrator of the server would set up a dedicated software service (or would periodically invoke software from cron) to manage the initial domain validation and subsequent renewal for all hosted domains — set-it-and-forget-it style.
Secure
Let’s Encrypt HTTPS certificates are issued with no compromise on security, leading to certain incompatibilities with older and more exotic platforms. Check the compatibility page9289 to determine whether you are fine cutting off incompatible platforms.
Let’s Encrypt provides only DV certificates. OV and EV are not supported, and there are currently no plans to support them. Single- and multiple-domain HTTPS certificates are offered, but no wildcard ones at the moment. See the Let’s Encrypt FAQ90 for more information.
Let’s Encrypt’s automated mode of operation enforces some usage limits91 in order to protect the infrastructure from intentional and unintentional abuse. The rate limits are high enough not to affect regular users with even hundreds of domains. However, if you manage HTTPS certificates at a very large scale, you might want to check them out.
Older and exotic clients (prior to Windows XP SP3) are not supported. Check the compatibility page9289 for details.
Using The Let’s Encrypt HTTPS Certificates In Practice
Scroll down to the “Security” section, and click “Let’s Encrypt for cPanel.”
You are now in the “Let’s Encrypt for cPanel” section. Check both domain names (example.com and www.example.com), and click “Issue Multiple.”
You will be taken to a confirmation screen. Your top-level (i.e. non-www) domain name will be selected as the primary, and your www domain name as an alias, which will be placed in the HTTPS certificate’s “Subject Alt Name” (SAN) record. Click “Issue” to continue. Please be patient and do not refresh the page, because the initial validation might take a minute or two.
If the process completes successfully, you will see a confirmation message. Click “Go back” to see the installed HTTPS certificate.
You will see your domain listed under “Your domains with Let’s Encrypt certificates.” You may check the certificate’s details and verify that the website opens with the https:// prefix in your browser.
The easiest way to set up Let’s Encrypt on your server is with Certbot103. Just select your web server and operating system and follow the instructions.
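As a rough idea of what that looks like, on a Debian-based server running Apache the flow might be something like this (package and plugin names vary by distribution and Certbot version, so treat this as a sketch rather than a recipe):

sudo apt-get install certbot python3-certbot-apache
sudo certbot --apache -d example.com -d www.example.com
# simulate a renewal to confirm that automation will work later
sudo certbot renew --dry-run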
Cloudflare109 is a service that provides a content delivery network (CDN), website security, and protection against distributed denial of service (DDoS) attacks. It features a free HTTPS certificate with all subscription plans, including the free one — a shared DV Cloudflare Universal SSL certificate. In order to have a unique HTTPS certificate, you need to upgrade to the Business plan.
To take advantage, simply create an account, set up your website, and visit the “Crypto” section.
CertSimple110 is an EV-only HTTPS certificate vendor. It is disrupting the EV HTTPS certificate market in a way similar to what Let’s Encrypt is doing in the DV HTTPS certificate market, by providing a faster, easier process of organization validation — an otherwise slow and cumbersome routine. Here are its advantages:
Simplified application procedure
No software to install or command line questions. Live validation, with most details checked before payment.
Fast validation time
Three hours on average versus the industry standard 7 to 10 days.
Free reissues for the lifetime of the certificate
Easily add domain names later or fix a lost private key.
Multiple HTTPS Websites On A Single IP Address
Due to the nature of the handshake process, virtual hosts with a single IP address are a problem for TLS. Virtual hosts work by having the client include the domain name as part of the HTTP request header, but when HTTPS is used, the TLS handshake happens before the HTTP headers are sent — the secure channel must be initialized and fully functional before any plain-text HTTP, including headers, is transmitted. The server therefore does not know up front which HTTPS certificate to present to a connecting client, so it presents the first one it finds in its configuration file, which, of course, only works correctly for the first TLS-enabled website.
There are several workarounds: to have a unique IP for each TLS-enabled domain, or to have all domains in a single certificate. Both are impractical — the IPv4 address space is now used up, and having one big HTTPS certificate means that if you want to add a single website to this server, you’ll need to reissue the whole multiple-domain certificate.
An extension to the TLS protocol, named Server Name Indication (SNI)112, was introduced to overcome this limitation. Both servers and clients should support it, and although SNI support is nowadays widely available, it is still not 100% bulletproof, if compatibility with all possible clients is a requirement.
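You can observe SNI at work with OpenSSL’s s_client: the -servername option tells the server which host you are asking for, and the subject of whatever certificate comes back reveals whether it picked the right one (example.com is a placeholder, of course):

openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null | openssl x509 -noout -subject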
A set of 6 carefully crafted vector illustrations that come with an edgy style, based on the EGO icons design. The drawings come in AI, Sketch and SVG, perfect for illustrating articles or giving a unique touch to your next design project.
We are very happy to share an exclusive illustration set with you to celebrate the EGO icons launch! These unique illustrations have an iconic and angular look like the EGO icons and they can be used for articles, websites and other design projects. We hope you like them!
About the Illustrations
This set of 6 unique and modern vector illustrations comes in AI, SVG and Sketch formats and is easy to customize. The illustrations depict various hot topics and are composed of meaningful details. Their unique EGO look will elevate any article or web design and give it a futuristic touch.
The illustration set contains:
6 unique illustrations with an angular style
AI, SVG and Sketch format
Preview
Have a look at all 6 illustrations:
About the EGO Icon Set
EGO is the latest icon set from Webalys, the creator of Streamline and Nova Icons. The EGO icons took two years to design, and the result is a massive set of 3,600 unique and versatile icons with personality, spanning 100 categories and two customizable styles. The icons have a futuristic look and can bring a distinctive touch to any project.
All icons are provided in .SVG, .PDF, .AI, .SKETCH, .EPS, and .iconjar formats, so you can fire ‘em up in your favorite graphics software. They come in two modern styles: a monoline style, drawn with a single minimal stroke, and a duotone style, with shaded parts that bring more depth.
Icon colors can be changed in seconds. Using the “Shared Styles” in Sketch, or the “Global Colors” in Illustrator, quickly apply a new color scheme to all icons!
The EGO icon pack is fresh and forward-facing — perfect for making your apps, web interfaces, and UI designs stand out in the crowd. Preview the complete collection with 3,600 fresh-to-death vector icons here:
When did you take your last vacation? For many of us, it was probably a long time ago. However, for quite a while now, I have been stumbling across more and more stories about companies that take unusual steps vacation-wise: companies giving their employees a day off each week in summer, or going on vacation together as a team-building event instead of traveling somewhere just to work.
But while there’s a new generation building their dream work environments, a lot of people still suffer from very bad working conditions. They work long hours and are discriminated against or harassed by colleagues or their managers. And just this week, I heard that many company owners are desperate because “Generation Y” doesn’t want to work long hours anymore.
As with every major generational shift, I think it’s good to have people standing up for their rights and enjoying their life and their work. But we also need to talk with executives who are under pressure to meet deadlines, because only if we show them evidence that working less can benefit their employees’ health and productivity can we convince them to let us work more freely.
Git 2.13 was released5 and comes with SHA-1 collision detection, more convenient path specs, and conditional configuration which is especially nice if you work for several teams or on several projects.
Laïla von Alvensleben shares why the company she works at decided to do a different kind of team trip for a change17. When traveling to Sri Lanka, the Hanno team chose not to bring their laptops or plan work-related meetings or activities, but to focus on yoga and surfing instead to bond as a team.
Fuse is a toolkit for creating apps that run on both iOS and Android devices. It enables you to create apps using UX Markup, an XML-based language. But unlike the components in React Native1 and NativeScript2, Fuse is not only used to describe the UI and layout; you can also use it to add effects and animation.
Styles are described by adding attributes such as Color and Margin to the various elements. Business logic is written using JavaScript. Later on, we’ll see how all of these components are combined to build a truly native app.
In this article, you will learn what Fuse is all about. We’ll see how it works and how it compares to other platforms such as React Native and NativeScript. In the second half of the article3, you will create your first Fuse app. Specifically, you will create a weather app that shows the weather based on the user’s current location. Here’s what the output will look like:
In creating the app, you will learn how to use some of Fuse’s built-in UI components and learn how to access native device functionality such as geolocation. Towards the end of the article, you will consolidate your learning by looking at the advantages and disadvantages of using Fuse for your next mobile app project.
On the top layer are the UX Markup and JavaScript. This is where we will spend most of our time when working with Fuse. On the middle layer are the libraries that are packaged with Fuse. This includes the JavaScript APIs that allow access to native device features such as geolocation and the camera. Lastly, on the bottom layer is the Uno compiler, which is responsible for translating the UX Markup into pure native code (Objective-C for iOS and C++ for Android). Once the app runs, all of the UI that you will see will be native UI for that particular platform. JavaScript code is executed via a virtual machine on a separate thread. This makes the UI really snappy because JavaScript won’t affect the UI’s performance.
How Does It Compare To React Native And NativeScript?
Before we create an app with Fuse, one of the important questions that needs to be answered is how it stacks up against existing tools that do the same job. In this section, we’ll learn about the features and tools available in Fuse compared to those of React Native and NativeScript, as well as how things are done on each platform. Specifically, we’ll compare UI components, layout, access to native device features, third-party libraries, animation and community.
On all platforms, the UI can be built using an XML-based language. Common UI components such as text fields, switches and sliders are available on each platform.
React Native has the most of these components, although some are not unified, which means that there can be up to two ways to use a particular component: one that can be used on both platforms, and one for a specific platform only. A few components, such as the ProgressBar, are also implemented differently on each platform, which means that it’s not totally “write once, run everywhere.”
On the other hand, NativeScript has a unified way of implementing the different UI components on each platform. For every component, there’s an equivalent native component for both Android and iOS.
Fuse has a decent number of UI components that will cover the requirements of most projects. One component that’s not built into either React Native or NativeScript is the Video component, which can be used to play local videos and even videos from the Internet. The only component that is currently missing is a date-picker, which is especially useful during user registration — though you can always create your own using the components that are already available in Fuse.
In React Native, layout is done with Flexbox. In a nutshell, Flexbox enables you to specify how content should flow through the available space. For example, you can set flex to 1 and flexDirection to row in a container element in order to equally divide the available space among the children and to arrange them horizontally.
In NativeScript, layout is achieved using layout containers, the most basic one being StackLayout, which puts all elements on top of each other, just like in the example below. In a horizontal orientation, they’re placed side by side.
Similarly, Fuse achieves layout by using a combination of the different elements in UX Markup, the most common ones being StackPanel, Grid and DockPanel. StackPanel works similar to StackLayout in NativeScript. Here’s an example:
All of the platforms cover all of the basics with JavaScript APIs. Things like camera functionality, platform information, geolocation, push notifications, HTTP requests and local storage can be done on all platforms. However, looking at the documentation for each platform, you could say that React Native has the most JavaScript APIs that bridge the gap between native and “JavaScript native” features. There’s no official name yet for platforms such as React Native, NativeScript and Fuse, so let’s just stick with “JavaScript native” for now, because they all use JavaScript to write code and they all offer native-like performance.
If you need access to specific device features that don’t expose a JavaScript API yet, each platform also provides ways for developers to tap into native APIs for Android and iOS.
NativeScript gives you access to all of the native APIs of the underlying platform through JavaScript. This means you don’t have to touch any Swift, Objective-C or Java code in order to make use of the native APIs. The only requirement is that you know how the native APIs work.
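For example, showing a native Android toast can be done entirely from JavaScript. A sketch, using the utils helper and the Android Toast API as described in the NativeScript and Android documentation:

var utils = require('utils/utils');

// On Android, the android.* namespace is available globally in a NativeScript app,
// so native classes such as Toast can be called directly from JavaScript.
var context = utils.ad.getApplicationContext();
android.widget.Toast
    .makeText(context, 'Hello from a native API!', android.widget.Toast.LENGTH_SHORT)
    .show();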
React Native falls a bit short in accessing native APIs because you’ll have to know the native language in order to extend native functionality. This is done by creating a native module (an Objective-C class for iOS or a Java class for Android), exposing your desired public methods to JavaScript, then importing it into your project.
Fuse allows you to extend functionality through a feature that it refers to as “foreign code.” This allows you to call native code on each platform through the Uno language. The Uno language is the core technology of Fuse. It’s what makes Fuse work behind the scenes. Making use of native features that aren’t supported by the core Fuse library is done by creating an Uno class. Inside the Uno class, you can write the Objective-C or Java code that implements the functionality you want and have it exposed as JavaScript code, which you can then call from your project.
Both React Native and NativeScript support the use of any npm package that doesn’t have dependencies on the browser model. This means you can use libraries such as lodash and moment simply by executing npm install {package-name} in your project directory and then importing them in any of your project files, just like in a normal JavaScript project.
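For example, after running npm install lodash in the project directory, you might use it like any other Node.js module:

var _ = require('lodash');

// lodash works as-is because it has no dependency on the browser
var shuffled = _.shuffle([1, 2, 3, 4, 5]);
console.log(shuffled);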
Fuse, on the other hand, is currently lacking in this regard. Usage of existing JavaScript libraries is mostly not possible; only a short list of libraries10 are known to work. The good news is that the developers are constantly working on polyfills to improve compatibility with existing libraries.
Another important part of the UX is animation. In React Native, animation is implemented via its Animated API. With it, you can customize the animation a lot. For example, you can specify how long an animation takes or how fast it runs. But this comes with the downside of not being beginner-friendly. Even simple animation such as scaling a particular element requires a lot of code. The good thing is that libraries such as React Native Animatable11 make it easier to work with animation. Here’s sample code for implementing a fadeIn animation using the Animatable library:
<Animatable.View animation="fadeIn">Fade me in!</Animatable.View>
NativeScript animations can be implemented in two ways: via the CSS3 animations API or the JavaScript API. Here’s an example of scaling an element with an ID of box:

var view = page.getViewById('box'); // there must be an element with an ID of "box" in the markup
view.animate({ scale: { x: 1.5, y: 1.5 }, duration: 1000 });
Animation in Fuse is implemented via triggers and animators. Triggers are used to detect whether something is happening in the app, whereas animators are used to respond to those events. For example, to make something bigger when pressed, you would attach a trigger to the element with an animator inside it.
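Roughly, using the WhilePressed trigger and the Scale animator from the Fuse documentation (the sizes and colors are arbitrary):

<Panel Width="100" Height="100" Color="#f38844">
    <WhilePressed>
        <Scale Factor="1.5" Duration="0.3" />
    </WhilePressed>
</Panel>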
When it comes to community, React Native is the clear winner. Just the fact that it was created by Facebook is a big deal. Because the main technology used to create apps is React, React Native taps into that community as well. This means that a lot of projects can help you to develop apps. For example, you can reuse existing React components for your React Native project. And because many people use it, you can expect to quickly get help when you get stuck, because you can just search for an answer on Stack Overflow. React Native is also open-source, and the source code is available on GitHub12. This makes development really fast because the maintainers can accept help from developers outside of the organization.
NativeScript, meanwhile, was created by Telerik. The project has a decent-sized community behind it. If you look at its GitHub page13, currently over 10,000 people have starred the project. It has been forked 700 times, so one can assume that the project is getting a lot of contributions from the community. There are also a lot of NativeScript packages on npm and questions on Stack Overflow, so expect that you won’t have to implement custom functionality from scratch or be left alone looking for answers if you get stuck.
Fuse is the lesser known among the three. It doesn’t have a big company backing it up, and Fuse is basically the company itself. Even so, the project comes complete with documentation, a forum, a Slack channel, sample apps, sample code and video tutorials, which make it very beginner-friendly. The Fuse core is not yet open-source, but the developers will be making the code open-source soon.
With React Native and NativeScript, you need to have an actual mobile device or an emulator if you want to view changes while you’re developing the app. Both platforms also support live reloading, so every time you make a change to the source files, it automatically gets reflected in the app — although there’s a slight delay, especially if your machine isn’t that powerful.
Fuse, on the other hand, allows you to preview the app both locally and on any number of devices currently connected to your network. This means that both designers and developers can work at the same time and be able to preview changes in real time. This is helpful to the designer because they can immediately see what the app looks like with real data supplied by the developer’s code.
When it comes to debugging, both React Native and NativeScript tap into Chrome’s Developer Tools. If you’re coming from a web development background, the debugging workflow should make sense to you. That being said, not all of the features that you’re used to when inspecting and debugging web projects are available. For example, both platforms allow you to debug JavaScript code, but UI inspection is limited: React Native has a built-in inspector that is the closest thing to the element inspector in Chrome’s Developer Tools, while NativeScript currently doesn’t have this feature.
On the other hand, Fuse uses the Debugging Protocol in Google’s V8 engine to debug JavaScript code. This allows you to do things like add breakpoints to your code and inspect what each object contains at each part in the execution of the code. The Fuse team encourages the use of the Visual Studio Code14 text editor for this, but any text editor or IDE that supports V8’s Debugging Protocol should work. If you want to inspect and visually edit the UI elements, Fuse also includes an inspector — although it allows you to adjust only a handful of properties at the moment, things like widths, heights, margins, padding and colors.
Now you’re ready to create a simple weather app with Fuse. It will get the user’s location via the GeoLocation API and will use the OpenWeatherMap API to determine the weather in the user’s location and then display it on the screen. You can find the full source code of the app in the GitHub repository15.
To start, go to the OpenWeatherMap website and sign up for an account16. Once you’re done signing up, it should provide you with an API key, which you can use to make a request to its API later on.
Next, visit the Fuse downloads page17, enter your email address, download the Fuse installer for your platform, and then install it. Once it’s installed, launch the Fuse dashboard and click on “New Project”. This will open another window that will allow you to select the path to your project and enter the project’s name.
Do that and then click on the “Create” button to create your project. If you’re using Sublime Text 3, you can click on the “Open in Sublime Text 3” button to open a new Sublime Text instance with the Fuse project20 already loaded. Once you’re in there, the first thing you’ll want to do is install the Fuse package. This includes code completion, “Goto definition,” previewing the app from Sublime and viewing the build.
Once the Fuse plugin is installed, open the MainView.ux file. This is the main file that we will be working with in this project. By default, it includes sample code for you to play with. Feel free to remove all of the contents of the file once you’re done inspecting it.
When you create an app with Fuse, you always start with the <App> tag. This tells Fuse that you want to create a new app.
<App></App>
Fuse allows you to reuse icon fonts that are commonly used for the web. Here, we’re using Weather Icons21. Use the <Font> tag to specify the location of the web font file in your app directory via the File attribute. For this project, it’s in the fonts folder at the root directory of the project. We also need to give it a ux:Global attribute, which will serve as its ID when you want to use this icon font later on.
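Assuming the font file sits in a fonts folder at the root of the project, the declaration might look something like this (the file name is just an example):

<Font File="fonts/weathericons-regular-webfont.ttf" ux:Global="WeatherIcons" />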
Next, we have the JavaScript code. We can include JavaScript code anywhere in UX Markup by using the <JavaScript> tag. Inside the tag will be the JavaScript code to be executed.
<JavaScript></JavaScript>
In the <JavaScript> tag, require two built-in Fuse libraries: Observable and GeoLocation. Observable allows you to implement data-binding in Fuse. This makes it possible to change the value of the variable via JavaScript code and have it automatically reflected in the UI of the app. Data-binding in Fuse is also two-way; so, if a change is made to a value via the UI, then the value stored in the variable will also be updated, and vice versa.
var Observable = require('FuseJS/Observable');
GeoLocation allows you to get location information from the user’s device.
var Geolocation = require('FuseJS/GeoLocation');
Create an object containing the hex code for each of the weather icons that we want to use. You can find the hex code on the GitHub page of the icon font22.
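A trimmed-down sketch of that object (the character codes below are placeholders; look up the real ones in the icon font’s reference):

var icons = {
    'clear sky': '\uf00d',    // placeholder code for a "sunny" icon
    'few clouds': '\uf002',   // placeholder code for a "cloudy" icon
    'rain': '\uf019'          // placeholder code for a "rain" icon
};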
Determine whether it’s currently day or night based on the time on the user’s device. We’ll use orange as the background color for the app if it’s day, and purple if it’s nighttime.
var hour = (new Date()).getHours();
var color = '#7417C0';
if (hour >= 5 && hour <= 18) {
    color = '#f38844';
}
Add the OpenWeatherMap API key that you got earlier and create an observable variable that will hold the weather data.
var api_key = 'YOUR OPENWEATHERMAP API KEY';
var weather_data = Observable();
Get the location information:
var loc = Geolocation.location;
This will return an object containing the latitude, longitude and accuracy of the location. However, Fuse currently has a problem with getting location information on Android. If the location setting is disabled on the device, it won’t ask you to enable it when you open the app. So, as a workaround, you’ll need to first enable location before launching the app.
Make a request to the OpenWeatherMap API using the fetch() function. This function is available in Fuse’s global scope, so you can call it from anywhere without including any additional libraries. This will work the same way as the fetch() function available in modern browsers: It also returns a promise that you need to listen to using the then() function. When the supplied callback function is executed, the raw response is passed in as an argument. You can’t really use this yet since it contains the whole response object. To extract the data that the API actually returned, you need to call the json() function in the response object. This will return another promise, so you need to use then() one more time to extract the actual data. The data is then assigned as the value of the observable that we created earlier.
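Put together, the request might look roughly like this (the query parameters mirror the req_url used later in the article, and loc comes from the Geolocation call above):

var req_url = 'http://api.openweathermap.org/data/2.5/weather?lat=' + loc.latitude +
    '&lon=' + loc.longitude + '&apikey=' + api_key;

fetch(req_url)
    .then(function(response) {
        return response.json(); // extract the JSON data from the raw response
    })
    .then(function(data) {
        weather_data.value = data; // assigning to the observable updates the UI
    });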
Export the variables so that they become available to the UI.
module.exports = {
    weather_data: weather_data,
    icons: icons,
    color: color
};
Because this project is very small, I’ve decided to put everything in one file. But for real projects, the JavaScript code and the UX Markup should be separated. This is because the designers are the ones who normally work with UX Markup, and the developers are the ones who touch the JavaScript code. Separating the two allows the designer and the developer to work on the same page at the same time. You can separate the JavaScript code by creating a new JavaScript file in the project folder and then link it in your markup, like so:
<JavaScript File="js/weather.js" />
Finally, add the actual UI of the app. Here, we’re using <DockPanel> to wrap all of the elements. By default, <DockPanel> has a Dock property that is set to Fill, so it’s the perfect container for filling the entire screen with content. Note that we didn’t need to set that property below because it’s implicitly added. Below, we have only assigned a Color attribute, which allows us to set the background color using the color that we exported earlier.
<DockPanel Color="{color}"></DockPanel>
Inside <DockPanel> is <StatusBarBackground>, which we’ll dock to the top of the screen. This allows us to show and customize the status bar on the user’s device. If you don’t use this component, <DockPanel> will consume the entirety of the screen, including the status bar. Simply setting this component will make the status bar visible. We don’t really want to customize it, so we’ll just leave the defaults.
<StatusBarBackground Dock="Top" />
Below <StatusBarBackground> is the actual content. Here, we’re wrapping everything in a <ScrollView> to enable the user to scroll vertically if the content goes over the available space. Inside is <StackPanel>, containing all of the weather data that we want to display. This includes the name of the location, the icon representing the current weather, the weather description and the temperature. You can display the variables that we exported earlier by wrapping them in braces. For objects, individual properties are accessed just like you would in JavaScript.
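A simplified sketch of that markup (the property paths into weather_data depend on how you shape the OpenWeatherMap response, so treat them as placeholders):

<ScrollView>
    <StackPanel Alignment="Center">
        <Text Value="{weather_data.name}" TextColor="#fff" Alignment="Center" />
        <Text Value="{weather_data.description}" TextColor="#fff" Alignment="Center" />
        <Text Value="{weather_data.temperature}" TextColor="#fff" Alignment="Center" />
    </StackPanel>
</ScrollView>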
You might also notice that all attributes and their values are always capitalized; this is the standard in Fuse, and writing them in all lowercase or all uppercase won’t work. Also, notice that Alignment="Center" and TextColor="#fff" are repeated a few times. This is because Fuse doesn’t have the concept of inheritance when it comes to styling properties, so setting TextColor or Alignment in a parent component won’t affect the nested components. This means we need to repeat them for each component. This can be mitigated by creating components and then simply reusing them without specifying the same style properties again, but that isn’t really flexible enough, especially if you need a different combination of styles for each component.
The last thing you’ll need to do is to open the {your project name}.unoproj file at the root of your project folder. This is the Uno project file. By default, it contains the following:
This file specifies what packages and files to include in the app’s build. By default, it includes the Fuse and FuseJS packages and all of the files in the project directory. If you don’t want to include all of the files, edit the items in the Includes array, and use a glob pattern to target specific files:
"Includes":["*.ux","js/*.js"]
You can also use Excludes to blacklist files:
"Excludes":["node_modules/"]
Going back to the Packages section, the Fuse and FuseJS packages allow you to use Fuse-specific libraries. This includes utility functions such as getting the environment in which Fuse is currently running:
var env = require('FuseJS/Environment');
if (env.mobile) {
    debug_log("There's geo here!");
}
To keep things lightweight, Fuse includes only the very basics. So, you’ll need to import things like geolocation as separate packages:
"Packages":["Fuse","FuseJS","Fuse.GeoLocation"],
Once Fuse.GeoLocation has been added, Fuse will add the necessary libraries and permissions to the app once you’ve compiled the project.
When you preview the project, you can pick whether to run on Android, iOS or locally. (Note that there is no iOS option in the screenshot because I’m running on Windows.) Select “Local” for now, and then click on “Start.” This should show you a blank screen, because geolocation won’t really work in a local preview. What you can do is close the preview, then update req_url to use the following instead, which allows you to specify a place instead of coordinates:
var req_url = 'http://api.openweathermap.org/data/2.5/weather?q=london,uk&apikey=' + api_key;
You’ll also need to comment out all of the code that uses geolocation:
Run the app again, and it should show you something similar to the screenshot at the beginning of the article.
If you want to run on a real device, please check “Preview and Export25” in the documentation. It contains detailed information on how to deploy your app to both Android and iOS devices.
Now that you have tested the waters, it’s time to look at some of the pros and cons of using Fuse for your next mobile app project. As you have seen so far, Fuse is both developer- and designer-friendly, because of its real-time updates and multi-device preview feature, which enables developers and designers to work at the same time. Combine that with the native UX and access to device features, and you’ve got yourself a complete platform for building cross-platform apps. This section will drive home the point on why you should (or shouldn’t) use Fuse for your next mobile app project. First, let’s look at the advantages.
Fuse is developer-friendly because it uses JavaScript for the business logic. This makes it a very approachable platform for creating apps, especially for web developers and people who have some JavaScript experience. In addition, it plays nice with JavaScript transpilers such as Babel. This means that developers can use new ECMAScript 6 features to create Fuse apps.
At the same time, Fuse is designer-friendly because it allows you to import assets from tools such as Sketch, and it will automatically take care of slicing and exporting the pieces for you.
Aside from that, Fuse clearly separates the business logic and presentation code. The structure, styles and animations are all done in UX Markup. This means that business-logic code can be placed in a separate file and simply linked from the app page. The designer can then focus on designing the user experience. Being able to implement animations using UX Markup makes things simpler and easier for the designer.
Fuse makes it very easy for designers and developers to collaborate in real time. It allows for simultaneous previewing of the app on multiple devices. You only need a USB connection the first time you set up a device; after that, all you need to do is connect the device to the same Wi-Fi network as your development machine, and all of your changes will be automatically reflected on every device where the app is open. The sweetest part is that changes get pushed to all the devices almost instantly. And it works not just for code changes: any change you make to a linked asset (such as an image) will trigger the app to reload as well.
Fuse also comes with a preview feature that allows you to test changes without a real device. It’s like an emulator but a lot faster. In “design mode,” you can edit the appearance of the app using the graphical user interface. Developers will also benefit from the logging feature, which allows them to easily debug the app if there are any errors.
If you need functionality not already provided by the Fuse libraries, Fuse also allows you to implement the functionality yourself using Uno. Uno is a language created by the Fuse team itself. It’s a sub-language of C# that compiles to C++. This is Fuse’s way of letting you access the native APIs of each platform (Android and iOS).
UX Markup is converted to the native UI equivalent at compile time. This makes the UI really snappy and is comparable to native performance. And because animations are also written declaratively using UX Markup, animations are done natively as well. Behind the scenes, Fuse uses OpenGL ES26 acceleration to make things fast.
No tool is perfect, and Fuse is no exception. Here are a few things to consider before picking Fuse.
Structure and style are mixed together. This makes the code a bit difficult to edit because you have to specify styles separately for each element. This can be alleviated by creating components in which you put common styles.
Linux is not supported, and it’s not currently on the road map, though Linux developers who want to try out Fuse can still use a Windows virtual machine or Wine to install and run it.
It’s still in beta, which means it’s still rough around the edges and not all the features that you might expect from a mobile app development platform is supported. That said, Fuse is very stable and does a good job at the small set of features that it currently supports.
It’s not open-source, though there are plans to open-source the core Fuse platform. This doesn’t mean that you can’t use Fuse for anything though. You can freely use Fuse to create production apps. If you’re interested about licensing, you can read more about it in the Fuse License Agreement.27
We’ve learned about Fuse, a newcomer in the world of JavaScript native app development. From what I’ve seen so far, I can say that this project has a lot of potential. It really shines in multi-device support and animation. And the fact that it’s both designer- and developer-friendly makes it a great tool for developing cross-platform apps.
There’s a lot of hype about WebAssembly in JavaScript circles today. People talk about how blazingly fast it is, and how it’s going to revolutionize web development. But most conversations don’t go into the details of why it’s fast. In this article, I want to help you understand what exactly it is about WebAssembly that makes it fast.
But first, what is it? WebAssembly is a way of taking code written in programming languages other than JavaScript and running that code in the browser.
When you’re talking about WebAssembly, the apples to apples comparison is with JavaScript. Now, I don’t want to imply that it’s an either/or situation — that you’re either using WebAssembly or using JavaScript. In fact, we expect that developers will use WebAssembly and JavaScript hand-in-hand, in the same application. But it is useful to compare the two, so you can understand the potential impact that WebAssembly will have.
JavaScript was created in 1995. It wasn’t designed to be fast, and for the first decade, it wasn’t fast.
Then the browsers started getting more competitive.
In 2008, a period that people call the performance wars began. Multiple browsers added just-in-time compilers, also called JITs. As JavaScript was running, the JIT could see patterns and make the code run faster based on those patterns.
The introduction of these JITs led to an inflection point in the performance of code running in the browser. All of the sudden, JavaScript was running 10x faster.
When you as a developer add JavaScript to the page, you have a goal and a problem.
Goal: you want to tell the computer what to do.
Problem: you and the computer speak different languages.
You speak a human language, and the computer speaks a machine language. Even if you don’t think about JavaScript or other high-level programming languages as human languages, they really are. They’ve been designed for human cognition, not for machine cognition.
So the job of the JavaScript engine is to take your human language and turn it into something the machine understands.
I think of this like the movie Arrival, where you have humans and aliens who are trying to talk to each other.
In that movie, the humans and aliens can’t just translate from one language to the other, word-for-word. The two groups have different ways of thinking about the world, which is reflected in their language. And that’s true of humans and machines too.
So how does the translation happen?
In programming, there are generally two ways of translating to machine language. You can use an interpreter or a compiler.
With an interpreter, this translation happens pretty much line-by-line, on the fly.
Interpreters are quick to get code up and running. You don’t have to go through that whole compilation step before you can start running your code. Because of this, an interpreter seems like a natural fit for something like JavaScript. It’s important for a web developer to be able to have that immediate feedback loop.
And that’s part of why browsers used JavaScript interpreters in the beginning.
But the con of using an interpreter comes when you’re running the same code more than once. For example, if you’re in a loop. Then you have to do the same translation over and over and over again.
The compiler has the opposite trade-offs. It takes a little bit more time to start up because it has to go through that compilation step at the beginning. But then code in loops runs faster, because it doesn’t need to repeat the translation for each pass through that loop.
As a way of getting rid of the interpreter’s inefficiency — where the interpreter has to keep retranslating the code every time it goes through the loop — browsers started mixing compilers in.
Different browsers do this in slightly different ways, but the basic idea is the same. They added a new part to the JavaScript engine, called a monitor (aka a profiler). That monitor watches the code as it runs, and makes a note of how many times it is run and what types are used.
If the same lines of code are run a few times, that segment of code is called warm. If it’s run a lot, then it’s called hot. Warm code is put through a baseline compiler, which speeds it up a bit. Hot code is put through an optimizing compiler, which speeds it up more.
Let’s Compare: Where Time Is Spent When Running JavaScript Vs. WebAssembly Link
This diagram gives a rough picture of what the start-up performance of an application might look like today, now that JIT compilers are common in browsers.
This diagram shows where the JS engine spends its time for a hypothetical app. This isn’t showing an average. The time that the JS engine spends doing any one of these tasks depends on the kind of work the JavaScript on the page is doing. But we can use this diagram to build a mental model.
Each bar shows the time spent doing a particular task.
Parsing — the time it takes to process the source code into something that the interpreter can run.
Compiling + optimizing — the time that is spent in the baseline compiler and optimizing compiler. Some of the optimizing compiler’s work is not on the main thread, so it is not included here.
Re-optimizing — the time the JIT spends readjusting when its assumptions have failed, both re-optimizing code and bailing out of optimized code back to the baseline code.
Execution — the time it takes to run the code.
Garbage collection — the time spent cleaning up memory.
One important thing to note: these tasks don’t happen in discrete chunks or in a particular sequence. Instead, they will be interleaved. A little bit of parsing will happen, then some execution, then some compiling, then some more parsing, then some more execution, etc.
This performance breakdown is a big improvement from the early days of JavaScript, which would have looked more like this:
In the beginning, when it was just an interpreter running the JavaScript, execution was pretty slow. When JITs were introduced, it drastically sped up execution time.
The tradeoff is the overhead of monitoring and compiling the code. If JavaScript developers kept writing JavaScript in the same way that they did then, the parse and compile times would be tiny. But the improved performance led developers to create larger JavaScript applications.
This means there’s still room for improvement.
Here’s an approximation of how WebAssembly would compare for a typical web application.
This isn’t shown in the diagram, but one thing that takes up time is simply fetching the file from the server.
It takes less time to download WebAssembly than it does the equivalent JavaScript, because it’s more compact. WebAssembly was designed to be compact, and it can be expressed in a binary form.
Even though gzipped JavaScript is pretty small, the equivalent code in WebAssembly is still likely to be smaller.
This means it takes less time to transfer it between the server and the client. This is especially true over slow networks.
Once it reaches the browser, JavaScript source gets parsed into an Abstract Syntax Tree.
Browsers often do this lazily, only parsing what they really need to at first and just creating stubs for functions which haven’t been called yet.
From there, the AST is converted to an intermediate representation (called bytecode) that is specific to that JS engine.
In contrast, WebAssembly doesn’t need to go through this transformation because it is already a bytecode. It just needs to be decoded and validated to make sure there aren’t any errors in it.
As I explained before, JavaScript is compiled during the execution of the code. Because types in JavaScript are dynamic, multiple versions of the same code may need to be compiled for different types. This takes time.
In contrast, WebAssembly starts off much closer to machine code. For example, the types are part of the program. This is faster for a few reasons:
The compiler doesn’t have to spend time running the code to observe what types are being used before it starts compiling optimized code.
The compiler doesn’t have to compile different versions of the same code based on those different types it observes.
More optimizations have already been done ahead of time in LLVM. So less work is needed to compile and optimize it.
Sometimes the JIT has to throw out an optimized version of the code and retry it.
This happens when assumptions that the JIT makes based on running code turn out to be incorrect. For example, deoptimization happens when the variables coming into a loop are different than they were in previous iterations, or when a new function is inserted in the prototype chain.
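As a rough illustration, here’s the kind of JavaScript that can trigger this (the function is hypothetical):

function sum(items) {
    var total = 0;
    for (var i = 0; i < items.length; i++) {
        total += items[i];
    }
    return total;
}

sum([1, 2, 3]);        // the JIT optimizes the loop for number addition
sum([0.5, 1.5, 2.5]);  // still numbers; depending on the engine, the optimized code may keep working
sum(['a', 'b', 'c']);  // now + means string concatenation, so the optimized
                       // version is thrown away and execution falls back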
In WebAssembly, things like types are explicit, so the JIT doesn’t need to make assumptions about types based on data it gathers during runtime. This means it doesn’t have to go through reoptimization cycles.
It is possible to write JavaScript that executes performantly. To do it, you need to know about the optimizations that the JIT makes.
However, most developers don’t know about JIT internals. Even for those developers who do know about JIT internals, it can be hard to hit the sweet spot. Many coding patterns that people use to make their code more readable (such as abstracting common tasks into functions that work across types) get in the way of the compiler when it’s trying to optimize the code.
Because of this, executing code in WebAssembly is generally faster. Many of the optimizations that JITs make to JavaScript just aren’t necessary with WebAssembly.
In addition, WebAssembly was designed as a compiler target. This means it was designed for compilers to generate, and not for human programmers to write.
Since human programmers don’t need to program it directly, WebAssembly can provide a set of instructions that are more ideal for machines. Depending on what kind of work your code is doing, these instructions run anywhere from 10% to 800% faster.
In JavaScript, the developer doesn’t have to worry about clearing out old variables from memory when they aren’t needed anymore. Instead, the JS engine does that automatically using something called a garbage collector.
This can be a problem if you want predictable performance, though. You don’t control when the garbage collector does its work, so it may come at an inconvenient time.
For now, WebAssembly does not support garbage collection at all. Memory is managed manually (as it is in languages like C and C++). While this can make programming more difficult for the developer, it does also make performance more consistent.
Taken together, these are all reasons why in many cases, WebAssembly will outperform JavaScript when doing the same task.
There are some cases where WebAssembly doesn’t perform as well as expected, and there are also some changes on the horizon that will make it faster. I have covered these future features in more depth36 in another article.
I want to take a look now at how that alien brain works — how the machine’s brain parses and understands the communication coming in to it.
There’s a part of this brain that’s dedicated to the thinking , e.g. arithmetic and logic. There’s also a part of the brain near that which provides short-term memory, and another part that provides longer-term memory.
These different parts have names.
The part that does the thinking is the Arithmetic-logic Unit (ALU).
The short term memory is provided by registers.
The longer term memory is the Random Access Memory (or RAM).
The sentences in machine code are called instructions.
What happens when one of these instructions comes into the brain? It gets split up into different parts that mean different things.
The way that this instruction is split up is specific to the wiring of this brain.
For example, this brain might always take bits 4–10 and send them to the ALU. The ALU will figure out, based on the location of ones and zeros, that it needs to add two things together.
This chunk is called the “opcode”, or operation code, because it tells the ALU what operation to perform.
Note the annotations I’ve added above the machine code here, which make it easier for us to understand what’s going on. This is what assembly is. It’s called symbolic machine code. It’s a way for humans to make sense of the machine code.
You can see here there is a pretty direct relationship between the assembly and the machine code for this machine. When you have a different architecture inside of a machine, it is likely to require its own dialect of assembly.
So we don’t just have one target for our translation. Instead, we target many different kinds of machine code. Just as we speak different languages as people, machines speak different languages.
You want to be able to translate any one of these high-level programming languages down to any one of these assembly languages. One way to do this would be to create a whole bunch of different translators that can go from each language to each assembly.
That’s going to be pretty inefficient. To solve this, most compilers put at least one layer in between. The compiler will take this high-level programming language and translate it into something that’s not quite as high level, but also isn’t working at the level of machine code. And that’s called an intermediate representation (IR).
This means the compiler can take any one of these higher-level languages and translate it to the one IR language. From there, another part of the compiler can take that IR and compile it down to something specific to the target architecture.
The compiler’s front-end translates the higher-level programming language to the IR. The compiler’s backend goes from IR to the target architecture’s assembly code.
You might think of WebAssembly as just another one of the target assembly languages. That is kind of true, except that each one of those languages (x86, ARM, etc) corresponds to a particular machine architecture.
When you’re delivering code to be executed on the user’s machine across the web, you don’t know what target architecture the code will be running on.
So WebAssembly is a little bit different than other kinds of assembly. It’s a machine language for a conceptual machine, not an actual, physical machine.
Because of this, WebAssembly instructions are sometimes called virtual instructions. They have a much more direct mapping to machine code than JavaScript source code, but they don’t directly correspond to the particular machine code of one specific hardware.
The browser downloads the WebAssembly. Then, it can make the short hop from WebAssembly to that target machine’s assembly code.
The compiler tool chain that currently has the most support for WebAssembly is called LLVM. There are a number of different front-ends and back-ends that can be plugged into LLVM.
Let’s say that we wanted to go from C to WebAssembly. We could use the clang front-end to go from C to the LLVM intermediate representation. Once it’s in LLVM’s IR, LLVM understands it, so LLVM can perform some optimizations.
To go from LLVM’s IR to WebAssembly, we need a back-end. There is one that’s currently in progress in the LLVM project. That back-end is most of the way there and should be finalized soon. However, it can be tricky to get it working today.
There’s another tool called Emscripten which is a bit easier to use. It also optionally provides helpful libraries, such as a filesystem backed by IndexDB.
We’re working on making this process easier. We expect to make improvements to the toolchain and integrate with existing module bundlers like webpack or loaders like SystemJS. We believe that loading WebAssembly modules can be as easy as as loading JavaScript ones.
There is a major difference between WebAssembly modules and JS modules, though. Currently, functions in WebAssembly can only use WebAssembly types (integers or floating point numbers) as parameters or return values.
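For example, calling into a module from JavaScript is straightforward as long as you stick to numbers. This sketch assumes a module.wasm file that exports an add function:

fetch('module.wasm')
    .then(function(response) { return response.arrayBuffer(); })
    .then(function(bytes) { return WebAssembly.instantiate(bytes); })
    .then(function(result) {
        // exported WebAssembly functions take and return plain numbers
        console.log(result.instance.exports.add(2, 3)); // 5
    });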
For any data types that are more complex, like strings, you have to use the WebAssembly module’s memory.
If you’ve mostly worked with JavaScript, having direct access to memory is unfamiliar. More performant languages like C, C++, and Rust, tend to have manual memory management. The WebAssembly module’s memory simulates the heap that you would find in those languages.
To do this, it uses something in JavaScript called an ArrayBuffer. The array buffer is an array of bytes. So the indexes of the array serve as memory addresses.
If you want to pass a string between the JavaScript and the WebAssembly, you convert the characters to their character code equivalent. Then you write that into the memory array. Since indexes are integers, an index can be passed in to the WebAssembly function. Thus, the index of the first character of the string can be used as a pointer.
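A sketch of that idea, assuming the module exports its memory and a greet function that takes a pointer (both names are hypothetical, and instance is the result.instance from the previous snippet):

var memory = instance.exports.memory;      // a WebAssembly.Memory object exported by the module
var view = new Uint8Array(memory.buffer);  // byte-level view over the module's heap

var str = 'hello';
for (var i = 0; i < str.length; i++) {
    view[i] = str.charCodeAt(i);           // write each character code into memory
}
view[str.length] = 0;                      // C-style null terminator

instance.exports.greet(0);                 // pass the index of the first character as the pointer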
It’s likely that anybody who’s developing a WebAssembly module to be used by web developers is going to create a wrapper around that module. That way, you as a consumer of the module don’t need to know about memory management.
On February 28, the four major browsers announced their consensus that the MVP of WebAssembly is complete. Firefox turned WebAssembly support on-by-default about a week after that, and Chrome followed the next week. It is also available in preview versions of Edge and Safari.
This provides a stable initial version that browsers can start shipping.
This core doesn’t contain all of the features that the community group is planning. Even in the initial release, WebAssembly will be fast. But it should get even faster in the future, through a combination of fixes and new features. I detail some of these features65 in another article.
With WebAssembly, it is possible to run code on the web faster. There are a number of reasons why WebAssembly code runs faster than its JavaScript equivalent.
Downloading — it is more compact, so can be faster to download
Parsing — decoding WebAssembly is faster than parsing JavaScript
Compiling & optimizing — it takes less time to compile and optimize because more optimizations have been done before the file is pushed to the server, and the code doesn’t need to be compiled multiple times for dynamic types
Re-optimizing — code doesn’t need to be re-optimized because there’s enough information for the compiler to get it right on the first try
Execution — execution can be faster because WebAssembly instructions are optimized for how the machine thinks
Garbage Collection — garbage collection is not directly supported by WebAssembly currently, so there is no time spent on GC
What’s currently in browsers is the MVP, which is already fast. It will get even faster over the next few years, as the browsers improve their engines and new features are added to the spec. No one can say for sure what kinds of applications these performance improvements could enable. But if the past is any indication, we can expect to be surprised.