Sometimes things evolve faster than you think. Something that started as a simple WordPress blog back in September 2006 has evolved into a little Smashing universe — with books, eBooks, conferences, workshops, consultancy, a job board and, most recently, 56 fancy cats (upcoming, also known as Smashing Membership). We have a wonderful team making it all happen, but every project requires attention and focus, and every project desperately needs time to evolve, flourish and improve.
After more than 11 years of being editor-in-chief at Smashing Magazine, I’ve been struggling a lot to find that perfect balance between all of our projects, often focusing on exciting ideas and neglecting good old ones. I would jump to writing, or teaching, or coding, or designing, or working with conference speakers, instead of reviewing and editing articles, often leaving Smashing Magazine running on the side.
It’s time for a change. It’s not an easy decision for me to make, but I sincerely believe that it’s an important one. Smashing Magazine has been the heart of everything we’ve been working on throughout all this time, and with many new Smashing adventures scheduled for 2018, it deserves stronger focus and support: stronger guidance than I have been able to provide over the last few years. Most importantly, it needs much more care and attention.
With this in mind, I couldn’t be happier or more honored to welcome the one-and-only Rachel Andrew (yep, you got it right) as the new editor-in-chief of Smashing Magazine. Rachel will be helping us bring the focus back to the core of this little Smashing universe — the very magazine that you are reading right now. Rachel doesn’t really need an introduction, and her work for the community speaks for itself. There is one thing worth mentioning, though: with Rachel, I’m happy to have a reliable and extremely knowledgeable editor on our side, the kind of editor I could only dream of. I’m not going anywhere, of course, but I’ll be spending more time writing and teaching and working on other Smashing projects instead. This is Rachel’s spot to take now.
About Rachel Andrew
For those of you who may have not heard about Rachel, here are a few things to know about her.
Rachel Andrew lives in Bristol, England. She is one half of the web development company edgeofmyseat.com, the company behind Perch CMS. Her day-to-day work can include anything from product development to devops to CSS, and she writes about all of these subjects on her blog at rachelandrew.co.uk.
Rachel has been working on the web since 1996 and writing about the web for almost as long. She is the author or co-author of 22 books including The New CSS Layout, and a regular contributor to a number of publications both on and offline. She is a Google Developer Expert for Web Technologies and a W3C Invited Expert to the CSS Working Group. Rachel is a frequent speaker at web development and design events including An Event Apart, Smashing Conference, and Web Directions Code.
Rachel is a keen distance runner and likes to encourage people to come for a run when attending conferences, with varying degrees of success. She is also a student pilot and aviation geek. You can find her on Twitter as @rachelandrew and find out what she is up to now.
Exciting times indeed! Let’s shape the future together — I can’t wait to see what’s coming up next. So please make Rachel feel welcome — and here’s to the next adventures!
Hello friend! I hope you like this new world of ours. It’s a lot different than the world of 2007. Quick tip: if you just got a mortgage, go back and cancel it. Trust me.
I’m glad that you’re still interested in computers! Today we have many more of them than we did 10 years ago, and that comes with new challenges. We wear computers on our wrists and faces, keep them in our pockets, and have them in our fridges and kettles. Cars are driving themselves pretty well, and we’ve taught programs to be better than humans at pretty much every game out there — except maybe drinking.
(Web) Apps
You might have seen the release of the iPhone just before you stepped into the time booth. Apple is the biggest and richest tech company, mostly due to the iPhone and its operating system, iOS. Google has this competing thing called Android, and Microsoft tried to get a slice of the ever-growing pie with Windows Phone. It didn’t work out.
We started calling programs apps, and some websites are calling themselves web apps. In 2008, Google released a new browser called “Chrome.” Nine years later it’s the most popular way to get on the Web.
The Chrome team invested a lot in making JavaScript run fast, and the engine gets better every month. Web apps are written using a lot of JavaScript, and they resemble the desktop interfaces of your time.
Companies have also invested in the language itself to make it better; it now supports classes and modules. We use languages that compile to JavaScript, like TypeScript (from Microsoft; they’re cool now) or Flow.
We write a lot of JavaScript these days, since nobody supports Flash anymore. We even run JavaScript on the server, instead of Perl, using a thing called Node. It sounds easier than it is.
Remember Swing, SWT and the likes of wxWidgets? We had to reinvent them for the browser world. Several new UI programming models emerged, which mostly focused on components.
We had to find a way to design, build, and test apps while keeping them responsive (a term we use to describe a website that doesn’t look like crap on a mobile phone). We also needed to keep it slim — not everybody has a fast connection, but everybody has a browser in their pockets.
To help with all this, there are now component frameworks. The term is vague, since it includes the likes of Angular by Google, React by Facebook, and Vue by the community. But it’s the best term we have.
By the way, I’m not sure you remember Facebook from 2007. It was getting big in the US around that time, and now it’s bigger than huge. Boasting more than a billion users, it’s also one of the largest codebases in the world.
The Facebook development team writes a lot of great code and publishes it online. They have their own conference, F8. Most big companies have their own conferences.
CSS also had to evolve, since the new apps require more intricate layouts. We don’t use tables with images anymore, and frames are gone as well. Instead, we’ve pushed CSS floats far beyond their original purpose and created new standards, like Flexbox and CSS Grid.
People had to iterate on these standards, and they’ve built libraries to make things look consistent, like Bootstrap, Foundation and many more. Similar to JavaScript, we have created languages that compile to CSS. They make up for some of the things that CSS misses, like variables or modules. It’s still hard.
It’s okay to be lost
Don’t feel bad if you’re confused. The truth is that we’re all a little confused — and it’s okay to be so. There are many more developers on the planet now, and tech companies are becoming more successful. For a while we used the term “startup” to describe companies that grew quickly and didn’t know what to do. But even this term has become old.
Data
There are more programmers, more programs, and more devices. We have more data now. Computers had to grow powerful enough to process it all, and we have developed several techniques to turn that data into insight.
First, we created a field called Data Science, which aims to learn about and extract information from data.
For example, a startup called Waze let people install an app on their phones that would track their movements while they were in their cars. Because many people installed the app, Waze got a lot of data about how cars move. They used it to develop programs that understood where traffic jams were.
Now, when you open Waze on your phone, you see traffic jams on the map in real time and choose another route.
Waze has since been bought by Google. This happens a lot with startups.
There were three main challenges with Data Science — storing data, understanding data, and acting on data. We’ve improved in all of these areas. Let’s look at each one.
Storage
We now need to store a lot more information and then find out which parts are important, so we had to invent new databases. The likes of MySQL and PostgreSQL weren’t fit to store terabytes of data (we called it Big Data).
Big, internet-first companies typically faced these challenges, and so they were on the forefront of developing the technologies. Most of the time, technologies were first used internally and then open-sourced.
There was a movement we called NoSQL. This new class of databases took some of the conventions of traditional relational databases and turned them around.
There’s Hadoop, which deals with how data is stored across many computers. It defines a way of processing the data called MapReduce (inspired by a paper from Google — big companies write good scientific papers these days).
Then there’s Cassandra, which looks at data not as tables, but as sets of keys and columns which can be stored on different computers. It also makes sure that any of these computers can go offline without causing data loss.
And we have MongoDB, a database that is easy to install and use for prototyping apps. In 2017, we’re treating technologies the same way we treated pop stars ten years ago — we zealously defend some of them and vehemently hate others. MongoDB — like the band Nickelback — belongs to the latter group.
Learning
In the “understanding data” camp, most of the focus has been in an area called Machine Learning. There have been many new techniques, from naive classification to deep learning, that are now in every Data Scientist’s toolbox. They mostly write Python and work alongside developers to put machine learning pretty much everywhere.
For example, with the help of Data Scientists, a lot of web apps use A/B testing. This technique serves two slightly different versions of the app to different, but similar, groups of users. It is used to see which version leads quicker to our desired goal, whether that’s a sign-up or a purchase.
A lot of big companies like Airbnb (pronounced air-bee-en-bee), Uber, and Netflix are running hundreds or even thousands of A/B tests at the same time to make sure their users get the best experience. Netflix is an app where people can binge-watch TV shows on all their devices. ¯\_(ツ)_/¯
Microservices and The Cloud
Companies like Netflix are enormous. Because they serve a lot of people, they have to make sure they are up and running at all times. That means they have to manage their computers pretty well. They can add hundreds of new servers when they’re needed.
This is difficult to achieve in a traditional data center, so the amazing engineers at Netflix use virtual machines. Remember Amazon Web Services, which launched back in 2006? Back then, they started offering Elastic Cloud Compute, known as EC2, to help people get virtual computers in Amazon’s data centers.
Today, they have almost 80 similar services, all built to help companies grow quickly. We used to have a trendy name for that — “The Cloud” — but this term is as difficult to define as NoSQL.
One of my favorite Twitter lists, which I follow regularly in Flipboard, is “Gigaom Vets.” The list includes people who previously worked for GigaOm, the tech publication that unexpectedly shut down and let all of its staff go in March 2015 (it has since been rebooted). They were an amazing group of tech and tech-culture journalists, and they still are, except now they all work for different companies. Thanks to the power of Twitter and Flipboard, however, I still read their aggregated ideas via a “single pane of glass” (my Flipboard subscription to their Twitter list) and am regularly both educated and inspired by the thoughts they share.
CJ’s post is not only an encouragement for all of us to blog daily, because of the inherent value of generously sharing reflections about what we notice around us each day, but also the first shout-out I’ve read in quite a while to my old friend RSS. Ah, RSS. Twitter streams and the Facebook news feed have largely eclipsed your name and fame, but I still use you via my Feedly account (at least weekly) and acknowledge your latent power. A few of my older posts here still testify to your greatness:
It’s quite easy to become depressed by the ways Facebook was cleverly used to subvert democratic processes in our last Presidential election. Even now, our sitting President uses Twitter to discredit mainstream media sources as “fake news” and obfuscates rather than clarifies truth for many. Our needs for media literacy and the “crap detector” of Neil Postman are as great as ever, as Jason Neiffer (@techsavvyteach) discussed on last week’s EdTech Situation Room podcast.
For the last couple of years, since I became an independent school technology director but perhaps even before that, I’ve fallen into a pattern of blogging where I write much longer posts but share MUCH less frequently. I started blogging in 2003 and have shared 6,088 posts here since that time. At one point, I was blogging daily. My routines have changed, but CJ Chilvers and Om Malik have me rethinking those today.
RSS is a free information subscription technology, which is an open standard and is supported (still, despite Google’s painful abandonment of Google Reader in 2013) by multiple applications and platforms. Podcasting is alive and well, in fact thriving far more today in 2017 than it was at the dawn of the podcasting age around 2005 when I started. Blogs like this WordPress-powered website, thousands of Blogger blogs, and others continue to create RSS / ATOM feeds, which permit free subscriptions unfiltered and lacking the black-box modification of secret algorithms like the Facebook news feed.
I remember you, RSS, and have not forgotten your power! I’m podcasting weekly via @edtechSR, and have been now for about 70 weeks, but I also resolve to return to my “short share” blogging roots. Long live the open web, RSS, blogs, podcasts, and information streams unfiltered by corporate (and monetized) secret algorithms.
I will keep noticing ideas of significance in our world, and sharing short reflections about them here on “Moving at the Speed of Creativity.” I encourage you, as well, to consider or reconsider a commitment to regular blogging. We live in an emerging surveillance state, and our understanding of those dynamics should temper our personal sharing stream, but they should not chill or silence our capacity to be inspired and share our inspirations with each other on the social web.
Long live RSS!
With the appearance of voice user interfaces, AI and chatbots, what is the future of graphical user interfaces (GUIs)? Don’t worry: Despite some dark predictions, GUIs will stay around for many years to come. Let me share my personal, humble predictions and introduce multi-modal interfaces as a more human way of communication between user and machine.
What Are Our Primary Sensors?
The old wisdom that a picture is worth a thousand words is still true today. Our brain is an incredible image-processing machine. We can understand complex information faster when we see it visually. According to studies, even when we talk with someone else, nonverbal communication represents two thirds of the conversation. According to other studies, we absorb most information from our sight (83% sight, 11% hearing, 3% smell, 2% touch and 1% taste). In short, our eyes are our primary sensors.
Our ears are the second-most important sensors we have, and in some situations, voice conversation is a very effective communication channel. Imagine for a moment a simple shopping experience. Ordering your favorite pizza is much easier if you pick up the phone and order it, instead of going through all of the different offers on a website. But in a more complex situation, relying just on verbal communication is not enough. For example, would you buy a shoe without seeing it first? Of course not.
Even traditionally text-based messaging platforms have started introducing visual elements. It’s no coincidence that visual UI snippets were the first thing Facebook implemented when it created its chatbot platform. Some information is just easier to understand when we see it.
Text-only and voice-only interfaces can do a good job in some use cases, but today it’s clear they are not optimal for everything. As long as visual image-processing remains people’s main information source, and we are able to process complex information faster visually, the GUI is here to stay. On the other hand, more traditional GUI patterns cannot survive in their current form either. So, instead of radical predictions, I suggest another idea: User interfaces will adapt to our sensors even more.
Adaptive Multi-Modal Interfaces
Humans have different input and output devices, just like computers. Our eyes and ears are our main input sensors. We are very good at pattern recognition and at processing images. This means we can process complex information faster visually. On the other hand, our reaction time to sound is faster, so voice is a good option for warnings.
We have output devices, too: we can talk, and we can gesture. Our mouth is the most effective output device we have, because obviously most people can talk faster than they type, write or make signs.
Because humans are good at combining different channels, I predict that machines will follow and will use multi-modal interfaces to adapt to humans’ capabilities. These interfaces will use different channels for input and output, and different mediums for different information types (for example, asking short questions versus presenting complex information).
Interfaces will adapt to humans by using the medium and message format that is most convenient to humans in the given situation. Let’s look at some examples, including the ones we explored at UX Studio, as well as some established commercial products.
Chatbots Are Getting More And More Visual
Nuru is a chatbot concept that helps with day-to-day problems in Africa. We started designing it as a pure chat application, but we soon discovered the limits of text-only conversational interfaces.
For basic communication, chat is more effective than traditional user interfaces (UIs). In Africa, for example, chat can be used to boost local commerce. Sellers and buyers can find each other and negotiate different deals. In this case, chat is optimal because of the one-on-one communication. But when it comes to more sophisticated interaction, like comparing many different job postings, we need a more advanced UI. In this case, we added cards to the chat interface, which users can swipe through.
Some other companies, such as China’s Tencent, went even further and let developers build mini-apps that run within its chat app, WeChat. This inspired Western designers to imagine a conversational interface in which every single message could contain a different app, each with its own rich interface. For example, you could play little games together with your chat partner, like we did 15 years ago in MSN Messenger. This is also an attempt to enhance the simple conversational interface that people love with rich UI functions.
Self-Driving Cars With Mixed Interfaces
A year ago, our team imagined the interface of a self-driving car as a pure exercise in multi-modal design. We imagined the whole process and tried to optimize the interaction at each step.
To order a car, you would push a button on your phone. This is the simplest possible interaction, and it’s enough to order a car. Obviously, there’s no need to talk on the phone if just pushing a button is enough.
Then, once you enter the car, you would spend some time getting comfortable, placing your belongings and fastening your seatbelt. Following that, verbal communication would be easier, so the car asks you where to go. It is also faster to say the place than to type the location on a touchscreen. In order for this to work properly, the car would have to understand any ambiguous instruction you give it.
Trust is an important issue in self-driving cars. When we are on the road, we want to see whether we are headed in the right direction and whether our self-driving car is aware of the bicycle in front of us. Having to ask the car every time for its status would be impractical, especially if you’re travelling with others. A tablet-like interface, visible to all occupants, would solve this issue. It would always show what the car detects in its surroundings, as well as your position on the map. The fact that it’s always there would build trust. And, of course, showing map information would be easier visually than in any conversational form.
In this example, you could order a car using a touchscreen, give voice commands, receive auditory feedback, as well as check the status on a screen. The car always uses the most convenient medium.
Home Entertainment And Digital Assistants
The Xbox console with the Kinect controller is another example of a mixed interface. You can control its GUI with both voice and hand gestures. In the video below, you can see that the gesture-recognition technology is not perfect yet, but it will certainly get better in the future. The voice recognition is also a bit awkward because you always have to say the magic word, “Xbox,” before every command.
Despite the technical flaws, it is a good example of how a machine can give continual visual feedback in response to voice and gesture commands. When you use your hand as a control, you see a small hand on the screen as a cursor, and as you move it over different content tiles, the tile below your cursor is highlighted to show which one you are about to activate. When you say the word “Xbox” to give a command, the console displays a command word on each tile in green, so that you know what to say to select an item.
Of course, the goal here is to help you voice-control an interface that was not really designed for voice in the first place. In the future, more accurate voice recognition and language processing will help people say commands in their own words. That is an important and necessary step in making mixed interfaces more mainstream.
Amazon is without a doubt one of the great pioneers of voice interfaces and “no GUI” interfaces. But even Amazon added a screen to its new generation of Echo devices, after an arguably failed attempt to push the GUI into an app on the user’s phone.
The freedom that a voice UI gives you is truly fascinating, especially the first time you try it. For example, standing in the kitchen and saying “play Red Hot Chili Peppers” is easier than scrolling through Spotify albums with dirty hands.
But after a while, when you want to use it for more advanced tasks, it just doesn’t work. In one video review, a user pointed out how weird it is that once you start a kitchen timer, you have to ask the device for the status, because no screen exists. Now, with the Echo Show, you can see multiple timers on the same dashboard.
And what’s more important for Amazon than shopping? With the old Echo, you could add things to your shopping list, but then you had to open up the mobile app to actually purchase something. Hearing Alexa read out long product names and descriptions from the Amazon store was just a terrible experience. Now, you can handle these tasks on the Echo easily, because it shows you products and you can choose the ones you like.
Unlike the Xbox with the Kinect, the Echo Show is a voice-first device. Its home screen is not loaded with app icons. But when you issue an initial voice command, the screen shows you all related information. It is very simple: When you need to know more, you just look at the screen. It’s a bit like how a person works in the kitchen: We can maintain a basic conversation while we focus on cooking, but when an important or complex question arises, we stop and look at our partner’s face. This is why the Echo Show’s direction towards a multi-modal interface is more natural.
Here’s another design detail. On the home screen, the Echo will display a news headline and highlight a word in the headline in bold, making it the command word you would say if you wanted to hear the full story. In this way, the capabilities of the products are clear, and it’s obvious how you would use it. The Echo effectively sets expectations and gives tips through its visual interface.
One of the main advantages of Google Home, Echo’s main competitor, is that you can ask follow-up questions. After asking, “How many people live in Budapest?,” you could also ask, “What’s the weather like there?” Google Home will know that you’re talking about the same place. Context-awareness is a great feature and will be a must-have in future products.
When we’re designing an interface, if we know the context, we can remove friction. Will the product be used in the kitchen when the user’s hands are full? Use voice control; it’s easier than a touchscreen. Will they use it on a crowded train? Then touching a screen would feel far less awkward than talking to a voice assistant. Will they need a simple answer to a simple question? Use a conversational interface. Will they have to see images or understand complex data? Put it on a screen. To improve interaction, we can ask questions, such as which screen is closer to them, or which one would be more convenient to use given the situation.
One thing that is still missing from Google Home is multiuser support. Devices like this will be used by many different people, bringing us back to the shared computer phenomenon of the early PC age. Switching between users seamlessly will be a tough challenge. Security and UX are not easy to align. Imagine that at one moment you are talking to your virtual assistant, with access to all of your apps and data, then a second later someone else enters the room and does the same.
Both Amazon Echo and Google Home give nice visual feedback when they are listening to you or searching for an answer. They use LED animation. For multi-modal interfaces, keeping the voice and visual outputs in sync is essential; otherwise, people will get easily confused. For instance, when talking to someone, we can easily look at their face to see if they are getting the message. We would probably want to be able to do the same when talking to a product.
Healthcare Products
PD Measure is an app to measure pupillary distance for people who wear prescription glasses. It is a good example of syncing and combining visual and voice interfaces.
Any customer needs to know their pupillary distance in order to purchase glasses online. If they don’t know it, they’d have to go to a retail store and have it measured there. A measurement tool that is available to anyone at home would open up a huge market for online optics.
With PD Measure, the customer stands in front of a mirror and takes a photo of themselves, keeping their phone in a particular position, following precise instructions. The app then automatically calculates their pupillary distance using an advanced internal algorithm. It is precise enough to make ordering glasses online possible.
PD Measure’s UI is a combination of animated illustrations on the screen, which show you how to hold your phone, and voice instructions, which tell you what to do. The user has to move their hands to the right position, and the app uses its sensors to give feedback when they are there. When the app finally takes the right image, it provides the user with auditory feedback (a bell rings). This way, the user gets used to the confirmation sound and will take each subsequent measurement more efficiently.
During the prototyping phase, we conducted a lot of user tests, and it turns out that people are more likely to follow voice instructions than visual ones.
In this example, visual and voice interfaces work together: The animated illustrations show you how to hold the phone, while the voice instruction helps you to get in the perfect position.
Examples From Publishing
Back in 2013, a company named Volio experimented with mixed interfaces. One of its flagship clients was Esquire magazine, which created an interactive experience in which people could talk with Esquire’s columnists. As you can see in the video below, this was a series of videos, and you could choose the next one based on the answer you gave to the question in the current video. Of course, you could just choose from a few predefined answers, but the interaction still felt like a live conversation. It also had a good combination of media: voice as input for commands and the screen to display the content.
Many people think of today’s multi-screen world as separate output channels for our content. Mixed interfaces will be much more than that. People will be able to use your app on different devices simultaneously (for example, using Alexa for voice input while seeing the data on their tablet).
A mixed interface doesn’t have to combine voice and GUI in that way, either. A sports-streaming app we designed recently enables people to comment on a football game and talk with other fans while watching the match live on their smart TV. The two screens perfectly complement each other.
Such advanced interfaces offer functionality through many different devices and media simultaneously. This redundancy is something programmers and designers don’t really like, but it gives people backup options in case the main option is not available, and it makes products more accessible to people who can’t use voice or visual interfaces.
How To Choose The Primary Mode?
Having discussed trends and some current products, let’s summarize when to use voice and when to use a visual user interface.
Visual user interfaces work better with:
lists with many items (where reading all items out loud would take too long);
complex information (graphs, diagrams and data with many attributes);
things you have to compare or things you have to choose from;
products you would want to see before buying;
status information that you would want to quietly check from time to time (the time, a timer, your speed, a map, etc.).
Voice user interfaces work better for:
commands (i.e. any situation in which you know exactly what you want, so you can skip the navigation and just dictate your command);
user instructions, because people tend to follow voice instructions better than written instructions;
audio feedback for success and error situations, with different signals;
warnings and notifications (because the reaction time to voice is faster);
simple questions that need relatively simple answers.
What’s Next?
When I asked my designer friends what mixed interfaces they know about, some of them mentioned the legendary MIT Media Lab video from 1979, “Put That There.” Nostalgia aside, it is shocking that this technology had a working prototype 38 years ago. Is our super-fast progress just an illusion?
Voice recognition still has some obvious challenges today, and just a few major players provide platforms for products based on voice recognition, including apps such as WeChat and hardware devices such as the Amazon Echo.
A good start would be to develop a mini-app or bot that integrates with these systems. Here are some tips from our own experience of working with multi-modal interfaces:
Speed and accuracy are deal-breakers.
Sync voice and visual interfaces. Always have visual feedback of what’s happening.
Show visual indicators when the device is listening or thinking about an answer.
Highlight voice-command words in the graphical interface.
Set the right expectations with users about the interface’s capabilities, and make sure the product explains how it works.
The product should be aware of the physical and social context of the device and the conversation, and should respond accordingly.
Think about the context of the user, and identify which medium and device would reduce friction and make it easier to perform a task.
Give users options to access a function through alternative devices or media. This will help in situations where something breaks, and it will also make your product more accessible to disabled people.
Don’t ignore security and privacy. Enable people to turn off components (for example, the microphone), and build trust by being transparent. Don’t be too pushy, or else you will frighten everyone away (for example, voice spam is very annoying).
Don’t read out long audio monologues. If it cannot be summarized in a few words, display it on a screen instead.
Take time to understand the specifics of each platform, and choose the right one to build on.
Before starting out, though, keep in mind that, compared to other digital designs, multi-modal interfaces are still quite an unexplored area.
First, we don’t really have a general-purpose language or programming framework to describe mixed interfaces. Such a language could make it possible to define voice and GUI elements in one coherent code base, making it easier to design and develop these interfaces. It would also support multiple output and input options, enabling us to design omni-channel, multi-screen or multi-device experiences.
Second, designers have to come up with new design patterns to support the special needs of multi-modal interfaces. (For example, how would you give visual and audio feedback at the same time?)
Although the future looks exciting, and it will happen fast, we still need to reach a tipping point in voice recognition and language processing: the point where the usability of the voice medium reaches a level of quality that indeed makes it the best option in a range of applications. We will also need better tools to design and code multi-modal interfaces.
Once we accomplish these goals, then nothing will be holding these natural interfaces back, and they will become mainstream.
History Repeats Itself: Be A Part Of It
Humans have multiple senses. Technology and interfaces that use more than just one have a better chance of facilitating strong human-computer interaction.
A similar multi-modal evolution has happened before. Radio and silent movies were combined into talking movies, which were further enhanced with 3D and so on. I’m positive that this process will happen in the interactive digital world, too. Exciting times, indeed.
Five years ago, when, for the first time ever, I was invited to speak at one of the best front-end conferences in Europe, I had quite a mixture of feelings. Obviously, I was incredibly proud and happy: I had never had a chance to do this before for a diverse audience of people with different skillsets. But the other feelings I had were quite destructive.
I sincerely could not understand how I could be interesting to anyone: Even though I had been working in front-end for many years by then, I was very silent in the community. I hadn’t contributed to popular frameworks or libraries. I was just average. So, the feeling of a mistake having been made, that I did not deserve to be at that conference, was very strong, and I could not believe that I would indeed be speaking until I had bought my plane ticket.
But a plane ticket won’t guarantee that you won’t collapse on stage from pressure, so things got even worse. The line-up of speakers was so fantastic that during the final weeks before the conference, and more so after meeting in person all of those famous people whose books and articles I had been learning from, the only thing I could think of was, “They are gonna find out. All of these great people will find out that I am here by mistake, because I know nothing. It will be the end of my career and the worst embarrassment I could ever have in my professional life.”
Back then, in 2012, I had heard nothing about impostor syndrome. I didn’t even know that those feelings of mine had a name! The only thing I knew was that I had to fake it till I made it. Some years later, I read a lot of articles and research on this phenomenon and, critically, gradually found out how to deal with it in my professional life. Only now is the topic emerging in our industry and getting the acknowledgement it deserves.
So, it’s time to shed some light on what impostor syndrome is, how we suffer from it day to day in our jobs, why it happens and what we can do about it. This article will, hopefully, guide you through some seldom-spoken aspects of this phenomenon in our industry.
But first things first: What is impostor syndrome? Let’s find out.
What Is Impostor Syndrome?
Simply put, impostor syndrome is the feeling of being a fraud, despite all evidence to the contrary. It’s an inability to internalize your own achievements, which results in a feeling of being less competent than the rest of the world believes you to be.
The term “impostor syndrome” (or “impostor phenomenon,” or sometimes “impostorism”) was coined by Pauline Clance and Suzanne Imes in 1978 in their work on high-achieving women in academia. That’s right: For years, the scientific community believed that this phenomenon was largely confined to women. But many of those same researchers are beginning to realize that the experience is more universal and that it might be even more problematic for men — simply because it is naturally much harder for men to admit to feeling insecure or incompetent. As a result, men hide their fears, unable to unburden themselves or seek help.
There is a difference, though, between impostor syndrome and a simple feeling of insecurity. Insecurity might make you hold on to a position that you have overgrown for some years simply because you don’t feel comfortable with taking action. Someone with impostor syndrome, on the other hand, feels compelled to constantly take action and to be better at whatever they are doing. Hence, people who suffer from it will go further in their career but will be in constant self-doubt about whether they deserve to be where they are. To a large extent, one of the main motivating forces of impostor syndrome is a wish to be successful, to be among the best. That’s why, ironically enough, impostor syndrome is most prevalent among high performers. Research shows that two out of five successful people constantly suffer from it, and up to 70% of the general population has experienced it for at least some part of their career.
Every year, charisma coach and persuasion expert Olivia Fox Cabane asks the incoming class at Stanford Business School, “How many of you in here feel that you are the one mistake that the admissions committee made?” Every year, two thirds of the class instantly raise their hands. How can Stanford students, passing such an intensive admissions process, being selected from among thousands of applicants, with a long list of documented achievements and accomplishments behind them, possibly feel that somehow they don’t belong there? The answer is impostor syndrome. Let’s take a closer look at its main characteristics.
Superwoman/superman
Self-criticism, arising from a tendency towards perfectionism, is one of the most common obstacles to great performance in any field. Ever felt like something you worked on could be improved even after having gotten a lot of praise?
Dissatisfaction caused by comparison
Dissatisfaction arises when one constantly compares oneself to others. Nothing is wrong with wanting to be the best — that is evolution at work. But impostors are far from getting a kick out of this competition. Have you ever thought that the majority of people around you are smarter than you are, or felt like you don’t belong where you are?
Fear of failure
Have you ever feared that somebody will find out that you are not as skilled as everyone thinks you are? Fear of failure is an underlying motivation of most “impostors.” Therefore, to reduce the risk of failure, impostors tend to overwork.
Denial of competence and praise
Do you relate to the feeling that your success is a result of luck, timing or forces other than your talent, hard work and intelligence? Do you shudder when someone says you’re an expert? According to Pauline Rose Clance, impostors not only discount positive feedback and objective evidence of success, but also focus on evidence or develop arguments to show that they do not deserve praise or credit for their achievements.
If these feelings are familiar to you, then welcome to the club.
Of course, impostor syndrome is not simply a matter of psychological discomfort. Underestimation and deprecation of your own achievements can have a real impact on you and your professional life.
Nature And Impact Of Impostor Syndrome
We probably agree by now — especially if you suffer from it — that impostor syndrome is a rather uncomfortable feeling. I wouldn’t suggest that it does not affect one’s private life, but the feeling of insecurity has a definite effect on the achievements in one’s professional life. So, what happens (or doesn’t happen) in your professional life when you ignore these feelings or simply are not aware of the syndrome?
It might keep you from asking for a well-deserved raise. You might shy away from applying for a job unless you meet every single requirement. In the office, you might be regarded as a private person because you don’t dare share your achievements or even discuss technology with colleagues, because you think they know everything while you’re a fraud. It might even stop you from asking to speak at a conference that you’ve dreamed of speaking at, simply because you always think you are not good enough. Truth be told, those who suffer from impostor syndrome and who really, really want to achieve any of the things mentioned here usually do overcome these obstacles (recall the difference between impostor syndrome and insecurity). Impostor syndrome can be highly motivating, spurring us to work harder than anyone else. But at what cost?
In our community, impostor syndrome causes us to criticize ourselves constantly, because a lot of the problems we try to solve for ourselves have already been solved by others. In environments like that, it’s easy to feel that you aren’t smart enough. This feeds the syndrome and compels us to try to catch up on everything going on in our industry, so that we feel competent in whatever we’re doing. And we all know how much information there is to catch up on: This feeling is well known to all of us.
Only a couple of years ago, I had several reading applications on my phone, such as Flipboard, Pocket and Instapaper. I constantly saved the latest news from the world of development to read later. I followed several online magazines (like the one you’re reading right now) for the latest tutorials, how-to’s and developments within the industry. Then, there’s Twitter. Reading Twitter can make things even worse: Seeing a lot of talented people bragging about their achievements does not soothe impostor syndrome at all. But my story doesn’t end there.
There were also RSS feeds, email subscriptions (such as to HTML Weekly and JavaScript Weekly), and videos from recent conferences. I tried to consume most new articles and videos. Obviously, reading everything was impossible: In this flow of information, I also had to find time to do the work that paid the bills. Sound familiar?
At some point, I realized that I wasn’t reading the saved articles anymore. On the best of days, I would quickly look through the titles, pick some, and those would usually lie untouched in my browser for days. Clearly, I didn’t feel more competent or skilled after consuming all of that information.
The reason is that it was not really me who was interested in all of that information. It was the “impostor,” pushing me to catch up on everything going on in the community, so that I wouldn’t feel like an incompetent fraud. Rather than pushing us to learn more of what we really want, to apply it in our work, to enjoy and be better in our profession and to feel competent, impostor syndrome pushes us into a state of frustration.
How To Deal With Impostor Syndrome
If you have ever experienced this, I have good news. One of impostor syndrome’s frustrating ironies is that actual frauds rarely seem to experience this phenomenon. English philosopher Bertrand Russell put it more poetically: “The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt.” It’s great to know that those who suffer from this syndrome are intelligent; nevertheless, it is an uncomfortable psychological problem that we have to do something about. Let’s see how we can deal with this feeling.
Below is a list of solutions that could work separately or in combination. Try them to see what works for you.
Embrace It
Pacific Standard magazine once wrote, “Impostor syndrome is, for many people, a natural symptom of gaining expertise.” This makes total sense: In gaining expertise, we enhance our knowledge. And as we expand the boundary of what we know, we become more and more exposed to what we don’t. So, the next time you suffer an attack, do not rush for new information. Instead, stop and enjoy. Most probably, this is a sign that you are gaining experience and gaining the wisdom to accept that there is much more in the industry, and in the world in general, for you to discover.
I deliberately said “most probably” above because some confuse foolish bravery with expertise. However, such people would count as edge cases, suffering from the Dunning-Kruger effect, which essentially means that they cannot recognize their own ignorance.
Reframe Your Understanding of Failure
It would be naive to believe that as you progress in your professional life, you will not make any mistakes. It is OK to be occasionally wrong, to fail or not to know everything. That’s perfectly normal; it doesn’t make you fake or undeserving. Even the best of us make mistakes — we are human, after all. Even Brazil’s football team once lost to Norway in the World Cup (a memorable event for anyone living in Norway, a country far better known for skiing than for football). Try to reframe failure as an opportunity to learn. There is even a global conference dedicated to failure, called FailCon, which was once held in Silicon Valley, home to the biggest names in the industry. Recognize that failure is simply the path to success, and failing quickly is the surest way to learn what works and what doesn’t and to grow even more.
Measure Yourself by Your Own Rule
It’s easy to feel overwhelmed by other people’s talents, but comparing yourself to others is a game that is impossible to win. Instead, try competing with yourself. Where were you a year ago? Six months ago? Can you measure your improvement over time? I am sure this will give you a much better perspective of your own progress.
Communicate Your Fears and Feelings
This might sound even more frightening, but bear with me. Don’t be afraid to talk about your feelings. The funny thing is that most people who experience impostor syndrome are unaware that others around them feel inadequate as well. This happens simply because impostor syndrome can be hard to spot in others. As mentioned earlier, those who experience it generally do very well in their jobs. But award-winning writer Neil Gaiman has the perfect anecdote. He shares a funny story about attending a gathering of acknowledged figures, and recognizing that he and Neil Armstrong felt exactly the same discomfort because neither thought they deserved to be at the gathering. Communicating these feelings made a big difference to him: “And I felt a bit better. Because if Neil Armstrong felt like an impostor, maybe everyone did.”
So, the next time you start to feel like a fraud at work or are afraid that your colleagues might suspect that you don’t know as much as they thought you did, seek comfort in the knowledge that some of even the most accomplished among us feel similarly. Maybe even your boss.
Conclusion
Impostor syndrome is not a mental disorder, even though it is on the radar of many psychologists and has been extensively researched in recent years. Nevertheless, it is a real psychological issue, rooted deeply in many of us. If we do not pay attention to its symptoms, if we blindly follow its triggers, then we can get into real psychological trouble. The good news is that, even though there is no pill for it, we can change our attitude towards it. Simply acknowledging the feeling can help to neutralize its effect.
I hope you’re now better aware of impostor syndrome, because if you spot the symptoms early enough and try to overcome the effects using the approaches mentioned above, then the practices you integrate will help you to live a more fulfilling life.
P.S. These days, instead of constantly monitoring what’s going on in our industry and diving into each and every piece of news, I dedicate only 20 minutes every morning to it. And let me tell you, that is more than enough time to get what is really important. Stay healthy.
When first learning how to use Grid Layout, you might begin by addressing positions on the grid by their line number. This requires that you keep track of where various lines are on the grid.
Built on top of this system of lines, however, are methods that enable the naming of lines and even grid areas. Using these methods enables easier placement of items by name rather than number, but also brings additional possibilities when creating systems for layout. In this article, I’ll take an in-depth look at the various ways to name lines and areas in CSS Grid Layout, and some of the interesting possibilities this creates.
Naming Lines
We can make a start by naming the lines on a grid layout. If you take the example below, we have a grid with six explicit column tracks and one explicit row track. Items are placed on this grid by way of line numbers.
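The demo itself isn’t reproduced here, so here is a minimal sketch of that kind of grid; the class names and track sizes are my own assumptions, not from the original example:

.grid {
  display: grid;
  /* six explicit column tracks, one explicit row track */
  grid-template-columns: 1fr 1fr 1fr 1fr 1fr 1fr;
  grid-template-rows: 100px;
}
.item {
  /* placed by line number: from column line 2 to column line 5 */
  grid-column: 2 / 5;
  grid-row: 1;
}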
If we want to name the lines, we do so inside square brackets in the track listing. The key thing here is to remember that you are naming the line, not the track that follows. Having named the lines you can swap line numbers for names when positioning items.
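A sketch of that idea (again, the line names and sizes here are illustrative assumptions):

.grid {
  display: grid;
  /* the name in square brackets belongs to the line, not to the track that follows */
  grid-template-columns: [main-start] 1fr [content-start] 4fr [content-end] 1fr [main-end];
}
.item {
  /* a name can be used anywhere a line number could */
  grid-column: content-start / content-end;
}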
You can name lines anything you like, other than the span keyword. For reasons you will discover later in this article, it is a good idea to name start lines with the suffix -start (whether row or column) and end lines with the suffix -end. You might have main-start and main-end, or sidebar-start and sidebar-end.
Quite often, the end line of one part of your grid coincides with the start line of another. This is not a problem, as lines can have multiple names: simply list the names, separated by a space, inside the square brackets.
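A sketch of what that might look like (the names are illustrative):

.grid {
  display: grid;
  /* line 2 both ends the sidebar and starts the content; line 4 is left unnamed */
  grid-template-columns: [main-start sidebar-start] 1fr [sidebar-end content-start] 4fr [content-end main-end] 1fr;
}
.item {
  /* names and raw line numbers can be mixed */
  grid-column: content-start / 4;
}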
This example also demonstrates that you don’t need to name every single line of the grid, and you always still have numbers to use in addition to names.
We have seen how lines can have multiple names, but you can also have multiple lines with the same name. This will happen if you use repeat notation and include named lines in the track listing. The next example creates six named lines, alternately named col-a-start and col-b-start.
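A sketch of that repeat (the track sizes are assumptions):

.grid {
  display: grid;
  /* col-a-start names lines 1, 3 and 5; col-b-start names lines 2, 4 and 6 */
  grid-template-columns: repeat(3, [col-a-start] 1fr [col-b-start] 2fr);
}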
If you place an item using col-a-start, it will be placed against the first instance of col-a-start (in this example, that would be the first line of the grid). If you place it against col-b-start, it will be positioned against the second line of the grid.
To target later lines, add a number after the line name to indicate which instance of that line you are targeting. The following CSS will place the item starting on the second line named col-a-start and finishing on the third line named col-b-start.
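That CSS isn’t included above; reconstructed from the description, it would look like this (the item’s class name is my assumption):

.item {
  /* from the second line named col-a-start to the third line named col-b-start */
  grid-column: col-a-start 2 / col-b-start 3;
}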
The specification describes this behaviour as “creating a named set of grid lines” which can be a helpful way of looking at the grid you have created with multiple lines of the same name. By adding the number you are then selecting which line of the set you wish to target.
Maintaining Line Names While Redefining A Responsive Grid
Whether you choose to use line numbers or named lines is completely down to you. In general, lines can be useful where you wish to change the grid definition within media queries. Rather than needing to keep track of which line number you are placing things against at different breakpoints, you can have consistently named lines. Only the definition then needs to change and not the positioning of items.
In the following simple example, I define my grid columns for narrow widths, then redefine them at a width of 550 pixels. The positioned items continue to place themselves against the same named line, despite the fact the location of the line on the grid has changed.
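The example itself isn’t shown here, so this is a minimal reconstruction under the stated 550-pixel breakpoint; the line names and track sizes are my own:

.grid {
  display: grid;
  grid-template-columns: [main-start] 1fr [content-start] 2fr [content-end] 1fr [main-end];
}
@media (min-width: 550px) {
  .grid {
    /* the definition changes, but the line names stay the same */
    grid-template-columns: [main-start] 1fr [content-start] 3fr 3fr [content-end] 1fr [main-end];
  }
}
.item {
  /* the placement never has to change across breakpoints */
  grid-column: content-start / content-end;
}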
We have so far had a good look at named lines; however, there is another way of naming things on the grid. We can name grid areas.
A grid area is a rectangular area consisting of one or more grid cells. The area is defined by four grid lines marking out the start and end lines for columns and rows.
We name areas of the grid using the grid-template-areas property. This property takes a somewhat unusual value (a set of strings, one for each row) that describes our layout in ASCII-art style.
.grid {
  display: grid;
  grid-template-columns: repeat(3, 1fr 2fr);
  grid-template-areas:
    "head head head head head head"
    "side side main main main main"
    "foot foot foot foot foot foot";
}
The names we use in the strings for grid-template-areas are assigned to the direct child elements of the grid using the grid-area property. The value of this property when used to assign a name is what is known as a custom identifier, so it should not be quoted.
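For the grid defined above, the assignments might look like this (the element class names are assumptions):

.masthead { grid-area: head; } /* an unquoted custom ident, not a string */
.sidebar  { grid-area: side; }
.content  { grid-area: main; }
.footer   { grid-area: foot; }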
When we describe our layout as the value of grid-template-areas, we cause an area to cover more than one cell of the grid by repeating the ident along the row or down the column. The area created must be a complete rectangle — no L- or T-shaped areas. You may also only create one rectangular area per name; disconnected areas are not possible. The specification does note that:
“Non-rectangular or disconnected regions may be permitted in a future version of this module.”
When creating our grid description, we also need to create a complete representation of our grid; otherwise, the whole declaration is thrown away as invalid. That means that every cell of the grid needs to be filled.
grid-template-areas:
  "head head head head head head"
  "side side main main main main"
  "foot foot foot foot foot foot";
As you might want to leave some cells empty in your design, the spec defines a full-stop character (.) or a sequence of full stops with no white space between them (such as ....) as a null cell token.
grid-template-areas:
  "head head head head head head"
  "side side main main main main"
  "....... ....... foot foot foot foot";
If you haven’t already downloaded Firefox Nightly in order to benefit from all the newest features of the Firefox DevTools Grid Inspector, I can recommend doing so when working with named areas.
From Named Lines Come Areas
Now we come to an interesting part of all of this naming fun. You might remember that when we looked at naming lines, I suggested you use the convention of ending the line which begins an area with -start and the line which ends it with -end. The reason is that if you name lines in this way, grid creates a named area from the main name used, and you can place an item into that area by assigning that name with grid-area, in the same way that you position items into areas defined by grid-template-areas.
In this next example, I am naming lines for both rows and columns panel-start and panel-end. This will give me a named area called panel. If I assign that as the value of grid-area to an element on my page it will be placed into the area defined by those lines.
.grid {
  display: grid;
  grid-gap: 20px;
  grid-template-columns: 1fr [panel-start] 2fr 1fr 2fr 1fr [panel-end] 2fr;
  grid-template-rows: 10vh [panel-start] minmax(200px, auto) 10vh [panel-end];
  grid-template-areas:
    "head head head head head head"
    "side side main main main main"
    ".... .... foot foot foot foot";
}
.panel {
  grid-area: panel;
}
We can also do the reverse of the above, and use lines created from our named areas. Each area creates four named lines using the same -start and -end convention. If you have a named area called main, then you have row lines main-start and main-end for the start and end row lines, and column lines main-start and main-end for the start and end column lines. You can then position an item using line-based placement and the named lines.
In this example, I am positioning the overlay panel using these created named lines.
.grid {
  display: grid;
  grid-template-areas:
    "head head head head head head"
    "side side main main main main"
    ".... .... foot foot foot foot";
}
.panel {
  grid-column: main-start / main-end;
  grid-row: head-start / foot-end;
}
In addition to the -start and -end line names, the main name of any named grid area can itself be used to reference a line. Therefore, if you have an area called main, you could use the ident main as a value for grid-row-start or grid-column-start, and the content would start at the start line of that area. If you use it as the value for grid-row-end or grid-column-end, then the end line of that area is chosen. In the below example, I am stretching the overlay panel from the start of main to the end of main for columns, and from the start of main to the end of foot for rows.
.panel {
  grid-column: main;
  grid-row: main / foot;
}
To wrap up all of this magical line business, it is also useful to know something about grid-area. Essentially, what we are doing when using grid-area with an ident like main is defining all four lines of the area. Line numbers are also a valid value for grid-area.
The grid-area property behaves a little differently when you use a custom ident rather than a number. If you use line numbers for the start values in grid-area, any end line number you do not set will be set to auto, and grid auto-placement will be used to work out where to put your item.
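For example, here is a minimal sketch of that auto behavior, using line numbers on the grid defined above:

/* Only the start lines are given; grid-row-end and grid-column-end
   default to auto, so the item spans a single track from each start line. */
.item {
  grid-area: 2 / 2;
}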
If, however, you use a custom ident and omit some of the lines, then the missing lines are set as follows.
Set Three Line Names
.main { grid-area: main / main / main; }
If you set three line names, then you are essentially missing grid-column-end. If grid-column-start is a custom ident then grid-column-end is also set to that ident. As we have already seen, an -end property will use the end edge of the area when using the main name to set a line, so with grid-column-start and grid-column-end set to the same name, the content stretches over the columns of that area.
Set Two Line Names
.main { grid-area: main / main; }
With only two names set, you are setting the row and column start lines. If grid-column-start is a custom ident, then grid-column-end is also set to that ident. If grid-row-start is a custom ident, then grid-row-end will be set to that ident.
Set One Line Name
.main { grid-area: main; }
Setting one line name is what you do when you set grid-area to the main name of your area. In this case, all four lines are set to that value.
Note: This works for grid-column and grid-row, too.
This technique essentially means you can target a set of columns or rows on the grid to place items. Just as the omitted -end values of grid-area are copied from the start values, so too are the end values of grid-column and grid-row. This means you can place an item between the start and end column lines of main by using:
.main { grid-column: main; }
In the blog post Breaking Out with CSS Grid explained, I showed how this capability is used to create a useful design pattern of full-width areas breaking out of a constrained content area.
A Grid Can Have A Lot Of Named Lines!
What all of the above means is that a grid can end up with a huge number of named lines. In most cases, you don’t need to worry about this. Pick the ones you want to use and ignore the fact there are others. They will just sit quietly with their names, causing you and your layout no problems at all.
Naming And The grid And grid-template Shorthands
CSS Grid Layout has two shorthands which enable the use of many grid properties in one compact syntax. Personally, I find this quite hard to read. Opinion is divided when I discuss this with other developers – some people love it, and others would rather use the individual properties. Have a look and see which camp you fall into! As with all shorthands, the key thing to remember is that properties you do not use will be reset when you use the shorthand.
The grid-template Shorthand: Creating The Explicit Grid
You can use the grid-template shorthand to set all of the explicit grid properties at once.
This means that you can define named lines and named areas in one go. To create a syntax combining named areas and lines, I suggest you first define your grid-template-areas value as in the section above.
Then you might want to add row line names. These are placed at the beginning and end of each string – remember that a string represents a row. The row name or names need to be inside square brackets, just as when you named lines in grid-template-rows, and should be outside of the quotes wrapping the string defining the row.
I have named two row lines in the code example: panel-start is placed after the header row (row line 2 of the grid), while panel-end comes after the footer row (line 4 of our three-row-track grid). I have also defined the row track sizing for named and un-named rows, adding the value after the string for each row.
.grid {
  display: grid;
  grid-gap: 20px;
  grid-template:
    "head head head head head head" 10vh
    [panel-start] "side side main main main main" minmax(200px, auto)
    ".... .... foot foot foot foot" 10vh [panel-end];
}
If we also want to name columns, we can’t do this inside the string so we need to add a / separator and then define our column track listing. We name the lines in the same way we would if this listing were the value of grid-template-columns.
.grid {
  display: grid;
  grid-gap: 20px;
  grid-template:
    "head head head head head head" 10vh
    [panel-start] "side side main main main main" minmax(200px, auto)
    ".... .... foot foot foot foot" 10vh [panel-end]
    / [full-start] 1fr [panel-start] 2fr 1fr 2fr 1fr [panel-end] 2fr [full-end];
}
In this example, I am creating an additional set of lines for rows and columns; these lines define an area named panel because I have used the panel-start and panel-end syntax. So I can place an item by giving it a grid-area value of panel.
This looks pretty obscure at first glance, however what we are doing here is creating a column listing that lines up with our ascii-art definition above. You could carefully add white space between everything in order to make the template-areas and template-columns definitions align, if you wanted.
The specification suggests that unless you want to define the implicit grid separately, you should use the grid rather than the grid-template shorthand. The grid shorthand will reset all of the implicit values that you do not set. So this shorthand allows the setting, and resets, of the following properties: grid-template-rows, grid-template-columns, grid-template-areas, grid-auto-rows, grid-auto-columns, and grid-auto-flow.
For our purposes, using the grid shorthand would look identical to using the grid-template shorthand, as we are creating an explicit grid definition. The only difference would be the resetting of the grid-auto-* properties. The grid shorthand can either be used to set the explicit grid — resetting the implicit properties, or the implicit grid — resetting the explicit properties. Doing both at once doesn’t make much sense!
.grid {
  display: grid;
  grid-gap: 20px;
  grid:
    "head head head head head head" 10vh
    [panel-start] "side side main main main main" minmax(200px, auto)
    ".... .... foot foot foot foot" 10vh [panel-end]
    / [full-start] 1fr [panel-start] 2fr 1fr 2fr 1fr [panel-end] 2fr [full-end];
}
Note: In the initial Candidate Recommendation of the CSS Grid spec, this shorthand also resets the gutter properties grid-column-gap and grid-row-gap. However, this has been changed. Browsers are updating their implementations, but at the time of writing you may find the gap properties being reset to 0 when using this shorthand, so you would need to define them afterwards.
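If you need to guard against that older behavior, a minimal sketch of the workaround (using a simplified two-column grid) is to declare the gutters after the shorthand:

.grid {
  display: grid;
  grid: "head head" 10vh
        "side main" minmax(200px, auto)
        / 1fr 3fr;
  /* Declared after the grid shorthand, in case the browser still
     resets the gap properties to 0. */
  grid-gap: 20px;
}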
Which Method Should I Use?
Which of all of these different methods, you might be wondering, is the best to use for any given task? Well, there are no hard and fast rules. Speaking personally, I love using grid-template-areas for components while working in my pattern library. It is nice to see the shape of the component right there in the CSS, and grid-template-areas makes it easy to try out different layouts as I test a component. Something I have discovered is that, because it is so easy to move things around, it is really important to check that you haven’t disconnected the visual and logical order of your component. A user navigating your site with a keyboard, tabbing between items, will be following the order of elements as defined in the source. Make sure that you do not forget to rearrange the source once you have worked out the best way to display your content. For more information about this issue, I advise you to read CSS Grid Layout and Accessibility.
I have been finding that I tend to use named lines for the larger sections of the layout, on the main page grid where I may well be placing different types of components for different layouts. With that said, I’m still exploring how best to use grid in production — probably just like everyone else. I’ve been creating small examples and playing with the ideas for several years, yet it has only been recently that I could use the techniques on real websites. Try not to get hung up on what is “right” or “wrong”. If you find a method confusing, or it doesn’t seem to work in your context, simply don’t use it. The beauty of this is that we can choose the ways that make the most sense for the projects we are working on. And, if you do come up with some guidelines for your own projects based on experience, write them up. I’m really keen to see what is working well for people in the real world of production grid layouts.
Quick Rules When Naming Things
To round up this article, here are some quick rules to remember when naming lines or areas of your grid:
When Naming Lines
You can use almost any name you like (other than the word span). However, if you want a named area to be created from your lines, name the line which begins the area with a name ending in -start and the line which ends it with a name ending in -end.
Lines can have multiple names, space separated inside a single set of square brackets.
Multiple lines can have the same name; just add the number of the line instance that you want to target after the line name (see the sketch after this list).
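Here is a minimal sketch of those last two rules (the track sizes and names are hypothetical):

.grid {
  display: grid;
  /* The first line has two names; the three repeated lines share the name col. */
  grid-template-columns: [full-start col] 1fr [col] 1fr [col] 1fr [full-end];
}
.item {
  /* Place the item from the 2nd line named col to the line named full-end. */
  grid-column: col 2 / full-end;
}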
When Creating Named Areas
When defining an area using grid-template-areas, the shape must be a complete rectangle.
Each row of your grid needs to be wrapped in quotes inside the value of grid-template-areas. You are creating a collection of strings, i.e. one string per grid row.
Each cell needs to be filled. If your design requires some cells to be left empty, then use a full-stop (.), or a sequence of full stops with no white space between them, to indicate this.
Your named areas create lines with the same name as the area, plus lines named with the area name and -start and -end appended. You can use these to place items.
Today we’d like to share another way of achieving morphing page transitions. This time, we’ll generate multiple SVG curves with JavaScript, making many different looking shapes possible. By controlling the individual coordinates of the several layers of SVG paths, the curved shapes animate to a rectangle (the overlay) with a gooey motion. We use some nice easing functions from glsl-easings and by tuning the curve, speed and the delay value, we can generate many interesting effects.
Attention: We use some new CSS properties in the demos; please view them with a modern browser.
Let’s have a look at the SVG which we will use to insert the path coordinates dynamically.
First, we’ll make sure that the whole SVG and the overlay paths are stretched to the size of the screen. For that, we’ll set the preserveAspectRatio attribute to none. Depending on how many layers we want, we’ll create that amount of paths:
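The markup itself is not shown above, but based on this description it might look something like the following sketch (the class names are assumptions, matching the .shape-overlays selector used in the JavaScript below):

<svg class="shape-overlays" viewBox="0 0 100 100" preserveAspectRatio="none">
  <path class="shape-overlays__path"></path>
  <path class="shape-overlays__path"></path>
  <path class="shape-overlays__path"></path>
</svg>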
Each path element corresponds to a layer of the overlay. We’ll specify the fill for each of these layers in our CSS. The last path element is the background that stays after the overlay expansion:
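A minimal sketch of that CSS might look like this (the custom property names and color values are hypothetical):

.shape-overlays__path:nth-of-type(1) { fill: var(--path-1, #6c63ac); }
.shape-overlays__path:nth-of-type(2) { fill: var(--path-2, #4a3f85); }
.shape-overlays__path:nth-of-type(3) { fill: var(--path-3, #2f2257); }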
Note that in our demos, we make use of CSS custom properties to set the path colors.
The JavaScript
For our demos, we define an overlay control class that allows us to set and control a couple of things. By changing each value, you can create unique looking shapes and effects:
class ShapeOverlays {
  constructor(elm) {
    this.elm = elm; // Parent SVG element.
    this.path = elm.querySelectorAll('path'); // Path elements in the parent SVG. These are the layers of the overlay.
    this.numPoints = 18; // Number of control points for the Bezier curve.
    this.duration = 600; // Animation duration of one path element.
    this.delayPointsArray = []; // Delay values, one per control point.
    this.delayPointsMax = 300; // Max delay value over all control points.
    this.delayPerPath = 60; // Delay value per path.
    this.timeStart = Date.now();
    this.isOpened = false;
  }
  ...
}
const elmOverlay = document.querySelector('.shape-overlays');
const overlay = new ShapeOverlays(elmOverlay);
Further methods that determine the appearance of the overlay are the ShapeOverlays.toggle() method and the ShapeOverlays.updatePath() method.
The ShapeOverlays.toggle() method opens and closes the overlay, and also sets the delay value of each control point every time it opens and closes. It is not necessary to set the delay values every time, but altering them creates some nice randomness.
The ShapeOverlays.updatePath() method controls the animation by applying the easing function.
For example, in demo 1, the same easing function is used for all control points, and the delay value is set like a fine wave using trigonometric functions, so that we get a “melting” appearance.
toggle() {
  const range = 4 * Math.random() + 6;
  for (var i = 0; i < this.numPoints; i++) {
    const radian = i / (this.numPoints - 1) * Math.PI;
    this.delayPointsArray[i] = (Math.sin(-radian) + Math.sin(-radian * range) + 2) / 4 * this.delayPointsMax;
  }
  ...
}
updatePath(time) {
  const points = [];
  for (var i = 0; i < this.numPoints; i++) {
    points[i] = ease.cubicInOut(Math.min(Math.max(time - this.delayPointsArray[i], 0) / this.duration, 1)) * 100;
  }
  ...
}
In our demos, we use this effect to create an overlay that reveals a menu at the end of the animation. But it could also be used for other types of transitions, like page transitions or scroll effects. Your imagination is the limit.
We hope you enjoyed this effect and find it useful!
Credits
glsl-easings by glslify. The easing functions used in the demos are based on the code of the glsl-easings module.
Editor’s Note: Welcome to this month’s web development update. Anselm has summarized the most important happenings in the web community that have taken place over the past few weeks in one handy list for you. Enjoy!
As web developers, we’re working in a very diverse environment: We have countless options to specialize in, but it’s impossible to keep up with everything. This week I read an article from a developer who realized that even though he has been building stuff for the web for over seven years, sometimes he just doesn’t understand what’s going on: “I’m slamming my keyboard in frustration as another mysterious error appears in my build script,” he writes. For him, writing JavaScript isn’t fun anymore. The tool chain got too complex, the workflows are built mainly for developer convenience, and many things that exist in the language itself are reinvented in external libraries.
Now when I look at the articles I collected for you this month, I can relate to the kind of frustration he’s feeling. Soon we won’t be able to use .dev domains anymore, HTTPS CAA checks don’t work with private network interfaces, and when I look at an (admittedly great) tutorial on how we can replace scroll events with IntersectionObserver, I see code that might have better performance but that is more complex than what we used to do with EventListener.
The web is developing and changing so fast, and we need to acknowledge that we as individuals can’t know and understand everything. And that’s fine. Choose what you want to do, set your priorities, and, most importantly, don’t hesitate to hire someone else for the things you can’t do on your own.
News
Mattias Geniar reminds us that Chrome, according to a recent commit in Chromium, will very soon preload .dev domains as HTTPS via preloaded HSTS. Google bought the domain name, and they now want it to be accessible only via HTTPS. So if you use a .dev name in your projects (which often is the case on your local machine, registered manually via the hosts file), you should switch to the reserved .test domain name now or consider using localhost instead. Once the patch lands in Chrome, you’ll not be able to access your projects anymore without a valid TLS certificate in place.
React 16 is out now — under a full MIT license, which finally ends the debate about the previously used patent-clause copyright license. The new version comes with a rewritten core, better error handling, custom DOM attributes, and it can return fragments and strings (so no more useless span elements). Also, its footprint has decreased by 30%.
Tooling
Infusion is an inclusive, accessible documentation builder.
Sketch 47 is out with two major new features: libraries and smooth corners. Libraries especially are a huge step forward, as they allow us to sync, share and update symbols from any Sketch document, even in collaboration with other people.
Web Performance
“Essential Image Optimization” by Addy Osmani is a free eBook that explains almost everything you can and should know about image optimization for the web. Be sure to take a look at it.
News from Cloudflare: You’ll soon be able to deploy JavaScript to Cloudflare’s edge, written against an API similar to Service Workers. Sounds pretty amazing.
CSS
Mozilla built a CSS Grid playground that helps you wrap your head around the new layout technique and start building with it.
The Intl.PluralRules API is an extension to the Internationalization API that will soon be available in Firefox 58 and Chrome 63. It solves a quite tricky issue with plurals in internationalized contexts.
Ian Devlin shares how they handle accessibility at Trivago. An interesting read, especially because they explain the internal challenges, how to increase awareness for accessibility, and what they did, do and try to achieve in the future.
Security
The University of Cambridge shares why they can’t issue TLS certificates anymore for their internal network domain private.cam.ac.uk due to the now-required CAA check. In short: As the hostname cannot be checked by the certificate authority, it declines to issue a certificate. This is the drawback of the otherwise quite useful mandatory CAA checks.
With PHP 7.2 coming in November, the Libsodium extension will be available in PHP. This means we’ll finally have a relatively easy way to use the Argon2 algorithm for hashing user passwords. Here’s how you can do it.
Carl Chenet explores why communicating over Slack can be problematic, as nothing we write in the app is encrypted. So it’s better never to share any business secrets or credentials via Slack.
Privacy
Judith Duportail asked the dating platform Tinder for her data and got back more than 800 pages filled with very personal details and a lot more than she’d remembered. Tinder, of course, is just an example here — the same is probably true for most apps on your phone.
Jonathan Golden from Airbnb shares what they learned from scaling Airbnb. A good article about company management, setting goals and optimizing work.
Going Beyond…
More and more people working at Google, Twitter, Facebook and other tech giants disconnect themselves from smartphones. By radically limiting the feature set to a normal wireless phone, they want to gain back control over their lives. Paul Lewis spoke to some people and researched why tech insiders who actually build the apps and operating systems for smartphones and other smart devices fear a smartphone dystopia. A good read on mental health issues.
Jens Oliver Meiert emailed the companies that are responsible for 71% of all greenhouse gas emissions. Here’s what they replied, but the most important point about this experiment is what the author concludes:
“Perhaps I didn’t get to email the people who’re truly responsible here; and what they do with my requests, I don’t know, either.
But the point is that reaching out is one of the few options we have at our disposal; and if even one small thing changes and improves, it may be a success. And as such I believe more people should reach out. Instead of waiting for politicians or law enforcement to act, let’s act ourselves, let’s make ourselves heard. Constructive action always helps.”
We hope you enjoyed this Web Development Update. The next one is scheduled for November 17th. Stay tuned!
You know that user feedback is crucial — after all, your users will decide whether your app succeeds or not — but how do you know whether users are being fair and objective in their feedback?
We can tell you: They won’t be. All of your users will be giving you biased feedback. They can’t help it.
When soliciting and listening to user feedback, you will inevitably run into bias on both sides of the coin: Biases will influence the people providing feedback, and your own biases will influence the way you receive that feedback.
It’s important to be aware of this, especially when reviewing comments about your user experience (UX). Accurate and unbiased feedback is essential to developing the best possible version of your app. Although you can’t erase your own biases (or those of your users), you can take steps to overcome common biases once you know what they are and how they might appear. The next time you ask your users for input, keep bias in mind and evaluate how you respond to users’ comments. Is your action (or inaction) driven by bias?
Collecting And Analyzing Data
When determining qualitative sample sizes in user research, researchers must know how to make the most of the data they collect. Your sample size won’t matter if you haven’t asked good questions and done thorough analysis. Read a related article →
There are dozens of cognitive biases that take many different forms, although a few dominating types emerge frequently for product teams seeking user feedback. In this article, we’ll take a closer look at four of the most common types of cognitive biases that pop up when collecting and interpreting UX feedback — and how you can nip these biases in the bud before they skew your production process: confirmation bias, framing bias, friendliness bias and false-consensus bias.
Confirmation Bias
This is probably the most well-known bias encountered by people of all professions. Psychologist Daniel Kahneman, who introduced the concept of cognitive bias together with mathematical psychologist Amos Tversky, says that confirmation bias exists “when you have an interpretation, and you adopt it, and then, top down, you force everything to fit that interpretation.” Confirmation bias occurs when you gravitate towards responses that align with your own beliefs and preconceptions.
Solely accepting feedback that aligns with your established narrative creates an echo chamber that will severely affect your approach to UX design. One dangerous effect of confirmation bias is the backfire effect, in which you begin to reject any results that prove your opinions wrong. As a designer, you are tasked with creating the UX that best serves your audience, but your design will be based in part on your subjective tastes, beliefs and background. Sometimes, as we learned firsthand, this bias can sneak its way into your process — not so much in how you interpret user feedback, but in how you ask for it.
In the early years of my agency designing web and mobile apps for clients, we used to have our UX designers write user surveys and conduct interviews to get feedback on products. After all, the designers understood the UX like no one else, and, ultimately, they’d be the ones to make any changes. Strangely, after doing this for about a year, we noticed that we weren’t getting a lot of actionable feedback. We began to doubt the value of even creating these surveys. Before tossing them out entirely, we experimented by removing the UX designers from the feedback process. We had one of our quality-assurance (QA) engineers write the user survey and collect the feedback — and we quickly found the results were vastly more interesting and actionable.
Although our UX designers were open to hearing feedback, we realized they were subconsciously formulating survey and interview questions in a manner that would easily confirm their own preconceptions about the design they created. For instance, our UX designers asked, “Did the wide variety of products available make it difficult to find the specific product you wanted?” This phrasing led our respondents to perceive that finding a product was difficult in the first place — leaving no room for those who found it easy to reflect that in their answers. The question also suggested a cause of difficulty (the wide variety of products), leaving no room for respondents to offer other potential reasons for difficulty finding a product.
When our QA engineers took the reins, they wrote the question as, “Did you have any difficulty finding the product you wanted? If so, why?” Having no strong preexisting beliefs about the design, they posed an unbiased question — leaving room for more genuine answers from the respondents who had difficulty finding products, as well as those who didn’t. By following up with an open-ended “Why?,” we were able to collect more diverse and informative answers, helping us to learn more about the various reasons that respondents found difficulty with our design.
Confirmation bias often shows up when one is creating user surveys. In your survey, you might inadvertently ask leading questions, phrased in a way that generates answers that validate what you already believe. Our UX designers asked leading questions like, “Did the branding provide a sense of professionalism and trust?” Questions like this don’t allow space for users to provide negative or opposing feedback. By removing our designers from the feedback process, the questions naturally became less biased in phrasing — our QA engineers asked non-leading questions like, “What sort of impressions did the app’s look and feel provide?” As a result, we began to see far more objective and truly helpful feedback coming from users.
Avoiding Confirmation Bias
Overcoming confirmation bias requires collecting feedback from a diverse group of people. The bigger the pool of users providing feedback, the more perspectives you add to the mix. Don’t survey or interview users from only one group, demographic or background — do your best to get a large sample size filled with users who represent all demographics in your target market. This way, the feedback you receive won’t be limited to one group’s set of preconceptions.
Write survey questions carefully to avoid leading questions. Instead of asking, “How much did you like feature X of the app?,” you might ask, “Rate your satisfaction with feature X on the following scale” (and provide a scale that ranges from “strongly dislike” to “strongly like”). The first phrasing suggests that the user should like the feature in question, whereas the second phrasing doesn’t make an inherent suggestion. Have someone else read your survey questions before sending them out to users, to check that they sound impartial.
UX designers can also avoid falling prey to confirmation bias by using more quantitative data — although, as you’ll see below, even interpretation of numerical data isn’t immune to bias.
Framing Bias
Framing bias is based on how you choose to frame the user feedback you’ve received. This kind of bias can make a designer interpret an objective metric in a positive or negative light. Nielsen Norman Group offers a fascinating example that describes the results of a user feedback survey in two ways. In the first, 4 out of 20 users said they could not find the search function on a website; in the second, 16 out of 20 users stated that they found the search function.
Nielsen Norman Group presented these two results — which communicate the same information — to a group of UX designers, with half receiving the success rate (16 out of 20) and half the failure rate (4 out of 20). It found that only 39% of those who received the positive statement were likely to call for a redesign, compared to 51% of respondents who received the failure rate. Despite the metric being the same, a framing bias led to these professional UX designers behaving differently.
The presence of framing bias when analyzing data can lead to subsequent effects, such as the clustering illusion, in which people mistakenly perceive patterns in data that are actually coincidental, or the anchoring effect, in which people give much more weight to the first piece of data they look at than the rest of the data. These mental traps can influence the decisions you make in the best interest of the product.
Avoiding Framing Bias
You can avoid framing bias by becoming more self-aware of how you look at data — and by adding more frames.
For every piece of feedback you assess, ask yourself how you’re framing the data. This awareness will help you learn not to take your first interpretation as a given, and to understand why your perspective feels positive or negative. Then, identify at least one or two alternative frames you could use to phrase the same result. Let’s say one of your survey results shows that 70% of users feel your UI is intuitive. This gives you a surge of pride and validation, but you also recognize that it’s framed as a positive. Using an alternative frame, you write the result again as such: 30% of users do not feel the UI is intuitive. By looking at both frames, you gain a less biased and more well-rounded perception of what this data means for your product.
Be wise enough to admit when you’re unsure of what action to take based on your data. Get a second opinion from someone on your team. If one piece of feedback seems particularly important and difficult to interpret, consider sending out a new survey or gathering more feedback on that topic. Perhaps you could ask those users who don’t feel your UI is intuitive to elaborate on specific aspects of the UI (colors, button placement, text, etc.). What specific, impartial questions could you create to gain deeper insight from users?
Friendliness Bias
Of course, you want to be civil and professional with the people who provide UX feedback, but it doesn’t pay to be too friendly. In other words, you don’t want to be so accommodating that it skews their responses.
Friendliness bias — also called acquiescence bias or user research bias — occurs when the people providing feedback tell you the answers they think you want to hear. Sometimes, this happens because they think fondly of you and respect your professional opinion, but the reason can also be less flattering.
People might tell you what you want to hear because they’re tired of being questioned, and they think positive answers will get them out of the room (or out of the online survey) faster. This is the principle of least effort, which states that people will try to use the smallest amount of thought, time and energy to avoid resistance and complete a task. This principle has probably already influenced the usability of your UX design, but you might not have considered how it comes into play when collecting feedback.
Whatever the cause, friendliness bias can tarnish the hard work and market research you’ve conducted, leaving you with disingenuous data you can’t effectively use.
Avoiding Friendliness Bias
Friendliness bias can be avoided by removing yourself from the picture, because most people don’t like to give unfavorable feedback face to face.
If gathering UX feedback involves in-person questionnaires or focus groups, have someone outside of your development team serve as a facilitator. The facilitator should make it clear that he or she is not the one responsible for the design of the product. This way, people might feel more comfortable providing honest — and negative — feedback.
Collecting feedback digitally is also a helpful way to reduce the chance of your data being compromised by friendliness bias. People might open up more when sitting behind a screen, because they don’t have to face the reactions of the survey’s providers.
Be mindful, especially if you go the digital route, of survey fatigue. When you ask too many questions, your users might begin to tire partway through the survey. This can result in people simply selecting answers at random (or choosing the most favorable answers) just to finish faster and expend the least amount of effort. To avoid friendliness bias due to survey fatigue, keep surveys as short as possible and phrase the questions in very simple terms. Don’t make all questions required, and edit the survey diligently to cut all questions that aren’t truly relevant or necessary.
False-Consensus Bias
This form of bias happens when developers overestimate the number of people who will agree with their idea or design, creating a false consensus. Put simply, false consensus is the assumption that others will think the same way as you.
A 1976 Stanford study asked 104 college students to walk around campus wearing a sandwich board advertising a restaurant. The study found that 62% of those who agreed to wear the sign believed others would respond the same way, and that 67% of students who refused to wear the sign thought their peers would also respond negatively. The fact that both of these groups formed a majority in favor of their personal belief is an example of false-consensus bias.
As outlined above, our own UX designers fell into false-consensus bias when writing user survey questions, unintentionally phrasing questions with the assumption that users would appreciate the same UX features that the designers appreciated. Even though the core goal in UX design is to set aside your personal beliefs in favor of the wants and needs of your audience, you can’t help but see your product through your own lens — making it difficult to imagine that others would see it any other way. This underscores the importance of having team members with different backgrounds (especially those with expertise outside of UX design) be involved in the feedback process.
Avoiding False-Consensus Bias
False-consensus bias can be avoided by identifying and articulating your own assumptions. When you begin creating a user survey or crafting a test group, ask yourself, “What do I think the results of this feedback will be?” Write them down — these are your assumptions. Even better, ask a friend or coworker to listen to you describe the product, and have them write down the assumptions and opinions they hear from you.
Once you’re aware of your perceptions, you can design the feedback process to ensure you don’t favor your own opinions. With your assumptions sitting in front of you, try this exercise: Pretend every single one of your assumptions is wrong. If this is the case, which of these assumptions would be the riskiest to your product’s success? Which ones would cause widespread dissatisfaction across users? Craft questions for your users that challenge those risky assumptions.
Just as with confirmation bias, it’s important to collect feedback from a wide range of users, across all demographics and backgrounds. Ensure you’re not just surveying people who work closely with you or who come from a similar background — these people are likely to share the same opinions and preconceptions as you, which can reinforce your false consensus.
Breaking Down Biases Makes User Feedback More Valuable
Bias is universal, but so too are the methods you can take to avoid it. People — including you — can’t lose their own biases, but that doesn’t mean you have to let them interfere with your work. By simply understanding what each bias is and by breaking down the ways that it appears during the feedback process, you can put measures in place to overcome misleading preconceptions and gather the most impartial feedback possible.
Ensure that all of your questions are carefully worded and edited, because this will improve clarity and maintain participant focus. To avoid skewed data, always involve as large and diverse a group as possible, and try to remove yourself from the feedback — both in person as the facilitator of the survey or test group, and emotionally as the reviewer of the comments. This will encourage participants to answer more honestly about their experience with your UX, while preventing you from projecting your own assumptions and framing onto their feedback.
You might find it helpful to bring in a second opinion on the reviews as you start to establish bias awareness as a permanent part of your feedback and testing practices. It’s not an easy process, but when you can reduce the influence of bias on your work, you can get to what really matters: designing a better experience for your users.
With so many JavaScript frameworks around, single-page application (SPA) websites seem to be all the rage nowadays. However, an SPA architecture has the drawback of having a slower first-page load than a server-based application, because all of the JavaScript templates used to render the HTML view must be downloaded before the required view can be generated.
Enter service workers. Through service workers, all framework and application code to output the HTML view can be precached in the browser, thus speeding up both the first meaningful paint and the time to interact. In this article, I will share my experience with implementing service workers for PoP, an SPA website that runs on WordPress, with the goal of speeding up the loading time and providing offline-first capabilities.
Most of the code from the explanation below can be reused to create a solution for traditional (i.e. non-SPA) WordPress websites, too. Anyone want to implement a plugin?
Defining The Application’s Features
In addition to being suitable for an SPA WordPress website, the implementation of service workers below has been designed to support the following features:
loading of assets from external sources, such as a content delivery network (CDN);
multilingual (i18n) support;
multiple views;
selection on runtime of the caching strategy, on a URL-by-URL basis.
Based on this, we’ll make the following design decisions.
Load an Application Shell (or Appshell) First
If the SPA architecture supports it, load an appshell first (i.e. minimal HTML, CSS and JavaScript to power the user interface), under https://www.mydomain.com/appshell/. Once loaded, the appshell will dynamically request content from the server through an API. Because all of the appshell’s assets can be precached using service workers, the website’s frame will load immediately, speeding up the first meaningful paint. This scenario uses the “cache falling back to network” caching strategy.
Watch out for conflicts! For instance, WordPress outputs code that is not supposed to be cached and used forever, such as nonces, which usually expire after 24 hours. The appshell, which service workers will cache in the browser for more than 24 hours, needs to deal with nonces properly.
Extract WordPress Resources Added Through wp_enqueue_script and wp_enqueue_style
Because a WordPress website loads its JavaScript and CSS resources through the wp_enqueue_script and wp_enqueue_style hooks, respectively, we can conveniently extract these resources and add them to the precache list.
While following this strategy reduces the effort to produce the list of resources to precache (some files will still need to be added manually, as we shall see later), it implies that the service-workers.js file must be generated dynamically, on runtime. This fits very well with the decision to enable WordPress plugins to hook into the generation of the service-workers.js file, as explained in the next item.
Indeed, I’d argue that there is no other way but to hook into these functions, because generating the list manually (i.e. finding and listing all resources loaded by all plugins and the theme) is too troublesome a process, and using other tools to generate the list of JavaScript and CSS files to precache, such as Service Worker Precache, will actually not work in this context, for two main reasons:
Service Worker Precache works by scanning files in a specified folder and filtering them using wildcards. However, the files that WordPress ships with are far more numerous than those actually required by the application, so we would quite likely be precaching plenty of redundant files.
WordPress attaches a version number to the requested file, which varies from file to file, such as wp-includes/js/jquery/jquery.js?ver=1.12.4 (the exact version number depends on your installation).
Service workers intercept each request based on the full path of the requested resource, including the parameters — such as the version number, in this case. Because the Service Worker Precache tool is not aware of the versioning number, it will be unable to produce the required list correctly.
Allow Plugins to Hook Into the Generation of service-workers.js
Adding hooks into our functionality enables us to extend the functionality of service workers. For instance, third-party plugins can hook into the precached resources list to add their own resources, or to specify which caching strategy to use depending on their URL pattern, among others.
Caching External Resources: Define a List of Domains, and Validate That the Resources to Precache Originate From One of These
Whenever the resource originates from the website’s domain, it can always be handled using service workers. Whenever it does not, it can still be fetched, but we must use the no-cors fetch mode. This type of request will result in an opaque response, so we won’t be able to check whether the request was successful; however, we can still precache these resources and allow the website to be browsable offline.
Support for Multiple Views
Let’s assume the URL contains a parameter indicating which view to use to render the website, such as https://www.mydomain.com/?view=print.
Several appshells can be precached, each one representing a view:
https://www.mydomain.com/appshell/?view=default
https://www.mydomain.com/appshell/?view=embed
https://www.mydomain.com/appshell/?view=print
Then, when loading the website, we extract the value of the view parameter from the URL, and load the corresponding appshell, on runtime.
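A hedged sketch of that runtime selection (the function name is hypothetical; the appshell URL pattern follows the examples above):

// Map the view parameter of the requested URL to the
// corresponding precached appshell.
function getAppshellUrl(requestUrl) {
  var url = new URL(requestUrl);
  var view = url.searchParams.get('view') || 'default';
  return url.origin + '/appshell/?view=' + view;
}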
i18n Support
We are assuming that the language code is part of the URL, like this: https://www.mydomain.com/language-code/path/to/the/page/
One possible approach would be to design the website to provide a different service-workers.js file for each language, each to be employed under its corresponding language scope: a service-worker-en.js file for the en/ scope for English, a service-worker-es.js for the es/ scope for Spanish, and so on. However, conflicts arise when accessing shared resources, such as the JavaScript and CSS files located in the wp-content/ folder. These resources are the same for all languages; their URLs don’t carry any information about language. Adding yet another service-workers.js file to deal with all non-language scopes would add undesired complexity.
A more straightforward approach would be to employ the same technique as above for rendering multiple views: Register a unique service-workers.js file that already contains all information for all languages, and decide on runtime which language to use by extracting the language code from the requested URL.
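The language extraction could be sketched like this (a hypothetical helper, assuming the language code is always the first path segment):

// Extract the two-letter language code from the requested URL;
// fall back to English when no code is present.
function getLanguageCode(requestUrl) {
  var matches = new URL(requestUrl).pathname.match(/^\/([a-z]{2})\//);
  return matches ? matches[1] : 'en';
}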
In order to support all of the features described so far for, say, three views (a default, embed and print view) and two languages (English and Spanish), we would need to generate the following appshells:
https://www.mydomain.com/en/appshell/?view=default
https://www.mydomain.com/en/appshell/?view=embed
https://www.mydomain.com/en/appshell/?view=print
https://www.mydomain.com/es/appshell/?view=default
https://www.mydomain.com/es/appshell/?view=embed
https://www.mydomain.com/es/appshell/?view=print
Selecting The Caching Strategy On Runtime
The application must be able to choose from several caching strategies in order to support different behaviors on different pages. Here are some possible scenarios:
A page can be retrieved from cache most of the time, but on specific occasions it must be retrieved from the network (for example, to view a post after editing it).
A page may have a user state and, as such, cannot be cached (for example, “Edit my account,” “My posts”).
A page can be requested in the background to bring extra data (for example, lazy-loaded comments on a post).
Here are the caching strategies and when to use each:
Static assets (JavaScript and CSS files, images, etc.)
The static content will never be updated: JavaScript and CSS files carry a version number, and uploading the same image a second time into WordPress’ media manager will change the image’s file name. As such, cached static assets will not become stale.
Appshell
We want the appshell to load immediately, so we retrieve it from the cache. If the application is upgraded and the appshell changes, then changing the version number will install a new version of the service worker and download the latest version of the appshell.
Content
While getting the content from the cache to display it immediately, we also send the request to bring the content from the server. We compare the two versions using an ETag header, and if the content has changed, we cache the server’s response (i.e. the most up to date of the two) and then show a message to the user, “This page has been updated, please click here to refresh it.”
Content that would normally use the “cache then network” strategy can be forced to use a “network only” strategy by artificially adding the parameter sw-strategy=networkfirst or sw-networkfirst=true to the requested URL. This parameter can be removed before the request is sent to the server.
Content with user state
We do not want to cache any content with a user state, for security reasons. (We could delete the user-state cache when the user signs out, but implementing that is more complex.)
Lazy-loaded content
Content is lazy-loaded when the user will not see it immediately, allowing the content that the user sees immediately to load faster. (For example, a post would load immediately, and its comments would be lazy-loaded because they appear at the bottom of the page.) Because it will not be seen immediately, fetching this content straight from the cache is not necessary either; instead, we always try to get the most up-to-date version from the server.
Ignore Certain Data When Generating the ETag Header for the “Cache Then Network” Strategy
An ETag can be generated using a hash function; a very tiny change in the input will produce a completely different output. As such, values that are not considered important and that are allowed to become stale should not be factored in when generating the ETag. Otherwise, the user might be prompted with the message “This page has been updated, please click here to refresh it” for every tiny change, such as the comments counter going from 5 to 6.
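As a sketch of the idea (the field names are hypothetical), the server-side ETag generation could drop such volatile values before hashing:

function generate_etag($response_data) {
  // Volatile values that are allowed to become stale must not
  // influence the hash, or every tiny change would prompt the user
  // to refresh the page.
  unset($response_data['comments-count']);
  unset($response_data['nonce']);
  return md5(json_encode($response_data));
}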
We have decided to automatically extract all JavaScript and CSS files that are to be used by the application, added through the wp_enqueue_script and wp_enqueue_style functions, in order to export these resources into the Service Worker Precache list. This implies that the service-workers.js file will be generated on runtime.
When and how should it be generated? Triggering an action to create the file through admin-ajax.php (for example, calling https://www.mydomain.com/wp-admin/admin-ajax.php?action=create-sw) will not work, because it would load WordPress’ admin area. Instead, we need to load all JavaScript and CSS files from the front end, which will certainly be different.
A solution is to create a private page on the website (hidden from view), accessed through https://www.mydomain.com/create-sw/, which will execute functionality to create the service-workers.js file. The file’s creation must take place at the very end of the execution of the request, so that all JavaScript and CSS files will have been enqueued by then:
function generate_sw_shortcode($atts) {
  add_action('wp_footer', 'generate_sw_files', PHP_INT_MAX);
}
add_shortcode('generate_sw', 'generate_sw_shortcode');
File: sw.php
Please note that this solution works because the website is an SPA which loads ALL files in advance for the whole lifecycle of the application usage (the infamous bundled file); requesting any two different URLs from this website will always load the same set of .js and .css files. I am currently developing code-splitting techniques into the framework which will, coupled with HTTP/2, load only the required JavaScript resources and nothing else, on a page-by-page basis — it should be ready within a couple of weeks. Hopefully I will then be able to describe how service workers + SPA + code-splitting can all work together.
The generated file could be placed in the root of the website (i.e. alongside wp-config.php) to grant it / scope. However, placing files in the root folder of the website is not always advisable, such as for security (the root folder must have very restrictive write permissions) and for maintenance (if service-workers.js were to be generated by a plugin and this one was disabled because its folder was renamed, then the service-workers.js file might never get deleted).
Luckily, there is another possibility. We can place the service-workers.js file in any directory, such as wp-content/sw/, and add an .htaccess file that grants access to the root scope:
function generate_sw_files() {
  $dir = WP_CONTENT_DIR."/sw/";
  // Create the directory structure
  if (!file_exists($dir)) {
    @mkdir($dir, 0755, true);
  }
  // Generate the Service Worker .js file
  save_file($dir.'service-workers.js', get_sw_contents());
  // Generate the file to register the Service Worker
  save_file($dir.'sw-registrar.js', get_sw_registrar_contents());
  // Generate the .htaccess file to allow access to the scope (/)
  save_file($dir.'.htaccess', get_sw_htaccess_contents());
}
function get_sw_registrar_contents() {
  return '
    if ("serviceWorker" in navigator) {
      navigator.serviceWorker.register("/wp-content/sw/service-workers.js", { scope: "/" });
    }
  ';
}
function get_sw_htaccess_contents() {
  return '
    <FilesMatch "service-workers.js$">
      Header set Service-Worker-Allowed: /
    </FilesMatch>
  ';
}
function save_file($file, $contents) {
  // Open the file, write the contents and close it
  $handle = fopen($file, "wb");
  $numbytes = fwrite($handle, $contents);
  fclose($handle);
  return $file;
}
File: sw.php
Generation of these files can be included during the website deployment process, in order to automate it and so that all files are created just before the new website version becomes available to users.
For security reasons, we can add some validation in function generate_sw_files() before it executes:
requiring a valid access key to be provided as a parameter;
making sure the page can only be requested from within the same server.
From an array of servers behind a load balancer, or from a stack of servers in the cloud using auto-scaling, we can’t execute wget URL, because we don’t know which server will serve the request. Instead, we can directly execute the PHP process, using php-cgi:
If you’re uncomfortable with having this page on a production server, this process could also be run in a staging environment, as long as it has exactly the same configuration as the production server (i.e. the database must have the same data; all constants in wp-config.php must have the same values; the URL to access the website on the staging server must be the same as the website itself, etc.). Then, the newly created service-workers.js file must be copied from the staging to production servers during deployment of the website.
Contents of service-workers.js
Generating service-workers.js from a PHP function implies that we can provide a template of this file, which will declare what variables it needs, and the service worker’s logic. Then, on runtime, the variables will be replaced with actual values. We can also conveniently add hooks to enable plugins to add their own required values. (More configuration variables will be added later on in this article).
function get_sw_contents() {
  // $sw_template has the path to the service worker template
  $sw_template = dirname(__FILE__).'/assets/sw-template.js';
  $contents = file_get_contents($sw_template);
  foreach (get_sw_configuration() as $key => $replacement) {
    $value = json_encode($replacement);
    $contents = str_replace($key, $value, $contents);
  }
  return $contents;
}
function get_sw_configuration() {
  $configuration = array();
  $configuration['$version'] = get_sw_version();
  …
  return $configuration;
}
File: sw.php
The configuration of the service workers template looks like this:
var config = {
  version: $version,
  …
};
File: sw-template.js
Resource types
The introduction of resource types, which is a way of splitting assets into groups, allows us to implement different behaviors in the logic of the service worker. We will need the following resource types:
HTML
Produced only when first loading the website. After that, all content is dynamically requested using the application API, whose response is in JSON format.
JSON
The API to get and post content
Static
Any asset, such as JavaScript, CSS, PDF, an image, etc.
Resource types can be used for caching resources, but this is optional (it only makes the logic more manageable). They are needed for:
selecting the appropriate caching strategy (for static, cache-first, and for JSON, network-first);
defining paths not to intercept (for example, anything under wp-content/ is excluded for JSON but not for static, and anything ending in .php, such as dynamically generated images, is excluded for static but not for JSON).
function get_sw_resourcetypes() {
  return array('static', 'json', 'html');
}
File: sw.php
function getResourceType(request) {
  var acceptHeader = request.headers.get('Accept');
  var resourceType = 'static';
  if (acceptHeader.indexOf('text/html') !== -1) {
    resourceType = 'html';
  } else if (acceptHeader.indexOf('application/json') !== -1) {
    resourceType = 'json';
  }
  return resourceType;
}
File: sw-template.js
Intercepting requests on service workers
We will define for which URL patterns we do not want the service worker to intercept the request. The list of resources to exclude is initially empty, just containing a hook to inject all values.
$excludedFullPaths
Full paths to exclude.
$excludedPartialPaths
Paths to exclude, appearing after the home URL (for example, articles will exclude https://www.mydomain.com/articles/ but not https://www.mydomain.com/posts/articles/). Partial paths are useful when the URL contains language information (for example, https://www.mydomain.com/en/articles/), so a single path would exclude that page for all languages (in this case, the home URL would be https://www.mydomain.com/en/). More on this later.
The value opts.locales.domain will be calculated on runtime (more on this later).
var config = {
  …
  excludedPaths: {
    full: $excludedFullPaths,
    partial: $excludedPartialPaths
  },
  …
};
self.addEventListener('fetch', event => {
  function shouldHandleFetch(event, opts) {
    var request = event.request;
    var resourceType = getResourceType(request);
    var url = new URL(request.url);
    var fullExcluded = opts.excludedPaths.full[resourceType].some(path => request.url.startsWith(path));
    var partialExcluded = opts.excludedPaths.partial[resourceType].some(path => request.url.startsWith(opts.locales.domain + path));
    if (fullExcluded || partialExcluded) return false;
    if (resourceType == 'static') {
      // Do not handle dynamic images, e.g. the Captcha image, captcha.png.php
      var isDynamic = request.url.endsWith('.php') && request.url.indexOf('.php?') === -1;
      if (isDynamic) return false;
    }
    …
  }
  …
});
File: sw-template.js
Now we can define WordPress resources to be excluded. Please note that, because it depends on the resource type, we can define a rule to intercept any URL starting with wp-content/, which works only for the resource type “static.”
class PoP_ServiceWorkers_Hooks_WPExclude {
  function __construct() {
    add_filter('PoP_ServiceWorkers_Job_Fetch:exclude:full', array($this, 'get_excluded_fullpaths'), 10, 2);
  }
  function get_excluded_fullpaths($excluded, $resourceType) {
    if ($resourceType == 'json' || $resourceType == 'html') {
      // Do not intercept access to the WP Dashboard
      $excluded[] = admin_url();
      $excluded[] = content_url();
      $excluded[] = includes_url();
    } elseif ($resourceType == 'static') {
      // Do not cache the service-workers.js file!!!
      $excluded[] = WP_CONTENT_DIR.'/sw/service-workers.js';
    }
    return $excluded;
  }
}
new PoP_ServiceWorkers_Hooks_WPExclude();
Precaching resources
In order for the WordPress website to work offline, we need to retrieve the full list of resources needed and precache them. We want to be able to cache both local and external resources (for example, from a CDN).
$origins
The domains from which we enable the service worker to intercept requests (for example, our own domain plus our CDN).
$cacheItems
List of resources to precache. It is initially an empty array, providing a hook to inject all values.
var config = {
    …
    cacheItems: $cacheItems,
    origins: $origins,
    …
};
In order to precache external resources, executing cache.addAll will not work. Instead, we need to fetch each of these resources ourselves, passing the parameter {mode: 'no-cors'}, and add the responses to the cache.
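A minimal sketch of how that could look in the service worker’s install event (the cache name here is an assumption; the actual logic lives in sw-template.js):

self.addEventListener('install', event => {
    event.waitUntil(
        // 'static-cache' is a hypothetical cache name
        caches.open('static-cache').then(cache => Promise.all(
            config.cacheItems.static.map(url =>
                // {mode: 'no-cors'} allows fetching cross-origin resources
                // without CORS headers; the opaque response is stored as-is
                fetch(url, { mode: 'no-cors' }).then(response => cache.put(url, response))
            )
        ))
    );
});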
Resources to be intercepted with the service worker either must come from any of our origins or must have been defined in the initial precache list (so that we can precache assets from yet other external domains, such as https://cdnjs.cloudflare.com):
self.addEventListener('fetch', event => {

    function shouldHandleFetch(event, opts) {

        …
        var fromMyOrigins = opts.origins.indexOf(url.origin) > -1;
        var precached = opts.cacheItems[resourceType].indexOf(url.href) > -1;
        if (!(fromMyOrigins || precached)) return false;
        …
    }
    …
});
File: sw-template.js
Generating the list of resources to precache
Assets loaded through wp_enqueue_script and wp_enqueue_style can be extracted easily. Finding other assets involves a manual process, depending on whether they come from WordPress core files, from the theme or from installed plugins:
images;
CSS and JavaScript not loaded through wp_enqueue_script and wp_enqueue_style;
JavaScript files loaded conditionally (such as those added within HTML conditional comments);
resources requested at runtime (for example, TinyMCE’s theme, skin and plugin files);
references to JavaScript files hardcoded inside another JavaScript file;
font files referenced in CSS files (TTF, WOFF, etc.);
locale files;
i18n files.
To retrieve all JavaScript files loaded through the wp_enqueue_script function, we would hook into script_loader_tag, and for all CSS files loaded through the wp_enqueue_style function, we would hook into style_loader_tag:
class PoP_ServiceWorkers_Hooks_WP {

    private $scripts, $styles, $doc;

    function __construct() {
        $this->scripts = $this->styles = array();
        $this->doc = new DOMDocument();
        add_filter('script_loader_tag', array($this, 'script_loader_tag'));
        add_filter('style_loader_tag', array($this, 'style_loader_tag'));
        …
    }

    function script_loader_tag($tag) {
        if (!empty($tag)) {
            $this->doc->loadHTML($tag);
            foreach ($this->doc->getElementsByTagName('script') as $script) {
                if ($script->hasAttribute('src')) {
                    $this->scripts[] = $script->getAttribute('src');
                }
            }
        }
        return $tag;
    }

    function style_loader_tag($tag) {
        if (!empty($tag)) {
            $this->doc->loadHTML($tag);
            foreach ($this->doc->getElementsByTagName('link') as $link) {
                if ($link->hasAttribute('href')) {
                    $this->styles[] = $link->getAttribute('href');
                }
            }
        }
        return $tag;
    }
    …
}
new PoP_ServiceWorkers_Hooks_WP();
Then, we simply add all of these resources to the precache list:
class PoP_ServiceWorkers_Hooks_WP {

    function __construct() {
        …
        add_filter('PoP_ServiceWorkers_Job_CacheResources:precache', array($this, 'get_precache_list'), 10, 2);
    }

    function get_precache_list($precache, $resourceType) {
        if ($resourceType == 'static') {
            $precache = array_merge(
                $precache,
                $this->scripts,
                $this->styles
            );
        }
        return $precache;
    }
}
WordPress will load a few files that must be manually added. Please note that the reference to the file must be added exactly as it will be requested, including all of the parameters. So, this process involves a lot of copying and pasting from the original code:
class PoP_ServiceWorkers_Hooks_WPManual {

    function __construct() {
        add_filter('PoP_ServiceWorkers_Job_CacheResources:precache', array($this, 'get_precache_list'), 10, 2);
    }

    function get_precache_list($precache, $resourceType) {
        if ($resourceType == 'static') {
            // File json2.min.js is not captured through the script_loader_tag hook,
            // because it is printed inside an 'lt IE 8' conditional comment
            global $wp_scripts;
            $suffix = SCRIPT_DEBUG ? '' : '.min';
            $precache[] = add_query_arg('ver', '2015-05-03', $wp_scripts->base_url."/wp-includes/js/json2$suffix.js");
            // Needed for the thickboxL10n['loadingAnimation'] JavaScript code
            // produced in the front end, loaded in wp-includes/script-loader.php
            $precache[] = includes_url('js/thickbox/loadingAnimation.gif');
        }
        return $precache;
    }
}
new PoP_ServiceWorkers_Hooks_WPManual();
TinyMCE presents a tough challenge for obtaining its list of resources, because the files it loads (such as plugins, skins and theme files) are actually created and requested on runtime. Moreover, the full path of the resource is not printed in the HTML code, but is assembled in a JavaScript function. So, to obtain the list of resources, one can inspect TinyMCE’s source code and check how it generates the file names, or guess them by creating a TinyMCE editor while inspecting Chrome’s Developer Tools’ “Network” tab and seeing which files it requests. Doing the latter, I was able to deduce all file names (for example, for theme files, the path is a combination of the domain, theme name and versioning as parameters).
To capture TinyMCE’s configuration as it will be used at runtime, we hook into teeny_mce_before_init and tiny_mce_before_init and inspect the values set in the $mceInit variable:
class PoP_ServiceWorkers_Hooks_TinyMCE {

    private $content_css, $external_plugins, $plugins, $others;

    function __construct() {
        $this->content_css = $this->external_plugins = $this->plugins = $this->others = array();
        // Execute last
        add_filter('teeny_mce_before_init', array($this, 'store_tinymce_resources'), PHP_INT_MAX, 1);
        add_filter('tiny_mce_before_init', array($this, 'store_tinymce_resources'), PHP_INT_MAX, 1);
    }

    function store_tinymce_resources($mceInit) {
        // Code copied from function editor_js() in wp-includes/class-wp-editor.php
        $suffix = SCRIPT_DEBUG ? '' : '.min';
        $baseurl = includes_url('js/tinymce');
        $cache_suffix = $mceInit['cache_suffix'];
        if ($content_css = $mceInit['content_css']) {
            foreach (explode(',', $content_css) as $content_css_item) {
                // The $cache_suffix is appended at runtime, so it can safely be added already,
                // e.g. wp-includes/css/dashicons.min.css?ver=4.6.1&wp-mce-4401-20160726
                $this->content_css[] = $content_css_item.'&'.$cache_suffix;
            }
        }
        if ($external_plugins = $mceInit['external_plugins']) {
            if ($external_plugins = json_decode($external_plugins, true)) {
                foreach ($external_plugins as $plugin) {
                    $this->external_plugins[] = "{$plugin}?{$cache_suffix}";
                }
            }
        }
        if ($plugins = $mceInit['plugins']) {
            if ($plugins = explode(',', $plugins)) {
                // These URLs are generated at runtime by TinyMCE, without a version parameter
                foreach ($plugins as $plugin) {
                    $this->plugins[] = "{$baseurl}/plugins/{$plugin}/plugin{$suffix}.js?{$cache_suffix}";
                }
                if (in_array('wpembed', $plugins)) {
                    // The reference to wp-embed.js, without any parameter, is hardcoded
                    // inside wp-includes/js/tinymce/plugins/wpembed/plugin.min.js!
                    $this->others[] = includes_url('js')."/wp-embed.js";
                }
            }
        }
        if ($skin = $mceInit['skin']) {
            // Must produce: wp-includes/js/tinymce/skins/lightgray/content.min.css?wp-mce-4401-20160726
            $this->others[] = "{$baseurl}/skins/{$skin}/content{$suffix}.css?{$cache_suffix}";
            $this->others[] = "{$baseurl}/skins/{$skin}/skin{$suffix}.css?{$cache_suffix}";
            // Must produce: wp-includes/js/tinymce/skins/lightgray/fonts/tinymce.woff
            $this->others[] = "{$baseurl}/skins/{$skin}/fonts/tinymce.woff";
        }
        if ($theme = $mceInit['theme']) {
            // Must produce: wp-includes/js/tinymce/themes/modern/theme.min.js?wp-mce-4401-20160726
            $this->others[] = "{$baseurl}/themes/{$theme}/theme{$suffix}.js?{$cache_suffix}";
        }
        // The files below are always requested.
        // Code copied from function editor_js() in wp-includes/class-wp-editor.php
        global $wp_version, $tinymce_version;
        $version = 'ver='.$tinymce_version;
        $mce_suffix = false !== strpos($wp_version, '-src') ? '' : '.min';
        $this->others[] = "{$baseurl}/tinymce{$mce_suffix}.js?$version";
        $this->others[] = "{$baseurl}/plugins/compat3x/plugin{$suffix}.js?$version";
        $this->others[] = "{$baseurl}/langs/wp-langs-en.js?$version";
        return $mceInit;
    }
}
new PoP_ServiceWorkers_Hooks_TinyMCE();
Finally, we add the extracted resources to the precache list:
class PoP_ServiceWorkers_Hooks_TinyMCE {

    function __construct() {
        …
        add_filter('PoP_ServiceWorkers_Job_CacheResources:precache', array($this, 'get_precache_list'), 1000, 2);
    }
    …
    function get_precache_list($precache, $resourceType) {
        if ($resourceType == 'static') {
            // Also add all the files in the TinyMCE plugins folder, since these
            // will be needed at runtime when initializing the TinyMCE textarea
            $precache = array_merge(
                $precache,
                $this->content_css,
                $this->external_plugins,
                $this->plugins,
                $this->others
            );
        }
        return $precache;
    }
}
We must also precache all images required by the theme and all plugins. In the code below, we precache all of the theme’s files in the folder img/, assuming that these are requested without adding parameters:
class PoPTheme_Wassup_ServiceWorkers_Hooks_ThemeImages {

    function __construct() {
        add_filter('PoP_ServiceWorkers_Job_CacheResources:precache', array($this, 'get_precache_list'), 10, 2);
    }

    function get_precache_list($precache, $resourceType) {
        if ($resourceType == 'static') {
            // Add all the images from the active theme
            $theme_dir = get_stylesheet_directory();
            $theme_uri = get_stylesheet_directory_uri();
            foreach (glob($theme_dir."/img/*") as $file) {
                $precache[] = str_replace($theme_dir, $theme_uri, $file);
            }
        }
        return $precache;
    }
}
new PoPTheme_Wassup_ServiceWorkers_Hooks_ThemeImages();
If we use Twitter Bootstrap, loaded from a CDN (for example, https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/css/bootstrap.min.css), then we must precache the corresponding glyphicons’ font files:
class PoPTheme_Wassup_ServiceWorkers_Hooks_Bootstrap {

    function __construct() {
        add_filter('PoP_ServiceWorkers_Job_CacheResources:precache', array($this, 'get_precache_list'), 10, 2);
    }

    function get_precache_list($precache, $resourceType) {
        if ($resourceType == 'static') {
            // Add all the fonts needed by the bootstrap.min.css file
            $precache[] = 'https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/fonts/glyphicons-halflings-regular.eot';
            $precache[] = 'https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/fonts/glyphicons-halflings-regular.svg';
            $precache[] = 'https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/fonts/glyphicons-halflings-regular.ttf';
            $precache[] = 'https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/fonts/glyphicons-halflings-regular.woff';
            $precache[] = 'https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/fonts/glyphicons-halflings-regular.woff2';
        }
        return $precache;
    }
}
new PoPTheme_Wassup_ServiceWorkers_Hooks_Bootstrap();
All language-specific resources for all languages must also be precached, so that the website can be loaded in any language when offline. In the code below, we assume the plugin has a js/locales/ folder with the translation files locale-en.js, locale-es.js, etc:
class PoP_UserAvatar_ServiceWorkers_Hooks_Locales {

    function __construct() {
        add_filter('PoP_ServiceWorkers_Job_CacheResources:precache', array($this, 'get_precache_list'), 10, 2);
    }

    function get_precache_list($precache, $resourceType) {
        if ($resourceType == 'static') {
            $dir = dirname(__FILE__);
            $url = plugins_url('', __FILE__);
            foreach (glob($dir."/js/locales/*") as $file) {
                $precache[] = str_replace($dir, $url, $file);
            }
        }
        return $precache;
    }
}
new PoP_UserAvatar_ServiceWorkers_Hooks_Locales();
Non-caching strategies
In the following, we will delve into caching and non-caching strategies. Let’s tackle the non-caching strategy first:
Network only
Applied to JSON requests that carry user state.
If we know in advance which URLs must not be cached, then we can simply add their full or partial paths to the list of excluded items. For instance, below we set all pages that have user state (such as “My posts” and “Edit my profile”) to not be intercepted by the service worker, because we don’t want to cache any of the user’s personal information:
function get_page_path($page_id) {

    $page_path = substr(get_permalink($page_id), strlen(home_url()));
    // Remove the first and last '/'
    if ($page_path[0] == '/') $page_path = substr($page_path, 1);
    if ($page_path[strlen($page_path)-1] == '/') $page_path = substr($page_path, 0, strlen($page_path)-1);
    return $page_path;
}

class PoP_ServiceWorkers_Hooks_UserState {

    function __construct() {
        add_filter('PoP_ServiceWorkers_Job_Fetch:exclude:partial', array($this, 'get_excluded_partialpaths'), 10, 2);
    }

    function get_excluded_partialpaths($excluded, $resourceType) {
        if ($resourceType == 'json') {
            // Global $USER_STATE_PAGES contains the IDs of all pages that have a user state
            global $USER_STATE_PAGES;
            foreach ($USER_STATE_PAGES as $page_id) {
                $excluded[] = get_page_path($page_id);
            }
        }
        return $excluded;
    }
}
new PoP_ServiceWorkers_Hooks_UserState();
If the non-caching strategy must be applied at runtime, then we can add a parameter, such as sw-strategy=networkonly, to the requested URL, and dismiss handling the request in the function shouldHandleFetch:
self.addEventListener('fetch', event => {

    function shouldHandleFetch(event, opts) {

        …
        var params = getParams(url);
        if (params['sw-strategy'] == 'networkonly') return false;
        …
    }
    …
});
File: sw-template.js
Caching strategies
The application uses the following caching strategies, depending on the resource type and functional use:
Cache, falling back to network for the appshell and static resources.
Cache, then network for JSON requests.
Network, falling back to cache for JSON requests that do not keep the user waiting, such as lazy-loaded data (for example, a post’s comments), or that must be up to date (for example, viewing a post after it’s been updated).
Both static and HTML resource types will always require the same strategy. It is only the JSON resource type that can be switched between strategies. We establish the “cache, then network” strategy as the default one, and define rules over the requested URL to switch to “network, falling back to cache”:
startsWith
The URL starts with a predefined full or partial path.
hasParams
The URL contains a predefined parameter. The parameter sw-networkfirst has been defined already, so requesting https://www.mydomain.com/en/?output=json will use the default “cache, then network” strategy, whereas https://www.mydomain.com/en/?output=json&sw-networkfirst=true will switch to “network first.”
// Cache, falling back to network
const SW_CACHEFIRST = 1;
// Cache, then network
const SW_CACHEFIRSTTHENREFRESH = 2;
// Network, falling back to cache
const SW_NETWORKFIRST = 3;

var config = {
    …
    strategies: $strategies,
    …
};
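The $strategies placeholder is injected from PHP. For reference, here is a sketch of the shape it could expand to, inferred from how getStrategy consumes it below (the paths are purely illustrative):

// Illustrative only: the actual values are injected from PHP through $strategies
var strategies = {
    json: {
        networkFirst: {
            startsWith: {
                // Full URLs that must always go to the network first
                full: ['https://www.mydomain.com/en/posts/'],
                // Partial paths, prepended with the locale domain at runtime
                partial: ['posts/']
            },
            // Params that switch the strategy, e.g. ?sw-networkfirst=true
            hasParams: ['sw-networkfirst']
        }
    }
};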
Finally, we get to the logic in the service-workers.js file. Please note that, to fetch JSON requests, we also need to add to the URL a cache-busting parameter, sw-cachebust, with a timestamp, to avoid getting the response from the browser’s HTTP cache.
function getCacheBustRequest(request, opts) {

    var url = new URL(request.url);
    // Add a cache-busting parameter to ensure we’re caching a fresh response
    if (url.search) {
        url.search += '&';
    }
    url.search += 'sw-cachebust=' + Date.now();
    return new Request(url.toString());
}

function addToCache(cacheKey, request, response) {

    if (response.ok) {
        var copy = response.clone();
        caches.open(cacheKey).then(cache => {
            cache.put(request, copy);
        });
    }
    return response;
}

self.addEventListener('fetch', event => {

    function getStrategy(request, opts) {

        var strategy = '';
        var resourceType = getResourceType(request);
        // JSON requests have two strategies: cache first + update (the default) or network first
        if (resourceType === 'json') {
            var networkFirst = opts.strategies[resourceType].networkFirst;
            var criteria = {
                startsWith: networkFirst.startsWith.full.some(path => request.url.startsWith(path)),
                // The partial paths do not include the locale domain, so add it before doing the comparison
                pageStartsWith: networkFirst.startsWith.partial.some(path => request.url.startsWith(opts.locales.domain + path)),
                // Code for function stripIgnoredUrlParameters is in https://github.com/leoloso/PoP/blob/master/wp-content/plugins/pop-serviceworkers/kernel/serviceworkers/assets/js/jobs/lib/utils.js
                hasParams: stripIgnoredUrlParameters(request.url, networkFirst.hasParams) != request.url
            };
            var successCriteria = Object.keys(criteria).filter(criteriaKey => criteria[criteriaKey]);
            if (successCriteria.length) {
                strategy = SW_NETWORKFIRST;
            } else {
                strategy = SW_CACHEFIRSTTHENREFRESH;
            }
        } else if (resourceType === 'html' || resourceType === 'static') {
            strategy = SW_CACHEFIRST;
        }
        return strategy;
    }

    function onFetch(event, opts) {

        var request = event.request;
        var resourceType = getResourceType(request);
        // cacheName and fetchFromCache are defined elsewhere in sw-template.js
        var cacheKey = cacheName(resourceType, opts);
        var strategy = getStrategy(request, opts);
        var cacheBustRequest = getCacheBustRequest(request, opts);
        if (strategy === SW_CACHEFIRST || strategy === SW_CACHEFIRSTTHENREFRESH) {
            /* Respond immediately from the cache, falling back to the network */
            event.respondWith(
                fetchFromCache(request)
                    .catch(() => fetch(request))
                    .then(response => addToCache(cacheKey, request, response))
            );
            /* Bring fresh content from the server, and show a message to the user if the cached content is stale */
            if (strategy === SW_CACHEFIRSTTHENREFRESH) {
                event.waitUntil(
                    fetch(cacheBustRequest)
                        .then(response => addToCache(cacheKey, request, response))
                        .then(response => refresh(request, response))
                );
            }
        } else if (strategy === SW_NETWORKFIRST) {
            event.respondWith(
                fetch(cacheBustRequest)
                    .then(response => addToCache(cacheKey, request, response))
                    .catch(() => fetchFromCache(request))
                    .catch(function(err) {/* console.log(err) */})
            );
        }
    }

    if (shouldHandleFetch(event, config)) {
        onFetch(event, config);
    }
});
File: sw-template.js
The “cache, then network” strategy uses the refresh function to cache the most updated content coming from the server and, if it differs from the previously cached one, to post a message to the client browser notifying the user. It compares not the actual contents but their ETag headers (generating the ETag header is explained below). The cached ETag value is stored using localForage, a simple yet powerful API wrapping IndexedDB:
function refresh(request, response) {

    var ETag = response.headers.get('ETag');
    if (!ETag) return null;
    var key = 'ETag-' + response.url;
    return localforage.getItem(key).then(function(previousETag) {
        // Compare the ETag of the response with the previous one saved in IndexedDB
        if (ETag == previousETag) return null;
        // Save the new value
        return localforage.setItem(key, ETag).then(function() {
            // If there was no previous ETag, then send no notification to the user
            if (!previousETag) return null;
            // Send a message to all clients
            return self.clients.matchAll().then(function(clients) {
                clients.forEach(function(client) {
                    var message = {
                        type: 'refresh',
                        url: response.url
                    };
                    client.postMessage(JSON.stringify(message));
                });
                return response;
            });
        });
    });
}
File: sw-template.js
A JavaScript function catches the messages delivered by the service worker, and prints a message requesting the user to refresh the page:
function showRefreshMsgToUser() {

    if ('serviceWorker' in navigator) {
        navigator.serviceWorker.onmessage = function(event) {
            var message = JSON.parse(event.data);
            if (message.type === 'refresh') {
                var msg = 'This page has been updated, <a href="' + message.url + '">click here to refresh it</a>.';
                var alert = '<div role="alert"><button type="button" aria-hidden="true" data-dismiss="alert">×</button>' + msg + '</div>';
                jQuery('body').prepend(alert);
            }
        };
    }
}
Generating the ETag Header
An ETag header is a hash representing the content being served; because it is a hash, a minimal change in the source will lead to the creation of a completely different ETag. We must make sure that the ETag is generated from the actual website content, and ignore information not visible to the user, such as HTML tag IDs. Otherwise, consider the following sequence happening for the “cache then network” strategy:
An ID is generated, using now() to make it unique, and printed in the page’s HTML.
When accessed for the first time, this page is created and its ETag is generated.
When accessed a second time, the page is served immediately from the service worker cache, and a network request is triggered to refresh the content.
This request generates the page again. Even if it hasn’t been updated, its content will be different, because now() will produce a different value, and its ETag header will be different.
The service worker will compare the two ETags and, because they are different, prompt the user to refresh the content, even though the page has not been updated.
One solution is to remove all dynamically generated values, such as current_time('timestamp') and now(), before generating the ETag. For this, we can set all dynamic values in constants, and then use these constants throughout the application. Finally, we would remove these from the input to the hash-generation function:
define('TIMESTAMP', current_time('timestamp'));
define('NOW', now());

ob_start();
// All the application code in between (using constants TIMESTAMP and NOW if needed)
$content = ob_get_clean();

$dynamic = array(TIMESTAMP, NOW);
$etag_content = str_replace($dynamic, '', $content);
header('ETag: '.wp_hash($etag_content));
echo($content);
A similar strategy is needed for those pieces of information that are allowed to become stale, such as a post’s comments count, mentioned earlier. Because this value is not important, we don’t want the user to receive a notification to refresh the page merely because the number of comments has increased from 5 to 6.
Appshell With Support For Multiple Languages And Presentation Modes
No matter which URL is requested by the user, the application will load the appshell instead, which will immediately load the contents of the requested URL (still accessible in window.location.href) through the API, passing along the locale and all needed parameters.
The application has different views and languages, and we want the different appshells to be precached, loading the appropriate one at runtime and fetching the information from the requested URL: https://www.mydomain.com/language-code/path/to/page/?view=….
As mentioned earlier, given two languages (English and Spanish) and three views (default, embed and print), we will need to precache six appshells, one for each combination of language and view.
In addition to the language and the view, the application might have other parameters (let’s say “style” and “format”). However, adding these would make the combinations of URLs to precache grow tremendously. So, we need to settle on a trade-off, deciding which parameters to precache (the most used ones) and which ones not to. For the latter ones, their corresponding URL can be accessed offline only starting from the second access.
By adding hooks into the configuration, we allow multilingual plugins, such as qTranslate X, to modify the locales and languages and the URLs accordingly.
function get_sw_configuration() {

    …
    $configuration['$appshellPages'] = get_sw_appshell_pages();
    $configuration['$appshellParams'] = apply_filters(
        'PoP_ServiceWorkers_Job_Fetch:appshell_params',
        array("themestyle", "settingsformat", "mangled")
    );
    …
}

function get_sw_appshell_pages() {

    // Global $APPSHELL_PAGE_ID holds the ID of the appshell page
    global $APPSHELL_PAGE_ID;
    // Locales: can be hooked by qTranslate to input the language codes
    $locales = apply_filters('PoP_ServiceWorkers_Job_Fetch:locales', array(get_locale()));
    $views = array("default", "embed", "print");
    $appshellPages = array();
    foreach ($locales as $locale) {
        foreach ($views as $view) {
            // By adding a hook on the URL, we allow plugins to modify it (e.g. to translate it)
            $appshellPages[$locale][$view] = apply_filters(
                'PoP_ServiceWorkers_Job_Fetch:appshell_url',
                add_query_arg('view', $view, get_permalink($APPSHELL_PAGE_ID)),
                $locale
            );
        }
    }
    return apply_filters('PoP_ServiceWorkers_Job_Fetch:appshell_pages', $appshellPages);
}
File: sw.php
class PoP_ServiceWorkers_QtransX_Job_Fetch_Hooks {

    function __construct() {
        add_filter('PoP_ServiceWorkers_Job_Fetch:locales', array($this, 'get_locales'));
        add_filter('PoP_ServiceWorkers_Job_Fetch:appshell_url', array($this, 'get_appshell_url'), 10, 2);
        …
    }

    function get_locales($locales) {
        global $q_config;
        if ($languages = $q_config['enabled_languages']) {
            return $languages;
        }
        return $locales;
    }

    function get_appshell_url($url, $lang) {
        return qtranxf_convertURL($url, $lang);
    }
}
new PoP_ServiceWorkers_QtransX_Job_Fetch_Hooks();
After merging sw-template.js into service-workers.js, the request is intercepted in the onFetch method and, if it is of the HTML resource type, it is replaced with the appshell URL, just after deciding which strategy is to be employed. (Below we’ll see how the current locale, set in opts.locales.current, is obtained.)
function onFetch(event, opts) {

    var request = event.request;
    …
    var strategy = getStrategy(request, opts);
    // Allow the request to be modified, fetching content from a different URL
    request = getRequest(request, opts);
    …
}

function getRequest(request, opts) {

    var resourceType = getResourceType(request);
    if (resourceType === 'html') {
        // The different appshells are a combination of locale and view
        var params = getParams(request.url);
        var view = params.view || 'default';
        // The initial appshell URL has the params that we have precached
        var url = opts.appshell.pages[opts.locales.current][view];
        // In addition, there are other params that, if provided by the user, must be added
        // to the URL. These params are not originally precached in any appshell URL,
        // so such a page will have to be retrieved from the server.
        opts.appshell.params.forEach(function(param) {
            // If the param was passed in the URL, then add it along
            if (params[param]) {
                url += '&' + param + '=' + params[param];
            }
        });
        request = new Request(url);
    }
    return request;
}
Lastly, we proceed to precache the appshells:
class PoP_ServiceWorkers_Hooks_AppShell {

    function __construct() {
        add_filter('PoP_ServiceWorkers_Job_CacheResources:precache', array($this, 'get_precache_list'), 10, 2);
    }

    function get_precache_list($precache, $resourceType) {
        if ($resourceType == 'html') {
            foreach (get_sw_appshell_pages() as $locale => $views) {
                foreach ($views as $view => $url) {
                    $precache[] = $url;
                }
            }
        }
        return $precache;
    }
}
new PoP_ServiceWorkers_Hooks_AppShell();
Obtaining the locale
The appshell has multilingual support, so we need to extract the language information from the requested URL.
We obtain the language code from the requested URL, and initialize the locale’s current and domain configuration values at the beginning of the fetch event:
self.addEventListener('fetch', event => {
    config = initOpts(config, event);
    if (shouldHandleFetch(event, config)) {
        …
    }
});

function initOpts(opts, event) {

    // Find the current locale and set it on the configuration object
    opts.locales.current = getLocale(event, opts);
    opts.locales.domain = getLocaleDomain(event, opts);
    return opts;
}

function getLocale(event, opts) {

    var currentDomain = getCurrentDomain(event, opts);
    if (currentDomain.length) {
        return opts.locales.all[currentDomain[0]];
    }
    return opts.locales.default;
}

function getLocaleDomain(event, opts) {

    var currentDomain = getCurrentDomain(event, opts);
    if (currentDomain.length) {
        return currentDomain[0];
    }
    // Return the default domain
    return Object.keys(opts.locales.all).filter(function(key) {
        return opts.locales.all[key] === opts.locales.default;
    })[0];
}

function getCurrentDomain(event, opts) {

    return Object.keys(opts.locales.all).filter(path => event.request.url.startsWith(path));
}
Dealing With Nonces
A nonce (or a “number used once”) is a cryptographic hash used to verify the authenticity of a person or client. WordPress uses nonces as security tokens to protect URLs and forms from malicious attacks. In spite of their name, WordPress uses a nonce more than once, giving it a limited lifetime, after which it expires. Even though they are not the ultimate security measure, nonces are a good first line of defense against attacks.
The HTML code printed on any WordPress page will contain nonces, such as the nonce for uploading images to the media manager, saved in the JavaScript object _wpPluploadSettings.defaults.multipart_params._wpnonce. The lifetime of a nonce is, by default, set to 24 hours (configured in the nonce_life hook). However, this value is shorter than the expected lifetime of the appshell in the service worker cache. This is a problem: after just 24 hours, the appshell will contain invalid nonces, which will make the application malfunction, for example producing error messages when the user attempts to upload images.
There are a few solutions to overcome this problem:
Immediately after loading the appshell, load another page in the background, using the “network only” strategy, to update the value of the nonce in the original JavaScript object.
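A minimal sketch of what that background refresh could look like (the endpoint and the JSON response shape are assumptions, not part of the original implementation; the sw-strategy=networkonly parameter was introduced earlier to bypass the service worker):

// Hypothetical sketch: fetch fresh nonces from the server, bypassing the
// service worker cache, and overwrite the stale nonce from the precached appshell
function refreshNonces() {
    fetch('/refresh-nonces/?sw-strategy=networkonly')
        .then(response => response.json())
        .then(function(nonces) {
            if (window._wpPluploadSettings) {
                _wpPluploadSettings.defaults.multipart_params._wpnonce = nonces.upload;
            }
        })
        .catch(function() { /* Offline: keep the cached nonce and retry later */ });
}
window.addEventListener('load', refreshNonces);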
Implement a longer nonce_life, such as three months, and then make sure to deploy a new version of the service worker within this lifespan:
add_filter('nonce_life', 'sw_nonce_life');
function sw_nonce_life($nonce_life) {
    return 90 * DAY_IN_SECONDS;
}
Because this solution weakens the security of nonces, tougher security measures must also be put in place throughout the application, such as making sure that the user can edit a post:
if (!current_user_can('edit_post', $post_id)) {
    wp_die(__('Sorry, you are not allowed to edit this item.'));
}
Additional Considerations
The fact that something can be done doesn’t mean it should be done. The developer must evaluate whether each feature needs to be added according to the requirements of the app, not just because the technology allows it.
Below, I’ll show a few examples of functionality that can be implemented perfectly well using other solutions, or that needs some workaround in order to work with service workers.
Showing a simple “You’re offline” message
Service workers can be used to display an offline fallback page whenever the user has no Internet connection. In its most basic implementation, just showing a “You are offline” message, I believe we are not getting the most value out of this fallback page. Rather, it could do more interesting things:
Provide additional information, such as showing a list of all already-cached resources, which the user can still browse offline (check how Jake Archibald’s offline Wikipedia demo lists all already-cached resources on its home page).
Let the user play a game while waiting for the connection to come back (such as done by The Guardian).
With an SPA, we can offer a different approach: We can intercept the offline state, and just display a small “You’re offline” message at the top of the page that the user is currently browsing. This avoids redirection of the user to yet another page, which could impair the flow of the application.
function intercept_click() {

    $(document).on('click', 'a[href^="' + WEBSITE_DOMAIN + '"]', function(e) {
        // Prevent the browser from navigating away; the SPA handles the request
        e.preventDefault();
        var anchor = $(this);
        var url = anchor.attr('href');
        $.ajax({
            url: url,
            error: function(jqXHR) {
                var msg = 'Oops, there was an error';
                if (jqXHR.status === 0) {
                    // status = 0 => the user is offline
                    msg = 'You are offline!';
                } else if (jqXHR.status === 404) {
                    msg = "That page doesn't exist";
                }
                $('#error-msg').text(msg).css('display', 'block');
            }
        });
    });
}
Using localStorage to cache data
Service workers are not the only solution offered by browsers for caching a response’s data. An older technology with even wider support (Internet Explorer and Safari support it) is localStorage. It offers good performance for caching small to medium-sized pieces of information (it can normally store up to 5 MB of data).
/* Using the Modernizr library */
function intercept_click() {

    $(document).on('click', 'a[href^="' + WEBSITE_DOMAIN + '"]', function(e) {
        // Prevent the browser from navigating away; the SPA handles the request
        e.preventDefault();
        var anchor = $(this);
        var url = anchor.attr('href');
        var stored = '';
        if (Modernizr.localstorage) {
            stored = localStorage[url];
        }
        if (stored) {
            // We already have the data!
            process(stored);
        } else {
            $.ajax({
                url: url,
                success: function(response) {
                    // Save the data in localStorage
                    if (Modernizr.localstorage) {
                        localStorage[url] = response;
                    }
                    process(response);
                }
            });
        }
    });
}
Making things prettier
To force the service worker to employ the “network first” strategy, we can add an extra parameter, sw-networkfirst=true, to the requested URL. However, adding this parameter in the actual link would look ugly (details of technical implementation should be hidden from the user, as much as possible).
Instead, a data attribute, data-sw-networkfirst, can be added to the anchor. Then, at runtime, the user’s click is intercepted and handled by an AJAX call, which checks whether the clicked link has this data attribute; only if it does is the parameter sw-networkfirst=true added to the URL to fetch:
function intercept_click() {

    $(document).on('click', 'a[href^="' + WEBSITE_DOMAIN + '"]', function(e) {
        // Prevent the browser from navigating away; the SPA handles the request
        e.preventDefault();
        var anchor = $(this);
        var url = anchor.attr('href');
        if (anchor.data('sw-networkfirst')) {
            // add_query_arg is assumed to be a small JS helper mirroring WordPress's PHP function
            url = add_query_arg('sw-networkfirst', 'true', url);
        }
        $.ajax({
            url: url,
            …
        });
    });
}
Planning for things that do not work
Not everything will work offline. For instance, a CAPTCHA will not work, because it needs to synchronize its value with the server. If a form has a CAPTCHA, then attempting to submit the form while offline should not save the values locally to be sent once the Internet connection comes back. Instead, the application could fill the form once again with all of the values previously entered by the user, ask the user to input the CAPTCHA, and only then submit the form. A sketch of this flow follows.
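This is one way the first half of that flow could be handled, assuming jQuery (as in the earlier snippets); the form class, field name and storage key are all hypothetical:

// Hypothetical sketch: if the form has a CAPTCHA and we are offline,
// stash the user's values (except the CAPTCHA) instead of submitting
$('form.has-captcha').on('submit', function(e) {
    if (!navigator.onLine) {
        e.preventDefault();
        var values = $(this).serializeArray().filter(function(field) {
            return field.name !== 'captcha'; // hypothetical field name
        });
        localStorage.setItem('pending-form', JSON.stringify(values));
        $('#error-msg').text('You are offline! Your answers were saved; please re-enter the CAPTCHA once back online.').css('display', 'block');
    }
});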
Conclusion
We’ve seen how service workers can be implemented for a WordPress website with an SPA architecture. An SPA architecture greatly enhances what service workers can do, such as enabling us to choose which appshell to load at runtime. Integrating with WordPress is not all that smooth, at least when making the website offline first, because we need to track down all of the resources from the theme and all of the plugins to add to the precache list. However lengthy, the integration is worth doing: The website will load faster and will work offline.
Today we’d like to share an interesting distortion effect with you. The main concept of this demo is to use a displacement map in order to distort an underlying image, giving it different types of effects. To demonstrate the liquid-like transitions between images, we’ve created a slideshow.
What a displacement map does, generally, is use an image as a texture that is then applied to an object, giving the illusion that the underlying object is wrapped around that texture. This technique is commonly used in many different areas, but today we’ll explore how it can be applied to a simple image slideshow.
We’ll be using PixiJS as our renderer and filtering engine and GSAP for our animations.
Getting Started
In order to have a displacement effect, you need a displacement map texture. In this demo’s code, we’ve provided different types of textures you can use, but of course you can create one of your own, for example by using Photoshop’s render tools. Keep in mind that the image’s dimensions affect the end result, so playing around with differently sized textures might give your effect a different look.
A general rule of thumb is that your texture image should be a power-of-two sized texture, meaning its width and height are each a power of two. This ensures that your texture is optimized to run fast without consuming too much memory. In other words, the suggested dimensions for your texture image (width and/or height) would be 8, 16, 32, 64, 128, 256, 512, 1024, 2048, and so on.
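A quick way to sanity-check a texture’s dimensions could look like this (the helper and the image path are hypothetical, not part of the demo code):

// Hypothetical helper: a positive integer is a power of two
// exactly when it has a single bit set
function isPowerOfTwo( value ) {
    return value > 0 && ( value & ( value - 1 ) ) === 0;
}

var mapImage = new Image();
mapImage.onload = function() {
    if ( !isPowerOfTwo( mapImage.width ) || !isPowerOfTwo( mapImage.height ) ) {
        console.warn( 'Displacement map is ' + mapImage.width + 'x' + mapImage.height +
            '; consider power-of-two dimensions for better performance.' );
    }
};
mapImage.src = 'img/displacement-map.png'; // hypothetical path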
For the demos, we’ve created a slideshow that, when navigating, shows the effect as a transition on the slides. We’ll also add some other options, but we’ll just go through the main idea of the distortion effect.
Markup
Our base markup for this demo is really minimal. We just need the navigation buttons for our slider and a wrapper for the slides. We use this wrapper to pass our slides to our component, so we hide it by default with CSS; a sketch of it follows. This markup-to-JS approach may simplify the task of adding images to the slideshow when working in a more dynamic environment. However, if it suits you better, you could just as easily pass them as an array upon initialization.
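Something along these lines (the wrapper and figure markup are assumptions; the image and navigation class names are the ones the initialization code further below expects):

<!-- Navigation: each element needs a 'data-nav' attribute with a value of next/previous -->
<nav>
    <button class="scene-nav" data-nav="previous">Previous</button>
    <button class="scene-nav" data-nav="next">Next</button>
</nav>

<!-- Slides wrapper, hidden with CSS since the slides are rendered to a canvas.
     The element next to each image holds that slide's text. -->
<div class="slideshow-wrapper">
    <figure class="slide-item">
        <img class="slide-item__image" src="img/slide-1.jpg" alt="">
        <figcaption>First slide title</figcaption>
    </figure>
    <!-- … more slides … -->
</div>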
The idea is fairly simple: we add all of our slides into a container, apply the displacement filter and render. Then, when clicking the navigation buttons, we set the alpha property of the current image to 0, set the next one’s to 1 and tweak the displacement filter while navigating.
var renderer = PIXI.autoDetectRenderer();
var stage = new PIXI.Container();
var slidesContainer = new PIXI.Container();
var displacementSprite = PIXI.Sprite.fromImage( displacementImage );
var displacementFilter = new PIXI.filters.DisplacementFilter( displacementSprite );

// Add the canvas to the HTML
document.body.appendChild( renderer.view );

// Add the child container to the stage
stage.addChild( slidesContainer );

// Set the filter on the stage
stage.filters = [displacementFilter];

// We load the sprites into the slides container and position them at the center of the stage.
// The sprites array is passed to our component upon its initialization.
// If a slide has text ('texts' is also provided by the component, see the
// initialization code below), we add it as a child to the image and center it.
function loadPixiSprites( sprites ) {
    for ( var i = 0; i < sprites.length; i++ ) {
        var texture = PIXI.Texture.fromImage( sprites[i] );
        var image = new PIXI.Sprite( texture );
        if ( texts ) {
            // Base styles for our text
            var textStyle = new PIXI.TextStyle({
                fill: '#ffffff',
                wordWrap: true,
                wordWrapWidth: 400
            });
            var text = new PIXI.Text( texts[i], textStyle );
            image.addChild( text );
            // Center each text on the image
            text.anchor.set(0.5);
            text.x = image.width / 2;
            text.y = image.height / 2;
        }
        image.anchor.set(0.5);
        image.x = renderer.width / 2;
        image.y = renderer.height / 2;
        slidesContainer.addChild( image );
    }
}
That would be the most basic setup you’d need for the scene to be ready. The next thing we want to do is handle the clicks on the navigation buttons. As we said, when the user clicks on the next or previous button, we change the alpha property of the corresponding slide and tweak our displacement filter. We use a simple timeline for this, which you could of course customize accordingly.
// We listen for clicks on each navigation element and call the moveSlider
// function, passing it the index we want to go to.
// 'nav' holds the navigation elements passed to the component via the navElement option.
var currentIndex = 0;
var slideImages = slidesContainer.children;
var isPlaying = false;

for ( var i = 0; i < nav.length; i++ ) {
    var navItem = nav[i];
    navItem.onclick = function( event ) {
        // Make sure the previous transition has ended
        if ( isPlaying ) {
            return false;
        }
        if ( this.getAttribute('data-nav') === 'next' ) {
            if ( currentIndex >= 0 && currentIndex < slideImages.length - 1 ) {
                moveSlider( currentIndex + 1 );
            } else {
                moveSlider( 0 );
            }
        } else {
            if ( currentIndex > 0 && currentIndex < slideImages.length ) {
                moveSlider( currentIndex - 1 );
            } else {
                moveSlider( slideImages.length - 1 );
            }
        }
        return false;
    }
}

// Our transition between the slides.
// On our timeline we set the alpha property of the relevant slides to 0 or 1
// and scale our filter on the x & y axes accordingly
function moveSlider( newIndex ) {
    isPlaying = true;
    var baseTimeline = new TimelineMax({
        onComplete: function() {
            currentIndex = newIndex;
            isPlaying = false;
        }
    });
    baseTimeline
        .to(displacementFilter.scale, 1, { x: 200, y: 200 })
        .to(slideImages[currentIndex], 0.5, { alpha: 0 })
        .to(slideImages[newIndex], 0.5, { alpha: 1 })
        .to(displacementFilter.scale, 1, { x: 20, y: 20 });
}
Finally, we have to render our scene and optionally add some default animations.
// Use Pixi's Ticker class to render our scene,
// similar to requestAnimationFrame
var ticker = new PIXI.ticker.Ticker();
ticker.add( function( delta ) {
    // Optionally have a default animation
    displacementSprite.x += 10 * delta;
    displacementSprite.y += 3 * delta;
    // Render our stage
    renderer.render( stage );
});
// A manually created ticker does not auto-start
ticker.start();
Working Demo
This should sum up the most basic parts of how the demo works and give you a good starting point if you want to edit it according to your needs. However, if you don’t want to mess with too much code and need a quick working demo to play on your own, there are several options you could use when you initialize the component. So just include the script on your page and add the following code wherever you want to show your slideshow. Play around with different values to get started and don’t forget to try out different displacement map textures for different effects.
// Select all your images
var spriteImages = document.querySelectorAll( '.slide-item__image' );
var spriteImagesSrc = [];
var texts = [];

for ( var i = 0; i < spriteImages.length; i++ ) {
    var img = spriteImages[i];
    // Set the text you want to display for each slide
    // in a sibling element of your image and edit accordingly
    if ( img.nextElementSibling ) {
        texts.push( img.nextElementSibling.innerHTML );
    } else {
        texts.push( '' );
    }
    spriteImagesSrc.push( img.getAttribute('src') );
}

// Initialize the slideshow
var initCanvasSlideshow = new CanvasSlideshow({
    // Pass the images you want as an array
    sprites: spriteImagesSrc,
    // If you want your slides to have title texts, pass them as an array
    texts: texts,
    // Set your displacement texture
    displacementImage: 'https://imgur.com/a/Ea3wo',
    // Optionally start with a default animation
    autoPlay: true,
    // [x, y] controls the speed of your default animation
    autoPlaySpeed: [10, 3],
    // [x, y] controls the effect amount during transitions
    displaceScale: [200, 70],
    // Choose whether or not your slideshow will take up all the space of the viewport
    fullScreen: true,
    // If you choose not to have a fullscreen slideshow, set the stage's width & height accordingly
    stageWidth: 800,
    stageHeight: 600,
    // Add your navigation elements. They should have a 'data-nav' attribute with a value of next/previous
    navElement: document.querySelectorAll( '.scene-nav' ),
    // Will fit the filter bounding box to the renderer
    displaceAutoFit: false
});
Interactive
The last thing we want to do is optionally make our stage interactive; that is, instead of auto-playing, have our effect interact with the mouse. Just set the interactive property to true and play around with your mouse.
var initCanvasSlideshow = new CanvasSlideshow({
    interactive: true,
    …
});
For all mouse interactions, we listen for the corresponding event and, based on the event data, move and scale our displacement filter accordingly.
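Since the original snippet isn’t included here, a rough sketch of the idea (not the demo’s actual code), reusing the renderer and displacementSprite from the setup above:

// Rough sketch: move the displacement sprite with the pointer
// so that the distortion follows the mouse
renderer.view.addEventListener( 'mousemove', function( event ) {
    var rect = renderer.view.getBoundingClientRect();
    displacementSprite.x = event.clientX - rect.left;
    displacementSprite.y = event.clientY - rect.top;
    renderer.render( stage );
});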
Editor’s Note: We’ve been closely working with Maya on this article, and we’re happy to see the final result now being published on 18F. We highly encourage more teams to share the lessons they learned when building design systems or pattern libraries, and we’re always happy to support them in writing, editing and shaping that article. This post is a re-post of Maya’s final article.
Today, there are nearly 30,000 U.S. federal websites, with almost no consistency between them. Among the hundreds of thousands of government employees working in technology, there is little in common in how these websites are built or maintained.
As a result, the government is spending considerable resources on services that aren’t meeting the needs of their users. Federal websites are the front door to government services: they’re the first thing someone encounters when interacting with the government. According to research from the Federal Front Door initiative, as government interfaces erode, so does the public’s trust in those services.
I was part of a team of designers and developers who unified a complex system with numerous rules to serve users from all corners of the country. I’ll shed some light on how we built tools to leverage industry-standard best practices and produce a design system with reusable components. You’ll also see how our system is helping agency teams in the federal government create simple, efficient, consistent experiences quickly and at reduced cost.
The Problem: Inconsistent User Experiences Across Government Websites
When the American people go online to access government services, they’re often met with confusing navigation systems, an array of visual brands, and inconsistent interaction patterns. Websites intended to help people access information and services, like a veteran looking for help to go back to college, are splintered across various agencies and organizations.
For example, consider what it’s like for a young veteran looking to apply for federal student loans to help her cover the cost of attending college. Let’s call this person Joanne. Joanne had to wade through multiple agency websites to access the federal programs that could help her afford college. Joanne was confused. She was overwhelmed by how hard these tools were to use, missed opportunities she was eligible for, and felt frustrated and isolated. The system that was supposed to help her stood in her way. Creating consistency between these systems will help people (like Joanne) more effectively access the services they need and increase their trust in the government.
Why It’s Like This: Limitations To Consistent User Experiences In Government
Dedicated federal workers want to build helpful digital tools for everyone. They want to be able to develop quick prototypes and sites. They choose resources with minimal documentation that allow them to get up and running quickly.
Other one-off designers or front-end developers in an agency are trying to do the right thing but without a lot of time or support. They need tools to cut down on design and development time, and a way to advocate for best practices to higher ups.
Therefore, the question in front of us became:
Could we create a shared set of tools to provide consistent, effective, and easy-to-use government websites?
We think the answer is yes.
The Team
In the summer of 2015, a team from 18F and the U.S. Digital Service formed to work on these tools. We asked ourselves: how do we bring together thousands of public websites into a common design language?
To answer this question, twenty designers and developers working on digital services in government gathered in Washington, DC to work on this problem.
The first question we asked ourselves was: what are the components and patterns we’re looking for in a pattern library? What are the elements that could help us build a library of patterns and a system of styles? We wrote down all the parts that make up our websites and what we would want in a system. We stuck these ideas on a wall and grouped them together to find what was universal across our systems. We then looked for patterns, taking note of which were the most common. Some of the simplest things kept coming up again and again: color, typography, grids, and buttons.
During our meetings, the team mentioned other components. For instance, people also asked about unique components like data visualizations and calendar widgets. However, by limiting components to the basic building blocks, we could get it in the hands of designers and developers as quickly as possible and see for ourselves what was clicking and what wasn’t.
Building a library to create consistency is similar to playing with Lego bricks, as opposed to, say, mud. When you give people a handful of mud and tell them to build a house, each house will look different: a little lopsided and squishy. When you give those same people five kinds of Lego bricks, they can create a million different houses. Each house looks consistent, not uniform.
Building The System
We started to explore how we could bring consistency across interactions, user experiences, and behavior across those websites. Joanne wants to understand she’s on a government site. She wants it to feel familiar and be intuitive, so she knows what to do and can accomplish her task. A consistent look and feel with common design elements will feel familiar, trustworthy, and secure — and people like Joanne will be able to navigate government websites more easily because of a common palette and design.
Interface inventory
We used analytics.usa.gov to look at the top visited .gov domains and surface common colors and component styles. We wondered: “Do we need 32 different shades of blue?” We were surprised by how many different button styles there were on government websites. Do we really need 64 types of buttons? Surfacing and categorizing components across government websites allowed us to see the inconsistencies between them, as well as what they had in common.
The interface inventory and results from our workshop were combined and prioritized with the help of government designers. Once we had our list of components to start with, our user researchers began researching, creating wireframes, and conducting user testing of the components and design system website.
The user experience team members researched, created wireframes, and tested components like this sign-in form. Visual designers created higher fidelity designs based on the wireframes, which were later developed in code.
Mood boarding
Our visual designers began to explore what it would look and feel like. We knew we wanted the system to feel simple, modern, accessible, approachable, and trustworthy. They created three mood boards, which looked at various typography and color samples as well as inspirational design imagery.
The three styles we looked at were:
Clean and Classic
Inspiring and Empowering
Modern American
Our team’s designers worked with visual designers across government and conducted a dot-voting exercise surfacing what they liked about each mood board. We put these three directions up on GitHub to solicit feedback from a range of government digital service employees, where we could fine-tune the direction. In the end, people liked the bold, saturated colors of Modern American and the typography of Clean and Classic, which incorporated a sans-serif font and a serif typeface.
Typography
Once the style was defined, our visual designers started to explore which typefaces to use. We needed to find a font that was legible, communicated trust and credibility, and was open source. Since paid fonts would have created additional burdens around licensing, we needed fonts that were free and open source, making it easy for government designers to use them.
To promote legibility, we looked at fonts that had a large x-height, open counters, and a range of font weights. In order to provide the greatest flexibility for government designers, we wanted to find a sans-serif font for its clean, modern aesthetic that’s highly legible on interfaces and a serif font, for a traditional look that could be used for text-dense content or added contrast between headings.
Our visual designers tested typography pairings by replacing fonts on actual government websites with these choices to find the fonts that would meet these needs. By omitting the name of the typeface, designers weren’t influenced by what font it was and could focus on how it read. Then we tested these fonts with government designers to identify which font was the most legible and matched our desired aesthetic. In the end, the fonts we chose were Source Sans Pro and Merriweather.
Source Sans Pro is an open-source sans serif typeface created for legibility in UI design. With a variety of weights that read easily at all sizes, Source Sans Pro provides clear headers as well as highly readable body text. Inspired by twentieth-century American gothic typeface design, its slender but open letters offer a clean and friendly simplicity.
Merriweather is an open-source serif typeface designed for on-screen reading. This font is ideal for text-dense design: the letterforms have a tall x-height but remain relatively small, making for excellent readability across screen sizes while not occupying extra horizontal space. The combination of slim and thick weights gives the font family stylistic range, while conveying a desirable mix of classic, yet modern simplicity. Merriweather communicates warmth and credibility at both large and smaller font sizes.
From a technical standpoint, we needed to ensure that the fonts we provide would perform quickly for users. While our visual designers wanted an array of weights, our developers reminded everyone that this would create a burden on users, who would have to load the extra font files. To compromise, we created different font pairings: a robust option with more font weights and a trimmed-down version for quicker load times. Armed with this knowledge, government designers can weigh the options themselves to find which would suit their design and performance needs.
Colors
The repeated use of colors found in the interface inventory of government websites informed our color palette. A simple, minimalist palette of cool blue and gray provides a neutral backdrop for brighter shades of red and blue to contrast against. This creates a clean and engaging palette, leaving people feeling like they’re in good hands. The colors are divided by primary, secondary, background, and tertiary colors.
Primary colors are blue, gray, and white. Blue weaves through buttons, links, and headings to bring a sense of calmness, trust, and sincerity through the interface. Clean white content areas allow the typography to “pop” on the page.
Secondary colors are the accent colors, bright blue and red, which are used sparingly on specialized elements to add lightness and a modern flair. They may be used to highlight important features on a page, like a call to action, or for visual design elements like illustrations.
Background colors are shades of gray used for background blocks of large content areas.
Tertiary colors are used for content-specific needs and are used sparingly, such as in alerts or illustrations.
The range of colors in the palette can be flexibly applied to support a range of distinct visual styles. For example, by abstracting color names, such as primary and secondary, we can support agencies that need to conform the palette to their unique brand’s needs. A single change to a color value spreads throughout the system, across buttons, accents, and headings.
Because government sites must be accessible to anyone with a disability, in conformance with Section 508, the Standards ensure there is enough contrast between text and its background color. Following WCAG 2.0 guidelines, the Standards provide combinations where the contrast between the text and background is greater than or equal to 4.5:1.
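For illustration (this snippet is not part of the Standards themselves), the WCAG 2.0 contrast ratio is computed from the relative luminance of the two colors:

// WCAG 2.0 relative luminance of an sRGB color (channels 0-255)
function luminance( r, g, b ) {
    var channels = [r, g, b].map(function( c ) {
        c /= 255;
        return c <= 0.03928 ? c / 12.92 : Math.pow(( c + 0.055 ) / 1.055, 2.4);
    });
    return 0.2126 * channels[0] + 0.7152 * channels[1] + 0.0722 * channels[2];
}

// Contrast ratio = (L1 + 0.05) / (L2 + 0.05), lighter luminance on top
function contrastRatio( rgb1, rgb2 ) {
    var l1 = luminance.apply( null, rgb1 );
    var l2 = luminance.apply( null, rgb2 );
    return ( Math.max( l1, l2 ) + 0.05 ) / ( Math.min( l1, l2 ) + 0.05 );
}

// White text on a dark gray background easily passes the 4.5:1 threshold
console.log( contrastRatio( [255, 255, 255], [50, 58, 69] ) >= 4.5 ); // true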
By using bright saturated tints of blue and red, grounded in sophisticated deeper shades of blues and grays, we could communicate warmth and trustworthiness, support a range of distinct visual styles, and meet the highest accessibility standards and color contrast requirements.
Space
The last of the building blocks of the design system is how these elements flow in space to provide structure. We provide balanced spacing throughout the type system by placing adequate margins above and below heading elements and paragraph text. By using ems, or relative units, white space stays proportionate to the font size, and the correct ratio is automatically distributed throughout the system. If an agency needs to change a font size, the spacing will adjust automatically.
To hold the structure of the content, we provide a 12-column grid system using Neat, a flexible and lightweight Sass grid by thoughtbot. It is composed of a grid container, which centers the content on the page, and sections of halves, thirds, quarters, sixths, and twelfths to lay out content. Simple classes, like usa-grid and usa-width-one-half, allow developers to quickly mock up page layouts, as in the sketch below. We provide three breakpoints, which allow the grid to reflow at smaller sizes, and people may always fine-tune the breakpoints to suit their content. A flexible grid system allows visitors to quickly read the page.
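For example, a two-column layout could be sketched like this (only the two class names mentioned above are taken from the Standards; the content is placeholder):

<div class="usa-grid">
    <div class="usa-width-one-half">First column</div>
    <div class="usa-width-one-half">Second column</div>
</div>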
Typography, colors, and space form the foundation of the design system, which is used to build components like buttons, forms, and navigation.
Complicated Tasks, Ambitious Goals
The U.S. Web Design Standards launched in September 2015 as a visual style guide and UI component library, with the goal of bringing a common design language to government websites under one roof. In the two years since we were tasked with unifying the design and look of all U.S. government websites, over 100 government projects have adopted the Standards, helping them evolve, reshape, and move forward in ways we couldn’t imagine. From the Department of Veterans Affairs to the U.S. Department of Agriculture, government teams are coming together to set a new bar for federal government websites. In this short time, we’ve begun seeing consistency and better user experiences across government websites. While the goal was to unify a government design language, the unique expression of it has been multifaceted and boundless. And just like building a house out of Lego blocks, expression within the meaningful constraints of a modular design system creates diverse products that are consistent, not uniform.
By providing designers and developers with easy-to-use tools to deliver the highest quality government websites to the American people, the design system is helping create connections across disciplines and move government designers and developers forward — user research, human-centered design, visual design, front-end, and accessibility best practices all come together.
Lessons Learned: Drafting Your Own Standards Within Your Company
Whether you’re a small company or one of the largest governments in the world, you can create your own standards to solve your unique needs. Every pattern library should be different because it should serve the specific needs of the group creating them.
Talk to the people: You’ll need to find out where the problems are and whether or not these problems can be solved by design patterns. Find out if there are common needs across groups. What aspects of what you’re building are required for you to do your job?
Look for duplication of efforts: Where are you repeating yourselves? Where are you wasting time? What takes the longest or is the most challenging when building out websites? Where does friction arise?
Know your values: What your design system will end up looking like will also depend on what’s important to you. What are your values? What principles can guide how you build things?
Empower your team: You need a dedicated group charged with working on this and support from leaders to give you the air cover to do this work. It should be as important as any other project. You’ll need a multidisciplinary team with expertise from user experience research and design, visual design, and front-end development. You’ll need someone to fulfill the role of project manager and product owner to guide the project forward toward the right goals.
Start small and iterate: Figure out what your core needs are, build those out, test them, and listen to what people are asking for. That’s how you’ll find out what is missing. Starting with a limited set of components will save time and give you real answers right away when you start putting it out in the world and in people’s hands.
Don’t work in a vacuum: You’ll need to build consensus, understand what people need, and learn how they build websites, so find people that will use the system. Let that guide your decisions. While you may be more isolated getting the initial system setup, get it out there so you can begin testing and learning. As you build out products with your system and test them with real users, you’ll have the information you need to keep making improvements.
Reuse and specialize: It’s great to see how others have solved problems, and reuse when you can, but know that their solutions are solving their problems. Your problems may need a unique approach. Don’t fall into the trap of “this is what a pattern library should look like” just because someone else is doing it that way.
Promote your system: Get people excited about what you’re doing by talking about the value they’ll get for free by using it: consistent, beautiful, user friendly design with accessible interfaces that will save them time and money.
Be flexible: People don’t like things that are forced on them. Give them opportunities to learn about it and ask questions. Give them permission to make it their own.
Conclusion
When building out a large-scale design system, it can be hard to know where to start. By focusing on the basics, from core styles to coding conventions to design principles, you can create a strong foundation that spreads to different parts of your team. These building blocks can be stacked in many ways to support a multitude of needs and use cases. By building flexibility into the system, users can easily adapt the patterns and designs to support the diverse scope and needs of digital services. Signaling that things can be customized invites people to participate in the system and make it their own. Only when people have a stake in the system will they feel invested to use it and contribute back, making it more robust, versatile, and able to stand the test of time.
It takes a lot of blocks and a lot of time to build these kinds of large design systems, and it’s important to keep people like Joanne in mind. The people on the other side who are scrolling through your designs, clicking your buttons, and filling out your forms so they can access the critical services they need. A solid, usable design system can make all the difference to people like Joanne.
Editor’s Note: We’ve been closely working with Maya on this article, and we’re happy to see the final result now being published on 18F. We highly encourage more teams to share the lessons they learned when building design systems or pattern libraries, and we’re always happy to support them in writing, editing and shaping that article. This post is a re-post of Maya’s final article.
Today, there are nearly 30,000 U.S. federal websites with almost no consistency between them. Among the hundreds of thousands of government employees working in technology, there is no common approach to how these websites are built or maintained.
As a result, the government is spending considerable resources on services that aren’t meeting the needs of their users. Federal websites are the front door to government services: they’re the first thing someone encounters when interacting with the government. According to research from the Federal Front Door initiative, as government interfaces erode, so does the public’s trust in those services.
I was part of a team of designers and developers who unified a complex system with numerous rules to serve users from all corners of the country. I’ll shed some light on how we built tools to leverage industry-standard best practices and produce a design system with reusable components. You’ll also see how our system is helping agency teams in the federal government create simple, efficient, consistent experiences quickly and at reduced cost.
The Problem: Inconsistent User Experiences Across Government Websites
When the American people go online to access government services, they’re often met with confusing navigation systems, an array of visual brands, and inconsistent interaction patterns. Websites intended to help people access information and services, like a veteran looking for help to go back to college, are splintered across various agencies and organizations.
For example, consider what it’s like for a young veteran looking to apply for federal student loans to help her cover the cost of attending college. Let’s call this person Joanne. Joanne had to wade through multiple agency websites to access the federal programs that could help her afford college. Joanne was confused. She was overwhelmed by how hard these tools were to use, missed opportunities she was eligible for, and felt frustrated and isolated. The system that was supposed to help her stood in her way. Creating consistency between these systems will help people (like Joanne) more effectively access the services they need and increase their trust in the government.
Why It’s Like This: Limitations To Consistent User Experiences In Government
Dedicated federal workers want to build helpful digital tools for everyone. They want to be able to develop quick prototypes and sites. They choose resources with minimal documentation that allow them to get up and running quickly.
Other one-off designers or front-end developers in an agency are trying to do the right thing, but without a lot of time or support. They need tools to cut down on design and development time, and a way to advocate for best practices to higher-ups.
Therefore, the question in front of us became:
Could we create a shared set of tools to provide consistent, effective, and easy-to-use government websites?
In the summer of 2015, a team from 18F and the U.S. Digital Service formed to work on these tools. We asked ourselves: how do we bring together thousands of public websites into a common design language?
To answer this question, twenty designers and developers working on digital services in government gathered in Washington, DC.
The first question we asked ourselves was: what are the components and patterns we’re looking for in a pattern library? What are the elements that could help us build a library of patterns and systems of styles? We wrote down all the parts that make up our websites and what we would want in a system. We stuck these ideas on a wall and grouped them together to find what was universal across our systems. We then looked for patterns, taking note of what were the most common. Some of the simplest things kept coming up again and again: color, typography, grids, and buttons.
During our meetings, people also asked about more unique components, like data visualizations and calendar widgets. However, by limiting the system to the basic building blocks, we could get it into the hands of designers and developers as quickly as possible and see for ourselves what was clicking and what wasn’t.
Building a library to create consistency is similar to playing with Lego bricks as opposed to, say, mud. When you give people a handful of mud and tell them to build a house, each house will look different: a little lopsided and squishy. When you give those same people five kinds of Lego bricks, they can create a million different houses. Each house looks consistent, not uniform.
We started to explore how we could bring consistency to interactions, user experiences, and behavior across those websites. Joanne wants to understand she’s on a government site. She wants it to feel familiar and be intuitive, so she knows what to do and can accomplish her task. A consistent look and feel with common design elements will feel familiar, trustworthy, and secure — and people like Joanne will be able to navigate government websites more easily because of a common palette and design.
We used analytics.usa.gov to look at the top visited .gov domains to surface common colors and component styles. We wondered: “Do we need 32 different shades of blue?” We were surprised by how many different button styles we found on government websites. Do we really need 64 types of buttons? Surfacing and categorizing components across sites allowed us to see the inconsistencies between government websites as well as what components they had in common.
The interface inventory and results from our workshop were combined and prioritized with the help of government designers. Once we had our list of components to start with, our user researchers began researching, creating wireframes, and conducting user testing of the components and design system website.
The user experience team members researched, created wireframes, and tested components like this sign-in form. Visual designers created higher-fidelity designs based on the wireframes, which were later developed in code.
Our visual designers began to explore what it would look and feel like. We knew we wanted the system to feel simple, modern, accessible, approachable, and trustworthy. They created three mood boards, which looked at various typography and color samples as well as inspirational design imagery.
The three styles we looked at were:
Clean and Classic
Inspiring and Empowering
Modern American
Our team’s designers worked with visual designers across government and conducted a dot-voting exercise surfacing what they liked about each mood board. We put these three directions up on GitHub to solicit feedback from a range of government digital service employees, where we could fine-tune the direction. In the end, people liked the bold, saturated colors of Modern American and the typography of Clean and Classic, which incorporated a sans-serif font and a serif typeface.
Once the style was defined, our visual designers started to explore which typefaces to use. We needed fonts that were legible, communicated trust and credibility, and were open source: paid fonts would have created additional burdens around licensing, making them harder for government designers to use.
To promote legibility, we looked at fonts that had a large x-height, open counters, and a range of font weights. To provide the greatest flexibility for government designers, we wanted a sans-serif font, for a clean, modern aesthetic that’s highly legible in interfaces, and a serif font, for a traditional look suited to text-dense content or for added contrast in headings.
Our visual designers tested typography pairings by replacing fonts on actual government websites with these choices to find the fonts that would meet these needs. By omitting the name of the typeface, designers weren’t influenced by what font it was and could focus on how it read. Then we tested these fonts with government designers to identify which font was the most legible and matched our desired aesthetic. In the end, the fonts we chose were Source Sans Pro and Merriweather.
Source Sans Pro is an open-source sans serif typeface created for legibility in UI design. With a variety of weights that read easily at all sizes, Source Sans Pro provides clear headers as well as highly readable body text. Inspired by twentieth-century American gothic typeface design, its slender but open letters offer a clean and friendly simplicity.
Merriweather is an open-source serif typeface designed for on-screen reading. This font is ideal for text-dense design: the letterforms have a tall x-height but remain relatively small, making for excellent readability across screen sizes while not occupying extra horizontal space. The combination of slim and thick weights gives the font family stylistic range, while conveying a desirable mix of classic, yet modern simplicity. Merriweather communicates warmth and credibility at both large and smaller font sizes.
From a technical standpoint, we needed to ensure the fonts we provide would perform quickly for users. While our visual designers wanted an array of weights, our developers reminded everyone that this would create a burden on users that have to load extra font families. To compromise, we created different font pairings: a robust option with more font weights and a trimmed down version for quicker load times. Armed with this knowledge, government designers can weigh the options themselves to find which would suit their design and performance needs.
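To make the trade-off concrete, here’s a minimal sketch of what a trimmed-down pairing might look like; the file paths and the exact choice of weights are illustrative assumptions, not the Standards’ actual setup.

```css
/* Illustrative only: load just the weights a project actually uses. */
@font-face {
  font-family: "Source Sans Pro";
  font-style: normal;
  font-weight: 400;
  src: url("../fonts/sourcesanspro-regular.woff2") format("woff2");
}

@font-face {
  font-family: "Source Sans Pro";
  font-style: normal;
  font-weight: 700;
  src: url("../fonts/sourcesanspro-bold.woff2") format("woff2");
}

/* A "robust" pairing would add weights such as 300 and 600 (plus italics),
   at the cost of extra font files for every visitor to download. */
```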
The repeated use of colors found in the interface inventory of government websites informed our color palette. A simple, minimalist palette of cool blue and gray provides a neutral backdrop for brighter shades of red and blue to contrast against. This creates a clean and engaging palette, leaving people feeling like they’re in good hands. The colors are divided by primary, secondary, background, and tertiary colors.
Primary colors are blue, gray, and white. Blue weaves through buttons, links, and headings to bring a sense of calmness, trust, and sincerity through the interface. Clean white content areas allow the typography to “pop” on the page.
Secondary colors are the accent colors of bright blue and red are used sparingly on specialized elements to add lightness and a modern flair. They may be used to highlight important features on a page, like a call to action, or for visual design elements like illustrations.
Background colors are shades of gray used for background blocks of large content areas.
Tertiary colors are used for content-specific needs and are used sparingly, such as in alerts or illustrations.
The range of colors in the palette can be flexibly applied to support a range of distinct visual styles. For example, by abstracting color names, such as primary and secondary, we can support agencies that need to conform the palette to their unique brand’s needs. A single change to a color value spreads throughout the system, across buttons, accents, and headings.
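The Standards themselves are built with Sass, but the idea can be sketched just as well with plain CSS custom properties; the token names and values below are hypothetical, not the actual palette.

```css
/* Hypothetical tokens: change a value once here, and every component
   that references it (buttons, accents, headings) follows along. */
:root {
  --color-primary: #205493;   /* deep, trustworthy blue */
  --color-secondary: #e31c3d; /* bright red accent, used sparingly */
}

.usa-button {
  background-color: var(--color-primary);
  color: #fff;
}

h1,
h2 {
  color: var(--color-primary);
}
```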
Because government sites must be accessible to anyone with a disability, in conformance with Section 508, the Standards ensure there is enough contrast between text and its background color. Following WCAG 2.0 guidelines, the Standards provide combinations where the contrast between the text and background is greater than or equal to 4.5:1.
By using bright saturated tints of blue and red, grounded in sophisticated deeper shades of blues and grays, we could communicate warmth and trustworthiness, support a range of distinct visual styles, and meet the highest accessibility standards and color contrast requirements.
Space
The last piece in the building blocks of the design system is how these elements flow in space and provide structure. We provide balanced spacing throughout the type system by placing adequate margins above and below heading elements and paragraph text. By using ems, or relative units, white space stays proportionate to the font size, and the correct ratio is distributed automatically throughout the system. If an agency needs to change a font size, spacing will adjust automatically.
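Here’s a rough sketch of how em-based spacing keeps those ratios intact; the sizes are illustrative, not the Standards’ actual values.

```css
/* Margins in em are relative to the element's own font size, so if an
   agency bumps a heading's size, the space around it scales with it. */
h2 {
  font-size: 1.75em;
  margin-top: 1.5em;
  margin-bottom: 0.5em;
}

p {
  margin-top: 0;
  margin-bottom: 1em;
}
```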
To hold the structure of the content, we provide a 12-column grid system using Neat, a flexible and lightweight Sass grid by thoughtbot. The grid is composed of a container that centers the content on the page, and sections of halves, thirds, quarters, sixths, and twelfths to lay out content. Simple classes, like usa-grid and usa-width-one-half, allow developers to quickly mock up page layouts. We provide three breakpoints, which allow the grid to reflow at smaller sizes, and people may always fine-tune the breakpoints to suit their content. A flexible grid lets content reflow gracefully, so visitors can quickly read the page.
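The real implementation leans on Neat’s Sass mixins, but a float-based, plain-CSS approximation of the utility classes might look something like the sketch below; the container width and gutter values are guesses for illustration.

```css
/* Illustrative approximation of the grid utilities, not the actual source. */
.usa-grid {
  max-width: 1040px;   /* hypothetical page width */
  margin-left: auto;
  margin-right: auto;  /* center the container on the page */
}

/* Clearfix so the container wraps its floated children. */
.usa-grid::after {
  content: "";
  display: table;
  clear: both;
}

.usa-width-one-half {
  float: left;
  width: 48.8%;        /* 6 of 12 columns */
  margin-right: 2.4%;  /* gutter */
}

.usa-width-one-half:last-child {
  margin-right: 0;
}
```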
Typography, colors, and space form the foundation of the design system, which is used to build components like buttons, forms, and navigation.
Complicated Tasks, Ambitious Goals
The U.S. Web Design Standards launched in September 2015 as a visual style guide and UI component library, with the goal of bringing government websites under a common design language. In the two years since we were tasked to unify the design and look of all U.S. government websites, over 100 government projects have adopted the standards, helping them evolve, reshape, and move forward in ways we couldn’t have imagined. From the Department of Veterans Affairs to the U.S. Department of Agriculture, government teams are coming together to set a new bar for federal government websites. In this short time, we’ve begun seeing consistency and better user experiences across government websites. While the goal was to unify a government design language, the unique expression of it has been multifaceted and boundless. And just like building a house out of Lego blocks, expression within the meaningful constraints of a modular design system creates diverse products that are consistent, not uniform.
By providing designers and developers with easy-to-use tools to deliver the highest quality government websites to the American people, the design system is helping create connections across disciplines and move government designers and developers forward — user research, human-centered design, visual design, front-end, and accessibility best practices all come together.
Lessons Learned: Drafting Your Own Standards Within Your Company
Whether you’re a small company or one of the largest governments in the world, you can create your own standards to solve your unique needs. Every pattern library should be different because it should serve the specific needs of the group creating it.
Talk to the people: You’ll need to find out where the problems are and whether or not these problems can be solved by design patterns. Find out if there are common needs across groups. What aspects of what you’re building are required for you to do your job?
Look for duplication of efforts: Where are you repeating yourselves? Where are you wasting time? What takes the longest or is the most challenging when building out websites? Where does friction arise?
Know your values: What your design system will end up looking like will also depend on what’s important to you. What are your values? What principles can guide how you build things?
Empower your team: You need a dedicated group charged with working on this and support from leaders to give you the air cover to do this work. It should be as important as any other project. You’ll need a multidisciplinary team with expertise from user experience research and design, visual design, and front-end development. You’ll need someone to fulfill the role of project manager and product owner to guide the project forward toward the right goals.
Start small and iterate: Figure out what your core needs are, build those out, test them, and listen to what people are asking for. That’s how you’ll find out what is missing. Starting with a limited set of components will save time and give you real answers right away when you start putting it out in the world and in people’s hands.
Don’t work in a vacuum: You’ll need to build consensus, understand what people need, and learn how they build websites, so find people that will use the system. Let that guide your decisions. While you may be more isolated getting the initial system set up, get it out there so you can begin testing and learning. As you build out products with your system and test them with real users, you’ll have the information you need to keep making improvements.
Reuse and specialize: It’s great to see how others have solved problems, and reuse when you can, but know that their solutions are solving their problems. Your problems may need a unique approach. Don’t fall into the trap of “this is what a pattern library should look like” just because someone else is doing it that way.
Promote your system: Get people excited about what you’re doing by talking about the value they’ll get for free by using it: consistent, beautiful, user-friendly design with accessible interfaces that will save them time and money.
Be flexible: People don’t like things that are forced on them. Give them opportunities to learn about it and ask questions. Give them permission to make it their own.
Conclusion
When building out a large-scale design system, it can be hard to know where to start. By focusing on the basics, from core styles to coding conventions to design principles, you can create a strong foundation that spreads to different parts of your team. These building blocks can be stacked in many ways to support a multitude of needs and use cases. By building flexibility into the system, users can easily adapt the patterns and designs to support the diverse scope and needs of digital services. Signaling that things can be customized invites people to participate in the system and make it their own. Only when people have a stake in the system will they feel invested to use it and contribute back, making it more robust, versatile, and able to stand the test of time.
It takes a lot of blocks and a lot of time to build these kinds of large design systems, and it’s important to keep people like Joanne in mind. The people on the other side who are scrolling through your designs, clicking your buttons, and filling out your forms so they can access the critical services they need. A solid, usable design system can make all the difference to people like Joanne.
Charles Wong
Beethoven’s “Ode to Joy” as a responsive sheet music page. It consists of two CSS Grid layouts – one for positioning the bars within the rows of sheet music, and one for positioning musical notes within the bars. Charles shares more insights into the project [here](https://sejikco.github.io/CssGridSheetMusic/).
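As a rough illustration of that two-layer approach (all class names here are invented, not Charles’s actual code):

```css
/* Outer grid: bars arranged into rows of sheet music. */
.sheet {
  display: grid;
  grid-template-columns: repeat(4, 1fr); /* four bars per row */
  grid-gap: 0.5rem;
}

/* Inner grid: each bar places its notes on sixteenth-note columns. */
.bar {
  display: grid;
  grid-template-columns: repeat(16, 1fr);
}

.note--beat-1 { grid-column: 1; }
.note--beat-2 { grid-column: 5; } /* second beat = fifth sixteenth */
```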
Dannie Vinther
A Marvel poster made with CSS and clip-path. A sprinkle of JavaScript helps avoid layout reflow when images are fully loaded.
Erik Davidsson
A great one for football fans! A layout featuring the upcoming football game between FC Barcelona and Real Madrid. Erik brought it to life with many different techniques, with fallbacks that keep the website usable in older browsers such as IE8 and IE9.
Mathieu
Inspired by [Justin Avery’s CodePen](https://codepen.io/justincavery/pen/yaRLYE/), Mathieu submitted a dynamic periodic table built with CSS Grid.
Amy DeVoogd
Inspired by the works of Jen Simmons and Rachel Andrew, [Spacebar](https://amydevoogd.github.io/product-showcase/) is a product showcase for a completely invented product that Amy branded and designed, i.e. all imagery is copyright-free.
Ieva Ozolīte
A mobile-first semantic web page for a [band poster](https://www.swissted.com/products/the-cure-at-canterbury-odeon-1979). Quite impressive for a first experiment with CSS Grid, don’t you agree?
Ethan Horger
Ethan tried out a blog entry layout he had always wanted to build: the author’s bio is always displayed at the top left of the article, legal notices sit at the bottom, and a quote or some supplemental material is pinned in the middle of the article. In IE8 and IE9, the layout degrades to floats and doesn’t keep the quote pinned in the middle, but it remains fully readable.
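One plausible way to pin regions like that is grid-template-areas; the sketch below is a guess at the general technique, not Ethan’s actual code.

```css
/* Named areas pin the bio top left, the quote mid-article,
   and the legal notices at the bottom. */
.post {
  display: grid;
  grid-template-columns: 1fr 3fr;
  grid-gap: 1rem;
  grid-template-areas:
    "bio   body-top"
    "quote quote"
    ".     body-bottom"
    "legal legal";
}

.post__bio    { grid-area: bio; }
.post__top    { grid-area: body-top; }
.post__quote  { grid-area: quote; }
.post__bottom { grid-area: body-bottom; }
.post__legal  { grid-area: legal; }
```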
Tanya Syrygina
Tanya Syrygina used Grid to build a fresh, card-style blog layout.
Nelson Leite
For an e-commerce project, Nelson Leite needed to showcase a product listing with other content interspersed among the products and displayed differently. His solution: CSS Grid.
Robert Mion
Robert Mion combined CSS Grid and Flexbox to build a responsive supermarket ad.
Arturo Ríos
A CSS Grid layout that can be used comfortably in full-screen mode comes from Arturo Ríos.
Bob Mitro
A simple, responsive blog theme based on CSS Grid layout.
Kev Bonett
Kev Bonett created a mobile-first e-commerce template that falls back to Flexbox, and then to a basic two-column inline-block layout.
Sven Rothe
Sven Rothe’s grid keeps equal heights across several rows: if you add more content to a tile in the first row, the second row grows, too.
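The effect Sven describes is what fr-sized rows give you; a minimal sketch (class names invented):

```css
/* With 1fr rows, every row resolves to the same height (the tallest
   tile's), so adding content to one tile grows all the rows together. */
.tiles {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  grid-auto-rows: 1fr;
  grid-gap: 1rem;
}
```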
Ismail Ghallou
With his To-Do app layout, Ismail Ghallou proves that CSS Grid can handle even the weirdest layouts. And it’s responsive, too.
Juan Garcia
A page for a video game platform comes from Juan Garcia.
Mark McMurray
A multi-column layout, such as a CV requires, is a perfect CSS Grid project, as Mark McMurray proves.
Marissa Douglass
Ever thought of building an interactive cookbook with CSS Grid? Marissa Douglass did.
Melissa Bogemanns
A photo showcase made with CSS Grid, available as a .zip download (6 MB).
Tyler Argo
Tyler Argo rebuilt the Google Play Store layout from scratch using CSS Grid with fallbacks. It works all the way back to IE9 and is even more responsive than the original site.
Mauricio Mantilla
This layout is based on a website designed by the company Mauricio works at. He took part of the layout, originally built with Packery (Masonry), and ported it to CSS Grid with just a few lines of code.
Katherine Kato
A portfolio website layout made with CSS Grid and Flexbox as a fallback.
Donny Truong
A minimalistic blog layout comes from Donny Truong.
Anenth Vishnu
A responsive app layout based on Grid.
Amy Carney
A basic layout (with IE fallbacks and web accessibility in mind) that may be useful for getting projects started or migrated.
Last but not least, before you dive right into the challenge, here are some helpful resources to kick-start your CSS Grid adventure.
Resources and References
Finally, to get your ideas flowing, some inspiring CodePen experiments that illustrate the magic of CSS Grid:
Are You Ready For The Next Challenge?
That’s right! There will be more challenges coming up very soon, and even more prizes to win! Keep an eye on the magazine or follow us on Twitter so you don’t miss out next time.