Only one week left until Christmas, and people are already starting to freak out again. No gifts purchased yet, work isn’t finished either, and suddenly some budget has to be spent by the end of the year. All of this puts us under pressure. To avoid the stress, I’ve seen a lot of people take a vacation from now until the end of the year — probably a good idea.
And while it’s nice to see so many web advent calendars, I feel like I’ve never written a longer reading list than this one. So save this edition if you don’t have much time currently and read it during some calm moments later this year or early next year. Most articles are still worth reading in a few weeks.
Opera 42 (built upon Chromium 55) is out1 and comes with a built-in currency converter, support for Pointer Events, JavaScript async/await, and CSS hyphens. document.write(), on the other hand, will no longer load2 over 2G connections.
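If you haven’t tried async/await yet, here’s a minimal sketch of the syntax Opera now supports (the `delay` helper and the greeting are made up for illustration):

```javascript
// A promise-returning helper (hypothetical) to simulate async work.
function delay(ms, value) {
  return new Promise(resolve => setTimeout(resolve, ms, value));
}

// `await` pauses this function without blocking the main thread.
async function fetchGreeting() {
  const name = await delay(10, 'Opera');
  return `Hello, ${name}!`;
}

fetchGreeting().then(msg => console.log(msg)); // logs "Hello, Opera!"
```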
Firefox introduced Telemetry to its browser a while ago and now shares some details on what devices and hardware Firefox users use4. In September 2016, for example, 10% still used Windows XP, while only 7% used macOS, and 77% of users still had Flash installed. The most common screen resolutions are 1366×768px and 1920×1080px. There are many more really interesting statistics in there, and we’ll have to see how this develops over the next few years. For us web developers, it also highlights that we shouldn’t assume people have quad-core CPUs and 8GB of RAM, but rather “lower-end” devices. So be aware of this before you build fancy CPU- and memory-hungry web applications that users will not have fun with.
Samsung Internet browser 5.0 has been released5. It has some interesting new technologies built in, such as content provider extensions, 360˚ video, a QR code reader, and a video assistant.
A lot of us use Disqus’ commenting system on our websites. It’s an easy way to add comments to a static website, but now Disqus has announced that it needs to lay off about 20% of its employees. Not only that, the company will also change its strategy towards data collection and advertising10. Specifically, they elaborate on displaying ads in comments, and there is speculation that they will try to sell (anonymized) user data to advertisers to help them tailor their ads more precisely to users. Maybe it’s time to reconsider whether you really want to use the service.
Sergey Chikuyonok dives deep into the technical details of browsers and hardware to explain how to get GPU animation right13 and why it makes a big difference if we render animations on the CPU or on the GPU.
[...'👨‍👩‍👧'] // ['👨', '', '👩', '', '👧'] or '👨‍👩‍👧'.length // 8 — do you wonder why that works? Stefan Judis found out and shares the technical details on why the Emoji family works so well with JavaScript operations21 and how you can even dynamically generate the skin color of an emoji22 with color codes and Unicode.
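You can verify this in any modern JavaScript engine. The family emoji below is written with explicit escape sequences so the otherwise invisible zero-width joiners (U+200D) are easy to spot:

```javascript
// 👨 + ZWJ + 👩 + ZWJ + 👧 — three emoji glued together by zero-width joiners
const family = '\u{1F468}\u200D\u{1F469}\u200D\u{1F467}';

console.log(family.length);      // 8 — .length counts UTF-16 code units
console.log([...family].length); // 5 — the spread operator iterates code points

// Skin color works the same way: append a Fitzpatrick modifier
// (U+1F3FB to U+1F3FF) to a base emoji.
const tonedThumbsUp = '\u{1F44D}' + '\u{1F3FD}'; // 👍 with medium skin tone
console.log([...tonedThumbsUp].length); // 2 code points, rendered as one glyph
```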
Working long hours doesn’t mean someone has a good “work ethic”. Instead, it just means working too much. Jason Fried on what “work ethic” is really about32.
The blank Photoshop document glows in front of you. You’ve been trying to design a website for an hour but it’s going nowhere. You feel defeated. You were in this same predicament last month when you couldn’t design a website for a project at work. As a developer, you just feel out of your element pushing pixels around.
How do designers do it? Do they just mess around in Photoshop or Sketch for a while until a pretty design appears? Can developers who are used to working within the logical constructs of Boolean logic and number theory master the seemingly arbitrary rules of design?
You can! You don’t have to be blessed by the design gods with special talent. So, how should you, a developer, learn design?
One of the quickest ways to learn something is to ask someone who has done it and pick their brain. I spoke with a handful of developers who did just that. Some learned design to supplement their coding skills, while others switched over completely.
This article is for design beginners. So, throughout the piece, I’ll use simplified definitions of user experience (UX) and visual design. For our purpose, UX design is how a website works, and visual design is how it looks. Both fields are, of course, more nuanced1 than that, but in the interest of time, those definitions should suffice.
Let’s get started with mistakes designers make when learning design!
An 18-year-old freshman has four years to discover what area of design they like. At design school, they’ll dabble in motion graphics for a while, then maybe try UX. At some point, they’ll be obsessed with photography for a semester. Eventually, the student will find an area of design that they enjoy and are good at.
But you’re a working developer with limited time. You need to be specific about what you want to learn.
For most developers, the first step is to pick either UX design or visual design (usually synonymous with graphic design). Many tutorials do not distinguish between the two, which can be frustrating.
So, should you try UX or visual design? Consider which one you’re more drawn to and then try it out. If browsing Dribbble excites you, then try visual design. If you’d rather get lost in one of Edward Tufte’s books, try UX. My guess is that one of the two fields has already whispered to you in the past. Listen to that whisper.
Or you may have already figured out your preference by working in the field. For Jenny Reeves5, a LAMP developer turned UX designer, the transition came naturally. She says:
It happened over time, so it wasn’t like I woke up one day and said I’m going to switch roles. I started doing IA diagrams and user flows when I was a developer as a means to get applications organized, then moved into making wireframes for basic stuff. After I realized my passion for this and my company took notice, I soon started doing all in-house UX.
Jacob Rogelberg6, former JavaScript developer turned UX specialist, thinks you need to try many things before choosing:
I think you have to have a mindset where you’re happy to try lots of different things. Commit to spending 10 hours on visual design and see how it sits with you.
During this trial phase, don’t mistake your inexperience with a lack of natural talent (more on that later).
Why am I recommending that you choose just one? Because when you remove visual or UX design from the equation, you’re left with 50% less material to digest. It’s true that UX and visual design overlap in places. And some people will argue that you should learn both. But that comes at the expense of focus and your limited time.
By picking just one, you’ll get better faster. Mastering either visual or UX design is much better than being mediocre at both.
Let’s say you’ve picked visual design:
That’s a broad field, so we need to get more specific. There’s no sense in learning topics you’re never going to use as a developer. For instance, you probably won’t need to learn all of the aspects of editorial design or book-cover design or logo design.
I recommend that 90% of developers focus on interactive work if they’re learning visual design. (If you’re learning UX design, most work in that field is on the web already, so this point isn’t as relevant.)
I know, I know, you want to try everything. But learning something such as logo design isn’t worth your time. In the span of your lifetime, there will be very few instances when you, the web developer, will be asked to design a logo for a company. However, focusing only on interactive visual design (websites and apps) would benefit you for years to come.
As a web developer, you’re going to be drawn to subjects that seem design-related — for example, CSS, Bootstrap and material design. But learning this stuff will come at the expense of what you should really be focusing on. For visual design, those things are composition, hierarchy and typography. For UX, they’re user research and personas.
Developers are builders. While learning design, you’re going to have an urge to build. Be mindful of when it happens, and tell yourself you can code later.
David Klawitter13 is a design lead at Detroit Labs14, where he previously worked as a developer. Instead of giving in to his urge to program while at work, he hacks away on personal projects at home. He states:
I think that there’s this natural tendency to want to build the thing that’s in your mind — that’s one of the most exciting aspects about being a developer. I found it very difficult to peel myself away from that desire. I still get to scratch that itch on personal (technical) projects, but the scope of our work at Detroit Labs and my responsibilities just don’t allow that. I provide more value by having a broader and deeper understanding of design, and that means focusing in on it.
If you don’t already know the technical side of front-end design work, it will come easily once you decide to learn it. Therefore, work on the areas of design you’re unfamiliar with. That’s where there’s more opportunity to grow.
I know a lot of web developers who can use Photoshop and Illustrator. They know the tools well, but they can’t utilize them to solve design problems. In the graphic design world, these people are known as production artists. It’s a fine profession to be in, but it’s different from being a designer.
It’s tricky. You need to know the tools to be a designer, but knowing them won’t make you a designer.
I fell into this trap at design school. My first year was dedicated to learning Photoshop, Illustrator and InDesign. It felt good to master those tools. At the end of the year, I thought, “I’ve made it. Now I’m a designer!” But I soon realized that learning the tools was just step one. Step two was about making the work look good. My peers in design school who understood that kept pushing themselves and progressed from being production artists into full-fledged designers.
Jacob Rogelberg again:
My mentors always told me, “Don’t get stuck on the tools.” It’s concepts first, tools last. It takes a while for that to become ingrained. You need to understand problems. It’s like how thinking about a coding challenge is more important than knowing the specific programming language it’s written in.
Sean Duran15 is a filmmaker who used to be a designer and developer. He says:
From the beginning, I assumed that if I learned the tools, I could be a designer. But they’re definitely two separate things. And it was a struggle sometimes. You could learn every command and tool in Photoshop and still not be a great designer. What helped me the most was looking at others’ work and thinking about why it was good or bad.
Learn the tools first, and pat yourself on the back for learning them. But then dig deep and hone your design skills.
Concentrating On Visual Effects, Rather Than Design
This is related to what we just discussed and only applies to visual design, not UX design. Many tutorials, like this one16, will make you feel like you’re learning design. They’re a great way to learn Photoshop, but they can be a distraction.
Most of these tutorials are in some way related to style, one of many principles under the umbrella of design:
If you only work on style-related tutorials, you’ll never be good at the other principles of visual organization, and then you’ll wonder why your designs look bad. It’s tough because the majority of design tutorials on the web fall into this category of style and technique. That’s because things such as hierarchy, concept and composition are more abstract and don’t make for very compelling or easy-to-understand tutorials.
Instead of repeatedly working in the style category, use the diagram above to research and practice the other design principles.
By the way, none of the professional designers I know pay much attention to Photoshop tutorials or websites like the ones cited above. The only time they might seek them out is if they’re trying to achieve a particular look for a project.
How much time should you spend reading design books to understand the basics? I think being able to make a design decision and then evaluating it based on specific design principles will indicate that you have enough design theory under your belt to move on. For example, you should be able to say to yourself things like, “Those two sections on the home page are too close in size — the hierarchy is off,” or “These three lines of type don’t read well because they aren’t aligned.”
Book knowledge is certainly important. But your skills will improve the most when you’re in practice mode, not research mode. In fact, design theory makes sense more quickly once you start to use it in practice.
Greg Koberger20, a designer, developer and startup founder, is biased towards action over theory:
I always advocate doing rather than reading. All the good developers I know learned by just making something they wanted to make, and figured out the tech. The tech was a means to an end. I’m an advocate of trying and playing around on your own, and then using articles to help with your form once you’ve gotten your feet sufficiently wet. The same is true for design.
Learn design theory, but don’t bury yourself in books. Practice!
I’ve seen developers look at work on Dribbble and get so intimidated that they never get started. That’s a trap.
Odds are, if you’re reading this, you want to learn either UX or visual design to supplement your coding skills. You’re a coder first, so at the office your design skills shouldn’t be compared to those of the stars on Dribbble, just as my coding skills as a visual designer wouldn’t measure up to a front-end developer’s.
People get hired based on their strengths. And the Dribbble folks have been perfecting their craft for years, while you’ve been doing it in your spare time. So, of course, your work won’t be as tight.
Smart recruiters and companies understand this. Savoy Hallinan21 is a savvy creative recruiter from Los Angeles:
If I’m asked to find a developer who can also design, I’ll ask “What level of design is needed?” If it’s a significant amount, I’ll recommend two people, because I don’t like to set people up for failure. And if I’m interviewing a developer/designer, I always ask, what’s your primary function? Because people tend to be stronger in one area or the other.
Take a few steps back and understand why you’re really doing this. Are you learning visual design because you’ve always been a creative person and you need an outlet? If so, that’s fine. Do it for the joy, and don’t worry about trying to market yourself that way. Maybe it monetizes later, maybe not. Be OK with that. Lots of people are bored by their paid work but find other outlets to express themselves, and are happy as a result.
Are you learning visual design because you’ve been rejected by employers for your lack of visual design? Are you sure that’s why they rejected you? Sometimes employers don’t give you the real reason, or any reason, for not hiring you.
In a market this tight for talented engineers, it would be very surprising if you passed the tech screenings and were a strong culture fit, yet were rejected solely for a lack of visual skills.
Don’t get so intimidated by great work that you never get started. You’re a developer first, and that’s OK!
Some developers think the ability to design is either in you or it isn’t. I think there’s some truth to that. Many of the best designers in the world were artistically inclined as kids. Just as you were compelled to take apart answering machines and blenders, they were busy drawing Rainbow Brite and Ninja Turtles.
Natural talent is certainly a factor in how good you can be at design. However, as we discussed earlier, you aren’t competing with those top designers on Dribbble. Even if you don’t have the natural talent, you can still get very good at design with dedication and practice.
Greg Koberger again:
Design is a hard skill. Everyone has the physical body parts to do it, but not everyone has the desire to practice and work at it. So, it’s less about “Do I have this skill” and more about “How badly do I want it?”
Practicing anything is hard, in particular when it’s outside of your comfort zone. But keep at it and give yourself credit for your small victories along the way.
When it comes to practicing, give yourself a trial run of 20 hours to learn design. That’s a reasonable amount of time to commit. If you don’t see your skills improve in that amount of time or you hate it, then at least you can say you tried your best and didn’t make excuses.
Do you work with designers? If so, you’re lucky; not many people have that resource. Ask them how you can better understand the design files they’re providing you to mark up. Or ask if you can help them with some grunt work and get their feedback afterwards.
Most people like talking about subjects they’re good at. And if you defer to them in a way that makes them feel like a teacher, they’ll behave like a teacher and want to help you out.
Mikey Micheletti24, a creative polymath from Seattle, learned UX at work. He says:
In the mid-90s, I was doing development work on internal-facing desktop applications, laying out controls on the screen, and not doing a very good job of it. I spoke with another developer who was really good at it, and he taught me the basics of layout, alignment, flow. Even with just a smidgen of knowledge, my work improved and people liked it better.
If you’re around designers during the day, use it to your advantage.
This route can be challenging because you have to simultaneously be the teacher and the student. And when you bounce around from tutorial to tutorial, frustration sets in easily. If you were to go this route, I would suggest two steps:
Figure out which particular area of design you want to learn and find the best books on the subject. For visual design, try books by Steven Heller, Ellen Lupton and Philip Meggs. For UX, look at Don Norman and Edward Tufte.
Create assignments for yourself and then get feedback from a professional designer. That feedback will improve your skills faster than working in isolation.
Greg Koberger thinks copying other people helps you learn:
I learned by basically creating collages of things I liked. A button design from one site, the nav from another. Once I had a frankensite, I would redo it until two things happened: I learned the tools, and I had made it my own. This really helped me learn the tools. (Copying is a great way to learn, much like how art students repaint classic paintings at museums to learn.) So, when I became more creative, I could rapidly test out new things.
Many developers don’t want to go this route because of the time commitment. But if you have the time and money, I would recommend this option. Being in a classroom and having a dialogue with a teacher is a higher-bandwidth way of absorbing the material. I know that massive open online courses (MOOCs) are all the rage, but don’t discount the value of physically being in a classroom.
Michael Loomes25, an iOS developer turned visual designer, went this route:
I studied for a Bachelor of Communication Design at Billy Blue College of Design in Sydney for two years. I feel this was a really good decision, as it taught me a lot about “design thinking” as well as improving the technical side of things. I think this “design thinking” side of things isn’t that easy to teach yourself and would only come after years of experience. You can only learn so much by following online tutorials.
One more thing about school: If you’re considering taking night classes, make sure they’re legit. Good design courses are usually taught at a university rather than a community college, where classes are more tool-focused.
I hope this advice is helpful! If you’re interested in visual design and you want to get a good grasp on the basics, I would start with these books: Thinking With Type26, Layout Essentials27, and Meggs’ History of Graphic Design28. I personally don’t recommend “web design” books because they’re heavy on tech and light on design theory. I also have a free course29 that uses specific examples to teach visual web design in a practical way. Lastly, if you want to learn UX design, I recommend checking out 52 Weeks of UX30. It’s a great resource to start with.
With the holidays almost here and the new year already in sight, December is a time to slow down, an occasion to reflect and plan ahead. To help us escape the everyday hustle for a bit and sweeten our days with a delightful little surprise each day up to Christmas, the web community has assembled some fantastic advent calendars this year. They deliver a daily dose of web design and development goodness with stellar articles, inspiring experiments, and even puzzles to solve.
To make the choice of which ones to follow a bit easier, we collected a selection of advent calendars in this Quick Tip for you. Whether you’re a front-end dev, UX designer, or content strategist, we’re certain you’ll find something to inspire you for the upcoming year. So prepare yourself a nice cup of coffee, cozy up in your favorite chair and, well, enjoy!
Already in its 12th year, 24 Ways1 is a real classic among the web community’s advent calendars. Each day up to Christmas, it delivers a dose of web design and development goodness from some of the most renowned minds in the web industry.
To provide some magic moments this holiday season, Christmas Experiments3 delivers a new jaw-dropping digital experiment by top web creatives as well as promising newcomers each day.
You probably know the “12 Days Of Christmas” song. But do you also know 12 Devs Of Christmas5? If not, make sure to check it out — there’s a new article with exciting new thoughts and ideas from the web dev world waiting each day.
To make the holiday season a favorite time for speed geeks, the Performance Calendar7 shares a piece of advice each day up to Christmas that is bound to improve the performance of your website.
“Giving back little gifts of code for Christmas.” That’s the credo of 24 Pull Requests9. The idea is to get involved and contribute to the open-source projects we have benefited from this year — by improving documentation, fixing issues, improving code quality, etc.
What are other web designers and developers excited about these days? The collection of writings over on the Web Advent Calendar15 sheds some light on the question.
Here’s one for the content strategists amongst you: The 2016 Content Strategy Advent Calendar17 by GatherContent publishes a video a day until Christmas, in which content strategy experts share their advice, talk about their hot content topics and reveal their focus for 2017.
The PHP family shares their thoughts in a daily article over on 24 Days In December21. The articles cover a wide range of PHP-related topics: from security to performance to techniques, workflow, and tooling!
If you’re looking for something to put your skills to the test, then Advent Of Code23 is for you. Each day up to the 25th, it holds a new small programming puzzle for you to master.
The Perl 6 Advent Calendar25 shares something cool about Perl 6 every day. And if that’s not enough Perl wisdom for you yet, also check out Perl Advent26.
The advent calendar of the open-source machine emulator and virtualizer Qemu28 hosts a collection of Qemu disk images — from operating systems (old and new) to custom demos and neat algorithms — that you can run in the emulator.
AWS Advent30 explores all things related to Amazon Web Services. You’ll find a wide range of articles about security, deployment strategy, and general tips and techniques to be aware of when using Amazon Web Services.
Last but not least, sysadmins get their daily goodie this year, too, on Sysadvent32. In 25 articles, fellow sysadmins share strategies, tips, and tricks about system administration topics.
Are you following an advent calendar this year? Maybe it’s in French or Russian rather than English? Let us know about your favorites in the comments below!
Web components1 are an amazing new feature of the web, allowing developers to define their own custom HTML elements. When combined with a style guide, web components can create a component API2, which allows developers to stop copying and pasting code snippets and instead just use a DOM element. By using the shadow DOM, we can encapsulate the web component and not have to worry about specificity wars3 with any other style sheet on the page.
However, web components and style guides currently seem to be at odds with each other. On the one hand, style guides provide a set of rules and styles that are globally applied to the page and ensure consistency across the website. On the other hand, web components with the shadow DOM prevent any global styles from penetrating their encapsulation, thus preventing the style guide from affecting them.
So, how can the two co-exist, with global style guides continuing to provide consistency and styles, even to web components with the shadow DOM? Thankfully, there are solutions that work today, and more solutions to come, that enable global style guides to provide styling to web components. (For the remainder of this article, I will use the term “web components” to refer to custom elements with the shadow DOM.)
What Should A Global Style Guide Style In A Web Component?
Before discussing how to get a global style guide to style a web component, we should discuss what it should and should not try to style.
First of all, current best practices for web components4 state that a web component, including its styles, should be encapsulated, so that it does not depend on any external resources to function. This allows it to be used anywhere on or off the website, even when the style guide is not available.
Below is a simple log-in form web component that encapsulates all of its styles.
Note: Code examples are written in the version 1 specification for web components.
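Since the original listing isn’t reproduced here, the following is a sketch of what such a fully encapsulated component might look like. The markup, field labels, and element styles are illustrative; the .container and .footnote class names mirror the ones discussed below.

```html
<!-- login-form.html -->
<template id="login-form-template">
  <style>
    /* Every style the component needs lives inside its shadow DOM */
    .container { max-width: 300px; padding: 50px; border: 1px solid grey; }
    .footnote { text-align: center; }
    p { font-family: sans-serif; color: #333; }
    input { display: block; width: 100%; margin-bottom: 10px; }
  </style>
  <div class="container">
    <p>Log in to your account</p>
    <input type="email" placeholder="Email">
    <input type="password" placeholder="Password">
    <button>Log in</button>
    <p class="footnote">Forgot your password?</p>
  </div>
</template>
<script>
  const doc = (document._currentScript || document.currentScript).ownerDocument;
  const template = doc.querySelector('#login-form-template');

  customElements.define('login-form', class extends HTMLElement {
    constructor() {
      super();
      // Version 1 API: attach a shadow root and stamp the template into it.
      this.attachShadow({ mode: 'open' })
          .appendChild(document.importNode(template.content, true));
    }
  });
</script>
```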
However, fully encapsulating every web component would inevitably lead to a lot of duplicate CSS, especially when it comes to setting up the typography and styling of native elements. If a developer wants to use a paragraph, an anchor tag or an input field in their web component, it should be styled like the rest of the website.
If we fully encapsulate all of the styles that the web component needs, then the CSS for styling paragraphs, anchor tags, input fields and so on would be duplicated across all web components that use them. This would not only increase maintenance costs, but also lead to much larger download sizes for users.
Instead of encapsulating all of the styles, web components should only encapsulate their unique styles and then depend on a set of shared styles to handle styles for everything else. These shared styles would essentially become a kind of Normalize.css6, which web components could use to ensure that native elements are styled according to the style guide.
In the previous example, the log-in form web component would declare the styles for only its two unique classes: .container and .footnote. The rest of the styles would belong in the shared style sheet and would style the paragraphs, anchor tags, input fields and so on.
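As a rough illustration, the shared sheet would carry only the style-guide rules for native elements; every rule below is hypothetical:

```css
/* shared-styles.css — a Normalize.css-like base that components opt into */
p       { margin: 0 0 1em; font: 16px/1.5 sans-serif; color: #333; }
a       { color: #0066cc; text-decoration: none; }
a:hover { text-decoration: underline; }
input   { padding: 8px; border: 1px solid #ccc; border-radius: 3px; }
```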
In short, the style guide should not try to style the web component, but instead should provide a set of shared styles that web components can use to achieve a consistent look.
How Styling The Shadow DOM With External Style Sheets Used To Be Done
The initial specification for web components (known as version 0) allowed any external style sheet to penetrate the shadow DOM through use of the ::shadow or /deep/ CSS selectors. The use of ::shadow and /deep/ enabled you to have a style guide penetrate the shadow DOM and set up the shared styles, whether the web component wanted you to or not.
/* Style all p tags inside a web component's shadow DOM */
login-form::shadow p {
  color: red;
}
With the advent of the newest version of the web components specification (known as version 1), the authors have removed the capability7 of external style sheets to penetrate the shadow DOM, and they have provided no alternative. Instead, the philosophy has changed from using dragons to style web components8 to instead using bridges. In other words, web component authors should be in charge of what external style rules are allowed to style their component, rather than being forced to allow them.
Unfortunately, that philosophy hasn’t really caught up with the web just yet, which leaves us in a bit of a pickle. Luckily, a few solutions available today, and some coming in the not-so-distant future, will allow a shared style sheet to style a web component.
The only native way today to bring a style sheet into a web component is to use @import. Although this works, it’s an anti-pattern9. For web components, however, it’s an even bigger performance problem.
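In practice, that means pulling the shared sheet into the shadow DOM from inside the component’s template; the file path here is hypothetical:

```html
<template id="login-form-template">
  <style>
    /* Works today, but the sheet downloads in series and delays rendering */
    @import url('/css/shared-styles.css');

    /* Component-specific rules follow as usual */
    .container { max-width: 300px; }
  </style>
  <!-- Rest of template DOM -->
</template>
```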
Normally, @import is an anti-pattern because it downloads all style sheets in series, instead of in parallel, especially if they are nested. In our situation, downloading a single style sheet in series can’t be helped, so in theory it should be fine. But when I tested10 this in Chrome, the results showed that using @import caused the page to render up to a half second slower11 than when just embedding the styles directly12 into the web component.
Note: Due to differences in how the polyfill of HTML imports13 works compared to native HTML imports, WebPagetest.org14 can only be used to give reliable results in browsers that natively support HTML imports (i.e. Chrome).
In the end, @import is still an anti-pattern and can be a performance problem in web components. So, it’s not a great solution.
Because the problem with trying to provide shared styles to web components stems from using the shadow DOM, one way to avoid the problem entirely is to not use the shadow DOM.
By not using the shadow DOM, you will be creating custom elements16 instead of web components (see the aside below), the only difference being the lack of the shadow DOM and scoping. Your element will be subject to the styles of the page, but we already have to deal with that today, so it’s nothing that we don’t already know how to handle. Custom elements are fully supported by the webcomponentjs polyfill, which has great browser support17.
The greatest benefit of custom elements is that you can create a pattern library18 using them today, and you don’t have to wait until the problem of shared styling is solved. And because the only difference between web components and custom elements is the shadow DOM, you can always enable the shadow DOM in your custom elements once a solution for shared styling is available.
If you do decide to create custom elements, be aware of a few differences between custom elements and web components.
First, because styles for the custom element are subject to the page styles and vice versa, you will want to ensure that your selectors don’t cause any conflicts. If your pages already use a style guide, then leave the styles for the custom element in the style guide, and have the element output the expected DOM and class structure.
By leaving the styles in the style guide, you will create a smooth migration path for your developers, because they can continue to use the style guide as before, but then slowly migrate to using the new custom element when they are able to. Once everyone is using the custom element, you can move the styles to reside inside the element in order to keep them together and to allow for easier refactoring to web components later.
Secondly, be sure to encapsulate any JavaScript code inside an immediately invoked function expression (IIFE), so that you don’t leak any variables into the global scope. In addition to not providing CSS scoping, custom elements do not provide JavaScript scoping.
Thirdly, you’ll need to use the connectedCallback function of the custom element to add the template DOM to the element19. According to the web component specification, custom elements should not add children during the constructor function, so you’ll need to defer adding the DOM to the connectedCallback function.
Lastly, the <slot> element does not work outside of the shadow DOM. This means that you’ll have to use a different method to provide a way for developers to insert their content into your custom element. Usually, this entails just manipulating the DOM yourself to insert their content where you want it.
However, because there is no separation between the shadow DOM and the light DOM with custom elements, you’ll also have to be very careful not to style the inserted DOM, due to your elements’ cascading styles.
<!-- login-form.html -->
<template id="login-form-template">
  <style>
    login-form .container {
      max-width: 300px;
      padding: 50px;
      border: 1px solid grey;
    }
    login-form .footnote {
      text-align: center;
    }
  </style>
  <!-- Rest of template DOM -->
</template>
<script>
(function() {
  const doc = (document._currentScript || document.currentScript).ownerDocument;
  const template = doc.querySelector('#login-form-template');

  customElements.define('login-form', class extends HTMLElement {
    constructor() {
      super();
    }

    // Without the shadow DOM, we have to manipulate the custom element
    // after it has been inserted in the DOM.
    connectedCallback() {
      const temp = document.importNode(template.content, true);
      this.appendChild(temp);
    }
  });
})();
</script>
Aside: A custom element is still a web component for all intents and purposes. The term “web components” is used to describe four separate technologies23: custom elements, template tags, HTML imports and the shadow DOM.
Unfortunately, the term has been used to describe anything that uses any combination of the four technologies. This has led to a lot of confusion around what people mean when they say “web component.” Just as Rob Dodson discovered24, I have found it helpful to use different terms when talking about custom elements with and without the shadow DOM.
Most of the developers I’ve talked to tend to associate the term “web component” with a custom element that uses the shadow DOM. So, for the purposes of this article, I have created an artificial distinction between a web component and a custom element.
Another solution you can use today is a web component library, such as Polymer25, SkateJS26 or X-Tag27. These libraries help fill in the holes of today’s support and can also simplify the code necessary to create a web component. They also usually provide added features that make writing web components easier.
For example, Polymer lets you create a simple web component in just a few lines of JavaScript. An added benefit is that Polymer provides a solution for using the shadow DOM and a shared style sheet28. This means you can create web components today that share styles.
To do this, create what they call a style module, which contains all of the shared styles. It can either be a <style> tag with the shared styles inlined or a <link rel="import"> tag that points to a shared style sheet. In either case, include the styles in your web component with a <style include> tag, and then Polymer will parse the styles and add them as an inline <style> tag to your web component.
<!-- shared-styles.html -->
<dom-module id="shared-styles">
  <!-- Link to a shared style sheet -->
  <!-- <link rel="import" href="styleguide.css"> -->

  <!-- Inline the shared styles -->
  <template>
    <style>
      :host {
        color: #333333;
        font: 16px Arial, sans-serif;
      }

      /* Rest of shared CSS */
    </style>
  </template>
</dom-module>
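For completeness, here is a sketch of the consuming side, assuming the style module above is registered under the id shared-styles (Polymer 1.x syntax; the element and file names are illustrative):

```html
<!-- my-element.html (illustrative) -->
<link rel="import" href="shared-styles.html">

<dom-module id="my-element">
  <template>
    <!-- Polymer inlines the shared styles at this point -->
    <style include="shared-styles"></style>
    <div>Hello World</div>
  </template>
  <script>
    Polymer({ is: 'my-element' });
  </script>
</dom-module>
```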
The only downside to using a library is that it can delay the rendering time of your web components. This shouldn’t come as a surprise, because downloading and processing the library’s code takes time, and any web components on the page can’t begin rendering until the library is done processing.
Again, Polymer does nothing in particular to slow down rendering. Downloading the Polymer library, processing all of its awesome features, and creating all of the template bindings simply take time. It’s just the trade-off you’ll have to make to use a web component library.
If none of the current solutions work for you, don’t despair. If all goes well, within a few months to a few years, we’ll be able to use shared styles using a few different approaches.
To declare a custom property, use the custom property notation of --my-variable: value, and access the variable using property: var(--my-variable). A custom property cascades like any other CSS rule, so its value inherits from its parent and can be overridden. The only caveat to custom properties is that they must be declared inside a selector and cannot be declared on their own, unlike a preprocessor variable.
<style>
  /* Declare the custom property */
  html {
    --main-bg-color: red;
  }

  /* Use the custom property */
  input {
    background: var(--main-bg-color);
  }
</style>
One thing that makes custom properties so powerful is their ability to pierce the shadow DOM. This isn’t the same idea as the /deep/ and ::shadow selectors because they don’t force their way into the web component. Instead, the author of the web component must use the custom property in their CSS in order for it to be applied. This means that a web component author can create a custom property API that consumers of the web component can use to apply their own styles.
<template>
  <style>
    /* Declare the custom property API */
    :host {
      --main-bg-color: brown;
    }
    .one {
      color: var(--main-bg-color);
    }
  </style>
  <div>Hello World</div>
</template>
<script>
  /* Code to set up my-element web component */
</script>

<my-element></my-element>

<style>
  /* Override the custom property with own value */
  my-element {
    --main-bg-color: red;
  }
</style>
Browser support for custom properties is surprisingly good36. The only reason it is not a solution you can use today is that there is no working polyfill37 without Custom Elements version 1. The team behind the webcomponentsjs polyfill is currently working to add it38, but it has not yet been released in a built state, meaning that if you hash your assets for production, you can’t use it. From what I understand, it’s due for release sometime early next year.
Even so, custom properties are not a good method for sharing styles between web components. Because they can only be used to declare a single property value, the web component would still need to embed all of the styles of the style guide, albeit with their values substituted with variables.
Custom properties are more suited to theming options, rather than shared styles. Because of this, custom properties are not a viable solution to our problem.
In addition to custom properties, CSS is also getting @apply rules39. Apply rules are essentially mixins for the CSS world40. They are declared in a similar fashion to custom properties but can be used to declare groups of properties instead of just property values. Just like custom properties, their values can be inherited and overridden, and they must be declared inside a selector in order to work.
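As a rough sketch of the proposed syntax (the mixin name here is illustrative, and the syntax may still change before it ships):

```css
/* Declare a group of properties as a mixin */
html {
  --heading-theme: {
    font-family: Arial, sans-serif;
    color: #333333;
  };
}

/* Apply the whole group at once */
h1 {
  @apply --heading-theme;
}
```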
Browser support for @apply rules is basically non-existent. Chrome currently supports them41 behind a feature flag (which I couldn’t find), but that’s about it. There is also no working polyfill, for the same reason that there is no polyfill for custom properties. The webcomponentsjs polyfill team is also working to add @apply rules, along with custom properties, so both will be available once the new version is released.
Unlike custom properties, @apply rules are a much better solution for sharing styles. Because they can set up a group of property declarations, you can use them to set up the default styling for all native elements and then use them inside the web component. To do this, you would have to create an @apply rule for every native element.
However, to consume the styles, you would have to apply them manually to each native element, which would still duplicate the style declaration in every web component. While that’s better than embedding all of the styles, it isn’t very convenient either because it becomes boilerplate at the top of every web component, which you have to remember to add in order for styles to work properly.
Due to the need for extensive boilerplate, I don’t believe that @apply rules would be a good solution for sharing styles between web components. They are a great solution for theming, though.
According to the web component specification42, browsers ignore any <link rel="stylesheet"> tags in the shadow DOM, treating them just like they would inside of a document fragment. This prevented us from being able to link in any shared styles in our web components, which was unfortunate — that is, until a few months ago, when the Web Components Working Group proposed that <link rel="stylesheet"> tags should work in the shadow DOM43. After only a week of discussion, they all agreed that they should, and a few days later they added it to the HTML specification44.
If that sounds a little too quick for the working group to agree on a specification, that’s because it wasn’t a new proposal. Making link tags work in the shadow DOM was actually proposed at least three years ago45, but it was backlogged until they could ensure it wasn’t a problem for performance.
Being able to link in the shared styles is by far the most convenient method for sharing styles between web components. All you would have to do is create the link tag, and all native elements would be styled accordingly, without requiring any additional work.
Of course, the way browser makers implement the feature will determine whether this solution is viable. For this to work properly, link tags would need to be deduplicated, so that multiple web components requesting the same CSS file would cause only one HTTP request. The CSS would also need to be parsed only once, so that each instance of the web component would not have to recompute the shared styles, but would instead reuse the computed styles.
Chrome does both of these49 already. So, if all other browser makers implement it the same way, then link tags working in the shadow DOM would definitely solve the issue of how to share styles between web components.
You might find it hard to believe, since we haven’t even got it yet, but a link tag working in the shadow DOM is not a long-term solution50. Instead, it’s just a short-term solution to get us to the real solution: constructable style sheets.
Constructable style sheets51 are a proposal to allow for the creation of StyleSheet objects in JavaScript through a constructor function. The constructed style sheet could then be added to the shadow DOM through an API, which would allow the shadow DOM to use a set of shared styles.
Unfortunately, this is all I could gather from the proposal. I tried to find out more information about what constructable style sheets were by asking the Web Components Working Group52, but they redirected me to the W3C’s CSS Working Group’s mailing list53, where I asked again, but no one responded. I couldn’t even figure out how the proposal was progressing, because it hasn’t been updated in over two years54.
Even so, the Web Components Working Group uses it55 as the solution56 for sharing styles57 between web components. Hopefully, either the proposal will be updated or the Web Components Working Group will release more information about it and its adoption. Until then, the “long-term” solution seems like it won’t happen in the foreseeable future.
After months of research and testing, I am quite hopeful for the future. It is comforting to know that after years of not having a solution for sharing styles between web components, there are finally answers. Those answers might not be established for a few more years, but at least they are there.
If you want to use a shared style guide to style web components today, you can either forgo the shadow DOM and create custom elements instead, or use a web component library that polyfills support for sharing styles. Both solutions have their pros and cons, so use whichever works best for your project.
If you decide to wait a while before delving into web components, then in a few years we should have some great solutions for sharing the styles between them. So, keep checking back on how it’s progressing.
Keep in mind a few things if you decide to use custom elements or web components today.
Most importantly, the web component specification is still being actively developed, which means that things can and will change. Web components are still very much on the bleeding edge, so be prepared to stay on your toes as you develop with them.
If you decide to use the shadow DOM, know that it is quite slow58 and performs poorly59 in polyfilled browsers60. It was for this reason that Polymer’s developers created their shady DOM implementation and made it their default.
Lastly, the webcomponentsjs polyfill only supports the version 0 implementation of the shadow DOM and custom elements. A version 1 branch of the polyfill71 will support version 1, but it’s not yet released.
Recently, I decided to rebuild my personal website, because it was six years old and looked — politely speaking — a little bit “outdated.” The goal was to include some information about myself, a blog area, a list of my recent side projects, and upcoming events.
As I do client work from time to time, there was one thing I didn’t want to deal with: databases! Previously, I had built WordPress sites for everyone who wanted one. The programming part was usually fun for me, but the releases, the moving of databases between environments, and the actual publishing were always annoying. Cheap hosting providers offer only poor web interfaces for setting up MySQL databases, and uploading files over FTP was always the worst part. I didn’t want to deal with any of this for my personal website.
So the requirements I had for the redesign were:
An up-to-date technology stack based on JavaScript and frontend technologies.
A content management solution to edit content from anywhere.
A good performing site with fast results.
In this article I want to show you what I built and how my website surprisingly turned out to be my daily companion.
Publishing things on the web seems easy. Pick a content management system (CMS) that provides a WYSIWYG (What You See Is What You Get) editor for every page that’s needed, and all the editors can manage the content easily. That’s it, right?
After building several client websites, ranging from small cafés to growing startups, I figured out that the holy WYSIWYG editor is not always the silver bullet we’re all looking for. These interfaces aim to make building websites easy, but here’s the catch:
To build and edit the content of a website without constantly breaking it, you have to have intimate knowledge of HTML and at least understand a tiny bit of CSS. That’s not something you can expect from your editors.
I’ve seen horribly complex layouts built with WYSIWYG editors, and I can’t begin to name all the situations in which everything falls apart because the system is too fragile. These situations lead to fights and discomfort, with all parties blaming each other for something that was inevitable. I always tried to avoid these situations and to create comfortable, stable environments for editors, so as to avoid angry emails screaming, “Help! Everything is broken.”
I learned rather quickly that people rarely break things when I split all the needed website content into several chunks that relate to each other, without thinking about any particular presentation. In WordPress, this can be achieved using custom post types. Each custom post type can include several properties, each with its own easy-to-grasp text field. I buried the concept of thinking in pages completely.
My job was to connect the content pieces and build web pages out of these content blocks. This meant that editors were only able to do little, if any, visual changes on their websites. They were responsible for the content and only the content. Visual changes had to be done by me – not everyone could style the site, and we could avoid a fragile environment. This concept felt like a great trade-off and was usually well received.
Later, I discovered that what I was doing was defining a content model. In her excellent article “Content Modelling: A Master Skill2,” Rachel Lovinger defines a content model as follows:
A content model documents all the different types of content you will have for a given project. It contains detailed definitions of each content type’s elements and their relationships to each other.
Beginning with content modeling worked fine for most clients, except for one.
“Stefan, I’m not defining your database schema!”
The idea behind this particular project was to build a massive website that would generate a lot of organic traffic by providing tons of content, displayed in all variations across several different pages and places. I set up a meeting to discuss our strategy for approaching the project.
I wanted to define all the pages and content models that should be included. No matter what tiny widget or sidebar the client had in mind, I wanted it to be clearly defined. My goal was to create a solid content structure that would make it possible to provide an easy-to-use interface for the editors and reusable data that could be displayed in any thinkable format.
It turned out that the idea behind the project was not very clear, and I couldn’t get answers to all of my questions. The project lead didn’t understand that we should start with proper content modeling (not design and development). For him, this was just a ton of pages. Duplicated content and huge text areas for adding massive amounts of text didn’t seem to be a problem. In his mind, the questions I had about structure were technical details that he shouldn’t have to worry about. To make a long story short, I didn’t do the project.
The important thing is, content modeling is not about databases.
It’s about making your content accessible and future-proof. If you can’t define the needs for your content at project kick-off, it will be very hard, if not impossible, to reuse it later on.
Proper content modeling is the key to present and future websites.
It was clear that I wanted to follow good content modeling for my site as well. However, there was one more thing: I didn’t want to deal with a storage layer to build my new website, so I decided to use Contentful3, a headless CMS on which (full disclosure!) I’m currently working. “Headless” means that the service offers a web interface to manage the content in the cloud, and it provides an API that gives me my data back in JSON format. Choosing this CMS helped me be productive right away, as I had an API available within minutes and did not have to deal with any infrastructure setup. Contentful also provides a free plan4, which is perfect for small projects like my personal website.
An example query to get all blog posts looks like this:
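A sketch of the underlying Content Delivery API request; the space ID, access token, and content type ID are placeholders, not real values:

```javascript
// Sketch of the Content Delivery API request for all blog posts.
// The space ID, access token, and content type ID are placeholders.
const space = 'your_space_id';
const accessToken = 'your_token';
const postContentType = 'content_type_post_id';

const url = 'https://cdn.contentful.com/spaces/' + space +
  '/entries?access_token=' + accessToken +
  '&content_type=' + postContentType;

console.log(url);
// The JSON response can then be fetched with any HTTP client,
// e.g. fetch(url).then(res => res.json())
```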
A great thing about Contentful is that it excels at content modeling, which is exactly what I required. Using the provided web interface, I can define all the needed content pieces quickly. The definition of a particular content model in Contentful is called a content type. One feature worth pointing out is the ability to model relationships between content items. For example, I can easily connect an author with a blog post. This can result in structured data trees, which are perfect to reuse for various use cases.
So, I set up my content model without thinking about any pages I may want to build in the future.
The next step was to figure out what I wanted to do with this data. I asked a designer I knew, and he came up with an index page of the website with the following structure.
Now came the tricky part. So far, I didn’t have to deal with storage and databases, which was a great achievement for me. So, how can I build my website when I only have an API available?
My first approach was the do-it-yourself approach. I started writing a simple Node.js script which would retrieve the data and render some HTML out of it.
Rendering all the HTML files upfront fulfilled one of my main requirements. Static HTML can be served really fast.
So, let’s have a look at the script I used.
'use strict';

const contentful = require('contentful');
const template = require('lodash.template');
const fs = require('fs');

// create contentful client with particular credentials
const client = contentful.createClient({
  space: 'your_space_id',
  accessToken: 'your_token'
});

// cache templates to not read
// them over and over again
const TEMPLATES = {
  index: template(fs.readFileSync(`${__dirname}/templates/index.html`))
};

// fetch all the data
Promise.all([
  // get posts
  client.getEntries({content_type: 'content_type_post_id'}),
  // get events
  client.getEntries({content_type: 'content_type_event_id'}),
  // get projects
  client.getEntries({content_type: 'content_type_project_id'}),
  // get talks
  client.getEntries({content_type: 'content_type_talk_id'}),
  // get specific person
  client.getEntries({'sys.id': 'person_id'})
])
  .then(([posts, events, projects, talks, persons]) => {
    const renderedHTML = TEMPLATES.index({
      posts,
      events,
      projects,
      talks,
      person: persons.items[0]
    });

    fs.writeFileSync(`${__dirname}/build/index.html`, renderedHTML);
    console.log('Rendered HTML');
  })
  .catch(console.error);
This worked fine. I could build my desired website in a completely flexible way, making all the decisions about the file structure and functionality. Rendering different page types with completely different data sets was no problem at all. Everybody who has fought against the rules and structure of an existing CMS that ships with HTML rendering knows that complete freedom can be an excellent thing. Especially when the data model becomes more complex over time and includes many relations, flexibility pays off.
In this Node.js script, a Contentful SDK10 client is created and all the data is fetched using the client method getEntries. All provided methods of the client are promise-driven, which makes it easy to avoid deeply nested callbacks. For templating, I decided to use lodash’s templating engine. Finally, for file reading and writing, Node.js offers the native fs module, which then is used to read the templates and write the rendered HTML.
However, there was one downside to this approach: it was very bare-bones. Even though this method was completely flexible, it felt like reinventing the wheel. What I was building was basically a static site generator, and there are plenty of those out there already. It was time to start all over again.
Well-known static site generators, such as Jekyll or Middleman, usually deal with Markdown files that are rendered to HTML. Editors work with these files, and the website is built using a CLI command. This approach failed one of my initial requirements, though: I wanted to be able to edit the site wherever I was, without relying on files sitting on my private computer.
My first idea was to render these Markdown files using the API. Although this would have worked, it didn’t feel right. Rendering Markdown files only to transform them to HTML later was still a two-step process that offered no big benefit over my initial solution.
Fortunately, there are Contentful integrations for static site generators such as Metalsmith11 and Middleman12. I decided on Metalsmith for this project, as it’s written in Node.js and I didn’t want to bring in a Ruby dependency.
Metalsmith transforms files from a source folder and renders them into a destination folder. These files don’t necessarily have to be Markdown files. You can also use it for transpiling Sass or optimizing your images. There are no limits, and it is really flexible.
Using the Contentful integration, I was able to define some source files that are treated as configuration files and can fetch everything needed from the API.
---
title: Blog
contentful:
  content_type: content_type_id
  entry_filename_pattern: ${ fields.slug }
  entry_template: article.html
  order: '-fields.date'
  filter:
    include: 5
layout: blog.html
description: Recent articles by Stefan Judis.
---
This example configuration renders the blog post area into a parent blog.html file, including the response of the API request, and also renders several child pages using the article.html template. File names for the child pages are defined via entry_filename_pattern.
As you can see, with something like this I can build up my pages easily. This setup worked perfectly to make all the pages depend on the API.
The only missing part was connecting the site to the CMS service so that it re-renders whenever any content is edited. The solution to this problem: webhooks, which you might already be familiar with if you use services like GitHub.
Webhooks are requests made by a software-as-a-service provider to a previously defined endpoint, notifying you that something has happened. GitHub, for example, can ping you back when someone opens a pull request in one of your repos. Regarding content management, we can apply the same principle: whenever something happens to the content, ping an endpoint and make a particular environment react to it. In our case, this means re-rendering the HTML using Metalsmith.
To accept the webhooks, I also went with a JavaScript solution. My hosting provider of choice (Uberspace13) makes it possible to install Node.js and use JavaScript on the server side.
const http = require('http');
const exec = require('child_process').exec;

const server = http.createServer((req, res) => {
  res.setHeader('Content-Type', 'text/plain');

  // check for secret header
  // to not open up this endpoint for everybody
  if (req.headers.secret === 'YOUR_SECRET') {
    res.end('ok');

    // wait for the CDN to
    // invalidate the data
    setTimeout(() => {
      // execute command
      exec('npm start', { cwd: __dirname }, (error) => {
        if (error) {
          return console.log(error);
        }

        console.log('Rebuilt success');
      });
    }, 1000 * 120);
  } else {
    res.end('Not allowed');
  }
});

console.log('Started server at 8000');
server.listen(8000);
This script starts a simple HTTP server on port 8000. It checks incoming requests for a proper secret header to make sure that the request is the webhook from Contentful. If the request is confirmed as the webhook, the predefined command npm start is executed to re-render all the HTML pages. You might wonder why there is a timeout in place: it pauses for a moment until the data in the cloud has been invalidated, because the stored data is served from a CDN.
Depending on your environment, this HTTP server may not be accessible from the internet. My site is served by an Apache server, so I needed to add an internal rewrite rule to make the running Node.js server accessible from the internet.
Being on the road is an important part of my life, so it was necessary to have information, such as the location of a given venue or which hotel I had booked, right at my fingertips – usually stored in a Google spreadsheet. The information was thus spread over a spreadsheet, several emails, my calendar, and my website.
I had to admit, I created a lot of data duplication in my daily flow.
I dreamed of a single source of truth (preferably on my phone) to quickly see what events were coming up, but also to get additional information about hotels and venues. The events listed on my website didn’t include all of this information at that point, but it is really easy to add new fields to a content type in Contentful. So, I added the needed fields to the “Event” content type.
Putting this information into my website CMS was never my intention, as it shouldn’t be displayed online, but having it accessible via an API made me realize that I could now do completely different things with this data.
Building apps for mobile has been a topic for years now, and there are several approaches to this. Progressive Web Apps (PWA) are an especially hot topic these days. Using Service Workers16 and a Web App Manifest17, it is possible to build complete app-like experiences going from a home screen icon to managed offline behavior using web technologies.
There is one downside to mention: Progressive Web Apps are on the rise, but they are not quite there yet. Service Workers, for example, are not supported in Safari today and are only “under consideration” on Apple’s side18 so far. This was a deal-breaker for me, as I wanted to have an offline-capable app on iPhones, too.
So I looked for alternatives. A friend of mine was really into NativeScript and kept telling me about this fairly new technology. NativeScript is an open source framework for building truly native mobile apps with JavaScript, so I decided to give it a try.
The setup of NativeScript takes a while because you have to install a lot of things to develop for native mobile environments. You’ll be guided through the installation process when you install the NativeScript command line tool for the first time using npm install nativescript -g.
Then, you can use scaffolding commands to set up new projects:
tns create MyNewApp
However, this is not what I did. While scanning the documentation, I came across a sample grocery-management app19 built with NativeScript. So I took this app, dug into the code, and modified it step by step to fit my needs.
I don’t want to dive too deep into the process, but building a good-looking list with all the information I wanted didn’t take long.
NativeScript plays really well with Angular 2, which I didn’t want to try this time, as discovering NativeScript itself felt like enough. In NativeScript, you have to write “views.” Each view consists of an XML file defining the base layout, plus optional JavaScript and CSS. All of these are defined in one folder per view.
Rendering a simple list can be achieved with an XML template like this:
List.xml
<!-- call JavaScript function when ready -->
<Page loaded="loaded">
  <ActionBar title="All Travels" />

  <!-- make it scrollable when going too big -->
  <ScrollView>
    <!-- iterate over the entries in context -->
    <ListView items="{{ entries }}">
      <ListView.itemTemplate>
        <Label text="{{ fields.name }}" textWrap="true"/>
      </ListView.itemTemplate>
    </ListView>
  </ScrollView>
</Page>
The first thing happening here is the definition of a Page element. Inside this page, I defined an ActionBar to give the app the classic Android look, as well as a proper headline. Building things for native environments can be a bit tricky sometimes; for example, to achieve working scroll behavior, you have to use a ScrollView. The last thing is to simply iterate over my events using a ListView. Overall, it felt pretty straightforward!
But where do the entries used in the view come from? It turns out that there is a shared context object that can be used for this. When reading the XML of the view, you may have noticed that the page has a loaded attribute set. By setting this attribute, I tell the view to call a particular JavaScript function when the page has loaded.
This JavaScript function is defined in the corresponding JS file. It is made accessible by simply exporting it using exports.something. To set up the data binding, all we have to do is assign a new Observable to the page’s bindingContext property. Observables in NativeScript emit propertyChange events, which are needed to react to data changes inside the views; you don’t have to worry about this, though, as it works out of the box.
List.js
const fetchModule = require('fetch');
const Observable = require('data/observable').Observable;

const context = new Observable({ entries: null });

// export loaded to be called from
// List.xml when everything is loaded
exports.loaded = (args) => {
  const page = args.object;
  page.bindingContext = context;

  // `config` holds the Contentful space ID and CDA token
  fetchModule.fetch(
    `https://cdn.contentful.com/spaces/${config.space}/entries?access_token=${config.cda.token}&content_type=event&order=fields.start`,
    {
      method: 'GET',
      headers: {
        'Content-Type': 'application/json'
      }
    }
  )
    .then(response => response.json())
    .then(response => context.set('entries', response.items));
};
The last thing is to fetch the data and set it to the context. This can be done by using the NativeScript fetch module. Here, you can see the result.
So, as you can see — building a simple list using NativeScript is not really hard. I later extended the app with another view as well as additional functionality to open given addresses in Google Maps and web views to look at the event websites.
One thing to point out: NativeScript is still pretty new, which means that the plugins found on npm usually don't have a lot of downloads or stars on GitHub. This irritated me at first, but the several native components I used (nativescript-floatingactionbutton, nativescript-advanced-webview and nativescript-pulltorefresh) all worked perfectly fine and helped me achieve a native experience.
You can see the improved result here:
The more functionality I put into this app, the more I liked it and the more I used it. The best part is that I could get rid of data duplication, managing the data all in one place while staying flexible enough to display it for various use cases.
Pages Are Yesterday: Long Live Structured Content! Link
Building this app showed me once more that the principle of having data in page format is a thing of the past. We don’t know where our data will go — we have to be ready for an unlimited number of use cases.
Looking back, what I achieved is:
Having a content management system in the cloud
Not having to deal with database maintenance
A complete JavaScript technology stack
Having an efficient static website
Having an Android app to access my content every time and everywhere
And the most important part:
Having my content structured and accessible helped me to improve my daily life. Link
This use case might look trivial to you right now, but when you think of the products you build every day, there are always more use cases for your content on different platforms. Today, we accept that mobile devices are finally overtaking old-school desktop environments, but platforms like cars, watches and even fridges are already waiting for their spotlight. I can't even begin to imagine the use cases that will come.
So, let's try to be ready and put structured content at the center, because in the end it's not about database schemas; it's about building for the future.
Have you ever wondered what it takes to create a SpriteKit game? Do buttons seem like a bigger task than they should be? Ever wonder how to persist settings in a game? Since the introduction of SpriteKit, game-making on iOS has never been easier. In part three of this three-part series, we will finish up our RainCat game and complete our introduction to SpriteKit.
If you missed out on the previous lesson, you can catch up by getting the code on GitHub. Remember that this tutorial requires Xcode 8 and Swift 3.
This is lesson three in our RainCat journey. In the previous lesson, we had a long day going through some simple animations, cat behaviors, quick sound effects and background music.
We need a way to keep score. To do this, we can create a heads-up display (HUD). This will be pretty simple; it will be an SKNode that contains the score and a button to quit the game. For now, we will just focus on the score. The font we will be using is Pixel Digivolve, which you can get at Dafont.com. As with using images or sounds that are not yours, read the font's license before using it. This one states that it is free for personal use, but if you really like the font, you can donate to the author from the page. You can't always make everything yourself, so giving back to those who have helped you along the way is nice.
Next, we need to add the custom font to the project. This process can be tricky the first time.
Download and move the font into the project folder, under a “Fonts” folder. We’ve done this a few times in the previous lessons, so we’ll go through this process a little more quickly. Add a group named Fonts to the project, and add the Pixel digivolve.otf file.
Now comes the tricky part. If you miss this part, you probably won’t be able to use the font. We need to add it to our Info.plist file. This file is in the left pane of Xcode. Click it and you will see the property list (or plist). Right-click on the list, and click “Add Row.”
When the new row comes up, enter in the following:
Fonts provided by application
Then, under Item 0, we need to add our font’s name. The plist should look like the following:
The font should be ready to use! We should do a quick test to make sure it works as intended. Move to GameScene.swift, and in sceneDidLoad add the following code at the top of the function:
If it works, then you've done everything correctly. If not, then something is wrong. Code With Chris has a more in-depth troubleshooting guide, but note that it is for an older version of Swift, so you will have to make minor tweaks to bring it up to Swift 3.
Now that we can load in custom fonts, we can start on our HUD. Delete the “Hello World” label, because we only used it to make sure our font loads. The HUD will be an SKNode, acting like a container for our HUD elements. This is the same process we followed when creating the background node in lesson one.
Create the HudNode.swift file using the usual methods, and enter the following code:
import SpriteKit

class HudNode : SKNode {
  private let scoreKey = "RAINCAT_HIGHSCORE"
  private let scoreNode = SKLabelNode(fontNamed: "PixelDigivolve")
  private(set) var score : Int = 0
  private var highScore : Int = 0
  private var showingHighScore = false

  /// Set up HUD here.
  public func setup(size: CGSize) {
    let defaults = UserDefaults.standard
    highScore = defaults.integer(forKey: scoreKey)

    scoreNode.text = "\(score)"
    scoreNode.fontSize = 70
    scoreNode.position = CGPoint(x: size.width / 2, y: size.height - 100)
    scoreNode.zPosition = 1

    addChild(scoreNode)
  }

  /// Add point.
  /// - Increments the score.
  /// - Saves to user defaults.
  /// - If a high score is achieved, then enlarge the scoreNode and update the color.
  public func addPoint() {
    score += 1
    updateScoreboard()

    if score > highScore {
      let defaults = UserDefaults.standard
      defaults.set(score, forKey: scoreKey)

      if !showingHighScore {
        showingHighScore = true
        scoreNode.run(SKAction.scale(to: 1.5, duration: 0.25))
        scoreNode.fontColor = SKColor(red: 0.99, green: 0.92, blue: 0.55, alpha: 1.0)
      }
    }
  }

  /// Reset points.
  /// - Sets score to zero.
  /// - Updates score label.
  /// - Resets color and size to default values.
  public func resetPoints() {
    score = 0
    updateScoreboard()

    if showingHighScore {
      showingHighScore = false
      scoreNode.run(SKAction.scale(to: 1.0, duration: 0.25))
      scoreNode.fontColor = SKColor.white
    }
  }

  /// Updates the score label to show the current score.
  private func updateScoreboard() {
    scoreNode.text = "\(score)"
  }
}
In the code, we have five variables that pertain to the scoreboard. The first is the key we use to persist the high score. Next is the actual SKLabelNode, which we use to present the score. Then come the variable holding the current score and the variable holding the best score. The last variable is a boolean that tells us whether we are currently presenting the high score (we use it to establish whether we need to run an SKAction to increase the scale of the scoreboard and colorize it to the yellow of the floor).
The first function, setup(size:), is there just to set everything up. We set up the SKLabelNode the same way we did earlier. The SKNode class does not have any size properties by default, so we need to create a way to set a size to position our scoreNode label. We're also fetching the current high score from UserDefaults. This is a quick and easy way to save small chunks of data, but it isn't secure. Because we're not worried about security for this example, UserDefaults is perfectly fine.
In our addPoint(), we’re incrementing the current score variable and checking whether the user has gotten a high score. If they have a high score, then we save that score to UserDefaults and check whether we are currently showing the best score. If the user has achieved a high score, we can animate the size and color of scoreNode.
In the resetPoints() function, we set the current score to 0. We then need to check whether we were showing the high score, and reset the size and color to the default values if needed.
Finally, we have a small function named updateScoreboard. This is an internal function that sets scoreNode's text to the current score. It is called in both addPoint() and resetPoints().
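Stripped of the SpriteKit specifics, the scoring logic above is a small state machine: increment, persist new bests, and track whether the "high score" visual state should be shown. Here is a language-agnostic sketch in plain JavaScript (the `store` object is a hypothetical stand-in for UserDefaults; the names are illustrative, not the tutorial's API):

```javascript
// Sketch of the HUD scoring logic: increment, persist new high scores,
// and track whether the "high score" visual state should be shown.
// "store" stands in for UserDefaults (an assumption for this sketch).
function createScoreboard(store) {
  return {
    score: 0,
    highScore: store.RAINCAT_HIGHSCORE || 0,
    showingHighScore: false,

    addPoint() {
      this.score += 1;
      if (this.score > this.highScore) {
        this.highScore = this.score;
        store.RAINCAT_HIGHSCORE = this.score; // persist the new best
        this.showingHighScore = true;         // would trigger the scale/color animation
      }
    },

    resetPoints() {
      this.score = 0;
      this.showingHighScore = false;          // back to default size and color
    }
  };
}
```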
We need to test whether our HUD is working correctly. Move over to GameScene.swift, and add the following line below the foodNode variable at the top of the file:
private let hudNode = HudNode()
Add the following two lines in the sceneDidLoad() function, near the top:
hudNode.setup(size: size)
addChild(hudNode)
Then, in the spawnCat() function, reset the points in case the cat has fallen off the screen. Add the following line after adding the cat sprite to the scene:
hudNode.resetPoints()
Next, in the handleCatCollision(contact:) function, we need to reset the score again when the cat is hit by rain. In the switch statement at the end of the function — when the other body is a RainDropCategory — add the following line:
hudNode.resetPoints()
Finally, we need to tell the scoreboard when the user has earned points. At the end of the file, in handleFoodHit(contact:), find the following lines:
//TODO increment points
print("fed cat")
And replace them with this:
hudNode.addPoint()
Voilà!
You should see the HUD in action. Run around and collect some food. The first time you collect food, you should see the score turn yellow and grow in scale. When you see this happen, let the cat get hit. If the score resets, then you’ll know you are on the right track!
That’s right, we are moving to another scene! In fact, when completed, this will be the first screen of our app. Before you do anything else, open up Constants.swift and add the following line to the bottom of the file — we will be using it to retrieve and persist the high score:
let ScoreKey = "RAINCAT_HIGHSCORE"
Create the new scene, place it under the “Scenes” folder, and call it MenuScene.swift. Enter the following code in the MenuScene.swift file:
import SpriteKit

class MenuScene : SKScene {
  let startButtonTexture = SKTexture(imageNamed: "button_start")
  let startButtonPressedTexture = SKTexture(imageNamed: "button_start_pressed")
  let soundButtonTexture = SKTexture(imageNamed: "speaker_on")
  let soundButtonTextureOff = SKTexture(imageNamed: "speaker_off")

  let logoSprite = SKSpriteNode(imageNamed: "logo")
  var startButton : SKSpriteNode! = nil
  var soundButton : SKSpriteNode! = nil

  let highScoreNode = SKLabelNode(fontNamed: "PixelDigivolve")

  var selectedButton : SKSpriteNode?

  override func sceneDidLoad() {
    backgroundColor = SKColor(red: 0.30, green: 0.81, blue: 0.89, alpha: 1.0)

    //Set up logo - sprite initialized earlier
    logoSprite.position = CGPoint(x: size.width / 2, y: size.height / 2 + 100)
    addChild(logoSprite)

    //Set up start button
    startButton = SKSpriteNode(texture: startButtonTexture)
    startButton.position = CGPoint(x: size.width / 2, y: size.height / 2 - startButton.size.height / 2)
    addChild(startButton)

    let edgeMargin : CGFloat = 25

    //Set up sound button
    soundButton = SKSpriteNode(texture: soundButtonTexture)
    soundButton.position = CGPoint(x: size.width - soundButton.size.width / 2 - edgeMargin, y: soundButton.size.height / 2 + edgeMargin)
    addChild(soundButton)

    //Set up high-score node
    let defaults = UserDefaults.standard
    let highScore = defaults.integer(forKey: ScoreKey)

    highScoreNode.text = "\(highScore)"
    highScoreNode.fontSize = 90
    highScoreNode.verticalAlignmentMode = .top
    highScoreNode.position = CGPoint(x: size.width / 2, y: startButton.position.y - startButton.size.height / 2 - 50)
    highScoreNode.zPosition = 1

    addChild(highScoreNode)
  }
}
Because this scene is relatively simple, we won't be creating any special classes. Our scene will consist of two buttons. These could be (and possibly deserve to be) their own subclass of SKSpriteNode, but because they are different enough from each other, we will not need to create new classes for them. This is an important tip for when you build your own game: You need to be able to determine where to stop and refactor code when things get complex. Once you've added more than three or four buttons to a game, it might be time to stop and refactor the menu button code into its own class.
The code above isn't doing anything special; it is setting the positions of four sprites. We are also setting the scene's background color, so that the whole background is the correct color. A nice tool for generating Xcode color code from HEX strings is UI Color. The code above also sets the textures for our button states. The button to start the game has a normal state and a pressed state, whereas the sound button is a toggle. To simplify things for the toggle, we will change the alpha value of the sound button when the user presses it. We are also pulling and setting the high-score SKLabelNode.
Our MenuScene is looking pretty good. Now we need to show the scene when the app loads. Move to GameViewController.swift and find the following line:
let sceneNode = GameScene(size: view.frame.size)
Replace it with this:
let sceneNode = MenuScene(size: view.frame.size)
This small change will load MenuScene by default, instead of GameScene.
Buttons can be tricky in SpriteKit. Plenty of third-party options are available (I even made one myself), but in theory you only need to know the three touch methods:
touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?)
touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?)
touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?)
We covered this briefly when updating the umbrella, but now we need to know the following: which button was touched, whether the user is still touching it, and whether they released the touch inside that button. This is where our selectedButton variable comes into play. When a touch begins, we capture the button the user started pressing in that variable. If they drag outside the button, we can handle this and give it the appropriate texture. When they release the touch, we can then see whether they are still touching inside the button. If they are, we apply the associated action to it. Add the following lines to the bottom of MenuScene.swift:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
  if let touch = touches.first {
    if selectedButton != nil {
      handleStartButtonHover(isHovering: false)
      handleSoundButtonHover(isHovering: false)
    }

    // Check which button was clicked (if any)
    if startButton.contains(touch.location(in: self)) {
      selectedButton = startButton
      handleStartButtonHover(isHovering: true)
    } else if soundButton.contains(touch.location(in: self)) {
      selectedButton = soundButton
      handleSoundButtonHover(isHovering: true)
    }
  }
}

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
  if let touch = touches.first {
    // Check which button was clicked (if any)
    if selectedButton == startButton {
      handleStartButtonHover(isHovering: (startButton.contains(touch.location(in: self))))
    } else if selectedButton == soundButton {
      handleSoundButtonHover(isHovering: (soundButton.contains(touch.location(in: self))))
    }
  }
}

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
  if let touch = touches.first {
    if selectedButton == startButton {
      // Start button clicked
      handleStartButtonHover(isHovering: false)

      if (startButton.contains(touch.location(in: self))) {
        handleStartButtonClick()
      }
    } else if selectedButton == soundButton {
      // Sound button clicked
      handleSoundButtonHover(isHovering: false)

      if (soundButton.contains(touch.location(in: self))) {
        handleSoundButtonClick()
      }
    }
  }

  selectedButton = nil
}

/// Handles start button hover behavior
func handleStartButtonHover(isHovering : Bool) {
  if isHovering {
    startButton.texture = startButtonPressedTexture
  } else {
    startButton.texture = startButtonTexture
  }
}

/// Handles sound button hover behavior
func handleSoundButtonHover(isHovering : Bool) {
  if isHovering {
    soundButton.alpha = 0.5
  } else {
    soundButton.alpha = 1.0
  }
}

/// Stubbed out start button on click method
func handleStartButtonClick() {
  print("start clicked")
}

/// Stubbed out sound button on click method
func handleSoundButtonClick() {
  print("sound clicked")
}
This is simple button handling for our two buttons. In touchesBegan(_:with:), we start off by checking whether we have a currently selected button. If we do, we reset the state of that button to unpressed. Then, we check whether either button is pressed. If one is, it shows the highlighted state for the button. Finally, we set selectedButton to that button for use in the other two methods.
In touchesMoved(_:with:), we check which button was originally touched. Then, we check whether the current touch is still within the bounds of selectedButton, and we update the highlighted state from there. The startButton's highlighted state changes the texture to the pressed-state texture, whereas the soundButton's highlighted state sets the alpha value of the sprite to 50%.
Finally, in touchesEnded(_:with:), we check again which button is selected, if any, and then whether the touch is still within the bounds of the button. If all cases are satisfied, we call handleStartButtonClick() or handleSoundButtonClick() for the correct button.
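The press/drag/release pattern above is worth internalizing, because it applies to any homemade button. Here it is reduced to a language-agnostic sketch in plain JavaScript (the button objects and hit-testing are simplified stand-ins, not SpriteKit API):

```javascript
// Sketch of the press/drag/release button state machine described above.
// Each "button" is any object with contains(point), setHighlighted(bool)
// and onClick() — simplified stand-ins for the SpriteKit sprites.
function makeButtonHandler(buttons) {
  let selectedButton = null; // the button captured on touch-down

  return {
    touchesBegan(point) {
      selectedButton = buttons.find(b => b.contains(point)) || null;
      if (selectedButton) selectedButton.setHighlighted(true);
    },
    touchesMoved(point) {
      // Keep the highlight in sync with whether the finger is still inside.
      if (selectedButton) {
        selectedButton.setHighlighted(selectedButton.contains(point));
      }
    },
    touchesEnded(point) {
      // Only fire the action if the touch ends inside the originally-pressed button.
      if (selectedButton) {
        selectedButton.setHighlighted(false);
        if (selectedButton.contains(point)) selectedButton.onClick();
      }
      selectedButton = null;
    }
  };
}
```

Releasing outside the button cancels the click, which is the same forgiving behavior users expect from UIKit buttons.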
Now that we have the basic button behavior down, we need an event to trigger when the buttons are clicked. The easier button to implement is startButton: on click, we only need to present the GameScene. Update handleStartButtonClick() in MenuScene.swift to the following code:
If you run the app now and press the button, the game will start!
Now we need to implement the mute toggle. We already have a sound manager, but we need to be able to tell it whether muting is on or off. In Constants.swift, we need to add a key to persist when muting is on. Add the following line:
let MuteKey = "RAINCAT_MUTED"
We will use this to save a boolean value to UserDefaults. Now that this is set up, we can move into SoundManager.swift. This is where we will check and set UserDefaults to see whether muting is on or off. At the top of the file, under the trackPosition variable, add the following line:
private(set) var isMuted = false
This is the variable that the main menu (and anything else that will play sound) checks to determine whether sound is allowed. We initialize it as false, but now we need to check UserDefaults to see what the user wants. Replace the init() function with the following:
private override init() {
  //This is private, so you can only have one Sound Manager ever.
  trackPosition = Int(arc4random_uniform(UInt32(SoundManager.tracks.count)))

  let defaults = UserDefaults.standard
  isMuted = defaults.bool(forKey: MuteKey)
}
Now that we have a default value for isMuted, we need the ability to change it. Add the following code to the bottom of SoundManager.swift:
This method toggles our muted variable and updates UserDefaults. If the new value is unmuted, playback of the music will begin; otherwise, we will stop the current track from playing. After this, we need to edit the if statement in startPlaying().
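The toggle just described follows a common pattern: flip the flag, persist it, then start or stop playback accordingly. A rough plain-JavaScript sketch of that pattern (the `store` object is a hypothetical stand-in for UserDefaults, and the `playing` boolean stands in for the real playback calls; this is not the tutorial's Swift code):

```javascript
// Sketch of a mute toggle that persists its state.
// "store" stands in for UserDefaults; "playing" stands in for the
// real start/stop playback calls (assumptions for this sketch).
function createSoundManager(store) {
  return {
    isMuted: store.RAINCAT_MUTED === true, // restore the saved preference
    playing: false,

    toggleMute() {
      this.isMuted = !this.isMuted;
      store.RAINCAT_MUTED = this.isMuted;  // persist the new preference
      if (this.isMuted) {
        this.playing = false;              // stop the current track
      } else {
        this.playing = true;               // resume playback
      }
    }
  };
}
```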
Find the following line:
if audioPlayer == nil || audioPlayer?.isPlaying == false {
This toggles the sound in SoundManager, checks the result and then appropriately sets the texture to show the user whether the sound is muted or not. We are almost done! We only need to set the initial texture of the button on launch. In sceneDidLoad(), find the following line:
Now that the music is hooked up, we can move to CatSprite.swift to disable the cat's meowing when muting is on. In hitByRain(), we can add the following if statement after removing the walking action:
if SoundManager.sharedInstance.isMuted { return }
This statement returns early when the user has muted the app, so we skip the currentRainHits, maxRainHits and meowing sound effects entirely.
After all of that, now it is time to try out our mute button. Run the app and verify whether it is playing and muting sounds appropriately. Mute the sound, close the app, and reopen it. Make sure that the mute setting persists. Note that if you just mute and rerun the app from Xcode, you might not have given enough time for UserDefaults to save. Play the game, and make sure the cat never meows when you are muted.
Now that we have the first type of button for the main menu, we can get into some tricky business by adding the quit button to our game scene. Some interesting interactions can come up with our style of game; currently, the umbrella will move to wherever the user touches or moves their touch. Obviously, the umbrella moving to the quit button when the user is attempting to exit the game is a pretty poor user experience, so we will attempt to stop this from happening.
The quit button we are implementing will mimic the start game button that we added earlier, with much of the process staying the same. The change will be in how we handle touches. Get your quit_button and quit_button_pressed assets into the Assets.xcassets file, and add the following code to the HudNode.swift file:
private var quitButton : SKSpriteNode!
private let quitButtonTexture = SKTexture(imageNamed: "quit_button")
private let quitButtonPressedTexture = SKTexture(imageNamed: "quit_button_pressed")
This will handle our quitButton reference, along with the textures that we will set for the button states. To ensure that we don’t inadvertently update the umbrella while trying to quit, we need a variable that tells the HUD (and the game scene) that we are interacting with the quit button and not the umbrella. Add the following code below the showingHighScore boolean variable:
private(set) var quitButtonPressed = false
Again, this is a variable that only the HudNode can set but that other classes can check. Now that our variables are set up, we can add in the button to the HUD. Add the following code to the setup(size:) function:
The code above will set the quit button with the texture of our non-pressed state. We’re also setting the position to the upper-right corner and setting the zPosition to a high number in order to force it to always draw on top. If you run the game now, it will show up in GameScene, but it will not be clickable yet.
Now that the button has been positioned, we need to be able to interact with it. Right now, the only place where we have interaction in GameScene is when we are interacting with umbrellaSprite. In our example, the HUD will have priority over the umbrella, so that users don’t have to move the umbrella out of the way in order to exit. We can create the same functions in HudNode.swift to mimic the touch functionality in GameScene.swift. Add the following code to HudNode.swift:
func touchBeganAtPoint(point: CGPoint) {
  let containsPoint = quitButton.contains(point)

  if quitButtonPressed && !containsPoint {
    //Cancel the last click
    quitButtonPressed = false
    quitButton.texture = quitButtonTexture
  } else if containsPoint {
    quitButton.texture = quitButtonPressedTexture
    quitButtonPressed = true
  }
}

func touchMovedToPoint(point: CGPoint) {
  if quitButtonPressed {
    if quitButton.contains(point) {
      quitButton.texture = quitButtonPressedTexture
    } else {
      quitButton.texture = quitButtonTexture
    }
  }
}

func touchEndedAtPoint(point: CGPoint) {
  if quitButton.contains(point) {
    //TODO tell the gamescene to quit the game
  }

  quitButton.texture = quitButtonTexture
}
The code above is a lot like the code that we created for MenuScene. The difference is that there is only one button to keep track of, so we can handle everything within these touch methods. Also, because we will know the location of the touch in GameScene, we can just check whether our button contains the touch point.
Move over to GameScene.swift, and replace the touchesBegan(_:with:) and touchesMoved(_:with:) methods with the following code:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
  let touchPoint = touches.first?.location(in: self)

  if let point = touchPoint {
    hudNode.touchBeganAtPoint(point: point)

    if !hudNode.quitButtonPressed {
      umbrellaNode.setDestination(destination: point)
    }
  }
}

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
  let touchPoint = touches.first?.location(in: self)

  if let point = touchPoint {
    hudNode.touchMovedToPoint(point: point)

    if !hudNode.quitButtonPressed {
      umbrellaNode.setDestination(destination: point)
    }
  }
}

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
  let touchPoint = touches.first?.location(in: self)

  if let point = touchPoint {
    hudNode.touchEndedAtPoint(point: point)
  }
}
Here, each method handles everything in pretty much the same way. We're telling the HUD that the user has interacted with the scene. Then, we check whether the quit button is currently capturing the touches. If it is not, then we move the umbrella. We've also added the touchesEnded(_:with:) function to handle the end of the click for the quit button, but we are still not using it for umbrellaSprite.
Now that we have a button, we need a way to have it affect GameScene. Add the following line to the top of HudNode.swift:
var quitButtonAction : (() -> ())?
This is a generic closure that has no input and no output. We will set it with code in the GameScene.swift file and call it when we click the button in HudNode.swift. Then, we can replace the TODO in the touchEndedAtPoint(point:) function we created earlier with this:
if quitButton.contains(point) && quitButtonAction != nil {
  quitButtonAction!()
}
Now, if we set the quitButtonAction closure, it will be called from this point.
To set up the quitButtonAction closure, we need to move over to GameScene.swift. In sceneDidLoad(), we can replace our HUD setup with the following code:
Run the app, press play, and then press quit. If you are back at the main menu, then your quit button is working as intended. In the closure that we created, we initialized a transition to the MenuScene, and we set this closure on the HUD node to run when the quit button is clicked. Another important line here is where we set quitButtonAction to nil. The reason is that a retain cycle would otherwise occur: the scene holds a reference to the HUD, while the HUD holds a reference to the scene. Because each object references the other, neither will ever be deallocated. In this case, every time we entered and left GameScene, another instance of it would be created and never released. This is bad for performance, and the app would eventually run out of memory. There are a number of ways to avoid this, but in our case we can just remove the reference to GameScene from the HUD, so that the scene and HUD are deallocated once we go back to the MenuScene. Krakendev has a deeper explanation of reference types and how to avoid these cycles.
Now, move to GameViewController.swift, and remove or comment out the following three lines of code:
With the debugging data out of the way, the game is looking really good! Congratulations: We are now in beta! Check out the final code from today on GitHub.
This is the final lesson of a three-part tutorial, and if you made it this far, you've just done a lot of work on your game. In this tutorial, you went from a scene that had absolutely nothing in it to a completed game. Congrats! In lesson one, we added the floor, raindrops, background and umbrella sprites. We also played around with physics and made sure that our raindrops don't pile up. We started out with collision detection and worked on culling nodes so that we would not run out of memory. We also added some user interaction by allowing the umbrella to move towards where the user touches the screen.
In lesson two, we added the cat and food, along with custom spawning methods for each of them. We updated our collision detection to allow for the cat and food sprites. We also worked on the movement of the cat. The cat gained a purpose: eat every bit of food available. We added simple animation for the cat and added custom interactions between the cat and the rain. Finally, we added sound effects and music to make it feel like a complete game.
In this last lesson, we created a heads-up display to hold our score label, as well as our quit button. We handled actions across nodes and enabled the user to quit with a callback from the HUD node. We also added another scene that the user can launch into and can get back to after clicking the quit button. We handled the process for starting the game and for controlling sound in the game.
We put in a lot of time to get this far, but there is still a lot of work that can go into this game. RainCat is still under development, and it is available in the App Store. Below is a list of wants and needs still to be added. Some of the items have already been added, while others are still pending:
Add in icons and a splash screen.
Finalize the main menu (simplified for the tutorial).
Fix bugs, including rogue raindrops and multiple food spawning.
Refactor and optimize the code.
Change the color palette of the game based on the score.
Update the difficulty based on the score.
Animate the cat when food is right above it.
Integrate Game Center.
Give credit (including proper credit for music tracks).
Keep an eye on GitHub, because these changes will be made there in the future. If you have any questions about the code, feel free to drop us a line at hello@thirteen23.com and we can discuss it. If certain topics get enough attention, maybe we can write another article discussing them.
Around a year ago, while working at a digital agency, I was given the objective of streamlining our UX design process. Twelve months later, this article shares my thoughts and experiences on how lean thinking helped to instill efficiencies within our UX design process.
When I arrived at the agency, wireframes were already being created and utilized across a variety of projects. Winning advocates for the production of wireframes was not the issue. All stakeholders (both internally and externally) understood the purpose of wireframes and appreciated their value in shaping and modeling digital experiences.
However, up until this point, rather than dictate a promoted “way of working,” the agency had encouraged UX designers to “do things their way.” While this increased autonomy had once allowed UX designers to work at speed, using their preferred tools and processes, it was now starting to create problems.
When you stepped back and looked across past projects, you could see the different tools and processes in action, ranging from low-fidelity wireframing tools such as Balsamiq and Moqups.com to mid- and high-fidelity outputs from tools such as Axure and UXPin.
For clients undertaking multiple projects, the lack of consistent wireframe deliverables was confusing and disorienting, with the client having to remember multiple URLs and logins while also learning how to navigate the various outputs.
Meanwhile, for the agency, the absence of a standardized UX design process was costly in both time and money. The lack of a shared file structure across projects meant that if resourcing patterns changed during a project (which they often did), it was hard for new UX designers to pick up where their counterparts had left off. At the same time, many routine tasks were unnecessarily repeated across multiple projects.
It was clear we needed to establish some rules and guidelines to create a more cohesive approach. We needed to set a new direction, and now was the time to start.
Before introducing a new company-wide process to wireframing, I needed to highlight to the UX team where the lack of process was causing us issues and how establishing a standardized process could help.
To ensure buy-in from the wider stakeholder group, I gathered the UX team together and presented them with the challenge:
How can we establish a standard wireframing process that allows us to work at speed, while also improving cross-project consistency?
As the team discussed the issue, I quickly mapped out each key milestone in the wireframing process on a whiteboard. We discussed the potential enhancement opportunities for each milestone. For an enhancement to be accepted, it had to deliver against one of the following criteria:
By introducing a template file, we felt we could help save the UX designer time during a project’s initial set-up. The template file would save the UX designer from having to complete the routine tasks that come with setting up a new project (creating responsive views, grid systems, document structures, changelogs, and so on).
We also felt that creating a template file would define a baseline for what a working project file should look like. We believed the file template would establish a core foundation and structure for the wireframe document and therefore promote cross-project consistency.
The standard wireframe file template we created included elements such as:
Introductory page
To welcome stakeholders to the wireframe, explain how to navigate the document, and introduce the changelog.
Component master
A list of all components and pages grouped into categories, with direct links to lower-level pages and components (signposting, forms, and so on).
Document structure
Folders for each type of wireframe page, helping stakeholders easily distinguish page layouts from components and user journeys from sitemaps.
Preset breakpoints and grid systems
Standard responsive wireframe breakpoints and grid systems.
Agreeing on a set of common breakpoints for our template file was perhaps the hardest task to accomplish. After some lengthy debates with the UX team and other internal stakeholders, we came to the following conclusion:
The wireframe’s purpose is to communicate page functionality, visual hierarchy, and interactions. It is not to provide a pixel-perfect representation of the end product.
Project stakeholders are most interested in seeing how the layout would respond across desktop, tablets and mobile devices.
The breakpoints demonstrated within the wireframes would only be representative, and considered again later within the design phase.
After coming to this shared agreement, we collectively decided we would produce wireframes to represent the three common breakpoints stakeholders were interested in (desktop, tablet and mobile). The final breakpoints would then be explored throughout the design phase, as we considered browser trends and the client’s current web analytics data.
We settled on the following grids and breakpoints for our template file:
Benefits: Save time, improve consistency, facilitate rapid working.
The next and most significant enhancement we identified was the idea to introduce our own wireframe design language.
For those of you who are new to design languages: a design language provides a unified set of UX and design rules that promote harmony across various media outputs. Unlike a style guide, which creates rules for acceptable fonts, colors, imagery, and tone of voice, a design language creates rules for UI patterns, providing guidance on layout, user input, animation, and feedback.
In recent years, design languages have become more popular thanks to the introduction of Google’s Material Design11. However, companies such as the BBC12 and IBM13 have been developing digital assets that adhere to well-established design languages for some time.
Typical elements you would expect to see in a design language are:
Grid systems
Accessibility guidelines
Layout principles
Typography and iconography
Interaction guidelines
UI components
By introducing a design language, we believed we could ensure a consistent look and feel across our wireframes while also reminding UX designers of common responsive UI patterns.
To make it easy for UX designers to align with the design language, we wanted to turn it into a responsive wireframe library that UX designers could use as they worked, allowing them to build responsive wireframes at speed.
To define our design language, we slowly started to pull together a list of the commonly used UI patterns across our last four major projects. We then aligned these with our new responsive grid system and built it into a responsive wireframe library.
Our responsive wireframe library quickly became our living and breathing design language. We initially started with a small set of 30 components; however, over time, this has expanded to contain more than 100 responsive UI elements, some of which you can see below.
If you are looking to create your own wireframe design language, I would highly recommend getting your whole team involved as early as possible. Taking people’s input early on in a venture like this can help justify and shape the proposition. It will also help ensure the result is a collaborative effort in which all UX designers support the end solution.
The key thing to remember here is that the objective of a design language is to assist and support UX designers on projects by reminding them of common UI patterns fit for the medium in which content is being consumed. Its purpose is not to set hard-and-fast rules that all projects must follow; otherwise, all projects would end up looking too similar.
To run a “design language” workshop session, block out a day in your team’s calendar, gather everyone together with some Sharpies, snacks, Post-it notes, Blu-Tack, and paper.
Depending on the size of your audience, split into groups of four to six. Armed with pens and Post-its, sketch out common UI elements you have used on recent web projects, adding a suitable name to your component as you go.
Once you have sketched each component, place it up on the wall. When everyone has completed their components, take some time to sort them into common themes (e.g., navigation, experience, conversion, browse), removing any duplicates as you go.
At the end of the workshop, you should have a list of common UI components you can use to build your design language. Initially, try to keep this list concise, with around 20 to 30 components maximum.
From here, you will need a medium to capture your components in a shared location that the whole team can access, allowing them to collaborate further and feed in new requirements.
We used a Trello board to communicate all the components we had captured from our workshop and displayed them within their groupings. An additional “Ideas” column provided a space where any team member could add new components, which would then be discussed in upcoming team meetings.
Being able to see things in a consolidated view within our Trello board allowed us to discuss and prioritize which components we would build into our design language. Anything which was not a core requirement was moved to a later phase.
The key thing we communicated to the UX team was that our design language would become a ‘living library’ that extends over time. Therefore, the initial phases of our design language would be an MVP where we would use feedback from the UX team to shape the library when moving forward.
Note: Trello is a great lightweight tool for managing and assigning tasks across small teams. However, if your organization has a JIRA account, I would recommend using that as your primary tool to manage this process. JIRA29 has a lot more functionality than Trello, giving you access to features such as development reports and tracking of requested and added components. You can also monitor time spent on tasks, which may prove useful for reporting your progress up the management chain.
It was always our intention to build our design language into a UI kit that UX designers could utilize throughout the creative process. To achieve this, we transformed our design language into a “Responsive Axure Widget Library.”
When it comes to selecting the wireframing tool that is right for your business, there are a number of factors to consider:
Ease of use
How easy is it for beginners to grasp? Will any training be required? If so, what support and training resources are available?
Accessibility/Scalability
Does it allow multiple UX designers to work in collaboration on a single document? How does it handle large documents with multiple page revisions?
Features
What built-in features does it have? Can you create responsive wireframes, sticky headers, and parallax scrolling?
Fidelity
What does a typical output look like? For example, sketch-based page layouts or high-fidelity interactive prototypes?
We selected Axure as our wireframing tool due to its rich feature set, its ability to handle large documents, its collaboration capabilities, and its support for third-party widget libraries. Not to mention, all of our UX designers had experience using the tool.
However, Axure may not be the best choice for your business, given its steep learning curve and licensing arrangement. The key message is that when selecting your wireframing tool, you should consider your business needs.
Most wireframing tools now support the inclusion of custom libraries: UXPin30, Justinmind31, and Sketch32, to name but a few. Sadly, Adobe XD33 does not support custom libraries yet; however, this feature is expected to make an appearance in the near future, according to Adobe’s blog34.
The final enhancement for our new process was to create some guidelines for documentation and annotations. In the past, various approaches had been taken to produce documentation depending on the size, scale, and timelines for the project. For example:
In-Page (High-level functionality annotations)
Each page of the wireframe should clearly describe the role of the component(s) within.
Describe each atomic component to the minute detail (what information CMS editors can edit or update, along with limitations, restrictions and responsive behaviours).
Wiki (Functional specifications)
Create wikis that document the entire project from a functional perspective. Include all page and component functionality along with rules around other items, such as web analytics, browser support, roles, permissions, and governance.
When discussing annotations as a team, we felt that, although project managers saw “low-level functional specification annotations” within the wireframes as a way to reduce documentation timelines, this approach often created more problems than it solved.
For a start, it meant the only member of the project team who could access and update the specification was the UX designer with access to the wireframing tool. Secondly, the document didn’t allow for cross-team collaboration, such as input from other teams (e.g., QA or project management).
After discussing as a team, we agreed that, moving forward:
We would only ever use in-page annotations within wireframes to communicate the role of the component.
Atomic, component level specifications would always be delivered via the Functional Specification in a Wiki format.
We would agree on a common structure for our functional specification wikis.
The structure we settled on for our Functional Specification Wikis was as follows:
1. Introduction
This is where we introduce stakeholders to the project and explain what content can be found throughout the specification. Typical sub-pages within this section would cover the role of the project, the scope of the project, the sitemap, and the changelog.
2. Page Templates
This is where we grouped all page templates together and identified static versus dynamic pages. For example, content pages versus search results pages or product pages. For each page template, we would describe the role of the page and page-specific functionality as well as a full-page screen grab and a link to the wireframes.
Where a page template used a common component (e.g., site navigation, breadcrumb, hero), we would simply link to the component page rather than re-document the component multiple times. Not only did this reduce the documentation, it also allowed stakeholders reviewing the document to easily dip in and out of the content they needed to review.
3. Components
This is where we grouped all UI components together. For each component, we then identified what content fields were available to the CMS user, whether content was manual or automated, and defined validation rules as well as interaction behaviors.
4. Special Considerations
This is where we listed out all other wider topics related to the project that needed to be documented but were not specific to any given page. Typical topics which lived in this section were:
Rethinking our UX workflow from the ground up allowed us to deliver both time and cost savings across the UX team. By introducing key templates wherever possible, we were able to relieve our UX designers from some of the more tedious routine tasks they need to perform on each project, while at the same time, promote cross-project consistency.
An integral part of the new workflow was the introduction of our design language, which has transformed the way we wireframe projects. Introducing the design language has allowed us to work lean, enabling us to build responsive wireframes at speed.
Being able to establish 60%-70% of page layouts more quickly has meant concepts can be demonstrated to stakeholders for feedback much earlier, providing UX designers with more time to obsess over the intricate details of the project that surprise and delight. It is often those little details that get sacrificed when demanding project deadlines loom.
The design language should be used to help shape pages and components in the early phases of a project rather than dictate all component functionality in its entirety.
Each project is unique in its own right. It comes with its own set of users, requirements, expectations, and challenges. In any web project, the early phases bring a lot of uncertainty. The earlier we produce artifacts such as wireframes, the quicker we learn from stakeholders what works, what doesn’t, and more importantly why. It’s this understanding that helps to direct and steer future adaptations along with the end product.
Furthermore, your design language isn’t a “set it and forget it” tool. Your design language should be a living part of the UX design process that changes and adjusts over time as technology changes and new interaction patterns emerge. For this reason, your design language should always adapt over time based on feedback from your UX designers, clients, and users.
Below, you can see concepts for the homepage of a news site and the landing page of a product-focused website (based on Vessyl). Both concepts were produced as responsive wireframes using Axure RP 8.
Being able to leverage the Responsive Axure Library (built as part of introducing a design language) meant that concepts which had previously taken a day to complete could be produced in just an hour and a half. Not only that, they now look consistent, utilizing the same visual presentation for elements such as images and video.
Being able to produce artifacts rapidly means more time can be spent with the client discussing initial thoughts on look, feel, layout, and the responsive treatment of components. You can also spend time on smaller details, such as liking versus commenting functionality, taxonomy, and content prioritization (manually curated vs. automated feeds).
This is a concept for a news and media-based site that produces article based content across a number of categories, from technology to health and nutrition. The aim of this site is to drive engagement and loyalty, with users expected to return to this site multiple times throughout the week. As such, keeping content fresh and relevant to the end user is key to drive repeat engagements.
This concept is a simplified version of the landing page displayed on Vessyl39. The role of this page is to educate and build interest in the Vessyl product. Remember, this may be the first page users see for the product (as they may be arriving from various news or PR sites). Therefore, this page should utilize storytelling principles, as well as social proof, to bring the product to life and make users aware of how the product will benefit their daily lives.
We shouldn’t let ourselves get distracted by people who work on different projects than we do. If a developer advocate works on a web-based QR code application, for example, their way of tackling things most certainly won’t fit your project. If someone builds a real-time dashboard, their concept won’t relate to the company portfolio website you’re building. Bear in mind that you need to find the best concept, the best technologies, the best solution for your specific project.
Making the right decisions rather than blindly following cool new trends is the first step to building responsible web solutions. That’s what we call progressive enhancement. The only subjective matter in this undertaking is you, judging what level of progressive enhancement a solution should have.
Aaron Gustafson wrote a thoughtful piece on why progressive enhancement is not an anti-JavaScript concept3 but a concept of finding the best way to adapt to the nature of the web. It’s a subtle, inclusive concept that takes the environment and its quirks into account.
Microsoft’s Inclusive Design guidelines7 and resources are very helpful to understand how you as a company can take advantage of creating an inclusive product by design.
Tim Kadlec describes8 what a new project called “The Web, Worldwide9” is about and why it’s important for developers and project owners to understand the role of the Internet in various markets. I wrote a similar post this week about choosing browser support in a project10 and why we’re often doing it wrong because we base our assumptions on misleading data.
These fun statistics on HTML and SVG usage11 are really insightful. By analyzing eight million websites, some interesting facts could be discovered: href="javascript:void(0)", for example, is still used massively, and span.button can also be found in a lot of codebases.
Unfortunately, there’s no further source to back up this statement, but Domenic Denicola found out that the Filesystem API might be removed from the specification12 as it turned out that it’s used for incognito mode detection in browsers in 95% of the use cases.
The parallax effect isn’t going away anytime soon, so if we need to make use of it, we should at least do it in the most effective, most performant way. Paul Lewis shares how to achieve that13.
Remy Sharp reports how he got started with React.js and how he finally made Server Side React14 work in his project.
Holger Bartel wrote about the value of attending conferences18 and how different things are in Asia, for example, when compared to other parts of the world.
Christmas is just around the corner, and what better way to celebrate than with some free goodies? We sifted through the web (and our archives) to find holiday-themed icon sets for you that’ll give your creative projects some holiday flair. Perfect for Christmas cards, gift tags, last-minute wrapping paper, or whatever else you can think of.
All icons can be downloaded for free, but please consult their licenses or contact the creators before using them in commercial projects. Reselling a bundle is never cool, though. Have a happy holiday season!
For even more holiday spirit, you might also want to check out the following posts:
Roast turkey, gingerbread men, reindeer, and that comfy Christmas sweater that waits in the back of the closet to be dug out. With 110 icons in total, Anastasia Kolisnichenko’s Christmas icon set4 has everything a Christmas lover’s heart could ask for. The icons are available in AI, PSD, PNG, and EPS formats and you can customize stroke width, size, color, and shape to your liking. The license allows you to use the illustrations for anything you want – think postcards, posters, gift tags – also commercially.
Do you prefer a more minimalistic approach? Then George Neocleous’ festive Christmas icon set7 is for you. It includes 20 vector icons in EPS format, with both color and grayscale versions available. These are free to use without any restrictions in personal and commercial projects. Now, imagine that cute nutcracker on a Christmas card…
With their storybook-like look, Manuela Langella’s icon set10 stirs those familiar warm feelings. In this set, you’ll find 24 icons in total. Among them, Langella has included some unique motifs, such as Santa stuck in a chimney, as well as the obligatory cookies and milk, and stockings hung by the fireplace. The icons come in six formats (AI, PSD, EPS, PDF, SVG, and PNG) and can be customized not only in size, color, and shape but, thanks to full layered Illustrator and Photoshop assets, also assembled in any way you like. Free to use for private and commercial projects.
Another from the creative mind of Manuela Langella, is the Advent icon set13. It features 25 icons to celebrate the Advent season: decoration, food, and even Santa’s little helper is there to join the party. The download includes AI, EPS, SVG, PNG, and PDF formats that you can modify to your liking. A Creative Commons Attribution 3.0 Unported license allows you to use the set in private as well as commercial projects.
RocketTheme’s Christmas icon set16 shines with its love for detail: the little cracks in the gingerbread man, the bubbles on the milk, the chiffon bow wrapped around the present. There are ten icons in the set in total, all of which come as 256×256 PNGs. A Creative Commons Attribution-NoDerivs 3.0 Unported License allows you to use them in both commercial and private projects, but please be sure to always credit the designer.
One set, three styles: IconEden’s three-in-one Christmas set19 comes in a realistic 3D style, a simple shape style, and a button style. The 39 icons are available in vector and pixel format and can be used freely both in private and commercial projects. Talk about versatility!
This fun and cartoonish icon set22 comes from Andrey Stelya. The fresh colors and the unusual way of applying them, by shifting the underlying color layer outside the line art, gives the icons a modern feel. The set includes twelve icons and comes in SVG and PNG (90×90) formats.
Now this is a versatile set! Olha Filipenko created 78 icons in AI format with everything winter and holiday-themed25: sweets, snowflakes, candles, ornaments, even a cute little postage stamp. There are so many ways to use the minimalistic line art with a unique and sophisticated twist.
How about some Christmas cheer as sweet as the icing on the cookies? The icon set28 of Bangkok-based designer Sunbzy is striking with an unusual pastel color palette. The 20 icons can be downloaded for free as an AI file and are as in-demand as Grandma’s cookies!
Explore the snowy mountains, take your friends skiing, or go ice-skating on the frosty lake with Benjamin Bely’s beautiful winter icon set31 that cherishes these outdoor winter moments. The bundle consists of twelve icons in AI format and can be downloaded for free. All the love of those chilly adventures from the warmth of your home.
Pixel time! Anna Sereda’s and Maryan Ivasyk’s icon set34 looks as if it came straight out of a Christmas arcade game. There are 16 little pixel artworks in the bundle for you to use in personal and commercial projects. Available in AI, EPS, PSD, SVG, PNG, and JPEG formats.
This fine small set38 comes from Dasha Ermolova. It includes four motifs – a snow globe, a stocking, a Christmas wreath, and a ball ornament. The EPS is free to download. This is minimalistic line art with a nifty touch.
With her Christmas icon set41, Magda Gogo gives classical motifs, like stockings, Christmas trees, and candy canes, a fresh makeover. There are eight icons in the bundle, and they come in both EPS and AI formats. True classics never go out of style.
Another lovely set of flat illustrations44 comes from Vector4Free. The softly colored motifs range from snowflakes and stockings to a pipe-smoking hipster reindeer. 33 icons are available in AI, EPS, SVG, PSD, and PNG formats. The Creative Commons Attribution 3.0 Unported License allows you to use them for any purpose, including commercially as long as you give appropriate credit.
Even though nature is sparse in winter in many parts of the world, there are little treasures you can find on your walk through the forest: pine cones, fir needles, acorns. To celebrate their beauty, Freepik released an icon set with watercolor leaves and branches47. They are available in AI and EPS formats and can be used for personal and commercial projects as long as you credit the designer.
What would Christmas be without cookies? Anna Zhemchuzhina created a lovely little set of six gingerbread-inspired icons50 that look as if they came freshly baked out of the oven. Available as PSD, these icons look good enough to eat!
Spread the joy! Non-traditional colors and a playful design are at the base of Livi Po’s retro-inspired Christmas icon set53. The ten icons are available in AI format, and a wonderful new take on the truly retro past.
Ever dream of spending the holidays in a cabin in the woods? You go out to choose the Christmas tree, spend time sitting by the fireplace, and drink tea as you watch the snow fall. That’s the feeling that Freepik captures with their Lovely Christmas Icon Set56. Classic, calm Christmas colors and the choice of motifs bring some wintery flair to your designs. The 20 icons are available in EPS and AI and can be used for free in personal and commercial projects as long as you credit the designer.
Also designed by the folks at Freepik, this cheerful set59 consists of 16 AI and EPS icons with a unique nostalgic ’50s charm. You can use them for free in both personal and commercial projects, but please remember to credit the designer.
Christmas and snow go together like bread and butter. To add a bit of those snow flurries to your projects, Teela Cunningham has hand-drawn a set of snowflakes62 and turned them into vector graphics. The download includes AI and EPS files for private use only.
James Oconnell’s Ho Ho Ho icon set65 features ten Christmasy line art icons with a nifty twist; some lines are dotted while others are highlighted with an accent color. The centerpiece of the set is a squiggly “Ho Ho Ho” framed by snowflakes. The icons are available in AI format and you’re free to use them as you please.
Another set that pairs minimalistic line art with some lovely details comes from Cvijovic Zarko. These twelve icons68 cover classical Christmas motifs (a present, a snowman, reindeer, a candle, a sleigh, and more) and are available in EPS format.
In order to help kick-start the celebrations, here is a fantastic free Christmas and Winter-themed icon set to use in your own seasonal designs. With 27 color outlines, this New Year Celebration71 icon set has everything one could ask for. This will help you quickly design a holiday-themed UI, website, theme, or presentation.
A versatile three-in-one set74 that can be used in both personal and commercial projects comes from the folks at IconShock. It includes 40 icons with everything you’ll need to add some holiday flair to your projects — think presents, candles, stockings, a snowman, and much more. The vectors come in three styles (line, flat and line with colors) and are 100% editable.
Last but not least, to give the season of reindeer and Christmas elves a bit of a geeky glam, Tatiana Lapina designed 54 geeky Christmas vector graphics77. Among them, you’ll discover characters from Star Wars and famous computer games, geeky tech stuff, even a delivery drone to deliver the presents. The illustrations come in SVG, AI, EPS, PDF, and PNG formats and are free to use in personal and commercial projects.
Have you designed a free holiday icon set yourself? Is there perhaps that one set you keep using year after year? We’d love to hear about your favorites in the comments below!
Have you ever wondered what it takes to create a SpriteKit game from beginning to beta? Does developing a physics-based game seem daunting? Since the introduction of SpriteKit1, game-making on iOS has never been easier.
In this three-part series, we will explore the basics of SpriteKit. We will touch on SKPhysics, collisions, texture management, interactions, sound effects, music, buttons and SKScenes. What might seem difficult is actually pretty easy to grasp. Stick with us while we make RainCat.
The game we will build has a simple premise: We want to feed a hungry cat, but it is outside in the rain. Ironically, RainCat does not like the rain all that much, and it gets sad when it’s wet. To fix this, we must hold an umbrella above the cat so that it can eat without getting rained on. To get a taste of what we will be creating, check out the completed project3. It has some more bells and whistles than what we will cover here, but you can look at those additions later on GitHub. The aim of this series is to get a good understanding of what goes into making a simple game. You can check in with us later on and use the code as a reference for future projects. I will keep updating the code base with interesting additions and refactoring.
We will do the following in this article:
check out the initial code for the RainCat game;
add a floor;
add raindrops;
prepare the initial physics;
add in the umbrella object to keep our cat dry from the rain;
begin collision detection with categoryBitMask and contactTestBitMask;
create a world boundary to remove nodes that fall off screen.
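To preview the bitmask idea from the list above: in SpriteKit, every physics body carries a categoryBitMask that identifies what kind of object it is, and a contactTestBitMask that declares which categories it wants contact callbacks for. The category names and values below are illustrative only, not the final ones we will use in RainCat:

```swift
import SpriteKit

// Illustrative categories; each occupies its own bit so they can be combined.
let floorCategory: UInt32 = 0x1 << 0
let rainDropCategory: UInt32 = 0x1 << 1

// A raindrop that reports contacts with the floor.
let drop = SKSpriteNode(color: .blue, size: CGSize(width: 4, height: 8))
drop.physicsBody = SKPhysicsBody(rectangleOf: drop.size)
drop.physicsBody?.categoryBitMask = rainDropCategory
drop.physicsBody?.contactTestBitMask = floorCategory
```

With this in place, the scene’s SKPhysicsContactDelegate is notified whenever a raindrop touches the floor, which is the mechanism we will build on later in the series.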
You will need a few things to follow along. To make it easier to start, I’ve provided a base project. This base project removes all of the boilerplate code that Xcode 8 provides when creating a new SpriteKit project.
Get something to test on! In this case, it should be an iPad, which will remove some of the complexity of developing for multiple screen sizes. The simulator is functional, but it will always lag and run at a lower frame rate than a proper iOS device.
I’ve given you a head start by creating the project for the RainCat game and completing some initial steps. Open up the Xcode project. It will look fairly barebones at the moment. Here is an overview of what has happened up to this point: We’ve created a project, targeted iOS 10, set the devices to iPad, and set the orientation to landscape only. We can get away with targeting previous versions of iOS, back to version 8 with Swift 3, if we need to test on an older device. Also, a best practice is to support at least one version of iOS older than the current version. Just note that this tutorial targets iOS 10, and issues may arise if you target a previous version.
Side note on the usage of Swift 3 for this game: The iOS development community has been eagerly anticipating the release of Swift 3, which brings with it many changes in coding styles and improvements across the board. As new iOS versions are quickly and widely adopted by Apple’s consumer base, we decided it would be best to present the lessons in this article according to this latest release of Swift.
In GameViewController.swift, which is a standard UIViewController5, we reworked how we load the initial SKScene6 named GameScene.swift. Before this change, the code would load the GameScene class through a SpriteKit scene editor (SKS) file. For this tutorial, we will load the scene directly, instead of inflating it using the SKS file. If you wish to learn more about the SKS file, Ray Wenderlich has a great example7.
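A sketch of what that direct loading looks like in GameViewController (the exact code in the base project may differ slightly):

```swift
import SpriteKit
import UIKit

class GameViewController: UIViewController {

  override func viewDidLoad() {
    super.viewDidLoad()

    if let view = self.view as? SKView {
      // Create the scene in code instead of inflating GameScene.sks,
      // sizing it to fill the view.
      let sceneNode = GameScene(size: view.frame.size)
      sceneNode.scaleMode = .aspectFill
      view.presentScene(sceneNode)
    }
  }
}
```

Loading the scene in code keeps everything in one place and avoids surprises from stale SKS file settings, at the cost of losing the visual scene editor.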
Before we can start coding, we need to get the assets for the project. Today we will have an umbrella sprite, along with raindrops. You will find the textures on GitHub8. Add them to your Assets.xcassets folder in the left pane of Xcode. Once you click on the Assets.xcassets file, you will be greeted with a white screen with a placeholder for the AppIcon. Select all of the files in a Finder window, and drag them below the AppIcon placeholder. If that is done correctly, your “Assets” file will look like this:
Now that we have a lot of the initial configuration out of the way, we can get started making the game.
The first thing we need is a floor, since we need a surface for the cat to walk and feed on. Because the floor and the background will be extremely simple, we can handle those sprites with a custom background node. Under the “Sprites” group in the left pane of Xcode, create a new Swift file named BackgroundNode.swift, and insert the following code:
import SpriteKit

public class BackgroundNode : SKNode {

  public func setup(size : CGSize) {
    let yPos : CGFloat = size.height * 0.10
    let startPoint = CGPoint(x: 0, y: yPos)
    let endPoint = CGPoint(x: size.width, y: yPos)

    physicsBody = SKPhysicsBody(edgeFrom: startPoint, to: endPoint)
    physicsBody?.restitution = 0.3
  }
}
The code above imports our SpriteKit framework. This is Apple’s library for developing games. We will be using this in pretty much every file we create from now on. This object that we are creating is an SKNode10. We will be using it as a container for our background. Currently, we just add an SKPhysicsBody11 to it when we call the setup(size:) function. The physics body will tell our scene that we want this defined area, currently a line, to interact with other physics bodies, as well as with the physics world12. We also snuck in a change to restitution. This property determines how bouncy the floor will be. To have it show up for us to use, we need to add it to GameScene. Move to the GameScene.swift file, and near the top of the file, underneath our group of TimeInterval variables, we can add this:
private let backgroundNode = BackgroundNode()
Then, inside the sceneDidLoad() function, we can set up and add the background to the scene with the following lines:
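The lines themselves are not shown here. A minimal sketch, assuming the scene's own size property is used, might look like:

```swift
// Sketch of the missing lines (inside sceneDidLoad()): configure the
// background node with the scene's size, then attach it to the scene.
backgroundNode.setup(size: size)
addChild(backgroundNode)
```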
Now, if we run the app, we will be greeted with this game scene:
If you don’t see this line, then something went wrong when you added the node to the scene, or else the scene is not showing the physics bodies. To turn these options on and off, go to GameViewController.swift and modify these values:
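The exact listing is not shown here. A sketch of the relevant values in GameViewController.swift, assuming the standard SKView debug flags, might be:

```swift
// Sketch (assumed surrounding code): the SKView debug flags.
// showsPhysics draws the outlines of all physics bodies on screen.
if let view = self.view as! SKView? {
    view.presentScene(sceneNode)
    view.ignoresSiblingOrder = true
    view.showsFPS = true
    view.showsNodeCount = true
    view.showsPhysics = true
}
```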
Make sure that showsPhysics is set to true for now. This will help us to debug our physics bodies. Right now, this isn’t anything special to look at, but this will act as our floor for our raindrops to bounce off of, as well as the boundary within which the cat will walk back and forth.
Next, let’s add some raindrops.
If we think before we just start adding them to the scene, we'll see that we'll want a reusable function to add one raindrop to the scene at a time. The raindrop will be made up of an SKSpriteNode and another physics body. An SKSpriteNode can be initialized from an image or a texture. Knowing this, and knowing that we will likely spawn a lot of raindrops, we should do some recycling: rather than creating a new texture for every raindrop, we can create one texture and reuse it.
At the top of the GameScene.swift file, above where we initialized backgroundNode, we can add the following line to the file:
let raindropTexture = SKTexture(imageNamed: "rain_drop")
We can now reuse this texture for every raindrop we create, instead of wasting memory on a new texture each time.
Now, add in the following function near the bottom of GameScene.swift, so that we can constantly create raindrops:
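The spawnRaindrop() listing is missing at this point. A sketch consistent with the description that follows (a texture-shaped physics body, spawned at the center of the scene) might be:

```swift
// Sketch of spawnRaindrop(): build a sprite from the shared texture,
// derive its physics body from the texture's shape, and add it to the
// scene at the center so gravity can pull it down.
private func spawnRaindrop() {
    let raindrop = SKSpriteNode(texture: raindropTexture)
    raindrop.physicsBody = SKPhysicsBody(texture: raindropTexture,
                                         size: raindrop.size)
    raindrop.position = CGPoint(x: size.width / 2, y: size.height / 2)

    addChild(raindrop)
}
```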
This function, when called, will create a raindrop using the raindropTexture that we just initialized. Then, we’ll create an SKPhysicsBody from the shape of the texture, position the raindrop node at the center of the scene and, finally, add it to the scene. Because we added an SKPhysicsBody to the raindrop, it will be automatically affected by the default gravity and fall to the floor. To test things out, we can call this function in touchesBegan(_ touches:, with event:), and we will see this:
Now, as long as we keep tapping the screen, more raindrops will appear. This is for testing purposes only; later on, we will want to control the umbrella, not the rate of rainfall. Now that we’ve had our fun, we should remove the line that we added to touchesBegan(_ touches:, with event:) and tie in the rainfall to our update loop. We have a function named update(_ currentTime:), and this is where we will want to spawn our raindrops. Some boilerplate code is here already; currently, we are measuring our delta time, and we will use this to update some of our other sprites later on. Near the bottom of that function, before we update our self.lastUpdateTime variable, we will add the following code:
// Update the spawn timer
currentRainDropSpawnTime += dt

if currentRainDropSpawnTime > rainDropSpawnRate {
  currentRainDropSpawnTime = 0
  spawnRaindrop()
}
This will spawn a raindrop every time the accumulated delta time is greater than rainDropSpawnRate. Currently, the rainDropSpawnRate is 0.5 seconds; so, every half a second, a new raindrop will be created and fall to the floor. Test and run the app. It will act exactly as it did before, but instead of our having to touch the screen, a new raindrop will be created every half a second.
But this is not good enough. We don’t want one location from which to release the raindrops, and we certainly don’t want it to fall from the center of the screen. We can update the spawnRaindrop() function to position each new drop at a random x location at the top of the screen.
let xPosition = CGFloat(arc4random()).truncatingRemainder(dividingBy: size.width)
let yPosition = size.height + raindrop.size.height

raindrop.position = CGPoint(x: xPosition, y: yPosition)
After creating the raindrop, we randomize the x position on screen with arc4random(), and we keep it on screen with the truncatingRemainder(dividingBy:) method. Run the app, and you should see the following:
We can play with the spawn rate, and we can spawn raindrops faster or slower depending on what value we enter. Update rainDropSpawnRate to be 0, and you will see many pretty raindrops. If you do this, you will notice that we have a big problem now. We are currently spawning unlimited objects and never getting rid of them. We will eventually be crawling at four frames per second and, soon after that, we’ll be out of memory.
Right now, there are only two types of collision. We have one collision between raindrops and one between raindrops and the floor. We need to detect when the raindrops hit something, so that we can tell it to be removed. We will add in another physics body that will act as the world frame. Anything that touches this frame will be deleted, and our memory will thank us for recycling. We need some way to tell the physics bodies apart. Luckily, SKPhysicsBody has a field named categoryBitMask. This will help us to differentiate between the items that have come into contact with each other.
To accomplish this, we should create another Swift file named Constants.swift. Create the file under the “Support” group in the left pane of Xcode. The “Constants” file enables us to hardcode values that will be used in many places across the app, all in one place. We won’t need many of these types of variables, but keeping them in one location is a good practice, so that we don’t have to search everywhere for these variables. After you create the file, add the following code to it:
let WorldCategory : UInt32 = 0x1 << 1
let RainDropCategory : UInt32 = 0x1 << 2
let FloorCategory : UInt32 = 0x1 << 3
The code above uses a shift operator16 to set a unique value for each of the categoryBitMasks17 in our physics bodies. 0x1 is the hexadecimal value of 1; shifting it left once (0x1 << 1) gives 2, shifting it twice gives 4, and each further shift doubles the value again. Now that our unique categories are set up, navigate to our BackgroundNode.swift file, where we can update the physics body to the new FloorCategory. Then, we need to tell the floor physics body what we want to touch it. To do this, update the floor's contactTestBitMask to contain the RainDropCategory. This way, when we have everything hooked up in our GameScene.swift, we will get callbacks when the two touch each other. BackgroundNode should now look like this:
import SpriteKit

public class BackgroundNode : SKNode {

  public func setup(size : CGSize) {
    let yPos : CGFloat = size.height * 0.10
    let startPoint = CGPoint(x: 0, y: yPos)
    let endPoint = CGPoint(x: size.width, y: yPos)

    physicsBody = SKPhysicsBody(edgeFrom: startPoint, to: endPoint)
    physicsBody?.restitution = 0.3
    physicsBody?.categoryBitMask = FloorCategory
    physicsBody?.contactTestBitMask = RainDropCategory
  }
}
The next step is to update the raindrops to the correct category, as well as update what it should come into contact with. Going back to GameScene.swift, in spawnRaindrop() we can add the following code after we initialize the raindrop’s physics body:
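The listing is absent here. Given the categories defined in Constants.swift and the description that follows, the added lines would plausibly be:

```swift
// Tag the raindrop with its category, and ask for contact callbacks
// against both the floor and the world frame (names from Constants.swift).
raindrop.physicsBody?.categoryBitMask = RainDropCategory
raindrop.physicsBody?.contactTestBitMask = FloorCategory | WorldCategory
```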
Notice that we've added in the WorldCategory here, too. Because we are working with a bitmask18, we can combine any categories we want with bitwise operations19. In this instance, we want the raindrop to report contact when it hits either the FloorCategory or the WorldCategory. Now, in our sceneDidLoad() function, we can finally add in our world frame:
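The world-frame listing is missing here. A sketch matching the next paragraph's description (the scene's frame grown by 100 points on every side, wrapped in an edge loop) might be:

```swift
// Sketch: enlarge the scene's frame by 100 points on each side so
// nodes are deleted off screen, and use an edge loop so anything that
// reaches the boundary still registers contact.
var worldFrame = frame
worldFrame.origin.x -= 100
worldFrame.origin.y -= 100
worldFrame.size.width += 200
worldFrame.size.height += 200

self.physicsBody = SKPhysicsBody(edgeLoopFrom: worldFrame)
self.physicsBody?.categoryBitMask = WorldCategory
```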
In the code above, we've created a frame that is the same as the scene's, but we've increased the size so that it extends 100 points on either side. This way, we will have a buffer so that items aren't deleted on screen. Note that we've used edgeLoopFrom, which creates an empty rectangle that allows for collisions at the edge of the frame.
Now that we have everything in place for detection, we need to start listening to it. Update the game scene to inherit from SKPhysicsContactDelegate. Near the top of the file, find this line:
class GameScene: SKScene {
And change it to this:
class GameScene: SKScene, SKPhysicsContactDelegate {
We now need to tell our scene’s physicsWorld20 that we want to listen for collisions. Add in the following line in sceneDidLoad(), below where we set up the world frame:
self.physicsWorld.contactDelegate = self
Then, we need to implement one of the SKPhysicsContactDelegate functions, didBegin(_ contact:). This will be called every time there is a collision that matches any of the contactTestBitMasks that we set up earlier. Add this code to the bottom of GameScene.swift:
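The didBegin(_ contact:) listing is not shown here. A sketch of the initial version described next, which zeroes the raindrop's collision bitmask on first contact, might be:

```swift
// Sketch: once a raindrop touches anything, disable further physical
// collisions so drops no longer stack up like Tetris pieces.
func didBegin(_ contact: SKPhysicsContact) {
    if contact.bodyA.categoryBitMask == RainDropCategory {
        contact.bodyA.node?.physicsBody?.collisionBitMask = 0
    } else if contact.bodyB.categoryBitMask == RainDropCategory {
        contact.bodyB.node?.physicsBody?.collisionBitMask = 0
    }
}
```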
Now, when a raindrop collides with the edge of any object, we’ll remove the collision bitmask of the raindrop. This prevents the raindrop from colliding with anything after the initial impact, which finally puts an end to our Tetris-like nightmare!
If there is a problem and the raindrops are not acting as in the GIF above, double-check that every categoryBitMask and contactTestBitMask is set up correctly. Also, note that the node count in the bottom-right corner of the scene will keep increasing. The raindrops are not piling up on the floor anymore, but they are not being removed from the game scene either. We will continue running into memory issues if we don't start culling.
In the didBegin(_ contact:) function, we need to add the delete behavior to cull the nodes. This function should be updated to the following:
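The updated listing is missing here. A sketch that keeps the collision-disabling behavior and adds the culling step (removing whichever node touched the world frame) might be:

```swift
// Sketch: in addition to disabling further collisions on first contact,
// delete any node that reaches the enlarged world frame.
func didBegin(_ contact: SKPhysicsContact) {
    if contact.bodyA.categoryBitMask == RainDropCategory {
        contact.bodyA.node?.physicsBody?.collisionBitMask = 0
    } else if contact.bodyB.categoryBitMask == RainDropCategory {
        contact.bodyB.node?.physicsBody?.collisionBitMask = 0
    }

    if contact.bodyA.categoryBitMask == WorldCategory {
        contact.bodyB.node?.removeAllActions()
        contact.bodyB.node?.removeFromParent()
    } else if contact.bodyB.categoryBitMask == WorldCategory {
        contact.bodyA.node?.removeAllActions()
        contact.bodyA.node?.removeFromParent()
    }
}
```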
Now, if we run our code, we will notice that the node counter will increase to about six nodes and will remain at that count. If this is true, then we are successfully culling off-screen nodes!
The background node has been very simple until now. It is just an SKPhysicsBody, which is one line. We need to upgrade it to make the app look a lot nicer. Initially, we would have used an SKSpriteNode, but that would have been a huge texture for such a simple background. Because the background will consist of exactly two colors, we can create two SKShapeNodes to act as the sky and the ground.
Navigate to BackgroundNode.swift and add the following code in the setup(size) function, below where we initialized the SKPhysicsBody.
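The listing is absent here. A sketch of the two rectangles, with placeholder colors and a ground height matching the floor line at 10% of the scene, might be:

```swift
// Sketch (colors and exact sizes are placeholders): a sky rectangle
// covering the scene at zPosition 0, and a ground rectangle reaching
// the floor line at zPosition 1, so it renders in front of the sky.
let skyNode = SKShapeNode(rect: CGRect(origin: CGPoint(), size: size))
skyNode.fillColor = SKColor(red: 0.38, green: 0.60, blue: 0.65, alpha: 1.0)
skyNode.strokeColor = SKColor.clear
skyNode.zPosition = 0

let groundSize = CGSize(width: size.width, height: size.height * 0.10)
let groundNode = SKShapeNode(rect: CGRect(origin: CGPoint(), size: groundSize))
groundNode.fillColor = SKColor(red: 0.99, green: 0.92, blue: 0.55, alpha: 1.0)
groundNode.strokeColor = SKColor.clear
groundNode.zPosition = 1

addChild(skyNode)
addChild(groundNode)
```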
In the code above, we've created two SKShapeNodes that are basic rectangles, but a new concern arises: draw order. We handle it with zPosition. Note that skyNode's zPosition is 0, while the ground's is 1; this way, the ground will always render in front of the sky. If you run the app now, you will see the rain drawn in front of the sky but behind the ground. This is not the behavior we want. Moving back to GameScene.swift, we can update the spawnRaindrop() function to set the zPosition of the raindrops so that they render in front of the ground. In spawnRaindrop(), below where we set the spawn position, add the following line:
raindrop.zPosition = 2
Run the code again, and the background should be drawn correctly.
Now that the rain is falling the way we want and the background is set up nicely, we can start adding some interaction. Create another file under the “Sprites” group, named UmbrellaSprite.swift. Add the following code for the initial version of the umbrella.
import SpriteKit

public class UmbrellaSprite : SKSpriteNode {

  public static func newInstance() -> UmbrellaSprite {
    let umbrella = UmbrellaSprite(imageNamed: "umbrella")

    return umbrella
  }
}
The umbrella will be a pretty basic object. Currently, we have a static function to create a new sprite node, but we will soon add a custom physics body to it. For the physics body, we could use the initializer init(texture: size:), as we did with the raindrop, to create a physics body from the texture itself. This would work just fine, but then we would have a physics body that wraps around the handle of the umbrella. If we have a body around the handle, the cat would get hung up on the umbrella, which would not make for a fun game. Instead, we will add an SKPhysicsBody built from a CGPath that we create in the static newInstance() function. Add the code below in UmbrellaSprite.swift, before we return the umbrella sprite in the newInstance() function.
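The code block itself is missing at this point, but it matches the corresponding path-building lines in the full UmbrellaSprite listing later in this lesson:

```swift
// Build a triangular physics body covering only the top of the
// umbrella, extended 30 points past each edge for a forgiving
// collision area.
let path = UIBezierPath()
path.move(to: CGPoint())
path.addLine(to: CGPoint(x: -umbrella.size.width / 2 - 30, y: 0))
path.addLine(to: CGPoint(x: 0, y: umbrella.size.height / 2))
path.addLine(to: CGPoint(x: umbrella.size.width / 2 + 30, y: 0))

umbrella.physicsBody = SKPhysicsBody(polygonFrom: path.cgPath)
umbrella.physicsBody?.isDynamic = false
umbrella.physicsBody?.restitution = 0.9
```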
We are creating a custom path for the umbrella’s SKPhysicsBody for two reasons. First, as mentioned, we only want the top part of the umbrella to have any collision. The second reason is so that we can be a little forgiving with the umbrella’s collision size.
The easy way to create a CGPath is to first create a UIBezierPath and append lines and points to create our basic shape. In the code above, we’ve created this UIBezierPath and moved the start point to the center of the sprite. The umbrellaSprite’s center point is 0,0 because our anchorPoint23 of the object is 0.5,0.5. Then, we add a line to the far-left side of the sprite and extend the line 30 points past the left edge.
Side note on usage of the word “point” in this context: A “point,” not to be confused with CGPoint or our anchorPoint, is a unit of measurement. A point may be 1 pixel on a non-Retina device, 2 pixels on a Retina device, and more depending on the pixel density of the device. Learn more about pixels and points on Fluid’s blog24.
Next, go to the top-center point of the sprite for the top edge, followed by the far-right side, and extend them the same 30 points out. We’re extending the edge of the physics body past the texture to give us more room to block raindrops, while maintaining the look of the sprite. When we add the polygon to SKPhysicsBody, it will close the path for us and give us a complete triangle. Then, set the umbrella’s physics to not be dynamic, so that it won’t be affected by gravity. The physics body that we drew will look like this:
Now make your way over to GameScene.swift to initialize the umbrella object and add it to the scene. At the top of the file and below our other class variables, add in this line:
private let umbrellaNode = UmbrellaSprite.newInstance()
Then, in sceneDidLoad(), beneath where we added backgroundNode to the scene, insert the following lines to add the umbrella to the center of the screen:
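The lines themselves are missing here. At this stage, before updatePosition(point:) is introduced, they would plausibly set the position directly:

```swift
// Sketch: place the umbrella in the middle of the scene and add it.
umbrellaNode.position = CGPoint(x: frame.midX, y: frame.midY)
addChild(umbrellaNode)
```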
We will update the umbrella to respond to touch. In GameScene.swift, look at the empty functions touchesBegan(_ touches:, with event:) and touchesMoved(_ touches:, with event:). This is where we will tell the umbrella where we’ve interacted with the game. If we set the position of the umbrella node in both of these functions based on one of the current touches, it will snap into place and teleport from one side of the screen to the other.
Another approach would be to set a destination in the UmbrellaSprite object, and when update(dt:) is called, we can move toward that location.
Yet a third approach would be to set SKActions to move the UmbrellaSprite on touchesBegan(_ touches:, with event:) or touchesMoved(_ touches:, with event:), but I would not recommend this. This would cause us to create and destroy these SKActions frequently and likely would not be performant.
We will choose the second option. Update the code in UmbrellaSprite to look like this:
import SpriteKit

public class UmbrellaSprite : SKSpriteNode {
  private var destination : CGPoint!
  private let easing : CGFloat = 0.1

  public static func newInstance() -> UmbrellaSprite {
    let umbrella = UmbrellaSprite(imageNamed: "umbrella")

    let path = UIBezierPath()
    path.move(to: CGPoint())
    path.addLine(to: CGPoint(x: -umbrella.size.width / 2 - 30, y: 0))
    path.addLine(to: CGPoint(x: 0, y: umbrella.size.height / 2))
    path.addLine(to: CGPoint(x: umbrella.size.width / 2 + 30, y: 0))

    umbrella.physicsBody = SKPhysicsBody(polygonFrom: path.cgPath)
    umbrella.physicsBody?.isDynamic = false
    umbrella.physicsBody?.restitution = 0.9

    return umbrella
  }

  public func updatePosition(point : CGPoint) {
    position = point
    destination = point
  }

  public func setDestination(destination : CGPoint) {
    self.destination = destination
  }

  public func update(deltaTime : TimeInterval) {
    let distance = sqrt(pow((destination.x - position.x), 2) + pow((destination.y - position.y), 2))

    if distance > 1 {
      let directionX = (destination.x - position.x)
      let directionY = (destination.y - position.y)

      position.x += directionX * easing
      position.y += directionY * easing
    } else {
      position = destination
    }
  }
}
A few things are happening here. The newInstance() function has been left untouched, but we've added two variables above it: destination, the point we want the umbrella to move towards, and easing, the fraction of the remaining distance to cover on each update. We've also added a setDestination(destination:) function, which sets the point the umbrella sprite will ease towards, and an updatePosition(point:) function.
The updatePosition(point:) function acts just as setting the position property directly did before this change, except that it updates the position and the destination at the same time. This way, the umbrella sprite will be placed at this point and stay there, because it is already at its destination, rather than easing towards it immediately after setup.
The setDestination(destination:) function only updates the destination property; we will perform our calculations off this property later. Finally, we added update(dt:) to compute how far we need to travel towards the destination point from our current position. We compute the distance between the two points and, if it is greater than one point, we compute how far to travel using the easing function. The easing function finds the direction the umbrella needs to travel in, and then moves the umbrella's position 10% of the distance to the destination on each axis. This way, we won't snap to the new location; instead, we will move faster when we are further from the point and slow down as the umbrella approaches its destination. If the distance is less than or equal to one point, we just jump to the final position. We do this because the easing function approaches the destination very slowly; instead of constantly updating, computing, and moving the umbrella an extremely short distance, we just set the position and forget about it.
Moving back to GameScene.swift, we should update our touchesBegan(_ touches: with event:) and touchesMoved(_ touches: with event:) functions to the following:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
  let touchPoint = touches.first?.location(in: self)

  if let point = touchPoint {
    umbrellaNode.setDestination(destination: point)
  }
}

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
  let touchPoint = touches.first?.location(in: self)

  if let point = touchPoint {
    umbrellaNode.setDestination(destination: point)
  }
}
Now our umbrella will respond to touch. In each function, we check to see whether the touch is valid. If it is, then we tell the umbrella to update its destination to the touch’s location. Now we need to modify the line in sceneDidLoad():
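The modified line is not shown here. Given the new API, it would swap the direct position assignment for updatePosition(point:):

```swift
// Set position and destination together so the umbrella starts at rest.
umbrellaNode.updatePosition(point: CGPoint(x: frame.midX, y: frame.midY))
```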
Thus, our initial position and destination will be set correctly. When we start the scene, we won’t see the umbrella move without us interacting with the app. Lastly, we need to tell the umbrella to update in our own update(currentTime:) function.
Add the following code near the end of our update(currentTime:) function:
umbrellaNode.update(deltaTime: dt)
When we run the code, we should be able to tap and drag around the screen, and the umbrella will follow our touching and dragging.
So, that’s lesson one! We’ve covered a ton of concepts today, jumping into the code base to get our feet wet, and then adding in a container node to hold our background and ground SKPhysicsBody. We also worked on spawning our raindrops at a constant interval, and had some interaction with the umbrella sprite. The source code for today is available on GitHub27.
How did you do? Does your code look almost exactly like mine? What changed? Did you update the code for the better? Was I not clear in explaining what to do? Let me know in the comments below.
Thank you for making it this far. Stay tuned for lesson two of RainCat!
We have great new technology available to enhance our websites. But while theoretical articles explain well what the technologies do, we often struggle to find real use cases or details on how things worked out in actual projects.
This week I stumbled across a couple of great posts that share exactly these precious real-life insights: stories about HTTP/2 implementation, experiences from using the Cascade of CSS in large-scale projects, and insights into employing Service Worker and BackgroundSync to build solid forms.
Michael Scharnagl explains how you can enhance a basic form9 (i.e. a login or comment form) with custom validation, AJAX requests, auto-expansion of a textarea, and, finally, Service Worker and BackgroundSync to store input when a connection is unstable.
I don’t want to write too much about this weird Black Friday thing that causes millions of people to buy stuff they’ll never use or need, but Jason Koebler has the best Black Friday deal ever: repair a gadget you already own13 instead of buying new stuff.
Bill Sourour’s article “The Code I’m Still Ashamed Of16” points out an important aspect of our jobs as developers: responsibility. An incredibly important story.
The resurgence of hand lettering, calligraphy, signage, penmanship, or really anything that is graphic and handmade is increasingly difficult to ignore. Along with letters drawn in any of the categories just mentioned, drawing, sketching, sketchnoting1, and any hybrid style (combinations of the above) have also been gaining attention among designers, illustrators, and other professionals. A quick look around social media or simply googling lettering2 will quickly show impressive and notable work.
Last year I deliberately started practicing brush lettering, meaning I set aside dedicated time to do exercises, write out words, and work on letterforms. In the process, I learned a few things I would like to share. These tips are not just about writing messages or repeating letters over and over. Rather, they cover methods and approaches that have helped me on my journey to improve my lettering work.
This is the first of two articles that aim to give you a good foundation: why lettering is more than just drawing pretty letters, and the principles behind the practice. I believe that knowing why we do things is important because it helps us create a mindful practice. With that in mind, we will start with a little background, context, and supplies. In the second part, we will move on to practical advice, how-to videos, and freebies. If you would like to see my work and how I have progressed, please visit my Instagram page3 or go to my blog4.
As I mentioned before, hand lettering, calligraphy, signage, and penmanship have experienced a resurgence among designers as well as non-designers. These graphic forms of expression have one common element: they are activities performed manually, through observation and, often, repetition. The acts of drawing, doodling, illustrating, writing and, by extension, drawing or illustrating letters are considered extremely beneficial for increasing memory and retention and for improving hand-eye coordination.
Recent studies (one referenced in Science Daily8 and the others referenced in The New York Times9) claim there is evidence that the act of drawing itself helps to boost memory and learning. These new findings are important as they show the act of drawing itself is what matters — not what is being drawn. Thus, one could infer that drawing letters would contribute to the benefits gathered by simply engaging in the act of creating them. It has also been noted that taking notes with the keyboard is not as beneficial in the learning process and retention as taking notes by hand.
“Handwriting seems to have lost some of its attraction over the last years. Nobody writes beautiful handwritten letters, and uses digital means of communication with smileys, abbreviations and standard lettering instead. And that’s a pity. Since handwriting is unique, it has a tremendous expressive power a standard lettering isn’t able to achieve.”
This 2013 article in The Washington Post, “Cursive Handwriting Is Disappearing From Public Schools12” by T. Rees Shapiro, echoes Friedman’s sentiment and explains how handwriting has been disappearing from the school curriculum in the United States, mainly due to two factors:
Technology has been slowly replacing handwriting.
The Common Core standards do not require students to learn cursive handwriting, leaving the decision to local school officials.
In contrast, the Spencerian Penmanship Theory book15, a book that focuses on teaching both the theory and practical lessons of handwriting, states in its Introductory Remarks:
“Writing is a secondary power of speech, and they who cannot write are in part dumb.”
Because there is still much debate about whether handwriting should be taught, at least in the United States, we may have young designers who never learned the specifics of writing by hand. These designers, if they become interested in lettering as an extension of typography, will find themselves looking for ways to learn it. And even those of us who did learn handwriting and find typography intriguing can be fascinated by lettering. Thus, I decided to learn and practice.
Engaging in deliberate practice was not new to me. Back in January 2010, I committed to do something creative every day inspired by Jad Limaco16‘s article Design Something Every Day17. Because Instagram had not been released yet — it was released in October 201018 — I documented all the work in a blog19 and on Twitter20.
A little over a year ago, I started practicing lettering and sometimes penmanship deliberately each day. If you ask me why, I could offer several reasons. Among those reasons: drawing letters has been something I practiced since I was young. Another: I am fascinated (ahem, “obsessed” would be the best term) with typographic forms and letters. I was one of those kids who could write their name on long stretches of notebook paper in three or four different styles. One of my hobbies was to try to imitate my father’s signature because it was not only beautiful but also complicated.
Though my affinity for letterforms has been a long-standing affair, as soon as I started to post my work on Instagram, it was clear that I did not have a full grasp of the basic concepts as I once thought. That said, Instagram is very encouraging and, while it may sound silly and childish, the likes are a good encouragement to keep going.
As they say, practice makes progress. Here are examples of my lettering in chronological order of my progress:
As we discussed, the teaching of cursive has been in decline in the education system. I believe it is precisely this lack of exposure during the formative school years that has driven the growth of lettering, calligraphy, and penmanship among designers. On any social media outlet (Twitter26, Pinterest27, Flickr28, Facebook29, and Instagram30), one can find evidence of its popularity. Instagram has become the preferred online space for calligraphers and letterers to share work. Searching for any of the terms on the list below will bring up plenty of examples.
Hashtags (each is a link to a page featuring the results of the hashtag search)
Rather than delving into a historical and background account of brush lettering and its relationship to its first cousins, calligraphy, penmanship, and signage, I thought we would discuss the tools, paper, the lessons I have learned, and my favorite letterers and calligraphers. However, before we do that, let’s go over some terms and parameters so that we are all on the same page.
Because the terms “calligraphy”, “lettering”, and “typography” are often used interchangeably, I think it would be beneficial to define each of these. For the most part, people are not obsessively trying to correct each other. Nonetheless, understanding the differences and similarities is very helpful.
“The word ‘calligraphy’ comes from the Greek words kallos, ‘beauty’, and graph, ‘writing’.”
Thus, calligraphy is an art, but it is the art of writing out the words. It has a close relationship to penmanship. The dictionary defines penmanship as the art or skill of writing by hand; a person’s handwriting. Calligraphy, on the other hand, is the art of producing decorative handwriting or lettering with a pen or brush. There are rules to follow and alphabets, called hands in calligraphy, to learn: Neuland49, Roman Capitals50, Italic51, Foundational52, Uncial53, Carolingian54, Gothic55, and Versals56. (Each term is linked to a Google Image search for your benefit). However, here are examples of some of the calligraphic hands:
When you practice one of these alphabets, or a calligraphic hand, it is crucial that you do them correctly. There are many technical nuances in the construction of any calligraphic alphabet to be mindful of. To me, calligraphy is the equivalent of classical ballet. Learn it well and you are versed in the principles and theory that open the key to more expressive styles. Not every letterer is a calligrapher but every calligrapher can be and is a letterer.
Mary Kate McDevitt69, in her book Hand Lettering Ledger70, defines lettering as the art of drawing letters. Lettering is the customization of letters. In other words, lettering allows the designer and artist to create a style that was not there before or to embellish an existing letterform beyond its original form. You can see many examples of lettering by doing an image search on Google or just by clicking here71.
In the book Typographic Design: Form and Communication75, Rob Carter76 states that typography and writing are magic because they create a “record of the spoken language.” Before the computer, letters (typography) were set to specific types or molds and arranged in the printing press. The letters were carved out of wood or metal and moved to create words, sentences, paragraphs, and pages. These letters needed to be functional, meaning that, because the letters belonged to a family, they needed to share many elements to look like they belong.
In this aspect, typeface design is similar to calligraphy. The letters in each font as in calligraphy had to look alike. Robert Bringhurst, in his book The Elements of Typographic Style80, states that letters need to be “set with affection, intelligence, knowledge, and skill.”
Now that we have these terms defined, let’s focus on brush lettering. You may be wondering why I make a point of differentiating between lettering (drawing letters) and brush lettering, as if they were not the same. Well, they are not. In my case, specifically, I don’t tend to create illustrative letters; perhaps I am a little too impatient for it. I admire and respect those who create illustrative lettering, like the work of Jason Roeder.
As its name says, brush lettering is lettering done with a brush. Though it has been around for a long time, it is still often seen as a very modern style of writing based on calligraphy. The width of the brush is not as important as the angle at which you hold it. You can use a flat, angled brush or a pointed brush. Keep in mind, though, that the size or width of the brush determines the size of the lettering.
The best pen to use is the one you have. Believe this. The specific pen helps, but it is your ability to use the pen correctly and your understanding of the basic principles of forming a letter that ultimately will make you a good letterer (brush or otherwise). That said, there are many pens out there. Some are inexpensive while others are very expensive. The best advice I can give you is to try as many pens as you are able. One thing to remember is, the larger the size of the tip of the brush pen, the larger the letters will be. Below are some of the best pens to get started with.
The broad tips of Crayola markers are a great way to get used to changing the pressure of the marker on the page. They are also very inexpensive.
At the beginning, it was easier for me to use Crayola markers than other brush pens. Why would I use these markers if I wanted to practice brush lettering? Here are a few reasons:
Crayola broad tip markers are great to use when you are just starting out because they can take a lot of abuse. Their tip is broad and sturdy. Thus, I did not have to be concerned with the tip. Brush pens are delicate and the tip may fray after a lot of use. Plus, while learning, replacing the Crayola markers is a lot cheaper than replacing a Tombow brush pen. I also used the Crayola broad tip markers to help me get the feel of the most basic principle: pressure when drawing a down stroke and very light pressure when going up. More on the pressure later. If you are like me, get a set of Crayola markers and be happy practicing.
Crayola broad tip markers have become popular to the point that there is now a term mixing calligraphy and Crayola. The term is Crayligraphy. The term was coined by Colin Tierny and he defines it as:
“The art of stylistically writing with a Crayola marker.”
Here is an example of what lettering looks like with Crayola markers.
Once I began getting used to the techniques, I decided to try these brush pens. They are very inexpensive and last for a while. The tip is very sturdy, so it is hard to fray, but one has to be careful about when and how to apply pressure. The brush tip is very flexible; thus, it may be hard to make the transition between the sturdiness of the Crayola broad tip markers and these brush pens. However, the good news is that replacing them will not be painful, as they are inexpensive. I find the color in them very rich and vibrant, and it flows very well.
These are the pens I have been using in all the videos for this article (you will be able to see them in part 2). These are a great next step from the Crayola markers or brush pens. Tombow Fudenosuke pens have a tip that feels like a brush but is actually plastic. The soft pen tip is very flexible, which allows you to experiment with the pressure and with bending the tip. Since the tip is not a brush and there are no brush hairs to speak of, the tips will not fray. The hard tip is very firm, and it really helps you be conscious of how much pressure you need to apply on each stroke to get the thicks and thins on paper. These are more expensive than the Crayola markers, but you can get them in sets. Plus, they last a long time.
Many brush letterers consider these pens the top of the line. Their tip is large compared to the Fudenosuke pens above, so these brush pens are more appropriate when lettering at a larger scale. They are very smooth and work best when used with marker paper or soft paper. However, I like texture, and sometimes I use them on paper with some tooth to it. This is where you need to be careful, because that can fray the tip of the brush.
You can buy these pens individually or in sets at several art supply stores.
These are my favorite brush pens. I even make excuses to use them. The brush tip is very smooth and glides on the paper even when the paper has some texture. Here are some advantages to these brush pens:
The body or barrel can be filled with water, ink, or simply liquid watercolors. When filling with ink or liquid watercolor, I found that using a dropper works best to fill the barrel. The liquid tends to create a bubble that you will need to burst, or you will have a mess of ink or color! I use a dropper and a paper towel to burst the bubble at the opening. Though this process can be slow, the good news is that the ink or liquid watercolor tends to last a long time. When filled with water, the brush pen will likely be used with watercolors; thus, it runs out of water a little faster. However, if I want to take my supplies with me when traveling (and I do take my supplies with me everywhere), these are great when using watercolors, as there is no need to change the water regularly.
These brush pens can be bought separately or in a set of three: fine, medium, and large. I rarely use the medium and large, but it is more cost effective to buy them as a set. I find that the fine tip allows me to do different sizes in my lettering depending on how I hold the brush and how much I bend it.
These brush pens are also one of my favorites. The tip is plastic and flexible like the Fudenosuke Soft above, but they come in sets of color and can be used for small to medium lettering. The color is very crisp, and it looks like watercolor depending on the paper you are using. You can also buy them in sets of six and twelve.
My best friend gave me these watercolor brush pens as a Christmas gift, and I must say they are divine. They have a very delicate tip, so I don’t use them to practice with. I use them to do final pieces or nice lettering for someone. The color is beautiful, really beautiful, and the tip just glides on the paper. These brush pens are a must-have if you want a piece with a more delicate and polished finish.
There are, of course, more brush pens available at different online stores. JetPens, for instance, offers two sets of brush pens: an assortment of different pens to try. This deal may be a good option before you commit to a brand or if you just want to explore.
One thing to remember about the brush pens: the size of the tip determines the size of the letters you will write. Keep that in mind when choosing pens for your work. I know I keep reminding you of this, but it is important.
These brush pens are both firm and flexible. They allow me to create drastic thick and thin strokes, making for a beautiful contrast effect. They are small, thin, and light to hold. However, they do not last as long as other brands.
This is my newest pen, and I love it. I can write really small and delicately, and the tip is firm but not so hard that it has no flexibility. The pen is sold in two parts, the body and the refill, and you can buy refills in a variety of colors. See how small I can write with it in the images below.
There are of course more brush pens available. However, the list offered to you here is a good place to start. When I started, I had no idea which pens were best, and the Crayola markers were my favorite for a long time. Now, I switch it around depending on the type of work I am doing. If you feel like you need to have one pen to start with, I would recommend the Tombow Fudenosuke soft and hard tip brush pens.
Let’s now discuss the second most important supply: paper.
There are many brush letterers and brush calligraphers, and each one of them will have a list of which papers are best. What I am listing here are the ones I have used and like.
I started out practicing on laser paper. It is an inexpensive way to get started, and some of it is thick enough to hold the moisture. Lindsey Bugbee from The Postman’s Knock recommends using Georgia Pacific 20 lb in one of her blog posts. I have found that laser paper, in both 24 lb and 32 lb weights, is very nice, and the brushes handle it very well. Please note that if you use your brush pens on laser paper, your pens will eventually fray. For practice purposes, laser paper is good, but use the least expensive brushes on it. Also, remember that the cheapest writing tool you can practice with is a pencil.
If you plan to keep your practice sheets, it is best to use archival quality paper. For final pieces, it is best to find paper that is not only archival but also smooth. The smooth type of paper will be very gentle on your brush pen markers. Amanda Arneilla has a great post on her site about paper that you may want to check out!
Many brush letterers and brush calligraphers prefer Rhodia pads. I have a pad with a quad-grid, and I love its texture. It’s very smooth, and the brush pen glides on it. The only drawback is that it can be a tad expensive, or not as cost effective as others.
A few years ago, I took calligraphy classes and my teacher, Clint Voris, had me practice on a large (11″ x 14″) drawing paper pad. The brand he recommended was Strathmore. The ink did not bleed through the paper, and I have found that some drawing papers, as long as they are smooth, can work well for brush pens. However, I would use the plastic and synthetic brush tips with this paper. As the Tombow markers are delicate, I would only use them on either the Rhodia pads or smooth laser paper.
This is my favorite paper as I do a lot of watercolor lettering and it comes in pads and large rolls. However, for some reason, the paper in the roll is lighter than in the pads. Regardless, it is very thick and holds a lot of moisture. Since this paper is intended for both dry and wet media, it works very well with the plastic and synthetic brushes due to its soft texture. The stroke you make with the brush will reflect that (see picture below). I don’t mind a little bit of texture; in fact, I find it gives the lettering a very rich feel. This pad is also great if you are experimenting with watercolor lettering or watercolor backgrounds.
These pads are not expensive, and for practice purposes I use both sides of the paper. The best news? If your practice turns out well, you may be able to sell it, as this is good paper!
As with the Canson and Strathmore, I don’t use the Tombow dual brush pens on this paper. Even the soft texture of this paper would fray the Tombow brushes a little. However, if you do like some texture, you can use the frayed brush tips to give you that feel of texture. Below are two examples of lettering done on this paper.
Since I teach design at a university, I have access to lots and lots of discarded pieces of printmaking paper. You may find this paper strange because it has a lot of texture. That is true, but it also holds a lot of moisture. Sometimes I use my brush pens, such as the Crayola brush pens, on it, and I get great results. The texture of the paper makes it even better for my purposes. I love to see how the ink and the watercolor interact with it. See this image below.
Other Items You Will Need But Probably Have Around
I can’t stress how crucial it is to plan out your letters before marking the paper with ink. Once there is ink, well, it will not be erased. Plus, sketching out or planning where you want your letters will help you get better at understanding proportions: how large the letters should be in relation to the size of the sheet you are using. If you use soft pencils, you can practice creating thick and thin strokes in your letters. Any soft pencil will do but among some brush letterers the Palomino Blackwing Pencils are a favorite.
I will be honest and confess that I dislike drawing straight lines because, for some reason, they always end up on a slight diagonal. But better some lines than none, right? As you get better, your sense of letter placement will improve, and you will rely less on the ruler. But please, at least use one to create a few guidelines to start. I will say that if you practice on grid or lined paper, you will get used to writing on a line. After a while, even when no line exists, you may notice that you are able to write fairly straight.
Not much explanation is needed here, except this: make sure to use plastic erasers. They are usually white and are not abrasive on the paper. These erasers are so good that they can even fade dried watercolor.
We have discussed brush lettering, its definition, its context, and the supplies you will need to get started. In the second part of this article, we will discuss the practice itself. Plus, there will be some videos and freebies. Stay tuned!
Chatbot fever has infected Silicon Valley. The leaders of virtually every tech giant — including Facebook, Google, Amazon and Apple — proclaim chatbots as the new websites, and messaging platforms as the new browsers. “You should message a business just the way you would message a friend,” declared Mark Zuckerberg when he launched the Facebook Messenger Platform for bots. He and the rest of the tech world are convinced that conversation is the future of business.
But is chatting actually good for bots? Early user reviews of chatbots suggest not. Gizmodo writer Darren Orf describes Facebook’s chatbot user experiences as “frustrating and useless” and compares using them to “trying to talk politics with a toddler.” His criticisms are not unfair.
Here’s an example of a “conversation” I had with the 1–800-Flowers Messenger bot after I became stuck in a nested menu and was unable to return to the main menu. Not exactly a pleasant or productive user experience.
Designers who are new to conversational interfaces often have the misconception that chatbots must “chat.” At the same time, they underestimate the extraordinary writing skill, technical investment and continual iteration required to implement an excellent conversational user experience (UX).
This article explores when conversation benefits and when conversation hurts the chatbot user experience. We’ll walk through case studies for both sides of the argument and compare divergent opinions from Ted Livingston, CEO of Kik, who advises bot makers to deprioritize open-ended chat, and Steve Worswick, the creator of “the most human chatbot,” who encourages developers to invest in truly conversational experiences.
As you’ll see from the examples below, both strategies can lead to successful chatbot experiences. The key is to choose the right level of conversational ability for your bot given your business goals, team capabilities and user needs.
Steve Worswick is the developer behind Mitsuku, one of the world’s most popular chatbots. Mitsuku has twice won the Loebner Prize, an artificial intelligence award given to the “most human-like chatbot.” The popular chatbot has conversed with more than 5 million users and processed over 150 million total interactions. 80% of Mitsuku’s users come back for more chats.
The longest a user has chatted with Mitsuku is nine hours in a single day — a testament to the bot’s extraordinary conversational abilities. Mitsuku does not help you find makeup products, buy flowers or perform any functional utility. The chatbot’s sole purpose is to provide entertainment and companionship. You won’t be surprised to find out that Worswick thinks “chatbots should be about the chat.”
Building a conversational chatbot that isn’t awful is extremely hard. Worswick nearly gave up many times when Mitsuku repeatedly gave unsatisfactory answers and users called her “stupid.” One major breakthrough occurred when Worswick programmed in a massive database with thousands of common objects such as “chair,” “tree” and “cinema,” along with their relationships and attributes.
Suddenly, Mitsuku could give sensible answers to strange user questions, such as, “Is a snail slower than a train?” or “Can you eat a tree?” According to Worswick, “Let’s say a user asks Mitsuku if a banana is larger than X, but she doesn’t recognize what X is. She knows that a banana is a relatively small object so can deduce that X is probably larger.”
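The idea behind such a database can be sketched in a few lines of JavaScript. This is purely illustrative (Mitsuku's actual knowledge base is far larger and built differently), but it shows how stored attributes, such as relative size, let a bot answer a comparison sensibly even when one of the objects is unknown:

```javascript
// Illustrative toy version of an object database with relative-size
// attributes (1 = tiny ... 10 = huge). Entries and values are made up.
const objects = {
  snail:  { size: 1, edible: true },
  banana: { size: 2, edible: true },
  chair:  { size: 4, edible: false },
  tree:   { size: 8, edible: false },
};

function isLargerThan(a, b) {
  const objA = objects[a];
  const objB = objects[b];
  if (objA && objB) {
    // Both objects known: compare directly.
    return objA.size > objB.size ? "Yes." : "No.";
  }
  if (objA) {
    // Unknown comparison object: deduce from the known object's size,
    // as Worswick describes for the banana example.
    return objA.size <= 3
      ? `Probably not, a ${a} is a fairly small thing.`
      : `Probably, a ${a} is a fairly large thing.`;
  }
  return `I don't know what a ${a} is.`;
}
```

So `isLargerThan("tree", "banana")` answers with certainty, while `isLargerThan("banana", "skyscraper")` falls back to a size-based guess instead of failing outright.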
Even if a chatbot is utilitarian, providing spontaneous answers in a conversation — especially if unexpected — can delight and engage users. Poncho is a Messenger bot that gives you basic weather reports, but the creators gave the bot the personality of a Brooklyn cat. Poncho can conduct small talk and even recognizes other cats. “Weather is boring,” admits Poncho founder Kuan Huang. “We make it awesome.”
When You Should Add Conversation To Delight Users
Making a bot conversational takes tremendous effort, but if you are up to the challenge, here are the top situations in which conversation could distinguish your chatbot from competitors’ and truly delight users.
If You Need to Differentiate From Competition
As seen earlier, Poncho’s conversational personality distinguishes the chatty weather cat from boring, routine weather apps. Bots launch at a more rapid pace than mobile apps due to the lower technical barriers to entry. Dozens of bots already exist to service identical use cases, so winners need to stand out with a superior conversational UX.
Just like weather apps, public transit apps are soulless and boring. We use them out of necessity and not delight. Enter Bus Uncle, a bot that can tell you anything you want to know about the Singaporean bus system in his quirky, broken English and suggest funny things to do while you wait.
Comprehensive, detailed guides and maps for the bus system exist on the Internet to help expats and locals find their way home, but Bus Uncle’s conversational interface both simplifies and adds joy to a routine task.
Beware that the bot is not all fun and games. Like any proper Asian uncle, Bus Uncle stays in character by occasionally forcing you to solve math problems.
E-commerce is a challenging space for bots due to product diversity and language variability. Many conversational shopping bots malfunction when users use unrecognized vocabulary or suddenly switch contexts. Such failures are usually technical in nature, where a bot simply doesn’t have the requisite data set or intelligence to handle the edge input.
ShopBot from eBay avoids common e-commerce bot UX failures by combining limited option menus with the ability to handle unexpected user input. While many shopping bots hem users into a narrow series of menus, ShopBot was able to quickly adapt when I switched from shopping for jeans to shopping for blouses.
Shopping is a difficult use case for chatbots to master. Superior conversational experiences in e-commerce bots are a function not just of great copy, but of powerful technologies that process natural language, keep track of shoppers’ contexts and preferences, and anticipate diverse needs accurately.
RJ Pittman, chief product officer at eBay, explains, “Shoppers have complex needs, which are often not fully met by traditional search engines. The science of AI provides contextual understanding, predictive modeling, and machine learning abilities. Combining AI with eBay’s breadth of inventory and unique selection will enable us to create a radically better and more personal shopping experience.”
Chatting is an intimate act we do with close friends and family, which is why chatting with a “brand” is often an awkward and strange experience. Strong conversational skills in a chatbot can overcome this barrier and establish an authentic connection.
Maintaining a consistent and compelling brand voice in chatbots is not easy. PullString, a conversational AI platform founded by ex-Pixar CTO Oren Jacob, employs an entire department of expert Hollywood screenwriters to bring brands like Mattel’s Barbie and Activision’s Call of Duty to life.
Its demo chatbot, Jessie Humani, is powered by over 3,500 lines of carefully selected dialog to create the impression that she’s your messed-up millennial friend who can’t get her life together without your help.
Many bot industry experts believe the word “chatbot” sets the wrong expectation among users that bots should have human-level conversational abilities. The hard reality is that natural-language processing and artificial intelligence still have much progress to make before bots will impress you with their gift of gab.
Ted Livingston, CEO of Kik, a popular messaging platform with a thriving bot store, is squarely on the side of no chatting. “The biggest misconception is that bots need to be about ‘chat.’ What we discovered is that bots that don’t have suggested responses simply don’t work. Users don’t know what to do with an empty input field and a blinking cursor,” he shared at a recent bot conference.
Kik started building a conversational platform two years ago, long before bots suddenly became cool. In the beginning, its bots allowed freeform responses the same way Facebook Messenger bots do now. What resulted was user confusion and error, as well as complaints from developers about having to deal with the unnecessary complexity of processing open-ended conversation. Kik now restricts user responses to a limited set of predefined options and intentionally makes typing freeform text difficult.
For example, when Sephora’s Kik bot asks what type of beauty products a user would like to see, the bot follows the question with a menu of suggested responses to choose from. A user has to go out of their way to tap “Tap a message” in order to type normally.
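The suggested-response pattern is simple to sketch. The menu items and function names below are illustrative, not Kik's or Sephora's actual API; the point is that the bot validates every reply against a fixed set of options and re-prompts instead of trying to parse free text:

```javascript
// Illustrative sketch of a suggested-response menu. The options and
// wording are invented, not taken from any real bot platform.
const menu = ["Makeup", "Skincare", "Fragrance"];

function handleReply(userText) {
  if (menu.includes(userText)) {
    // Recognized menu choice: proceed with a deterministic response.
    return `Great, here are our top ${userText.toLowerCase()} picks!`;
  }
  // Free-typed text we don't recognize: re-prompt with the menu
  // rather than attempting natural-language parsing.
  return `Sorry, I didn't catch that. Please choose one: ${menu.join(", ")}`;
}
```

Because every valid input is known in advance, there is no NLP to get wrong, which is exactly the trade-off Livingston argues for.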
When You Should Restrict Chat For A Better UX
There are many cases in which designers of chatbots should restrict conversation to provide a superior experience. Below are a few common situations in which letting users type freeform conversational text complicates development and decreases your bot’s usability.
If User Error Would Lead to a Failed Transaction
1–800-Flower’s bot for Facebook Messenger originally gave users three options for flower delivery dates: “Today,” “Tomorrow” or “Choose another date.” The third option allowed users to type in dates freeform, which often resulted in error, confusion and an abandoned or failed transaction.
By removing the third option for users to type in a date manually, 1–800-Flowers actually increased the number of transactions and overall customer satisfaction. Restricting conversation helped it focus on its most important users, the ones who want to send flowers urgently.
Chatbots should give users the key advantage of completing tasks with fewer taps and context switches than regular mobile apps. Enabling open-ended chat can undermine this simplicity and add development complexity related to handling variable input.
An example is the simple meditation bot Peaceful Habit for Amazon Echo and Facebook Messenger. The bot is designed to help regular meditators build a daily practice and should be quicker to use than meditation apps.
On the Amazon Echo, a user can start a 5-, 10- or 20-minute meditation completely hands-free, with voice alone. On Facebook Messenger, the bot sends a daily reminder with limited user options, so only a single tap is required to start a meditation practice.
Many user requests appear simple on the surface but are extremely complex to handle in an open-ended conversational interface due to variability of vocabulary, grammatical structures and cultural norms. For example, a user can ask to schedule a meeting by asking any of the following questions:
When’s Bob’s next open time slot?
Let me know the next three times Bob can chat.
Is Bob available at 4 PM PST today?
It turns out that handling seemingly simple meeting requests requires powerful artificial-intelligence capabilities. Several well-funded companies have emerged just to solve narrow scheduling challenges with specialized technology.
When you consider more complex requests, such as asking for restaurant recommendations, limiting conversations often means less confusion for both your bot and your user. Sure, a bot that offers local restaurant recommendations, asks users to type in what they are craving, but it often can’t understand the responses.
By contrast, a similar bot named OrderNow finds local restaurants that deliver and offers a limited menu of cuisines to choose from.
These examples demonstrate that complex artificial intelligence, machine learning or natural-language processing is not required to create a great user experience using a chatbot. As Ted Livingston, CEO of Kik, warns, “AI is not the killer app for bots. In fact, AI holds most bots back. Bots are just a better way to deliver a software experience. They should do one thing really well.”
How “chatty” your chatbot should be will depend on your users’ mental models of chatbots and the goals and needs your chatbot fulfills for them. Bots on Kik that only offer limited responses can be just as successful and engaging as Mitsuku and Jessie Humani.
Problems occur when designers do not decide up front who their audience is, how the chatbot fits into their business or brand strategy, what domains the chatbot will and will not cover, and what a successful experience should look like.
When you are deciding how much “conversation” to design into your chatbot experience and are defining the right level of engagement, answer the following questions:
How are you setting user expectations?
If you brand your chatbot as a character or a human replacement, users will expect a minimum level of conversational ability. If your bot’s functionality is utilitarian or limited, then guide conversations towards specific outcomes.
Is your chatbot utilitarian or entertainment-driven?
Mitsuku is an artificial-intelligence companion, so she’s required to master the art of conversation. On the other hand, a Slackbot that performs SQL queries or pulls CRM data has no need to support chat.
Does your chatbot reflect your brand’s voice?
Major brands such as Disney and Universal Studios use chatbots to engage audiences beyond simple ad clicks and video views. A chatbot working as a brand ambassador needs to authentically reflect the domain and voice of the company it represents.
Is your chatbot a familiar service or product?
Businesses such as 1–800-Flowers and Domino’s Pizza already have millions of buyers who use their websites, mobile apps and phone numbers to order products. Users who already know what you offer and what they like won’t require as much explanation and hand-holding.
Does your chatbot need to differentiate itself in a competitive market?
Weather apps are a dime a dozen. Poncho the Weather Cat differentiates itself by having a distinct personality and delightful reactions, making the bot stand out against other weather services.
How strong is your technical team and AI platform?
Building an adaptable and user-friendly conversational AI is incredibly challenging. Worswick invested over a decade to make Mitsuku the award-winning chatbot she is today. Each conversational AI platform has strengths and weaknesses that will affect your chatbot’s UX.
How strong is your writing team?
In the world of bots, writers are the new designers. Do your writers understand how to write engaging, emotional copy that draws users in? Bots reflect the communication skills of their makers.
As natural-language understanding, machine learning and artificial intelligence improve, chatbots will inevitably become smarter and more capable in interactions with humans.
For now, just be sure that your bot either sticks with utilitarian offerings or stays within a comfortable zone of conversational topics. Take a cue from how Mitsuku gracefully avoids confrontation by excusing herself from a potentially awkward political conversation.
When I was a developer, I often had a hundred questions when building websites from wireframes that I had received. Some of those questions were:
How will this design scale when I shrink the browser window?
What happens when this form is filled out incorrectly?
What are the options in this sorting filter, and what do they do?
These types of questions led me to miss numerous deadlines, and I wasted time and energy in back-and-forth communication. Sadly, this situation could have been avoided if the wireframes had provided enough detail.
Now that I am a UX designer, I notice that some designers tend to forget that wireframes are equally creative and technical. We are responsible for designing great ideas, but we are also responsible for creating product specifications. I admit that there can be so many details to remember that it’s easy to lose track. To save time and energy for myself, I gathered all of my years of wireframing knowledge into a single checklist that I refer to throughout the process. And now I am sharing this knowledge with you, so that you can get back to being creative.
If you’re starting fresh on a wireframe project, I recommend going through each section’s guidelines like a checklist. If not, feel free to jump to the appropriate section.
Note: These guidelines are more appropriate for wireframes — and prototypes, to an extent — during a production life cycle (i.e. preparing your product to be built). By this point, the main idea has been established, the features have been fairly locked down, and the core layouts are not going to dramatically change. That being said, some guidelines, like in the first section, can be used for conceptual wireframes in a discovery phase. Just be wary of getting too detailed, because your designs might drastically change, and you will have wasted time.
Finally, onto the guide!
Decisions To Consider Before Wireframing
These guidelines ensure that you ask the difficult but crucial questions early on and establish a foundation for a smooth wireframing process.
Usually, the stakeholders will dictate which devices to support, but sometimes you will have a reasonable suspicion that it will be cross-platform or responsive. Other times, you might have to select the best devices to use with your product. In either case, here’s a list of reference devices:
desktop browsers
mobile website (browser)
native mobile app: Android, iOS, Windows
tablet
smartwatch: Android Wear, watchOS (iOS), WebOS, Band (Microsoft)
smart TV: Android TV, Tizen OS (Samsung), Firefox OS (Panasonic), WebOS (LG)
console gaming: Xbox One, Playstation 4, Wii U, Steam
Confirm devices expected to be supported. More often than not, responsive mobile wireframes are requested later in the process or are suddenly included by the client. These late requests can cause massive design reworking when you have already started on the wireframes. So, ensure the following:
Align with your team’s expectations.
Make sure you have client approval in writing, or refer to a statement of work.
While they are not devices, content management systems are frequently forgotten.
Match features to context of use. While composing a review for a product would work in a desktop browser, it might be seldom done on a mobile phone. Review your features one by one and determine whether your users will get added value from each device. If a feature does not match the user’s goals, either change it to be more device-specific or cut it out completely for the device in question. Reducing the design will save you time and money.
Verify design differences on multiple devices. A common scenario is building a desktop application with a mobile and tablet component. Discuss the following with your team and stakeholders:
Will the layout be identical (responsive) or completely separate?
Are features shared across all experiences, or will there be subsets?
Which operating systems are being supported? For instance, Android and iOS apps follow different design patterns.
Maintain documentation on differences in layouts and features, so that you can easily design for them later in your wireframes.
Consider implications of screen orientation. For mobile and tablet, designs can appear different in portrait and landscape modes.
Left alone, a landscape layout will appear like the portrait version but with the width stretched out and less vertical viewing space.
Screens can be locked to an orientation. Tread carefully around design layouts that are exclusive to portrait or landscape, because you are forcing users to work a certain way. If in doubt, conduct a usability test.
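On the web, orientation differences like these can be expressed with CSS media queries. A minimal sketch, assuming a hypothetical .gallery component:

```css
.gallery { display: grid; }

/* More columns when there is horizontal room to spare. */
@media (orientation: landscape) {
  .gallery { grid-template-columns: repeat(4, 1fr); }
}

/* Fewer columns in the narrower portrait mode. */
@media (orientation: portrait) {
  .gallery { grid-template-columns: repeat(2, 1fr); }
}
```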
A layout and its UI elements can scale in a variety of ways when the window’s size changes. The most popular scaling patterns are:
fixed or static
Remains the same no matter what.
fluid or liquid
Incrementally shrinks or stretches with each pixel.
adaptive
Layout changes at certain breakpoints in the window’s width.
responsive
Follows a mix of fluid and adaptive behavior.
Liquidapsive is an interactive tool for visualizing how each scaling pattern affects the layout when the window’s size changes.
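For web layouts, the four patterns can be sketched in CSS roughly as follows (class names and breakpoint values are hypothetical):

```css
/* Fixed: one width, no matter the window size. */
.layout-fixed { width: 960px; }

/* Fluid: stretches or shrinks with every pixel. */
.layout-fluid { width: 80%; }

/* Adaptive: jumps between fixed widths at breakpoints. */
.layout-adaptive { width: 960px; }
@media (max-width: 1023px) { .layout-adaptive { width: 760px; } }
@media (max-width: 767px)  { .layout-adaptive { width: 320px; } }

/* Responsive: fluid between breakpoints, adapting at each one. */
.layout-responsive { width: 90%; max-width: 960px; }
@media (max-width: 767px) { .layout-responsive { width: 100%; } }
```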
Get team consensus. First, discuss the best approach with your team, because it will affect the time required from you, the developers and the visual designers. Users expect most websites to have desktop and mobile versions, although they don’t always have to share the same set of features.
Align with your developer on mechanics. Consult with your developers on how the design should scale. Also, discuss breakpoints and the scaling behavior for layouts and UI elements that you are unsure of.
Prepare for many sets of wireframes. Responsive and adaptive layouts will need many sets of screens; so, let your project managers know it will require extra time. I usually add an extra 50 to 75% of the total time for each set.
Establishing a default screen size is important for everyone on your team because they will need to know what to develop and design for. You also want to avoid accidentally starting too large and then realizing you need to shrink it down.
Start small and prepare for a large size. 1024 × 768 pixels for desktop and tablet and 320 × 480 for mobile are generally safe resolutions to work with. Anything higher can be risky to begin with. It is prudent to start with a low resolution and then scale up so that the design still looks adequate when the window is larger.
Additionally, consider that many people do not maximize their browser window, so their view of the layout will be even smaller.
Use analytics to guide your decision. Analytics for screen resolutions can reveal interesting trends, which can help you make an informed decision. You might need the data to convince your client of the resolution you are pushing for as well. Remember that it’s better to be inclusive than exclusive.
Going for that larger, prettier resolution might cut off a chunk of your audience. To help with your analysis, here are some scenarios I have seen before:
The most popular resolution can sometimes be safe if it shows a trend of continued growth over many years, while the lower resolutions are declining.
A group of small resolutions with a low percentage of use could add up to a sizeable population — in which case, choose the lowest resolution.
For a large variation in resolutions, go with the lowest one.
If at least 5% of users have a low resolution, such as 1024 × 768, I would err on the side of caution and select that.
Viewing trends for the past year is sometimes better than viewing trends since the beginning, because the latter could include resolutions that are no longer relevant. Sanitize your data by removing old data and mobile and tablet resolutions — trends will be easier to identify.
Know your audience and usage environment. Although 1366 × 768 is the most common resolution for the desktop (as of May 2016, according to StatCounter and W3Counter), think about your audience’s context of use. For instance, 1024 × 768 makes sense if you’re dealing with users who are on old computers and can’t change their equipment (usually corporate environments).
Be clear on your definitions of fidelity because it can mean different things to different people. Below is how I define wireframe fidelity:
Low fidelity has these elements:
Focuses on layout and high-level interactions and concepts.
UI elements and content can be represented as boxes or lines, with or without label descriptions.
Gray-scale.
Can be paper sketch.
High fidelity has these elements:
Emphasizes visual aesthetics and branding, such as tone, colors, graphics and font style.
Can include realistic images and copy.
UI elements look realistic and might include aesthetic touches such as textures and shadows.
Sometimes known as a mockup.
Medium fidelity has these elements:
Varies between low and high fidelity.
More realistic UI elements, but not styled.
Filler images and copy.
Gray-scale.
Has some visual design (such as hierarchical typography).
Indulge your stakeholders. Observe your stakeholders’ behavior to figure out what makes them happy, or have a discussion with them to determine what they expect.
Are they the type to home in on minor irrelevant details? Start with low fidelity, and iterate on that until they are satisfied. Then, move up to medium or high fidelity. Gradually building fidelity over multiple check-ins will make approvals easier, while saving precious design time.
Are they the type to react to something flashy? Go with high fidelity, but create only a couple at first, so that the stakeholders get an idea of what to expect.
If in doubt, medium fidelity is always a safe bet, although it takes a bit more time.
Confirm the expectations of fidelity. Whichever fidelity you decide on, make sure your team and stakeholders are on the same page — consider giving examples to be sure. It is scary to walk into a meeting with low-fidelity designs when “low fidelity” in people’s minds is actually “medium fidelity.”
When used in a wireframe tool, a grid system saves you time: snap-to-grid features keep your UI aligned and your designs looking polished.
Using a grid system will also help you maintain consistency in the layout and balance the design’s hierarchy. The article “All About Grid Systems” does a great job of briefly explaining the theory behind grid systems and the key advantages to using them.
Know your alignment and spacing features. I achieve pixel-perfection by using the alignment and space-distribution features found in most wireframe tools. Learn to use the most common keyboard shortcuts, such as centering, space distribution, grouping and ungrouping. It will save you from having to manually polish your layouts.
Perform visual checks. Your visual designer and developer will end up doing most of the actual pixel-pushing, but you’ll want to get it as close as possible; a sloppy design will lose the trust of your user. The key to saving time and cleaning up a design is to stand back a bit from the design and see if any misalignments are visible to the naked eye.
Now that you have checked off all of the pre-wireframing activities, you can start designing the actual wireframes. The next section will focus on polishing the design.
Detailing The Design Elements
Once you have completed the bulk of your main designs, it’s time to include all of the cumbersome details. Go through each of your screens and use the guidelines below as a checklist to fill in anything missing.
For any action that a user executes, there should always be feedback to let them know what happened or what the next step is.
Validate forms. Forms will usually have one of these validation responses:
Invalid syntax: For email addresses, phone numbers, area codes, URLs and credit-card numbers.
Incorrect login: Include a message for the username or password being incorrect, but also include one for both being wrong.
Empty form fields.
Required check boxes: For agreeing to terms of service.
Age requirement: Dropdown to fill in birth date.
Intermediary messages and modals. Many user actions will generate some form of UI to inform the user of what is happening or to ask them to do something. These can include:
confirmation
warning or alert
success
failed action
error
long-term progress indicator: This is when the user might have to wait for a long behind-the-scenes process to complete (even days long).
progress indicator or bar: Include states for zero progress, in progress, completion and failure.
spinner: For indeterminate time estimations or short operations.
Write user-friendly messages. No one likes to read an obscure error message like “Exception 0xc000000933” or a generic one like “Login error.” And unless you have a technical writer on the team, you are actually the most qualified person to write user-friendly messaging.
For consistency, I follow these criteria when writing messages:
Include the specific problem if possible.
Mention the next step to resolve the issue.
Keep it concise: about one to two sentences.
Explain in colloquial terms.
Avoid technical jargon.
Maintain a positive and apologetic tone. Pretend you’re a down-to-earth customer service rep.
Enhance with the brand’s tone, if there is one.
Implement supportive form-input techniques. Your developer will know clever tricks to show users error messages immediately or to support users with input fields. Below is a list of common techniques to think about:
“Remember me” checkbox for logging in.
Links for forgotten password or username. Don’t forget the extra screens such as for “Send a reset password to email” and confirmation that the reset was successfully sent.
Suggested results upon typing keywords in a search.
Validating each keyboard character entered.
Character limit, with a counter or some visual representation when reaching a character limit.
Dropdown lists that auto-filter items when typing. Select2 is a great plugin for combo boxes.
Letting a user know when they have entered an old password.
A checklist for password strength or password syntax that marks off syntax requirements as the user types.
Walk through each of your UI components, and design for best- and worst-case scenarios, as well as any other potential states.
Dropdown lists. While the closed state will already be in your design, include the expanded state in your annotations or on a separate page. Do you have any of the following types of dropdowns?
navigation menu
combo box (states, date, age)
filter
sorting
context menu
Titles, labels and names. Account for all cases in which the text is exceptionally long in a confined space and can’t break to a new line (for example, in a table). Try to shorten the text as below, and then include the full-length text in a tooltip.
Ellipsis: “Deoxyribo…”
Truncation: “Deoxy”
Abbreviations: “DNA”
Names: “G. Washington” or “George W.”
Numbers: “100K,” “50M,” “140+”
Dynamic content. Your developers will need to know how content population scales or behaves in different situations:
Pagination: search result listings, forums, news articles
Scrolling and loading content
Brief paragraph text or tags
Galleries: image, video and banner carousels
Trees: nested file or folder directories
Comments and posts
These states are commonly associated with dynamic content:
First, middle and last pages: Pagination controls can sometimes appear different depending on where the user is, and final pages can include a “next step” action to fill in empty space.
Standard content length: This is default.
Empty or unpopulated content: Empty space can look awkward, so consider a friendly message or a subtle background image to fill the void.
Overflowing content: Try using “More…” links or a form of truncation.
Nested posts: Stack Exchange has a forum thread with a variety of suggestions for nesting infinite posts.
Icons with numbers. Icons sometimes include values such as how many notifications or emails the user has. You will want to include states for the following:
zero items
typical items: double or triple digits
large numbers: depending on the design, it could go up to hundreds or thousands, but indicating “100K” or “2000+” is also acceptable.
Form fields. Forms will typically have a few states:
enabled: This is the default.
disabled: For fields that appear unusable in certain conditions.
corrected: Optionally have a different style for fields whose validation error has been corrected.
validation error: Some designers place error messages adjacent to the relevant field and/or provide an error list.
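If the project is web-based, these form-field states often map directly onto CSS. A sketch, with hypothetical class names for the states that have no native pseudo-class:

```css
input:disabled { background: #eee; color: #999; }      /* disabled */
input.has-error { border-color: #cc0000; }             /* validation error */
input.is-corrected { border-color: #22aa77; }          /* error has been fixed */
.field-error-message { color: #cc0000; font-size: 0.875em; }
```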
Tooltips. Consider adding a tooltip (a brief one to two sentences) for any interactions that fit these criteria:
New or uncommon interaction pattern: Helpful text for interactions that users wouldn’t intuitively understand at first glance. Think of different design solutions before using this as a fallback.
Advanced areas and actions: Common with programs such as Photoshop and Word, in which advanced functionality improves the user’s productivity or work quality.
Less frequent actions: Actions that rarely occur will need help text to jog the user’s memory.
Keyboard shortcuts.
Shortened content: Abbreviated or truncated text that needs a fully written version.
Content preview: Text or images that include previews of content. This is common with listings where users would want to peek at the content before committing to an action that would take them off the page.
Reference information: Extra details that quickly support the user with a task (for example, the security number on a credit card).
File uploads. When the user uploads a file in the browser, there are a few states to design for:
File upload dialog: Usually provided by default in the OS or browser, but include your custom design if you have one.
Upload progress bar.
Upload completion bar: For having reached 100%.
Canceling an in-progress upload: Canceling an upload in progress can take time.
Success: Appears after the progress bar goes away. Include the file name that was uploaded and a way to remove the file.
Remove confirmation: Confirmation to remove the uploaded file.
There is typically one way your users will physically interact with your UI, but there can also be multiple gestures per interaction, as with a responsive design or buttons with keyboard shortcuts. For each user action, look for the other gestures below and note the differences in your annotations later.
click
double-click
right-click
swipe or flick
pinch and spread
press
hover
drag and drop
keyboard input
keyboard shortcut
link to external website
link to default application: email, phone number, map.
Be sensitive to cross-device interaction. If you are working on a responsive website, be aware of gestures that will not translate smoothly:
Hovering: Some designers replace this with a touch gesture. But watch out for actions that already use touch for something else.
Dragging and dropping: An alternative design is best.
Uploading files: Not a common mobile task; consider removing it for mobile.
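For the hover problem specifically, CSS can at least detect the situation: the hover media feature reports whether the device’s primary input can hover. A sketch, assuming a hypothetical hover-only tooltip:

```css
/* Hide hover-dependent UI on touch-primary devices and
   rely on an alternative design instead. */
@media (hover: none) {
  .hover-tooltip { display: none; }
}
```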
Consider how much time you have and what would be useful to your audience. You can also mix and match styles, like using real text for the home-page banner but filler for other areas. Listed below are various styles, with their key advantages.
Demonstrating how an interaction works will reduce miscommunication with the developer and ensure that it is technically feasible. Typically, the developer will feel more confident about deciding which plugin or implementation to use. But if you feel technically savvy, then it would be really helpful for you, as the architect, to take the first step. Below are some useful tips.
Search for plugins. If you are working online, you can often find the latest and trusted plugins being used by the development community on GitHub. Search for something like “accordion” to browse through accordion plugins. If you are working on other devices, such as Android mobile or Apple Watch, search the web for official design guideline documentation.
Select relevant plugins. Developers consider community approval, constant updates and framework compatibility to be important when selecting a plugin. On GitHub, you can sort for “most stars” and “recently updated.” Also, look for demo links in the documentation or website to see how it will work, and then share it with your developer to see if it is viable.
Build a demonstration. If you are building a custom interaction or flow that does not yet exist, think about using prototyping tools or coding it yourself if you are technically savvy. Some prototyping tools with motion design features include Principle, Flinto and Axure. Some that might require a bit of programming knowledge are Origami and Framer.
If you can actually code, then CodePen is useful for building something quickly and easily.
Once you feel content with the level of polish in your design, finish up with annotations.
Annotating The Wireframes
At this point, most of your design will be complete, and you are ready to annotate your work.
Being strategic in where and how you place your annotations on the page can improve your workflow and overall quality.
Determine a location. There are different advantages to where you place annotations. See which style below would be most useful to you:
Left or right column: Allows the reader to focus on the design or on reading the annotations.
Inline: Promotes a more connected reading experience by linking explanations directly to UI elements.
Marked tooltips: Similar to inline annotations. Prototyping software such as Axure allows you to place markers directly in the design that can be clicked on to reveal the annotation.
Separate page: Gives ample, dedicated space to view the design and annotations.
Load in wireframe images. Designing for a large resolution can leave little space for annotations. Instead, you could wireframe your layouts with the accurate resolutions, and have another document where you drop the same wireframe images into a fixed-size container. This gives you consistent space for annotations, without having to worry about how big the design is.
I’ve created a template (Zip, 1.51 MB) that enables you simply to export wireframe images to the image folder and then drag and drop the same images to an annotated InDesign document. As long as you use the same file names for your export, you can continually overwrite the old wireframe images, and they will automatically update in the InDesign document.
Follow these technical writing rules when revising your annotations:
Avoid being wordy. Be direct and concise. Excise words and sentences that do not add any information or that are redundant. Your team and stakeholders want to quickly and easily get the information they need.
Avoid figures of speech and metaphors if your audience might include non-native English speakers.
Stick to simple vocabulary so that everyone can understand, but use technical terms if the audience would understand them.
Favor the active voice over the passive voice to cut down wordiness and to be clearer in your explanations. Purdue OWL has an article showing examples of the active and passive voice.
Write for developers. Although stakeholders will enjoy story-driven annotations, developers will prefer straightforward, technical explanations. Be extremely detailed about how an interaction works, and include values if necessary. Write your annotations so that anyone could take the design and build it without needing to talk to you.
To compare and contrast, the annotation below is written for a stakeholder who is working on conceptual wireframes (notice the second-person usage and descriptive wording):
As you scroll toward the bottom of the page, you will seamlessly receive more search results so that you can continue reading without needing to perform another action.
Here is the same example written for a developer:
When scroll bar reaches last search result (about 25% height from bottom):
Dynamically load 10 more search results below current list.
If end of search results is reached, remove “Load more…” button.
Show spinner when loading; remove when finished.
Use fade-in and fade-out animation when loading results.
List multiple gestures, states and demos. For each action, indicate the gesture(s), label each annotation by its state, and include demo links if available. If there are multiple states, define their conditions or business logic.
One of the most common challenges for UX designers is convincing other people of your design decisions; people tend to assume those decisions are arbitrary. If you can muster the time, I have found it helpful to connect previous UX work to the finished design. It goes a long way toward reminding your team or stakeholders of the evidence that supports your design.
Site map. In each wireframe, show where this particular screen lives in your site map. Don’t show the entire site map, though, just the immediate parents, siblings and children. You can also refer to page IDs if you are using them in your site map.
Personas and user goals. Include a label of the persona and the major user goals, tasks or behaviors that the screen is targeting. Alternatively, include user story statements and/or features from a feature priority matrix.
User flows. Show the reader which step in the flow they are looking at. Include a portion of the user flow (enough for context), as opposed to the entire flow.
User research findings. Include user research findings (as bullet points) that directly support your design decisions. These could come from user interviews, ethnographic studies, analytics, competitive analysis, surveys, focus groups or usability tests.
Open questions. If you still need more information from stakeholders, developers or subject-matter experts, provide a list of questions under the annotation to give the reader context. Putting questions in the wireframe has several advantages:
The reader will see that you are thinking deeply about the design.
It promotes iteration and reduces anxiety about finalizing the design.
Questions are convenient to bring up in presentations.
Usability testing questions. Your designs are not always going to be perfect from the first iteration. Minimize design “paralysis analysis” by choosing a hypothetical direction and then conducting usability tests to guide your design changes. List your user-testing questions so that people can see your thought process and so that you can keep track of contentious topics for your usability test plan.
Inevitably, you will iterate through many versions of your wireframes. Including wireframe metadata and keeping track of changes will be helpful not only to other people, but also to you in the long run. Forming this habit takes time, but it’s well worth it.
Include a table of contents. A table of contents is especially important in production wireframes because there will be many pages. Also, InDesign can update your table of contents based on any new page changes. Setting up header styles can be finicky, so feel free to use my template (ZIP, 1.51 MB) if you need a quick start.
Add a footer. The footer is the best place to include ancillary information about your wireframes. It also keeps your document together in case you print the wireframes and pages go missing. Consider including the following information:
Document name: Could simply include the project’s name, deliverable type and version number.
Page number
Confidential labels: Some companies require that you label documents “Private” or “Confidential.”
Track wireframe revisions. It can be difficult for readers to know what has changed when your wireframes have over 100 pages. So, ahead of your table of contents, include a table of revisions containing the following details:
date,
name of the person who made the revisions,
brief notes or bullet points about what has changed.
Maintain file versioning. Every time you work on a new batch of wireframes (i.e. sending it out to people), create a copy of the previous file before working on the new one. Then, rename it and include at least these details:
Brand or client’s company name: Optional if it doesn’t make sense or if the file name gets too long.
Project name
Type of deliverable: wireframes, in this case
Version number: lead with two 0’s for proper file sorting, because “1” can get incorrectly sorted with “10” and “100.”
Your name or initials: If you expect to collaborate with another designer
Delimiters: Use underscore or hyphen, instead of a space, which sometimes causes issues with file systems.
It could look like this: Google_Youtube_Wireframes_001.graffle.
Or, if you’re collaborating, it could look like this: Apple-iTunes-Wireframes-EL-001.sketch.
Striving for perfection is a great goal, but be practical with your time, too. Most real-world projects don’t have ample timelines, so figure out your priorities, and use the guidelines that make sense.
My goal is to help fill in the gaps of every UX designer’s wireframing process with all of the knowledge I’ve gained over the years. I hope it helps you to perfect your wireframes.
I love to help people out, so let me know in the comments below or contact me if you have questions. Also, please share and spread the love if you’ve found this helpful!
I’m big on modular design. I’ve long been sold on dividing websites into components, not pages, and amalgamating those components dynamically into interfaces. Flexibility, efficiency and maintainability abound.
But I don’t want my design to look like it’s made out of unrelated things. I’m making an interface, not a surrealist photomontage.
As luck would have it, there is already a technology, called CSS, which is designed specifically to solve this problem. Using CSS, I can propagate styles that cross the borders of my HTML components, ensuring a consistent design with minimal effort. This is largely thanks to two key CSS features:
inheritance,
the cascade (the “C” in CSS).
Despite these features enabling a DRY, efficient way to style web documents and despite them being the very reason CSS exists, they have fallen remarkably out of favor. From CSS methodologies such as BEM and Atomic CSS through to programmatically encapsulated CSS modules, many are doing their best to sidestep or otherwise suppress these features. This gives developers more control over their CSS, but only an autocratic sort of control based on frequent intervention.
I’m going to revisit inheritance, the cascade and scope here with respect to modular interface design. I aim to show you how to leverage these features so that your CSS code becomes more concise and self-regulating, and your interface more easily extensible.
Despite protestations by many, CSS does not only provide a global scope. If it did, everything would look exactly the same. Instead, CSS has a global scope and a local scope. Just as in JavaScript, the local scope has access to the parent and global scope. In CSS, this facilitates inheritance.
For instance, if I apply a font-family declaration to the root (read: global) html element, I can ensure that this rule applies to all descendant elements within the document (with a few exceptions, to be addressed in the next section).
html { font-family: sans-serif; }

/* This rule is not needed ↷
p { font-family: sans-serif; } */
Just like in JavaScript, if I declare something within the local scope, it is not available to the global — or, indeed, any ancestral — scope, but it is available to the child scope (elements within p). In the next example, the line-height of 1.5 is not adopted by the html element. However, the a element inside the p does respect the line-height value.
html { font-family: sans-serif; }

p { line-height: 1.5; }

/* This rule is not needed ↷
p a { line-height: 1.5; } */
The great thing about inheritance is that you can establish the basis for a consistent visual design with very little code. And these styles will even apply to HTML you have yet to write. Talk about future-proof!
There are other ways to apply common styles, of course. For example, I could create a .sans-serif class…
.sans-serif { font-family: sans-serif; }
… and apply it to any element that I feel should have that style:
<p class="sans-serif">Lorem ipsum.</p>
This affords me some control: I can pick and choose exactly which elements take this style and which don’t.
Any opportunity for control is seductive, but there are clear issues. Not only do I have to manually apply the class to any element that should take it (which means knowing what the class is to begin with), but in this case I’ve effectively forgone the possibility of supporting dynamic content: Neither WYSIWYG editors nor Markdown parsers provide sans-serif classes to arbitrary p elements by default.
That class="sans-serif" is not such a distant relative of style="font-family: sans-serif" — except that the former means adding code to both the style sheet and the HTML. Using inheritance, we can do less of one and none of the other. Instead of writing out classes for each font style, we can just apply any we want to the html element in one declaration:
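A sketch of that single declaration block might look like the following (the 125% base font size is the one referred to later in the text; the other values are illustrative):

```css
html {
  font-size: 125%; /* enlarged base size, relative to the user-agent default */
  font-family: sans-serif;
  line-height: 1.5;
  color: #222; /* illustrative text color */
}
```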
Some properties are not inherited by default, and some elements do not inherit certain properties. But you can use [property name]: inherit to force inheritance in some cases.
For example, the input element doesn’t inherit any of the font properties in the previous example. Nor does textarea. In order to make sure all elements inherit these properties from the global scope, I can use the universal selector and the inherit keyword. This way, I get the most mileage from inheritance.
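A sketch of that universal rule, assuming the global font styles were set on the html element as described above:

```css
/* Force form controls and other non-inheriting elements
   to pick up the global font styles.
   font-size is deliberately omitted. */
* {
  font-family: inherit;
  line-height: inherit;
  color: inherit;
}
```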
Note that I’ve omitted font-size. I don’t want font-size to be inherited directly because it would override user-agent styles for heading elements, the small element and others. This way, I save a line of code and can defer to user-agent styles if I should want.
Another property I would not want to inherit is font-style: I don’t want to unset the italicization of ems just to code it back in again. That would be wasted work and result in more code than I need.
Now, everything either inherits or is forced to inherit the font styles I want them to. We’ve gone a long way to propagating a consistent brand, project-wide, with just two declaration blocks. From this point onwards, no developer has to even think about font-family, line-height or color while constructing components, unless they are making exceptions. This is where the cascade comes in.
I’ll probably want my main heading to adopt the same font-family, color and possibly line-height. That’s taken care of using inheritance. But I’ll want its font-size to differ. Because the user agent already provides an enlarged font-size for h1 elements (and it will be relative to the 125% base font size I’ve set), it’s possible I don’t need to do anything here.
However, should I want to tweak the font size of any element, I can. I take advantage of the global scope and only tweak what I need to in the local scope.
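For example, a local tweak might look like this (the value is illustrative):

```css
/* font-family, color and line-height are still inherited
   from html; only the size is overridden locally. */
h1 {
  font-size: 2.5rem;
}
```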
If the styles of CSS elements were encapsulated by default, this would not be possible: I’d have to add all of the font styles to h1 explicitly. Alternatively, I could divide my styles up into separate classes and apply each to the h1 as a space-separated value:
<h1 class="sans-serif line-height-normal color-body">Hello World</h1>
Either way, it’s more work and a styled h1 would be the only outcome. Using the cascade, I’ve styled most elements the way I want them, with h1 as a special case in just one regard. The cascade works as a filter, meaning styles are only ever stated where they add something new.
We’ve made a good start, but to really leverage the cascade, we should be styling as many common elements as possible. Why? Because our compound components will be made of individual HTML elements, and a screen-reader-accessible interface makes the most of semantic markup.
To put it another way, the style of “atoms” that make up your interface “molecules” (to use atomic design terminology) should be largely addressable using element selectors. Element selectors are low in specificity, so they won’t override any class-based styles you might incorporate later.
The first thing you should do is style all of the elements that you know you’re going to need:
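A hypothetical starting point, using only low-specificity element selectors (the selection of elements and all values are illustrative, not prescriptive):

```css
/* Base styles for common semantic elements, set once for the whole interface. */
a { color: #0066cc; }
a:hover, a:focus { text-decoration: none; }
ul, ol { padding-left: 1.5em; }
table { border-collapse: collapse; width: 100%; }
th, td { padding: 0.5em; border-bottom: 1px solid #ccc; text-align: left; }
blockquote { margin-left: 0; padding-left: 1em; border-left: 0.25em solid #ccc; }
```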
The next part is crucial if you want a consistent interface without redundancy: Each time you come to creating a new component, if it introduces new elements, style those new elements with element selectors. Now is not the time to introduce restrictive, high-specificity selectors. Nor is there any need to compose a class. Semantic elements are what they are.
For example, if I’ve yet to style button elements (as in the previous example) and my new component incorporates a button element, this is my opportunity to style button elements for the entire interface.
Now, when you come to write a new component that also happens to incorporate buttons, that’s one less thing to worry about. You’re not rewriting the same CSS under a different namespace, and there’s no class name to remember or write either. CSS should always aim to be this effortless and efficient — it’s designed for it.
Using element selectors has three main advantages:
The resulting HTML is less verbose (no redundant classes).
The resulting style sheet is less verbose (styles are shared between components, not rewritten per component).
The resulting styled interface is based on semantic HTML.
The use of classes to exclusively provide styles is often defended as a “separation of concerns.” This is to misunderstand the W3C’s separation of concerns5 principle. The objective is to describe structure with HTML and style with CSS. Because classes are designated exclusively for styling purposes and they appear within the markup, you are technically breaking with separation wherever they’re used. You have to change the nature of the structure to elicit the style.
Wherever you don’t rely on presentational markup (classes, inline styles), your CSS is compatible with generic structural and semantic conventions. This makes it trivial to extend content and functionality without it also becoming a styling task. It also makes your CSS more reusable across different projects where conventional semantic structures are employed (but where CSS ‘methodologies’ may differ).
Before anyone accuses me of being simplistic, I’m aware that not all buttons in your interface are going to do the same thing. I’m also aware that buttons that do different things should probably look different in some way.
But that’s not to say we need to defer to classes, inheritance or the cascade. To make buttons found in one interface look fundamentally dissimilar is to confound your users. For the sake of accessibility and consistency, most buttons only need to differ in appearance by label.
Remember that style is not the only visual differentiator. Content also differentiates visually — and in a way that is much less ambiguous. You’re literally spelling out what different things are for.
There are fewer instances than you might imagine where using style alone to differentiate content is necessary or appropriate. Usually, style differences should be supplemental, such as a red background or a pictographic icon accompanying a textual label. The presence of textual labels is of particular utility to those using voice-activation software: Saying “red button” or “button with cross icon” is not likely to elicit recognition by the software.
I’ll cover the topic of adding nuances to otherwise similar looking elements in the “Utility Classes” section to follow.
Semantic HTML isn’t just about elements. Attributes define types, properties and states. These too are important for accessibility, so they need to be in the HTML where applicable. And because they’re in the HTML, they provide additional opportunities for styling hooks.
For example, the input element takes a type attribute, should you want to take advantage of it, and also attributes such as aria-invalid6 to describe state.
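A sketch of how these hooks might be used (the values are illustrative; only the pattern matters):

```css
input,
textarea {
  /* border-color is omitted, so it tracks the inherited text color */
  border: 2px solid;
}

/* Unqualified attribute selector: applies to any element carrying the state */
[aria-invalid] {
  border-color: #c00;
}
```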
I don’t need to set color, font-family or line-height here because these are inherited from html, thanks to my use of the inherit keyword. If I want to change the main font-family used application-wide, I only need to edit the one declaration in the html block.
The border color is linked to color, so it too inherits the global color. All I need to declare is the border’s width and style.
The [aria-invalid] attribute selector is unqualified. This means it has better reach (it can be used with both my input and textarea selectors) and it has minimal specificity. Simple attribute selectors have the same specificity as classes. Using them unqualified means that any classes written further down the cascade will override them as intended.
The BEM methodology would solve this by applying a modifier class, such as input--invalid. But considering that the invalid state should only apply where it is communicated accessibly, input--invalid is necessarily redundant. In other words, the aria-invalid attribute has to be there, so what’s the point of the class?
My absolute favorite thing about making the most of element and attribute selectors high up in the cascade is this: The composition of new components becomes less a matter of knowing the company or organization’s naming conventions and more a matter of knowing HTML. Any developer versed in writing decent HTML who is assigned to the project will benefit from inheriting styling that’s already been put in place. This dramatically reduces the need to refer to documentation or write new CSS. For the most part, they can just write the (meta) language that they should know by rote. Tim Baxter also makes a case for this in Meaningful CSS: Style It Like You Mean It7.
So far, we’ve not written any component-specific CSS, but that’s not to say we haven’t styled anything. All components are compositions of HTML elements. It’s largely in the order and arrangement of these elements that more complex components form their identity.
Which brings us to layout.
Principally, we need to deal with flow layout — the spacing of successive block elements. You may have noticed that I haven’t set any margins on any of my elements so far. That’s because margin should not be considered a property of elements but a property of the context of elements. That is, they should only come into play where elements meet.
Fortunately, the adjacent sibling combinator8 can describe exactly this relationship. Harnessing the cascade, we can instate a uniform default across all block-level elements that appear in succession, with just a few exceptions.
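That combinator gives us the so-called lobotomized owl, `* + *`. A sketch of the rule, with a plausible set of exceptions (the exact exception list is my assumption, not the author’s):

```css
/* Space any element that directly follows another element by one line */
* + * {
  margin-top: 1.5rem;
}

/* Exceptions: items whose spacing is handled by their context */
li + li,
dt + dd {
  margin-top: 0;
}
```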
The use of the extremely low-specificity lobotomized owl selector9 ensures that any elements (except the common exceptions) are spaced by one line. This means that there is default white space in all cases, and developers writing component flow content will have a reasonable starting point.
In most cases, margins now take care of themselves. But because of the low specificity, it’s easy to override this basic one-line spacing where needed. For example, I might want to close the gap between labels and their respective fields, to show they are paired. In the following example, any element that follows a label (input, textarea, select, etc.) closes the gap.
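A sketch of that exception (the value is illustrative):

```css
/* Whatever follows a label sits close to it, showing the pairing */
label + * {
  margin-top: 0.5rem;
}
```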
Once again, using the cascade means only having to write specific styles where necessary. Everything else conforms to a sensible baseline.
Note that, because margins only appear between elements, they don’t double up with any padding that may have been included for the container. That’s one more thing not to have to worry about or code defensively against.
Also, note that you get the same spacing whether or not you decide to include wrapper elements. That is, you can do the following and achieve the same layout — it’s just that the margins emerge between the divs rather than between labels following inputs.
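For illustration (this markup is invented, not the author’s), both of the following produce the same spacing; in the second, the margins emerge between the divs instead:

```html
<!-- without wrappers -->
<label for="name">Name</label>
<input id="name" type="text">
<label for="email">Email</label>
<input id="email" type="email">

<!-- with wrappers -->
<div>
  <label for="name">Name</label>
  <input id="name" type="text">
</div>
<div>
  <label for="email">Email</label>
  <input id="email" type="email">
</div>
```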
Achieving the same result with a methodology such as atomic CSS10 would mean composing specific margin-related classes and applying them manually in each case, including for first-child exceptions handled implicitly by * + *:
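For comparison, an atomic-style version might look like this (the margin utility classes here are hypothetical, invented for illustration):

```html
<!-- every margin stated by hand, including the first-child exception -->
<label class="mt-0" for="name">Name</label>
<input class="mt-0-5" id="name" type="text">
<label class="mt-1" for="email">Email</label>
<input class="mt-0-5" id="email" type="email">
```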
Bear in mind that this would only cover top margins if one is adhering to atomic CSS. You’d have to prescribe individual classes for color, background-color and a host of other properties, because atomic CSS does not leverage inheritance or element selectors.
Atomic CSS gives developers direct control over style without deferring completely to inline styles, which are not reusable like classes. By providing classes for individual properties, it reduces the duplication of declarations in the stylesheet.
However, it necessitates direct intervention in the markup to achieve these ends. This requires learning and committing to its verbose API, as well as writing a lot of additional HTML.
Instead, by styling arbitrary HTML elements and their spatial relationships, CSS ‘methodology’ becomes largely obsolete. You have the advantage of working with a unified design system, rather than an HTML system with a superimposed styling system to consider and maintain separately.
Anyway, here’s how the structure of our CSS should look with our flow content solution in place:
global (html) styles and enforced inheritance,
flow algorithm and exceptions (using the lobotomized owl selector),
element and attribute styles.
We’ve yet to write a specific component or conceive a CSS class, but a large proportion of our styling is done — that is, if we write our classes in a sensible, reusable fashion.
The thing about classes is that they have a global scope: Anywhere they are applied in the HTML, they are affected by the associated CSS. For many, this is seen as a drawback, because two developers working independently could write a class with the same name and negatively affect each other’s work.
CSS modules11 were recently conceived to remedy this scenario by programmatically generating unique class names tied to their local or component scope.
<!-- my module's button -->
<button class="button_dx7gz4">Press me</button>

<!-- their module's button -->
<button class="btn_f5dh1a">Hit me</button>
Ignoring the superficial ugliness of the generated code, you should be able to see where disparity between independently authored components can easily creep in: Unique identifiers are used to style similar things. The resulting interface will either be inconsistent or be consistent with much greater effort and redundancy.
There’s no reason to treat common elements as unique. You should be styling the type of element, not the instance of the element. Always remember that the term “class” means “type of thing, of which there may be many.” In other words, all classes should be utility classes: reusable globally.
Of course, in this example, a .button class is redundant anyway: we have the button element selector to use instead. But what if it was a special type of button? For instance, we might write a .danger class to indicate that buttons do destructive actions, like deleting data:
.danger { background: #c00; color: #fff; }
Because class selectors are higher in specificity than element selectors and of the same specificity as attribute selectors, any rules applied in this way will override the element and attribute rules further up in the style sheet. So, my danger button will appear red with white text, but its other properties — like padding, the focus outline, and the margin applied via the flow algorithm — will remain intact.
<button class="danger">delete</button>
Name clashes may happen, occasionally, if several people are working on the same code base for a long time. But there are ways of avoiding this, like, oh, I don’t know, first doing a text search to check for the existence of the name you are about to take. You never know, someone may have solved the problem you’re addressing already.
My favorite thing to do with utility classes is to set them on containers, then use this hook to affect the layout of child elements within. For example, I can quickly code up an evenly spaced, responsive, center-aligned layout for any elements:
.centered {
  text-align: center;
  margin-bottom: -1rem; /* adjusts for leftover bottom margin of children */
}

.centered > * {
  display: inline-block;
  margin: 0 0.5rem 1rem;
}
With this, I can center groups of list items, buttons, a combination of buttons and links, whatever. That’s thanks to the use of the > * part, which means that any immediate children of .centered will adopt these styles, in this scope, but inherit global and element styles, too.
And I’ve adjusted the margins so that the elements can wrap freely without breaking the vertical rhythm set using the * + * selector above it. It’s a small amount of code that provides a generic, responsive layout solution by setting a local scope for arbitrary elements.
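A hypothetical usage example (this markup is invented for illustration):

```html
<!-- any set of elements can be dropped into the utility's local scope -->
<nav class="centered">
  <a href="/products">Products</a>
  <a href="/about">About</a>
  <button type="button">Log in</button>
</nav>
```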
My tiny (93B minified) flexbox-based grid system12 is essentially just a utility class like this one. It’s highly reusable, and because it employs flex-basis, no breakpoint intervention is needed. I just defer to flexbox’s wrapping algorithm.
.fukol-grid {
  display: flex;
  flex-wrap: wrap;
  margin: -0.5em; /* adjusting for gutters */
}

.fukol-grid > * {
  flex: 1 0 5em; /* The 5em part is the basis (ideal width) */
  margin: 0.5em; /* Half the gutter value */
}
Using BEM, you’d be encouraged to place an explicit “element” class on each grid item:
<div> <!-- the outer container, needed for vertical rhythm -->
  <ul class="fukol-grid">
    <li class="fukol-grid__item"></li>
    <li class="fukol-grid__item"></li>
    <li class="fukol-grid__item"></li>
    <li class="fukol-grid__item"></li>
  </ul>
</div>
But there’s no need. Only one identifier is required to instantiate the local scope. The items here are no more protected from outside influence than the ones in my version, targeted with > * — nor should they be. The only difference is the inflated markup.
So, now we’ve started incorporating classes, but only generically, as they were intended. We’re still not styling complex components independently. Instead, we’re solving system-wide problems in a reusable fashion. Naturally, you will need to document how these classes are used in your comments.
Utility classes like these take advantage of CSS’ global scope, the local scope, inheritance and the cascade simultaneously. The classes can be applied universally; they instantiate the local scope to affect just their child elements; they inherit styles not set here from the parent or global scope; and we’ve not overqualified using element or class selectors.
Here’s how our cascade looks now:
global (html) styles and enforced inheritance,
flow algorithm and exceptions (using the lobotomized owl selector),
element and attribute styles,
generic utility classes.
Of course, there may never be the need to write either of these example utilities. The point is that, if the need does emerge while working on one component, the solution should be made available to all components. Always be thinking in terms of the system.
We’ve been styling components, and ways to combine components, from the beginning, so it’s tempting to leave this section blank. But it’s worth stating that any components not created from other components (right down to individual HTML elements) are necessarily over-prescribed. They are to components what IDs are to selectors and risk becoming anachronistic to the system.
In fact, a good exercise is to identify complex components (“molecules,” “organisms”) by ID only and try not to use those IDs in your CSS. For example, you could place #login on your log-in form component. You shouldn’t have to use #login in your CSS with the element, attribute and flow algorithm styles in place, although you might find yourself making one or two generic utility classes that can be used in other form components.
If you do use #login, it can only affect that component. It’s a reminder that you’ve moved away from developing a design system and towards the interminable occupation of merely pushing pixels.
When I tell folks that I don’t use methodologies such as BEM or tools such as CSS modules, many assume I’m writing CSS like this:
header nav ul li {
  display: inline-block;
}

header nav ul li a {
  background: #008;
}
I don’t. A clear over-specification is present here, and one we should all be careful to avoid. It’s just that BEM (plus OOCSS, SMACSS, atomic CSS, etc.) are not the only ways to avoid convoluted, unmanageable CSS.
In an effort to defeat specificity woes, many methodologies defer almost exclusively to the class selector. The trouble is that this leads to a proliferation of classes: cryptic ciphers that bloat the markup and that — without careful attention to documentation — can confound developers new to the in-house naming system they constitute.
By using classes prolifically, you also maintain a styling system that is largely separate from your HTML system. This misappropriation of ‘separate concerns’ can lead to redundancy or, worse, can encourage inaccessibility: it’s possible to affect a visual style without affecting the accessible state along with it:
<input class="input--invalid" aria-invalid="false" />
In place of the extensive writing and prescription of classes, I looked at some other methods:
leveraging inheritance to set a precedent for consistency;
making the most of element and attribute selectors to support transparent, standards-based composition;
applying a code- and labor-saving flow layout system;
incorporating a modest set of highly generic utility classes to solve common layout problems affecting multiple elements.
All of these were put in service of creating a design system that should make writing new interface components easier and less reliant on adding new CSS code as a project matures. And this is possible not thanks to strict naming and encapsulation, but thanks to a distinct lack of it.
Even if you’re not comfortable using the specific techniques I’ve recommended here, I hope this article has at least gotten you to rethink what components are. They’re not things you create in isolation. Sometimes, in the case of standard HTML elements, they’re not things you create at all. The more you compose components from components, the more accessible and visually consistent your interface will be, and with less CSS to achieve that end.
There’s not much wrong with CSS. In fact, it’s remarkably good at letting you do a lot with a little. We’re just not taking advantage of that.
Some like it loud, others need some steady beats to stay focused, others calm tunes. A while ago we asked on Twitter1 and Facebook2 what music the web community is listening to when coding and designing.
The answers were as diverse as the community itself and certainly too good to live an existence only in a Twitter discussion. That’s why we’ve compiled those hand-crafted playlists, favorite artists, and loved soundtracks in this article to see which tunes fuel the web, and, well, first and foremost, to provide you with some new ear candy to get you through lengthy coding and design sessions, of course. Get your headphones ready!
Positive psychology describes flow as the mental state when you get fully immersed in what you’re doing, feeling energized, focused, and involved. These playlists will tickle your brain to help you reach that much sought-after state.
As developers, are we paid to write code? This challenging question raises concerns about product quality, code quality, and our purpose as developers in a world of coded applications. You’ll find an interesting post that dives deeper into the matter in the “Work & Life” section of our reading list this week.
But we have other amazing resources to look at this week, too: new tools, new tutorials, and we’ll also take some time to reconsider CSS print styles. Let’s get started!
Firefox 50 was released this week1. The new version comes with support for the once option for Event Listeners, the referrerpolicy attribute and a fix for dashed and dotted borders. On the other hand, box-sizing: padding-box was removed. The upcoming version, Firefox 512, which is currently in beta, will introduce a couple of changes, too: <img> with empty src will now fire an error event and JavaScript will be blocked if it’s served with a wrong MIME type. Furthermore, the non-standard Web Payments API will be removed, Accept header for XHR will be simplified, and SHA-1 certificates issued by public CA will no longer be accepted.
Splittable5 is a next-generation module bundler for JavaScript that aims at combining efficiency with ease of use. It supports code splitting and tree shaking and uses Babel and Browserify to resolve modules and their dependencies and Google’s Closure Compiler for efficient compilation of code. Definitely one of the most advanced module bundlers available today. Unfortunately, it still needs the Java version of Closure Compiler to work, since the JavaScript variant doesn’t support the relevant feature yet.
blake2x6 is a new hashing function that is even better than blake2. It not only allows hashes of arbitrary size but also provides a key derivation function and a deterministic random bit generator.
Do you have a plan for your hiring interviews? The people at GitLab certainly have, and they share it with the public: Read their Hiring Guide18 to get some useful advice on writing job ads, handling rejections, and conducting interviews.
Garann Means quit the web industry about two years ago. Now she shares what that really meant to her19, why she did it, and why it’s important that we think very carefully about it before we take this step for real. It’s easy to joke about leaving the industry, but the consequences are real and might differ a lot from what we expect.
Theo Nicolaou wrote about web development and pressure20. Even if we don’t read articles every day, work on side-projects all the time, or contribute to open-source projects regularly, the web will still be here tomorrow, and we can still help to move it forward and make an impact. We need to remind ourselves that sometimes it’s okay to just do something different, to relax or go out with friends.
“You Are Not Paid to Write Code21.” Tyler Treat wrote about our job as developers and why we introduce the possibility of failure into a system every time we write code or introduce third-party services. Our job is to find solutions that (if possible) don’t require a new system and to keep out everything else from a codebase unless it’s really necessary.
With Thanksgiving coming up next week, have you already thought about how to spend the days before the holiday? Well, you could send simple “Thank You” emails to your past clients, perhaps design something free for somebody, or take some time to improve your website. To those of you who celebrate Thanksgiving, we’ve got a nice icon set for you today — all available in PNG, PSD, AI and SVG formats.
This set of 15 free icons was created by the design team at ucraft1. Please note that this icon set is licensed under a Creative Commons Attribution 3.0 Unported license2. You may modify the size, color or shape of the icons. No attribution is required; however, reselling bundles or individual pictograms is not cool. Please provide credits to the creators and link to the article in which this freebie was released if you would like to spread the word in blog posts or anywhere else.
“Autumn is here and Thanksgiving is just around the corner and we can’t imagine a better time to share this set of icons with you. In the spirit of giving back we decided to create 15 icons to help you celebrate this holiday. Spruce up your commercial project and give it that homely and warm tone. You can also share them with your family and friends (that’s entirely up to you); just be sure that you are kind and grateful to people around you.”
A big thank you to the folks behind ucraft — we sincerely appreciate your time and efforts. Keep up the brilliant work!
Imagine a cloudy, rainy November evening. After a long day, you walk home along the streets, following the dimmed street lamps. Everybody seems to be busy, rushing somewhere, crossing paths with strangers and lonely stores. It’s dark and cold outside, and it’s difficult to see things through, so you decide to take a shortcut.
Suddenly you see a bright light and music streaming from one of the remote corners of the street. Out of curiosity, you slowly walk towards the light, and hold your breath for a second. You discover a cozy little place with a fireplace, packed with people, jazzy tunes, and the smell of pizza, pasta and red wine. You see people smiling. Talking. Laughing. Sharing. Inviting you to join them.
You probably have a whole bunch of reasons to keep on walking down that street, but what if you walked inside instead? Well, that’s what joining the Smashing Conference experience feels like: It’s an intimate, personal, friendly experience for web designers and developers — in a cozy venue with charm and personality, with people who deeply care about the quality of their work. Don’t take our word for it — see for yourself2.
Guess what: we have SmashingConf San Francisco 20173 coming up in April next year, featuring tasty front-end ingredients, UX recipes and neat design beats from the hidden, remote corners of the web. 1 track, 2 conference days, 8 workshops, 16 excellent speakers and just 500 available tickets, taking place on April 4–5, 2017. And it’s going to be… smashing!
We’ve put aside 50 early-bird tickets, and if you book a workshop6, too, you’ll save $100 off the conference and workshop price. That’s pretty smashing, isn’t it?
Also, in case you’re having a hard time getting a few days off, we’ve got your back: You can always use the Convince Your Boss PDF7 (0.1 Mb) to convince… well, whoever you have to convince! To the tickets.8
About The Conference
So you know what’s going on in front-end. You’ve been working with pattern libraries and atomic design and Gulp and SMACSS and BEM and HTTP/2 and Flexbox and SVG. What you might not know though is what pitfalls and traps other web designers have encountered in practice — to prevent issues creeping up on you later on.
With our conference in San Francisco9, we bring together experienced speakers to share practical front-end and UX techniques and pitfalls they ran into, lessons they’ve learned and the workflows they’ve chosen to stay efficient. Expect an intimate, hands-on conference experience, with lots of learning, sharing and networking along the way. Admittedly, we can’t solve every problem faced by the web community, but for front-end and UX, these conferences will push your boundaries significantly and give you answers you can take to the bank.
The conference will cover CSS/JavaScript techniques, SVG and Flexbox gotchas, architecting and “selling” design systems libraries, performance optimization and psychology insights, UX strategies and design workflows, tips on establishing and maintaining pattern libraries, and steps to produce resilient, fast responsive websites. All learned from actual, real-life projects.
We don’t care about trends, but we do care about smart solutions. We love exploring how designers and developers work and how actual problems are solved—ideas and techniques that actually worked, or failed in real-life projects, and why exactly they failed and what decisions were made instead. All those things that help us make smarter design decisions and build better products, faster.
That’s exactly what the conference will be about. No theory, no fluff, just curated quality content. 2 conference days, 1 track, hands-on workshops13 and 16 speakers14, taking place on April 4–5, 2017 in the beautiful and iconic Palace of Fine Arts. Of course, we embrace respect and tolerance with our code of conduct15.
For this year’s line-up, we invited experts who have worked in small as well as large companies. People who spend day and night working and playing with web technologies. It doesn’t get more practical than that, does it? Get ready, set, go:
Our workshops offer the opportunity to get to grips with new ideas and techniques in real depth, with a full day spent on the topic of your choice. It’s a great way to round off your conference experience, and we provide lunch, too. By registering for a workshop while buying your conference ticket, you’ll save $100 on the regular workshop ticket price.
In this full-day checkout optimization workshop, Christian Holst, research director at the Baymard Institute, will share their newest checkout usability test findings, uncovered during the past 7 years of large-scale testing of e-commerce sites and from working with clients like Etsy, Office Depot, Sears, Kohl’s, Nike, John Lewis, Carnival, T-mobile, Overstock.com, The North Face, etc. Read more…34
The role of design within large organizations is expanding, spreading across product teams and influencing decision-making at higher and higher levels. This scale makes it increasingly challenging to align designers and product teams to deliver cohesive, consistent experiences across a customer journey. Read more…37
In this hands-on workshop, you’ll learn front-end techniques and strategies you need to know today to make proper use of HTTP/2 tomorrow. The workshop will also look into dealing with legacy browsers that don’t support HTTP/2 as well as the security aspects of how you can keep your website or application safe, future-proof and blazingly fast. Read more…40
In this workshop, Vitaly Friedman (editor-in-chief of Smashing Magazine) will cover practical techniques, clever tricks and useful strategies you need to be aware of when working on responsive websites. From responsive modules to clever navigation patterns and web form design techniques, the workshop will provide you with everything you need to know today to start designing better responsive experiences tomorrow. Read more…43
Smashing Workshops on Thursday, April 6th, 2017Link
In this full-day workshop, Sarah will teach you the basics of SVG animation development and the essentials needed to start using these techniques in production environments for animations both large and small. Read more…46
This workshop is designed for designers and developers who already have a good working knowledge of HTML and CSS. We will cover a range of CSS methods for achieving layout, from those you are safe to use right now, even if you need to support older versions of Internet Explorer, through to things that, while still classed as experimental, are likely to ship in browsers in the coming months. Read more…49
With so many tools available to visualize your data, it’s easy to get stuck thinking about chart types, always just going for that bar or line chart, without truly thinking about effectiveness. In this workshop, Nadieh will teach you how to take a more creative and practical approach to the design of data visualization. Read more…52
In this full-day workshop, Vitaly Friedman, editor-in-chief of Smashing Magazine, will present practical techniques, clever tricks and useful strategies you need to be aware of when working on any responsive design project. Most techniques are borrowed from mid-size and large-scale real-life projects, such as large e-commerce projects, online magazines and web applications. Read more…55
Pretty much because of the value it will provide. We’ll explore how designers and developers work, design and build and how they approach problems strategically. Think of it as a playbook with handy rules of thumb for your next projects: it can’t get more practical than this. Learn:
Strategies for building fast responsive websites,
Clever techniques for better front end,
Rules of thumb for better transitions and animations,
Strategy to break out of the generic layouts,
Approaches for better visual/brand identity design,
Guidelines for building pattern libraries,
How to build accessible and future-proof UIs,
Mistakes and lessons learned from large projects,
How to apply psychology in design decisions,
How to improve conversion rates in eCommerce projects,
How to craft responsive HTML email for Gmail, Yahoo and common mail clients,
How to tackle complexity when building a delightful, responsive user experience,
…more practical takeaways from real-life projects.
So you need to convince your manager to send you to the SmashingConf? No worries, we’ve got your back! We prepared a neat Convince Your Boss (PDF)56 (0.15 Mb) that you can use to convince your colleagues, friends, neighbors and total strangers to join you or send you to the event. We know that you will not be disappointed. Still not good enough? Well, tweet us @smashingconf and we’ll help you out — we can be quite convincing, too, you know!
We do everything possible to keep ticket prices affordable for everyone, and we welcome sponsors to help us create a truly unique, unforgettable conference experience. And you can be a major part of it. We have some attractive and creative sponsorship packages for you, and we are also flexible and would love to adjust them to your needs. So if you’re interested, please email Mariona at mariona@smashingconf.com59 — we’d love for you to be involved!
We’d love you to come and join us. Walk in. Take a seat. We are looking forward to seeing you in San Francisco, and who knows, perhaps months later after the conference is over, you’ll look back at your workflow, at your projects and at this very post realizing that it wasn’t that off after all. Grab your ticket and see you there! 😉
I recently spoke with a back-end developer friend about how many hours I spend coding or learning about code outside of work. He showed me a passage from an Uncle Bob book, “Clean Code”, which compares the hours musicians spend with their instruments in preparation for a concert to developers rehearsing code to perform at work.
I like the analogy but I’m not sure I fully subscribe to it; it’s that type of thinking that can cause burnout in the first place. I think it’s great if you want to further your craft and broaden your skill set, but to be doing it every hour of the day isn’t sustainable.
Front-end fatigue is very real. I’ve seen a number of posts on JavaScript fatigue but I think the problem extends further than that specific language.
To be clear, this isn’t another rant about how it’s all bad and everything is moving too fast — I love that technology is evolving so rapidly. Equally, I can appreciate how it can be overwhelming, and I have certainly felt worn out myself at times.
As far as I can tell, this is a two-pronged problem.
The first is that as a front-end developer you think you’re expected to have all of the following in your arsenal:
HTML (writing clean, semantic markup)
CSS (Modular, scalable)
CSS methodologies (BEM, SMACSS, OOCSS)
CSS preprocessors (something like LESS, SCSS, PostCSS)
A basic understanding of whatever back-end language is being used
And on top of that you’re either dabbling with or looking towards things like:
Service workers
Progressive Web Apps (PWA)
Web Components
The second is that your day-to-day work probably doesn’t cover it all or give you time to learn it all, so how are you going to make sure you have all the tools at your disposal?
Now, as a consumer you might:
Subscribe to a bunch of different weekly development newsletters
Trawl your Twitter feed
Attend a weekly catch-up with your front-end team at work
Have a Slack channel outside of work with a handful of devs that you also talk shop with
Follow online tutorials (that hopefully aren’t out of date)
Buy web development books (that hopefully aren’t out of date)
Attend meetups
Attend conferences
Attend training courses
As a contributor you might:
Write blogs/magazine articles
Dabble in speaking
Run a podcast
Contribute to open-source projects
Have your own side projects
Recently I found my attention split three ways: I was spending a third of it writing code, with headphones on, half-listening to discussions about code, whilst chatting on Slack about code. I decided enough was enough: every orifice was clogged with code and I was mentally drained.
Whilst that is certainly at the extreme end, I’m sure some of you have experienced something similar. On top of all this you probably have a full-time job, family, friends and hobbies. It’s no wonder that so many of us are feeling burnt out and wondering if we made the right career choice.
Some of my fellow front-enders have expressed interest in packing it all in and switching jobs to one where they can switch off at five o’clock. But part of me thinks this job attracts a certain type of person: if we threw it all away and became estate agents instead, we’d still want to be the best estate agents we could be, attending estate agency meetups and tracking house price trends in our free time. Many moons ago I worked in finance, and I was still studying in my evenings and reading around the subject to become as skilled as I could in my chosen field.
We’re not alone in this discipline; a lot of professions require a solid amount of dedication and learning outside of work. Maybe the thing with front-end development is that the technology evolves so fast that it feels like someone keeps moving the goal posts. It seems like every other day I receive an email saying “XYZ” technology is dead. Which I’m sure can’t be true, because otherwise we’d have no tech left.
The ecosystem is in a state of constant change, and I think that can be a good thing. Personally I love being in a role where I can constantly learn, develop and push myself, but that’s not to say I don’t get overwhelmed at times.
With that in mind, here are some things I try to remember in order to stop my head exploding as well as some general advice on how to avoid the fatigue.
The developers I know, both at work and outside of it, are amongst the smartest people I know. But they are all feeling overwhelmed. Most have some sort of wish list of technologies that they are trying to learn. There might be a handful of people who know it all and are on top of everything, but the majority of us are in exactly the same position.
We’re all still reliant on Google and Stack Overflow to get us through the day and have far too many tabs open filled with answers to web related questions. You’re not alone!
Be happy in the knowledge that you’re not a bad developer just because you haven’t tried whatever the cool kids are using yet.
Yes, even the “web celebs” are in the same spot…
There’s no way you can know everything, and the rock-star developers you follow on Twitter tend to be really, really good in a few areas each. You’ll notice that they’re the same areas they’re famous for being knowledgeable about. Again, there will be exceptions, but they’re just humans like us. 🙂
I know several great front-end developers that won’t apply for roles because they’d feel like a fraud going for them without knowing all the things on the job description requirements. To quote one of them:
“90% of the JDs I see make me think “Argh, I’m so behind!” In fact, it bothers me so much, that I’m thinking about staying in my current role, and just trying to push for more money simply because I feel like I’ve “gotten away with it” here.”
The fact is, most of those job specs are a farce. My friend Bård4 put together this great image that shows the difference between what front-end job specs say and what they mean.
Just remember, it will be ok. Every job I’ve had I’ve felt out of my depth to start with, but eventually you get used to their tools and workflow, you learn and become a better developer for it.
Don’t be afraid to learn on the job, the best way to pick up new skills is to be using them every day.
If you’ve got imposter syndrome, odds are you’re actually a decent developer, because otherwise you wouldn’t be self-aware enough to realise it.
It’s easy to get distracted by the shiny and new but if your foundations aren’t solid then odds are what you’re building won’t stand the test of time.
As a good friend of mine said to me once:
“Focus on the fundamentals has always been my mantra. If you can build good sh!t and solve problems then that’s all that matters, how you solve them (the tools) has and will always change.”
For example, when React catapulted to fame it always seemed to be bundled up with ES6, so I put my focus on those changes and additions to the language rather than on the nuances of the framework itself. Once React is dead and gone, the knowledge I’ve picked up from keeping on top of the latest vanilla JavaScript will live on. You can play about with a lot of the features natively in Chrome, so you don’t have to pull in Babel and get bogged down in dependency hell just to experiment.
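As a minimal sketch (the feature picks here are mine, not a list from this article), a lot of ES6 can be tried in a modern Chrome console or in Node with no Babel step at all:

```javascript
// ES6 features runnable natively in modern Chrome/Node, no transpiler needed.
const greet = (name = 'world') => `Hello, ${name}!`; // arrow function, default parameter, template literal

const [first, ...rest] = [1, 2, 3, 4]; // array destructuring with a rest element

class Point { // class syntax
  constructor(x, y) { this.x = x; this.y = y; }
  toString() { return `(${this.x}, ${this.y})`; }
}

console.log(greet());                 // "Hello, world!"
console.log(first, rest);             // 1 [ 2, 3, 4 ]
console.log(String(new Point(1, 2))); // "(1, 2)"
```

Paste it into the console as-is; if a feature runs, it is one less thing you need a build pipeline for while learning.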
This is really key. I don’t think it’s the new frameworks, libraries and modules that are killing us, it’s our own belief that we have to learn them all.
With learning I find the best bet is to keep it focused — at the moment I’m delving into functional JavaScript programming in ES6.
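For illustration only (my own toy example, not the material being studied), functional-style ES6 favours pure functions and array combinators over loops and mutation:

```javascript
// Functional-style ES6: pure functions and array combinators instead of loops.
const prices = [5, 12, 8, 30];

const doubled = prices.map(p => p * 2);           // [10, 24, 16, 60]; `prices` is untouched
const big = doubled.filter(p => p > 15);          // [24, 16, 60]
const total = big.reduce((sum, p) => sum + p, 0); // 100

// Right-to-left function composition, a common functional helper
const compose = (...fns) => x => fns.reduceRight((acc, f) => f(acc), x);
const double = p => p * 2;
const inc = p => p + 1;
const doubleThenInc = compose(inc, double);

console.log(total);            // 100
console.log(doubleThenInc(5)); // 11
```

Nothing here mutates shared state, which is the habit this style of study is really about.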
There are tons of other things on my list that I’d like to learn, but I try not to get distracted. For example, I would love to brush up on my accessibility knowledge, play around with Polymer and dive into some of the latest CSS techniques like Grid but if I start reading about too many different areas at once I won’t retain all the information. These other things aren’t going anywhere, I’ll get to them when I get to them.
Avoid rushing to try and consume everything on a given topic. Take your time and make sure you thoroughly understand it.
If you’re like me, you’ll have an ever-growing list, but don’t be afraid to cull items from it. Not everything is worth investing time in and you should try to recognize what is worth learning and what is likely to be gone in a couple of years. Taking time to learn programming design patterns and architectural techniques is always going to be more beneficial in the long run rather than leaping to the current hotness in framework land. You’ll only end up scrambling to play buzzword bingo again a short while down the track.
Most Companies Aren’t Using Bleeding Edge Tech
There is a lot of new stuff coming out, the web is progressing at a staggering rate but typically it will take a long time before businesses actually start adopting these new technologies. The majority of companies will wait for a technology to mature for a while and see it proven in the field.
Angular8 was created six years ago, and I first started working at a startup that had decided it was the framework for them three years ago. Reactjs9 has been around for just over three years, and my current company started using it just before Christmas. I’m sure a lot of other frameworks have come and gone in that time. If I’d jumped on them all, I’d be going crazy.
In CSS land, Flexbox has been available since 2010 — six years ago! Browser support is still limited. We started using it in production earlier this year, but I don’t see it being used much in the wild elsewhere.
My point being, there is no rush to learn all the things: whilst technology might move quickly, your potential employers are moving at a much slower pace. You don’t have to be ahead of the curve; just make sure you’re keeping an eye on its trajectory.
The More You Learn, The More You Discover You Don’t Know, And That’s Okay
This is totally normal. When you first start out, you don’t know what you don’t know. Then you learn some stuff and decide you’re a genius. Then little by little that fantasy unravels and you start to comprehend actually how much there is out there that you don’t know.
Essentially, the more experience you get, the deeper into the void you go. You need to make peace with this, otherwise it will consume you. If anything, this feeling should give you the confidence that you’re heading in the right direction. Odds are in our chosen profession you’ll never comfortably be able to sit on a throne constructed from all front-end knowledge.
It’s easy to feel that you’re so far behind you need to be coding and learning every minute. This is a one-way ticket to burnout-ville. Set some time aside to develop your skillset, see if you can negotiate some time with your boss so it’s scheduled in and spend the rest of the time doing what you love.
I’ve had some of my coding epiphanies at the gym. Exercising is extremely important for your mind as well as your body. Try and do at least 20–30 minutes a day to keep your mind sharp and help prevent burnout.
Make time for your family and friends — try not to talk shop with them!
Don’t be worried about finding a job right now. At the moment we’re in a very fortunate position where there are more roles than developers to fill them. I don’t know how long this will last, but capitalise on it now!
You can get a job without knowing all the things. I’ve found that in the interviews I’ve carried out, 99% of people are totally blagging it.
Worst case scenario, remember that there’s gold in legacy code. If you’re a developer that loves the old ways there will always be companies stuck on legacy tech that need developers to work on their software.
I hope some of these pointers have helped mitigate some of the frustrations you might be feeling. The worst thing you can do is reach the edge and become fully burnt out because once you are, it’s very hard to regain that passion you had for what you do and why you started doing it in the first place.