SmashingConf 2018: Fetch Those Early-Bird Tickets!

Great conferences are all about learning new skills and making new connections. That’s why we’ve set up a couple of new adventures for SmashingConf 2018 — just practical sessions, new formats, new lightning talks, evening sessions and genuine, interesting conversations — with a dash of friendly networking! Taking place in London, San Francisco and Toronto. Tickets? Glad you asked!

SmashingConf London / #perfmatters / Feb 7–8

Performance matters. Next year, we’re thrilled to venture to London for our brand new conference fully dedicated to everything front-end performance. We’ll be dealing with ads, third-party scripts, A/B testing, HTTP/2, debugging, the JAM stack, PWAs, web font loading, memory/CPU performance and service workers, plus lightning community talks. Schedule and details.

SmashingConf London: everything web performance. Feb 7–8.

Speakers and Topics

Over the two days, we’ll cover pretty much every aspect of front-end performance: from rendering to debugging. Two days, one track, 16 speakers and 7 hands-on workshops. 50 early-bird tickets are now on sale.

  • Conference Ticket: £379 (regular price £459). All taxes included. Only 50 early-bird tickets.
  • Conference & Workshop: save £76! All taxes included. Only 50 early-bird tickets.

SmashingConf San Francisco / #breakout / Apr 17–18

The classic. With our third annual conference in San Francisco, we want to explore ways and strategies for breaking out: leaving behind generic designs and understanding the new techniques and capabilities available today. We care about the solutions we come up with, and the approaches that failed along the way. In San Francisco, we want to find out the why and the how, and what we all — as designers and developers — need to know today to be more productive and make smarter decisions tomorrow. Schedule and details.

A queen cat welcoming you to the Smashing Conference in San Francisco, April 17 to 18, 2018
SmashingConf SF: breaking out of the box. Apr 17–18.

Speakers and Topics

A wide range of everything web-related, covered in 2 days, with a single track, 16 speakers and 8 hands-on workshops. CSS Grid, design systems, new frontiers of CSS and JavaScript, accessibility, performance, lettering, graphic design, UX and multi-cultural design, among other things. 100 super early-bird tickets are now on sale.

  • Conference Ticket: US$499 (regular price US$599). All taxes included. Only 100 early-bird tickets.
  • Conference & Workshop: save US$100! All taxes included. Only 100 early-bird tickets.

SmashingConf Toronto / #noslides / Jun 26–27

What’s the best way to learn? By observing designers and developers working live. For our new conference in Toronto, speakers aren’t allowed to use any slides at all. Welcome SmashingConf #noslides, a brand new conference, full of interactive live sessions, showing how web designers design and how web developers build — including setup, workflow, design thinking, naming conventions and everything in-between. Schedule and details.

A queen cat welcoming you to the Smashing Conference in Toronto, June 26 to 27, 2018
SmashingConf Toronto: no slides, live sessions only. Jun 26–27.

Speakers and Topics

Interactive live sessions on everything from organizing layers in Photoshop to naming conventions in CSS. Live workflow in Sketch and Photoshop, design systems setup, lettering, new frontiers of CSS and JavaScript, CSS Grid Layout, live debugging, performance audits, accessibility audits, sketching, graphic design, data visualization and creative coding. 100 super early-bird tickets are now on sale.

  • Conference Ticket: CAD$640 (regular price CAD$705). All taxes included. Only 100 early-bird tickets.
  • Conference & Workshop: save CAD$128! All taxes included. Only 100 early-bird tickets.

Tickets!

To give everybody a chance to buy a ticket in time, we are releasing all tickets in batches this time. The first batch of super early-birds is available right away: fetch them before they fly out!

Ah, and just in case you’re wondering: we’re planning on running a conference in our hometown Freiburg, Germany on September 10–11, and we will be coming back to New York, USA on October 23–24 — with a slightly different format, too. We can’t wait to see you there! 😉

(vf ms)


A Swift Transition From iOS To macOS Development

Today started just like any other day. You sat down at your desk, took a sip of coffee and opened up Xcode to start a new project. But wait! The similarities stop there. Today, we will try to build for a different platform! Don’t be afraid. I know you are comfortable there on your iOS island, knocking out iOS applications, but today begins a brand new adventure. Today is the day we head on over to macOS development, a dark and scary place that you know nothing about.

The good news is that developing for macOS using Swift has a lot more in common with iOS development than you realize. To prove this, I will walk you through building a simple screen-annotation application. Once we complete it, you will realize how easy it is to build applications for macOS.

The Concept

The idea comes from two unlikely sources. The first source is my boss, Doug Cook. He came over and asked if I knew how to make a circular floating app on macOS for prototyping purposes. Having never really done anything on macOS, I started to do some digging, and before long I found Apple’s little gem of a sample, RoundTransparentWindow. Sure, it was in Objective-C, and it was pre-ARC to boot, but after reading through the code, I saw that figuring out how to do it in Swift wasn’t very difficult. The second source was pure laziness. I recently picked up a side project making tutorial videos on YouTube, and I wanted to be able to illustrate what I was saying by drawing directly on the screen, without any post-production.

I decided to build a macOS app to draw on the computer screen:

Behold the app in all its glory!
Behold the app in all its glory! (View large version)

OK, it doesn’t look like much — and, honestly, it shouldn’t because I haven’t drawn anything. If you look closely at the image above, you will see a little pencil icon in the upper-right bar. This area of macOS contains items called “menu extras.” By clicking the pencil icon, you will enable drawing on screen, and then you can draw something like this below!

Behold the app in all its glory, again!
Behold the app in all its glory, again! (View large version)

I wanted drawing on the screen to be enabled at all times, but not to take over the screen when not in use. I preferred that it not live in the dock, nor change the contents of macOS’ menu bar. I knew that it was possible because I’d seen it in other apps. So, over lunch one day, I decided to take a crack at building this drawing tool, and here we are! This is what we will build in this tutorial.

The Soapbox

At this point, you might be saying to yourself, “Why build this? It’s just a simple drawing app!” But that isn’t the point. If you are anything like me, you are a little intimidated by the thought of making a macOS app. Don’t be. If you program for your iPhone on your MacBook, shouldn’t you also be able to program for your MacBook on your MacBook? What if Apple actually does merge iOS and macOS? Should you be left behind because you were intimidated by macOS development? ChromeOS already supports Android builds. How long before macOS supports iOS builds? There are differences between Cocoa and UIKit, which will become apparent, but this tutorial will get your feet wet and (hopefully) challenge you to build something bigger and better.

Caveats and Requirements

There are some caveats to this project that I want to get out of the way before we start. We will be making an app that draws over the entire screen. This will work on only one screen (for now) and will not work as is over full-screen apps. For our purpose, this is enough, and it leaves open room for plenty of enhancements in future iterations of the project. In fact, if you have any ideas for enhancements of your own, please leave them in the comments at the bottom!

For this tutorial, you’ll need a basic understanding of Swift development and familiarity with storyboards. We will also be doing this in Xcode 9 because that is the latest and greatest version at the time of writing.

1. Begin A macOS Project

Open Xcode and create a new macOS project. At this point, you are probably still in the “iOS” style project. We have to update this so that we are building for the correct system. Hit the “macOS” icon at the top, and then make sure that “Cocoa App” is selected. Hit “Next.”

Create a new macOS project
Create a new macOS project.

Enter the product name as you normally would. We will call this one ScreenAnnotation, then fill in the “Organization Name” and “Team” fields as usual. Make sure to select Swift as the language, and again hit “Next.” After saving it in the directory of your choosing, you will have your very own macOS app. Congratulations!

At first glance, you will see almost everything you’d get in an iOS app. The only differences you might notice right now are the entitlements file, Cocoa in place of UIKit in each of the .swift files, and (naturally) the contents of the storyboard.

2. Clean Up

Our next step is to go into the storyboard and delete anything we don’t need. First, let’s get rid of the menu; because our app will live as a menu extra, instead of as a dock app, it is unnecessary. Beneath where the menu existed is the window. We need to subclass this and use it to determine whether we are drawing within our app or toying with windows beneath it. Looking below the window, we can also see the familiar view controller. We already have a subclass of this, provided by Xcode. It is aptly named ViewController because it is a subclass of NSViewController, but we still need an NSWindow subclass, so that we can decorate it as a clear window. Head over to the left pane, right-click, select “New file,” then select “Swift” file. When prompted, name this ClearWindow. Replace the one line with the following:

import Cocoa

class ClearWindow: NSWindow {

    override init(contentRect: NSRect, styleMask style: NSWindow.StyleMask, backing backingStoreType: NSWindow.BackingStoreType, defer flag: Bool) {
        super.init(contentRect: contentRect, styleMask: StyleMask.borderless, backing: backingStoreType, defer: flag)
        level = NSWindow.Level.statusBar
        backgroundColor = NSColor.blue
    }

    override func mouseDown(with event: NSEvent) {
        print("Mouse down: \(event.locationInWindow)")
    }

    override func mouseDragged(with event: NSEvent) {
        print("Mouse dragged: \(event.locationInWindow)")
    }

    override func mouseUp(with event: NSEvent) {
        print("Mouse up: \(event.locationInWindow)")
    }
}

In this code snippet, we are importing Cocoa, which is to macOS as UIKit is to iOS development. This is the main API we will use to control our app. After importing Cocoa, we subclass NSWindow, and then we update our super-call in the init method. In here, we keep the same contentRect but will modify this later. We change the styleMask to borderless, which removes the standard application options: close, minimize and maximize. It also removes the top bar on the window. You can also do this in the storyboard file, but we are doing it here to show what it would look like to do it programmatically. Next, we pass the other variables right on through to the constructor. Now that we have that out of the way, we need to tell our window where to draw. Looking at the NSWindow documentation, we see that we can set our window at different levels. We set the window level to NSWindow.Level.statusBar because that level draws above all other normal windows.

3. Our First Test

We are using NSResponder methods in the same way that we’d use UIResponder methods to respond to touches on iOS. On macOS, we are interested in mouse events instead. Later on, we will use these methods to drive drawing in our ViewController.

Finally, we’ll change the color of our view to blue, just to make sure things are running smoothly; if we went straight to a transparent view, we wouldn’t know where the window was drawn yet! Next, we need to set up the storyboard to use our new ClearWindow class, even though it isn’t living up to its name yet. Go back to the storyboard, click the window, and edit its subclass under the “Custom Class” area in the right pane. Type in ClearWindow here, and we can now run our app.

Type in ClearWindow
Type in ClearWindow

Lo and behold, we have a blue rectangle on our screen! Nothing impressive, yet. We can click and drag around, and we can spam the console. Let’s stop running the app at this point because it will only get in the way.

4. Let’s Start Drawing!

Next, we can update our implementation of ViewController. The bulk of our work will now happen here and in Main.storyboard. Right now, the important part is to piggyback on the methods that we created in ClearWindow to capture mouse gestures. Replace the contents of ViewController with the following code:

import Cocoa

class ViewController: NSViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        view.frame = CGRect(origin: CGPoint(), size: NSScreen.main!.visibleFrame.size)
    }

    func startDrawing(at point: NSPoint) {
    }

    func continueDrawing(at point: NSPoint) {
    }

    func endDrawing(at point: NSPoint) {
    }
}

This does not look like much yet, but it is the foundation of our drawing code. In our modified viewDidLoad() method, we’ve resized the view to match our main screen’s dimensions. We do this because we can only draw within our NSViewController’s view; if our view covered everything, then we’d be able to draw over anything! Finally, we’ve created the hooks that ClearWindow will call to tell the ViewController to draw. More on that in a bit.

The next thing we need to do is define how we will draw onto the screen. Add the following code above our viewDidLoad method.

let lineWeight: CGFloat = 10
let strokeColor: NSColor = .red
var currentPath: NSBezierPath?
var currentShape: CAShapeLayer?

These four variables define our drawing. Currently, our line thickness is 10 points, and we will be drawing in red — nothing revolutionary here, but that needs to be defined. Next, we have an NSBezierPath and a CAShapeLayer. These should look pretty familiar if you have ever played with UIBezierPath. Note that these two are optional (they will come up again later).

Now for the fun part: We can start implementing our drawing methods.

Start Drawing

Update startDrawing with the following code:

func startDrawing(at point: NSPoint) {
    currentPath = NSBezierPath()
    currentShape = CAShapeLayer()
    currentShape?.lineWidth = lineWeight
    currentShape?.strokeColor = strokeColor.cgColor
    currentShape?.fillColor = NSColor.clear.cgColor
    currentShape?.lineJoin = kCALineJoinRound
    currentShape?.lineCap = kCALineCapRound

    currentPath?.move(to: point)
    currentPath?.line(to: point)
    currentShape?.path = currentPath?.cgPath
    view.layer?.addSublayer(currentShape!)
}

This is the most complicated of the three drawing methods. The reason for this is that we need to set up a new NSBezierPath and CAShapeLayer each time we start drawing. This is important because we don’t want to have one continuous line all over our screen — that wouldn’t do at all. This way, we can have one layer per line, and we will be able to make any kind of drawing we want. Then, we set up the newly created CAShapeLayer’s properties. We send in our lineWeight and set the stroke color to our nice red. We set the fill color to clear, which means we will only be drawing with lines, instead of solid shapes. Then, we set the lineJoin and lineCap to use rounded edges. I chose this because, in my opinion, the rounded edges make the drawing look nicer. Feel free to play with these properties to figure out what works best for you.

Then, we move the point where we will start drawing to the NSPoint that will be sent to us. This will not draw anything, but it will give the NSBezierPath a reference point for when we actually give it instructions to draw. Think of it as if you had a pen in your hand and you decided to draw something in the middle of a sheet of paper. You move the pen to the location you want to draw, but you’re not doing anything yet, just hovering over the paper, waiting to put the ink down. Without this, nothing can be drawn because the next line requires two points to work. The next line, aptly named line(to: point), draws a line from the current position to wherever you specify. Currently, we’re telling our NSBezierPath to stay in the same position and touch down on our sheet of paper.

The last two lines pull the path data out of our NSBezierPath in a usable format for CAShapeLayer. Note that currentPath?.cgPath will be marked as an error at the moment. Don’t fret: We will take care of that after we cover the next two methods. Just know that when it does work, this function will have our CAShapeLayer draw its path, even if it is currently a dot. Then, we add this layer to our view’s sublayers. At this point, the user will be able to see that they are now drawing.

Continue Drawing

Update continueDrawing with the following code:

func continueDrawing(at point: NSPoint) {
    currentPath?.line(to: point)
    if let shape = currentShape {
        shape.path = currentPath?.cgPath
    }
}

Not much going on here, but we are adding another line to our currentPath. Because the CAShapeLayer is already in our view’s sublayer, the update will show on screen. Again, note that these are optional values; we are guarding ourselves just in case they are nil.

End Drawing

Update endDrawing with the following code:

func endDrawing(at point: NSPoint) {
    currentPath?.line(to: point)
    if let shape = currentShape {
        shape.path = currentPath?.cgPath
    }
    currentPath = nil
    currentShape = nil
}

We update the path again, just the same way as we did in continueDrawing, but then we also nil out our currentPath and currentShape. We do this because we are done drawing and no longer need to talk to this shape and path. The next action we can take is startDrawing again, and we start this process all over again. We nil out these values so that we cannot update them again; the view’s sublayer will still hold a reference to the line, and it will stay on screen until we remove it.

If we run the app with what we have, we’ll get errors! Almost forgot about that. One thing you will definitely notice when moving over to macOS development is that not every API is identical between iOS and macOS. One such issue here is that NSBezierPath doesn’t have the handy cgPath property that UIBezierPath has. We use this property to easily convert the NSBezierPath path into a path that CAShapeLayer can use. This way, CAShapeLayer will do all the heavy lifting to display our line. We can sit back and reap the reward of a nice-looking line with none of the work! Stack Overflow has a handy answer that I’ve updated to Swift 4 to handle this absence of cgPath (see below). This code creates an extension on NSBezierPath that returns a handy CGPath to play with. Create a file named NSBezierPath+CGPath.swift and add the following code to it.

import Cocoa

extension NSBezierPath {

    public var cgPath: CGPath {
        let path = CGMutablePath()
        var points = [CGPoint](repeating: .zero, count: 3)

        for i in 0 ..< self.elementCount {
            let type = self.element(at: i, associatedPoints: &points)
            switch type {
            case .moveToBezierPathElement:
                path.move(to: points[0])
            case .lineToBezierPathElement:
                path.addLine(to: points[0])
            case .curveToBezierPathElement:
                path.addCurve(to: points[2], control1: points[0], control2: points[1])
            case .closePathBezierPathElement:
                path.closeSubpath()
            }
        }
        return path
    }
}

At this point, everything will run, but we aren’t drawing anything quite yet. We still need to attach the drawing functions to actual mouse actions. In order to do this, we go back into our ClearWindow and update the NSResponder mouse methods to the following:

override func mouseDown(with event: NSEvent) {
    (contentViewController as? ViewController)?.startDrawing(at: event.locationInWindow)
}

override func mouseDragged(with event: NSEvent) {
    (contentViewController as? ViewController)?.continueDrawing(at: event.locationInWindow)
}

override func mouseUp(with event: NSEvent) {
    (contentViewController as? ViewController)?.endDrawing(at: event.locationInWindow)
}

This basically checks to see whether the current view controller is our instance of ViewController, where we will handle the drawing logic. If you run the app now, you should see something like this:

The app as it is now
The app as it is now (View large version)

This is not completely ideal, but we are currently able to draw on the blue portion of our screen. This means that our drawing logic is correct, but our layout is not. Before correcting our layout, let’s create a good way to quit, or disable, drawing on our app. If we make it full screen right now, we would either have to “force quit” or switch to another space to quit our app.

5. Create A Menu

Head back over to our Main.storyboard file, where we can add a new menu to our ViewController. In the right pane, drag “Menu” under the “View Controller Scene” in our storyboard’s hierarchy.

Setting up the menu
Setting up the menu (View large version)

Edit these menu items to say “Clear,” “Toggle” and “Quit.” For extra panache, we can add a line separator above our “Quit” item, to deter accidental clicks:

Creating menu options
Creating menu options (View large version)

Next, open up the “Assistant Editor” (the Venn diagram-looking button near the top right of Xcode), so that we can start hooking up our menu items. For both “Clear” and “Toggle,” we want to create a “Referencing Outlet” so that we can modify them. After this, we want to hook up “Sent Action” so that we can get a callback when the menu item is selected.

Hooking up the code to the buttons
Hooking up the code to the buttons (View large version)

For “Quit,” we will drag our “Sent Action” to the first responder, and select “Terminate.” “Terminate” is a canned action that will quit the application. Finally, we need a reference to the menu itself; so, right-click on the menu under “View Controller Scene,” and create a reference named optionsMenu. The newly added code in ViewController should look like this:

@IBOutlet weak var clearButton: NSMenuItem!
@IBOutlet weak var toggleButton: NSMenuItem!
@IBOutlet var optionsMenu: NSMenu!

@IBAction func clearButtonClicked(_ sender: Any) {
}

@IBAction func toggleButtonClicked(_ sender: Any) {
}

We have the building blocks for the menu extras for our app; now we need to finish the process. Close out of “Assistant Editor” mode and head over to ViewController so that we can make use of these menu buttons. First, we will add the following strings to drive the text of the toggle button. Add the following two lines near the top of the file.

private let offText = "Disable Drawing"
private let onText = "Enable Drawing"

We need to update what happens when the clear and toggle buttons are clicked. Let’s start with toggling: add the following two lines to toggleButtonClicked(_ sender):

view.window!.ignoresMouseEvents = !view.window!.ignoresMouseEvents
toggleButton.title = view.window!.ignoresMouseEvents ? onText : offText

This toggles the flag on our window that makes it ignore mouse events, so that clicks pass “through” our window and we can use the computer as intended. We also update the toggleButton’s title to let the user know whether drawing is currently enabled or disabled.
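The body of clearButtonClicked isn’t shown here. As a minimal sketch, assuming every drawn line lives as a CAShapeLayer sublayer of our view’s layer (which is how startDrawing adds them), clearing could simply remove those sublayers:

@IBAction func clearButtonClicked(_ sender: Any) {
    // Remove every line layer that startDrawing added.
    // Assumes the view's layer only holds our drawing layers.
    view.layer?.sublayers?.forEach { $0.removeFromSuperlayer() }
    // Drop any in-progress path so a stale shape can't be updated afterwards.
    currentPath = nil
    currentShape = nil
}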

Now we need to finally put our menu to use. Let’s start by adding an icon to our project. You can find that in the repository. Then, we can override the awakeFromNib() method because, at this point, our view will be inflated from the storyboard. Add the following code to ViewController.

let statusItem = NSStatusBar.system.statusItem(withLength: NSStatusItem.variableLength)

override func awakeFromNib() {
    statusItem.menu = optionsMenu

    let icon = NSImage(named: NSImage.Name(rawValue: "pencil"))
    icon?.isTemplate = true // best for dark mode
    statusItem.image = icon

    toggleButton.title = offText
}

Make sure to put the statusItem near the top, next to the rest of the variables. statusItem grabs a spot for our app to use menu extras in the top toolbar. Then, in awakeFromNib, we set the menu as our optionsMenu, and, finally, we give it an icon, so that it is easily identifiable and clickable. If we run our app now, the “menu extra” icon will appear at the top! We aren’t quite finished yet. We need to ensure that the drawing space is placed correctly on screen; otherwise, it will be only partially useful.

6. Positioning The Drawing App

To get our view to draw where we want it, we must venture back into Main.storyboard. Click on the window itself, and then select the attributes inspector, the icon fourth from the left in the right-hand pane. Uncheck everything under the “Appearance,” “Controls” and “Behavior” headings, like so:

Attributes inspector
Attributes inspector

We do this to remove all extra behaviors and appearances. We want a clear screen, with no bells and whistles. If we run the app again, we will be greeted by a familiar blue screen, only larger. To make things useful, we need to change the color from blue to transparent. Head back over to ClearWindow and update the blue color to this:

backgroundColor = NSColor(calibratedRed: 1, green: 1, blue: 1, alpha: 0.001)

This is pretty much the same as NSColor.clear. The reason why we are not using NSColor.clear is that if our entire app is transparent, then macOS won’t think our app is visible, and no clicks will be captured in the app. We’ll go with a mostly transparent app — something that, at least to me, is not noticeable, yet our clicks will be recorded correctly.

The final thing to do is remove it from our dock. To do this, head over to Info.plist and add a new row, named “Application is agent (UIElement)” and set it to “Yes.” Once this is done, rerun the app, and it will no longer appear in the dock!
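If you prefer editing Info.plist as source code rather than through Xcode’s property list editor, the “Application is agent (UIElement)” row corresponds to the raw LSUIElement key; a sketch of the entry:

<!-- Info.plist: run as an agent app, so no Dock icon is shown -->
<key>LSUIElement</key>
<true/>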

Conclusion

This app is pretty short and sweet, but we did learn a few things. First, we figured out some of the similarities and differences between UIKit and Cocoa, and we took a tour around what Cocoa has in store for us iOS developers. We also (hopefully) know that creating a Cocoa-based app is not difficult, nor should it be intimidating. We now have a small app that is presented in an atypical fashion, living up in the menu bar next to the menu extras. Finally, we can use all of this to build bigger and better Cocoa apps! You started this article as an iOS developer and grew beyond that, becoming an Apple developer. Congratulations!

I will be updating the public repository for this example to add extra options. Keep an eye on the repository, and feel free to add code, issues and comments.

Good luck and happy programming!

Thank you!


(al)

Quick Wins For Improving Performance And Security Of Your Website

When it comes to building and maintaining a website, one has to take a ton of things into consideration. However, in an era when people want to see results fast, while at the same time knowing that their information online is secure, all webmasters should strive for a couple of things:

  • Improving the performance of their website,
  • Increasing their website’s security.

Both of these goals are vital in order to run a successful website.

So, we’ve put together a list of five technologies you should consider implementing to improve both the performance and security of your website. Here’s a quick overview of the topics we’ll cover:

  • Let’s Encrypt (SSL): a free way to obtain an SSL certificate for improved security and better performance.
  • HTTP/2: the successor to the HTTP 1.1 protocol, which introduces many performance enhancements.
  • Brotli compression: a compression method that outperforms Gzip, resulting in smaller file sizes.
  • WebP images: an image format that renders images smaller than a typical JPEG or PNG, resulting in faster loading times.
  • Content delivery network: a collection of servers spread out across the globe, with the aim of caching and delivering your website’s assets faster.

If you aren’t aware of the benefits of improving your website’s performance and security, consider the fact that Google loves speed and, since 2010, has been using website speed as a ranking factor. Furthermore, if you run an e-commerce shop or a simple blog with an opt-in form, a faster website will increase your conversions. According to a study by Mobify, for every 100-millisecond decrease in home-page loading speed, Mobify saw a 1.11% lift in session-based conversions for its customer base, amounting to an average annual revenue increase of $376,789.

The web is also quickly moving towards SSL to provide users with better security and improved overall performance. In fact, for a couple of the technologies mentioned in this article, having an SSL-enabled website is a prerequisite.

Before jumping in, note that even if you can’t (or decide not to) apply each and every one of the suggestions mentioned here, your website would still benefit from implementing any number of the methods outlined. Therefore, try to determine which aspects of your website could use improvement and apply the suggestions below accordingly.

The Front-End Performance Challenge

In case you missed it, we’re currently running a front-end performance challenge to tickle your brains! A perfect opportunity to apply everything you know about Service Workers, HTTP/2, Brotli and Zopfli, and other optimization techniques in one project. Join in! →

Let’s Encrypt (SSL)

If your website is still being delivered over HTTP, it’s time to migrate now. Google already takes HTTPS into consideration as a ranking signal, and according to Google’s security blog, all non-secure web pages will eventually display a prominent “Not Secure” message within the Chrome browser.

That’s why, to start off this list, we’ll go over how you can complete the migration process with a free SSL certificate, via Let’s Encrypt. Let’s Encrypt is a free and automated way to obtain an SSL certificate. Before Let’s Encrypt, you were required to purchase a valid certificate from a certificate-issuing authority if you wanted to deliver your website over HTTPS. Due to the additional cost, many web developers opted not to purchase the certificate and, therefore, continued serving their website over HTTP.

However, since Let’s Encrypt’s public beta launched in late 2015, millions of free SSL certificates have been issued. In fact, Let’s Encrypt stated that, as of late June 2017, over 100 million certificates have been issued. Before Let’s Encrypt launched, fewer than 40% of web pages were delivered over HTTPS. A little over a year and a half after the launch of Let’s Encrypt, that number has risen to 58%.

If you haven’t already moved to HTTPS, do so as soon as possible. Here are a few reasons why moving to HTTPS is beneficial:

  • increased security (because everything is encrypted),
  • HTTPS is required in order for HTTP/2 and Brotli to work,
  • HTTPS is a ranking signal,
  • SSL-secured websites build visitor trust.

How to Obtain a Let’s Encrypt Certificate

You can obtain an SSL certificate in a few ways. Although the SSL certificates that Let’s Encrypt provides satisfy most use cases, there are certain things to be aware of:

  • There is currently no option for wildcard certificates. However, this is planned to be supported in January 2018.
  • Let’s Encrypt certificates are valid for a period of 90 days. You must either renew them manually before they expire or set up a process to renew them automatically.

Of course, if one or both of these points are a deal-breaker, then acquiring a custom SSL certificate from a certificate authority is your next best bet. Regardless of which provider you choose, having an HTTPS-enabled website should be your top priority.

To obtain a Let’s Encrypt certificate, you have two methods to choose from:

  • With shell access: Run the installation and obtain a certificate yourself.
  • Without shell access: Obtain a certificate through your hosting or CDN provider.

The second option is pretty straightforward. If your web host or CDN provider offers Let’s Encrypt support, you basically just need to enable it in order to start delivering assets over HTTPS.

However, if you have shell access and want or need to configure Let’s Encrypt yourself, then you’ll need to determine which web server and operating system you’re using. Next, go to Certbot and select your software and system from the dropdown menus to find your specific installation instructions. Although the instructions for each combination of software and OS are different, Certbot provides simple setup instructions for a wide variety of systems.
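As an illustration, assuming an nginx server and a hypothetical domain, the Certbot flow typically looks something like this; the exact commands Certbot shows for your own software and OS combination may differ:

# Obtain and install a certificate using the nginx plugin
sudo certbot --nginx -d example.com -d www.example.com

# Certificates expire after 90 days; check that automatic renewal will work
sudo certbot renew --dry-run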

Let's Encrypt Certbot home page
Certbot home page (View large version)

HTTP/2

Thanks to Let’s Encrypt (or any other SSL certificate authority), your website should now be running over HTTPS. This means you can now take advantage of the next two technologies we’ll discuss, which would otherwise be incompatible if your website was delivered over HTTP. The second technology we’ll cover is HTTP/2.

HTTP 1.1 was released more than 15 years ago, and since then some major improvements have occurred. One of the most welcome improvements of HTTP/2 is that it allows browsers to parallelize multiple downloads using only one connection. With HTTP 1.1, most browsers were able to handle only six concurrent downloads on average. HTTP/2 now renders methods such as domain-sharding obsolete.

Apart from requiring only one connection per origin and allowing multiple requests at the same time (multiplexing), HTTP/2 offers other benefits:

  • Server push: the server pushes additional resources that it thinks the client will require in the future.
  • Header compression: reduces the size of headers by using HPACK header compression.
  • Binary protocol: unlike HTTP 1.1, which was textual, HTTP/2 is binary, which removes the time needed to translate text and makes the protocol easier for servers to parse.
  • Prioritization: priority levels are associated with requests, thereby allowing resources of higher importance to be delivered first.

Enabling HTTP/2

Regardless of how you’re delivering the majority of your content, whether from your origin server or a CDN, most providers now support HTTP/2. Determining whether a provider supports HTTP/2 should be fairly easy by going to its features page and checking around. As for CDN providers, Is TLS Fast Yet? provides a comprehensive list of CDN services and marks whether they support HTTP/2.

If you want to check for yourself whether your website currently uses HTTP/2, then you’ll need a recent version of cURL and can run the following command; the response’s status line should start with HTTP/2:

curl -I --http2 https://yourwebsite.com

Alternatively, if you’re not comfortable using the command line, you can open up Chrome’s Developer Tools and navigate to the “Network” tab. Under the “Protocol” column, you should see the value h2.

Chrome's Developer Tools
Chrome’s Developer Tools h2 (View large version)

Enabling HTTP/2 on nginx

If you’re running your own server and are using an outdated software version, then you’ll need to upgrade it to a version that supports HTTP/2. For nginx users, the process is pretty straightforward. Simply ensure that you’re running nginx version 1.9.5 or higher, and add the following listen directive within the server block of your configuration file:

listen 443 ssl http2;

Enabling HTTP/2 on Apache

For Apache users, the process involves a few more steps. Apache users must update their system to version 2.4.17 or higher in order to make use of HTTP/2. They’ll also need to build Apache with the mod_http2 module, load the module, and then define the proper server configuration. An outline of how to configure HTTP/2 on an Apache server can be found in the Apache HTTP/2 guide.
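As a rough sketch (assuming Apache 2.4.17+ with mod_http2 available and an existing HTTPS virtual host), the relevant configuration usually boils down to loading the module and advertising h2:

# Load the HTTP/2 module (the module path may differ on your system)
LoadModule http2_module modules/mod_http2.so

# Prefer HTTP/2 over TLS, falling back to HTTP/1.1 for older clients
Protocols h2 http/1.1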

No matter which web server you’re using, your website will need to be running on HTTPS in order to take advantage of the benefits of HTTP/2.

HTTP/2 Vs. HTTP 1.1: Performance Test

You can test the performance of HTTP/2 compared to HTTP 1.1 manually by running an online speed test before and after enabling HTTP/2 or by checking your browser’s development console.

Based on the structure and number of assets that your website loads, you might experience different levels of improvement. For instance, a website with a large number of resources will require multiple connections over HTTP 1.1 (thus increasing the number of round trips required), whereas on HTTP/2 it will require only one.

The results below are the findings for a default WordPress installation using the 2017 theme and loading 18 image assets. Each setup was tested three times on a 100 Mbps connection, and the average overall loading time was used as the final result. Firefox was used to examine the waterfall structure of these tests.

The first test below shows the results over HTTP 1.1. In total, the entire page took an average of 1.73 seconds to fully load, and various lengths of blocked time were incurred (as seen by the red bars).

HTTP 1.1 speed test results
HTTP 1.1 loading time and waterfall (View large version)

When testing the exact same website, only this time over HTTP/2, the results were quite different. Using HTTP/2, the average loading time of the entire page took 1.40 seconds, and the amount of blocked time incurred was negligible.

HTTP/2 speed test results
HTTP/2 loading time and waterfall (View large version)

Just by switching to HTTP/2, the average savings in loading time ended up being 330 milliseconds. That being said, the more resources your website loads, the more connections must be made. So, if your website loads a lot of resources, then implementing HTTP/2 is a must.

Brotli Compression

The third technology is Brotli, a compression algorithm developed by Google back in 2015. Brotli continues to grow in popularity, and currently all popular web browsers support it (with the exception of Internet Explorer). Compared to Gzip, Brotli still has some catching up to do in global availability (i.e. in CMS plugins, server support, CDN support, etc.).

However, Brotli has shown some impressive compression results compared to other methods. For instance, according to Google’s algorithm study, Brotli outperformed Zopfli (another modern compression method) by 20 to 26% in compression ratio.

Enabling Brotli

Depending on which web server you’re running, the implementation of Brotli will differ, so you’ll need to use the method appropriate to your setup. If you’re using nginx, Apache or Microsoft IIS, modules are available to enable Brotli (for example, ngx_brotli for nginx and mod_brotli for Apache).

Once you’ve finished downloading and installing one of the modules above, you’ll need to configure the directives to your liking. When doing this, pay attention to three things:

  • File type: the types of files that can be compressed with Brotli include CSS, JavaScript, XML and HTML.
  • Compression quality: the quality of compression will depend on the amount of compression you want to achieve in exchange for time. The higher the compression level, the more time and resources will be required, but the greater the savings in size. Brotli’s compression value can be defined anywhere from 1 to 11.
  • Static versus dynamic compression: the stage at which you would like Brotli compression to take place will determine whether to implement static or dynamic compression (a short sketch of the static approach follows this list):
    • Static compression pre-compresses assets ahead of time — before the user actually makes a request. Therefore, once the request is made, there is no need for Brotli to compress the asset — it will already have been compressed and, hence, can be served immediately. This feature comes built-in with the nginx Brotli module, whereas implementing static compression with Apache requires some configuration.
    • Dynamic compression occurs on the fly. In other words, once a visitor makes a request for a Brotli-compressible asset, the asset is compressed on the spot and subsequently delivered. This is useful for dynamic content that needs to be compressed upon each request, the downside being that the user must wait for the asset to be compressed before it is delivered.
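As a sketch of the static approach described above (assuming the brotli command-line tool and nginx with the ngx_brotli module, whose brotli_static directive serves pre-compressed .br files), you could pre-compress assets at build time:

# Pre-compress an asset at the highest quality level; writes style.css.br
brotli -q 11 -o style.css.br style.css

# nginx (ngx_brotli): serve the pre-compressed .br file when the client supports Brotli
brotli_static on;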

A Brotli configuration for nginx users might look similar to the snippet below. This example sets compression to occur dynamically (on the fly), defines a quality level of 5 and specifies various file types.

brotli on;
brotli_comp_level 5;
brotli_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
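For Apache users, a roughly equivalent configuration is sketched below. This assumes the mod_brotli module is installed and enabled; the directives follow mod_brotli's documented names, and the file types and quality level mirror the nginx example above:

<IfModule mod_brotli.c>
    # Compress these MIME types on the fly at quality level 5
    AddOutputFilterByType BROTLI_COMPRESS text/html text/plain text/css text/xml application/javascript application/json
    BrotliCompressionQuality 5
</IfModule>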

To verify that Brotli is enabled on your server, you can open up Chrome’s Developer Tools, navigate to the “Network” tab, select an asset, and check the Content-Encoding header. This should now be br. Note that Brotli requires HTTPS, so if you’ve correctly gone through the installation and configuration process but still don’t see the br value, then you’ll need to migrate to HTTPS.

Chrome Developer Tools Network tab showing the br encoding
Chrome’s Developer Tools br (View large version)

Otherwise, you can run a simple cURL command that includes an Accept-Encoding header advertising Brotli support, such as:

curl -I -H "Accept-Encoding: br" https://yourwebsite.com/path/to/your/asset.js

This will return a list of response headers, where you can also check for the Content-Encoding header value. If you’re using WordPress and want to take things a step further by delivering a Brotli-compressed HTML document, check out my WordPress guide to Brotli to learn how.

Brotli Vs. Gzip: Performance Test

To compare Brotli and Gzip compression, we’ll take three compressible web assets and compare them in size and loading speed. Both compression methods were defined with a level 5 compression value.

Having tested the assets three times and taking the average loading speed of each, the results were as follows:

Asset name | Gzip size | Gzip loading time | Brotli size | Brotli loading time
jquery.js | 33.4 KB | 308 ms | 32.3 KB | 273 ms
dashicons.min.css | 28.1 KB | 148 ms | 27.9 KB | 132 ms
style.css | 15.7 KB | 305 ms | 14.5 KB | 271 ms

Overall, the Gzipped assets were 77.2 KB in total size, while the Brotli assets were 74.7 KB. That’s a 3.3% reduction in overall page size just from using Brotli compression on three assets. As for loading time, the Gzip assets had a combined total time of 761 milliseconds, while the Brotli assets took 676 milliseconds to load — an improvement of 12.6%.

4. WebP Images

Our fourth suggestion is to use the image format that goes by the name of WebP. Like Brotli, WebP was developed by Google for the purpose of making images smaller. Like JPEG and PNG, WebP is an image format. The primary advantage of serving WebP images is that they are much smaller in size than JPEGs and PNGs. Typically, savings of up to 80% can be achieved after converting a JPEG or PNG to WebP.

The downside of the WebP image format is that not all browsers support it. At the time of writing, only Chrome and Opera do. However, with proper configuration, you can deliver WebP images to supporting browsers, while delivering a fallback image format (such as JPEG) to non-supporting browsers.

WebP still has a way to go before becoming as widespread as JPEG and PNG. However, thanks to its impressive savings in size, it stands a good chance of continued growth. Overall, WebP reduces total page size, speeds up website loading and saves bandwidth.

How to Convert to and Deliver WebP

A few options are available to convert images to WebP format. If you use a popular CMS, such as WordPress, Joomla or Magento, plugins are available that enable you to convert images directly within the CMS’ dashboard.

On the other hand, if you want to take a manual approach, online WebP image converters are available, and certain image-processing apps even come with a WebP format option that you can export to, thereby saving you from having to convert anything at all.

Lastly, if you prefer a more integrated approach, certain image-processing services provide an API that you can use to integrate directly in your web project, enabling you to convert images automatically.
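As a concrete illustration of the manual route, Google's free cwebp command-line encoder is one option among many (any converter will do; the file names below are just placeholders):

# Convert one JPEG to WebP using lossy compression at quality 80
cwebp -q 80 my-photo.jpg -o my-photo.webp

# Or keep every pixel intact with lossless compression
cwebp -lossless my-graphic.png -o my-graphic.webp

# Batch-convert every JPEG in the current directory (bash)
for f in *.jpg; do cwebp -q 80 "$f" -o "${f%.jpg}.webp"; done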

As mentioned, not all browsers currently support WebP images. Therefore, if you serve an image on your website with only a .webp extension, non-supporting browsers will return a broken image. That’s why a fallback is important. Let’s go over three ways to achieve this.

1. Use the picture Element

This method allows you to define the path of a WebP image, as well as the path of the original JPEG within the website’s HTML. With this method, supporting browsers will display the WebP images, while all other browsers will display the default image defined in the last nested child tag within the picture block. Consider the following example:

<picture>
  <source srcset="images/my-webp-image.webp" type="image/webp">
  <img src="images/my-jpg-image.jpg" alt="My image">
</picture>

This method implements WebP functionality most widely, while ensuring that a fallback mechanism is in place. However, it might require a lot of modification to the HTML, depending on how large your application is.

2. Modify the Server’s Config File

This method uses rewrite rules defined in the server’s config file to fall back to a supported image format if the browser doesn’t support WebP. Use the appropriate snippet for Apache or nginx according to your web server, and adjust the path/images directory accordingly.

For Apache:

RewriteEngine On
RewriteCond %{HTTP_ACCEPT} image/webp
RewriteCond %{DOCUMENT_ROOT}/$1.webp -f
RewriteRule ^(path/images.+).(jpe?g|png)$ $1.webp [T=image/webp,E=accept:1]
Header append Vary Accept env=REDIRECT_accept
AddType image/webp .webp

For nginx:

# http config block
map $http_accept $webp_ext {
    default "";
    "~*webp" ".webp";
}

# server config block
location ~* ^(path/images.+).(png|jpg)$ {
    add_header Vary Accept;
    try_files $1$webp_ext $uri =404;
}

The downside of this method is that it is not recommended if you are going to be using WebP images in conjunction with a CDN. The reason is that the CDN will cache a WebP image if a WebP-supported browser is the first one to request the asset. Therefore, any subsequent requests will return the WebP image, whether the browser supports it or not.

3. Use a WordPress Caching Plugin

If you’re a WordPress user and need a solution that will deliver WebP images to supporting browsers while falling back to JPEGs and PNGs for others, all the while being compatible with a CDN, then you can use a caching plugin such as Cache Enabler. If you define within the plugin that you want to create an additional cached version for WebP, then the plugin will deliver a WebP-cached version to supporting browsers, while falling back to HTML or HTML Gzip for other browsers.

WebP Vs. JPEG: Performance Tests

To demonstrate the difference in size between a WebP and JPEG image, we’ll take three JPEG images, convert them to WebP, and compare the output to the originals. The three images are shown below and carry a size of 2.1 MB, 4.3 MB and 3.3 MB, respectively.

JPEG test image 1
Test JPEG image 1 (View large version)
JPEG test image 2
Test JPEG image 2 (View large version)
JPEG test image 3
Test JPEG image 3 (View large version)

When converted to WebP format, each image reduced significantly in size. The table below outlines the sizes of the original images, the sizes of the WebP versions, and how much smaller the WebP images are than the JPEGs. The images were converted to WebP using lossy compression, with a quality level of 80.

Image name | JPEG size | WebP size | Percentage smaller
test-jpg-1 | 2.1 MB | 1.1 MB | 48%
test-jpg-2 | 4.3 MB | 1 MB | 77%
test-jpg-3 | 3.3 MB | 447 KB | 85.9%

You can compress WebP images using either a lossless (i.e. no quality loss) or lossy (i.e. some quality loss) method. The tradeoff for quality is a smaller image size. If you want to implement lossy compression for additional savings in size, doing so with WebP will render a better quality picture at a smaller size, as opposed to a lossy JPEG at the same level of quality. David Walsh has written a comprehensive post outlining the size and quality differences between WebP, JPEG and PNG.

5. Content Delivery Network

The last suggestion is to use a content delivery network (CDN). A CDN accelerates web assets globally by caching them across a cluster of servers. When a website uses a CDN, it essentially offloads the majority of its traffic to the CDN’s edge servers and routes its visitors to the nearest CDN server.

CDNs store a website's resources for a predefined period of time thanks to caching. With caching, a CDN's server creates a copy of the origin server's web asset and stores it on its own server. This process makes web requests much more efficient, given that visitors will be accessing your website from multiple geographic regions.

If no CDN has been configured, then all of your visitors’ requests will go to the origin server’s location, wherever that may be. This creates additional latency, especially for visitors who are requesting assets from a location far away from the origin server. However, with a CDN configured, visitors will be routed to the CDN provider’s nearest edge server to obtain the requested resources, thus minimizing request and response times.

Setting up a CDN

The process for setting up a CDN will vary according to the CMS or framework you’re using. However, at a high level, the process is more or less the same:

  1. Create a CDN zone that points to your origin URL (https://yourwebsite.com).
  2. Create a CNAME record to point a custom CDN URL (cdn.yourwebsite.com) to the URL provided by your CDN service.
  3. Use your custom CDN URL to integrate the CDN with your website (make sure to follow the guide appropriate to your website’s setup).
  4. Check your website’s HTML to verify that the static assets are being called using the CDN’s URL that you defined and not the origin URL.

Once this is complete, you’ll be delivering your website’s static assets from the CDN’s edge servers instead of your own. This will not only improve website speed, but will also enhance security, reduce the load on your origin server and increase redundancy.
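If you'd like to sanity-check the setup from the command line, the sketch below verifies the CNAME record from step 2 and inspects the response headers of an asset requested through the CDN URL. The host names are placeholders, and the exact cache-status header varies by CDN provider:

# 1. The CNAME should resolve to the hostname your CDN provider gave you
dig +short CNAME cdn.yourwebsite.com

# 2. Look for a cache-status response header on a CDN-served asset
#    (often named something like X-Cache, or exposed via an Age header)
curl -sI https://cdn.yourwebsite.com/path/to/asset.css | grep -i -E "x-cache|age"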

Before and After Using a CDN: Performance Test

Because a CDN, by nature, has multiple server locations, performance tests will vary according to where you are requesting an asset from and where the CDN’s closest server is. Therefore, for the sake of simplicity, we’ll choose three locations from which to perform our tests:

  • Frankfurt, Germany
  • New York, United States
  • Toronto, Canada.

If you’re a WordPress user and need a solution that will deliver WebP images to supporting browsers while falling back to JPEGs and PNGs for others, all the while being compatible with a CDN, then you can use a caching plugin such as Cache Enabler. If you define within the plugin that you want to create an additional cached version for WebP, then the plugin will deliver a WebP-cached version to supporting browsers, while falling back to HTML or HTML Gzip for other browsers.

WebP Vs. JPEG: Performance Tests

To demonstrate the difference in size between a WebP and JPEG image, we’ll take three JPEG images, convert them to WebP, and compare the output to the originals. The three images are shown below and carry a size of 2.1 MB, 4.3 MB and 3.3 MB, respectively.

JPEG test image 1
Test JPEG image 1 (View large version)
JPEG test image 2
Test JPEG image 2 (View large version)
JPEG test image 3
Test JPEG image 3 (View large version)

When converted to WebP format, each image reduced significantly in size. The table below outlines the sizes of the original images, the sizes of the WebP versions, and how much smaller the WebP images are than the JPEGs. The images were converted to WebP using lossy compression, with a quality level of 80.

Image name JPEG size WebP size Percentage smaller
test-jpg-1 2.1 MB 1.1 MB 48%
test-jpg-2 4.3 MB 1 MB 77%
test-jpg-3 3.3 MB 447 KB 85.9%

You can compress WebP images using either a lossless (i.e. no quality loss) or lossy (i.e. some quality loss) method. The tradeoff for quality is a smaller image size. If you want to implement lossy compression for additional savings in size, doing so with WebP will render a better quality picture at a smaller size, as opposed to a lossy JPEG at the same level of quality. David Walsh has written a comprehensive post outlining the size and quality differences between WebP, JPEG and PNG.

5. Content Delivery Network

The last suggestion is to use a content delivery network (CDN). A CDN accelerates web assets globally by caching them across a cluster of servers. When a website uses a CDN, it essentially offloads the majority of its traffic to the CDN’s edge servers and routes its visitors to the nearest CDN server.

CDNs store a website’s resources for a predefined period of time thanks to caching. With caching, a CDN’s server creates a copy of the origin server’s web asset and store it on its own server. This process makes web requests much more efficient, given that visitors will be accessing your website from multiple geographic regions.

If no CDN has been configured, then all of your visitors’ requests will go to the origin server’s location, wherever that may be. This creates additional latency, especially for visitors who are requesting assets from a location far away from the origin server. However, with a CDN configured, visitors will be routed to the CDN provider’s nearest edge server to obtain the requested resources, thus minimizing request and response times.

Setting up a CDN

The process for setting up a CDN will vary according to the CMS or framework you’re using. However, at a high level, the process is more or less the same:

  1. Create a CDN zone that points to your origin URL (https://yourwebsite.com).
  2. Create a CNAME record to point a custom CDN URL (cdn.yourwebsite.com) to the URL provided by your CDN service.
  3. Use your custom CDN URL to integrate the CDN with your website (make sure to follow the guide appropriate to your website’s setup).
  4. Check your website’s HTML to verify that the static assets are being called using the CDN’s URL that you defined and not the origin URL.

Once this is complete, you’ll be delivering your website’s static assets from the CDN’s edge servers instead of your own. This will not only improve website speed, but will also enhance security, reduce the load on your origin server and increase redundancy.

Before and After Using a CDN: Performance Test

Because a CDN, by nature, has multiple server locations, performance tests will vary according to where you are requesting an asset from and where the CDN’s closest server is. Therefore, for the sake of simplicity, we’ll choose three locations from which to perform our tests:

  • Frankfurt, Germany
  • New York, United States
  • Toronto, Canada.

As for the assets to be tested, we chose to measure the loading times of an image, a CSS file and a JavaScript file. The results of each test, both with and without a CDN enabled, are outlined in the table below:

Asset | Frankfurt, Germany | New York, United States | Toronto, Canada
Image, no CDN | 222 ms | 757 ms | 764 ms
Image, with CDN | 32 ms | 81 ms | 236 ms
JavaScript file, no CDN | 90 ms | 441 ms | 560 ms
JavaScript file, with CDN | 30 ms | 68 ms | 171 ms
CSS file, no CDN | 96 ms | 481 ms | 553 ms
CSS file, with CDN | 31 ms | 77 ms | 148 ms

In all cases, the loading times for assets loaded through a CDN were faster than without a CDN. Results will vary according to the location of the CDN and your visitors; however, in general, performance should be boosted.

Conclusion

If you’re looking for ways to increase your website’s performance and security, these five methods are all great options. Not only are they all relatively easy to implement, but they’ll also modernize your overall stack.

Some of these technologies are still in the process of being globally adopted (in terms of browser support, plugin support, etc.); however, as demand increases, so will compatibility. Thankfully, there are ways to implement some of the technologies (such as Brotli and WebP images) for browsers that support them, while falling back to older methods for browsers that do not.

As a final note, if you haven’t already migrated your website to HTTPS, do so as soon as possible. HTTPS is now the standard and is required in order to use certain technologies, such as HTTP/2 and Brotli. Your website will be more secure overall, will perform faster (thanks to HTTP/2) and will look better in the eyes of Google.

Smashing Editorial(rb, vf, yk, al, il)

The Role Of Storyboarding In UX Design

To come up with a proper design, UX designers use a lot of different research techniques, such as contextual inquiries, interviews and workshops. They summarize research findings into user stories and user flows and communicate their thinking and solutions to the teams with artifacts such as personas and wireframes. But somewhere in all of this, there are real people for whom the products are being designed.

In order to create better products, designers must understand what’s going on in the user’s world and understand how their products can make the user’s life better. And that’s where storyboards come in.

In this article, we’ll focus on storyboards as a means to explore solutions to UX issues, as well as to communicate these issues and solutions to others. In case you’ve been looking for a way to go from idea to prototype much faster than you usually do, you can download and test Adobe XD, the all-in-one UX/UI solution for designing websites, mobile apps, and more.

What Is A Storyboard?

A storyboard is a linear sequence of illustrations, arrayed together to visualize a story. As a tool, storyboarding comes from motion picture production. Walt Disney Studios is credited with popularizing storyboards, having used sketches of frames since the 1920s. Storyboards enabled Disney animators to create the world of a film before actually building it.

Storyboards have long been used as a tool in the visual storytelling media. Here is a Peter Pan storyboard. (Image: Wikia) (View large version)

Stories are the most powerful form of delivering information for a number of reasons:

  • Visualization

    A picture is worth a thousand words. Illustrating a concept or idea helps people to understand it more than anything else. An image speaks more powerfully than just words by adding extra layers of meaning.
  • Memorability

    Stories are 22 times more memorable than plain facts.
  • Empathy

    Storyboards help people relate to a story. As human beings, we often empathize with characters who have challenges similar to our own real-life ones. And when designers draw storyboards, they often imbue the characters with emotions.
  • Engagement

    Stories capture attention. People are hardwired to respond to stories: Our sense of curiosity immediately draws us in, and we engage to see what will happen next.

What Is A Storyboard In UX Design?

A storyboard in UX is a tool that visually predicts and explores a user’s experience with a product. It presents a product very much like a movie in terms of how people will use it. It can help UX designers understand the flow of people’s interaction with a product over time, giving the designers a clear sense of what’s really important for users.

Why Does Storytelling Matter in UX?

Stories are an effective and inexpensive way to capture, convey and explore experiences in the design process. In UX design, this technique has the following benefits:

  • Design approach is human-centered

    Storyboards put people at the heart of the design process. They put a human face on analytics data and research findings.
  • Forces thinking about user flow

    Designers are able to walk in the shoes of their users and see the products in a similar light. This helps designers to understand existing scenarios of interaction, as well as to test hypotheses about potential scenarios.
  • Prioritizes what’s important

    Storyboards also reveal what you don’t need to spend money on. Thanks to them, you can cut out a lot of unnecessary work.
  • Allows for “pitch and critique” method

    Storyboarding is a team-based activity, and everyone on a team can contribute to it (not just designers). Similar to the movie industry, each scene should be critiqued by all team members. Approaching UX with storytelling inspires collaboration, which results in a clearer picture of what’s being designed. This can spark new design concepts.
  • Simpler iteration

    Storyboarding relies heavily on an iterative approach. Sketching makes it possible for designers to experiment at little or no cost and to test multiple design concepts at the same time. Ideas can be shot down, and designers can move on and come up with a new solution relatively quickly. Nobody gets too attached to the ideas generated because the ideas are so quick and rough.

Storyboarding in the UX Design Process

A storyboard is a great instrument for ideation. In UX design, storyboards shape the user journey and the character (persona). They help designers to string together personas, user stories and various research findings to develop requirements for the product. The familiar combination of images and words makes even the most complex ideas clear.

When Is Storyboarding Useful?

Storyboarding is useful for participatory design. Participatory design involves all parties (stakeholders, UI and UX designers, developers, researchers) in the design process, to ensure that the result is as good as possible. With a compelling storyboard that shows how the solution addresses the problem, the product is more likely to be compelling to the target audience.

It can also be helpful during design sprints and hackathons, when the prototype is being built by multiple people in a very short time. Communicating design decisions with a storyboard really comes in handy.

When Is There No Need for a Storyboard?

If everyone involved in creating a product already shares a solid understanding of how the product should be designed and agrees on the direction of the design and development, then there’s no need for a storyboard.

Use Storyboarding To Illustrate Experiences

Before you start creating a storyboard, it’s important to know exactly why you want to do it. If you don’t have a clear goal in mind, you might end up with a few attractive storyboards, but they won’t give you important insights into the user’s experience.

The Primary Purpose of Storyboards Is Communication

When you search for storyboards online, they always look really nice. You might think that in order to do them properly, you have to be really good at drawing. Good news: You don't. A great storyboard artist isn't necessarily the next Leonardo da Vinci. Rather, a great storyboard artist is a great communicator.

Thus, it doesn’t matter whether you’re a skilled illustrator. What is far more important is the actual story you want to tell. Clearly conveying information is key. Keep in mind that a designer’s main skill isn’t in Photoshop or Sketch, but rather is the ability to formulate and describe a scenario.

When thinking about storyboarding, most people focus on their ability (or inability) to draw. The good news is that you don’t need to be good at drawing in order to create storyboards. This example is a storyboard frame from Martin Scorsese’s film Goodfellas. (View large version)

How to Work Out a Story Structure?

Before drawing a single line on a piece of paper or whiteboard, prepare to make your story logical and understandable. By understanding the fundamentals of the story and deconstructing it to its building blocks, you can present the story in a more powerful and convincing way.

Each story should have the following elements:

  • Character

    A character is the persona featured in your story. Behavior, expectations, feelings, as well as any decisions your character makes along the journey are very important. Revealing what is going on in the character’s mind is essential to a successful illustration of their experience. Each story should have at least one character.
  • Scene

    This is the environment inhabited by the character (it should have a real-world context that includes a place and people).
  • Plot

    The plot should start with a specific event (a trigger) and conclude with either the benefit of the solution (if you’re proposing one) or the problem that the character is left with (if you’re using the storyboard to highlight a problem the user is facing).
  • Narrative

    The narrative in a storyboard should focus on a goal that the character is trying to achieve. All too often, designers jump right into explaining the details of their design before explaining the backstory. Avoid this. Your story should be structured and should have an obvious beginning, middle and end. Most stories follow a narrative structure that looks a lot like a pyramid — often called a Gustav Freytag pyramid, after the person who identified the structure. Freytag broke down stories into five acts: exposition, rising action, climax, falling action (resolution) and denouement (conclusion).
Freytag’s pyramid, showing the five parts, or acts: exposition, rising action, climax, falling action and denouement. Ben Crothers has drawn in a quick story about a guy whose phone doesn’t work.

To make your story powerful, account for these things:

  • Clarity

    The main thing is to make the character, their goal and what happens in their experience as clear as possible. The outcome of the story should be clear for anyone who sees it: If you use a storyboard to communicate an existing problem, end with the full weight of the problem; if you use a storyboard to present a solution that will make the character’s life better, end with the benefits of that solution.
  • Authenticity

    Honor the real experiences of the people for whom you’re designing. If you’re writing a story that isn’t faithful to the product, it won’t bring any value to you and your users. Thus, the more realistic the storyboard is, the better will be the outcome.
  • Simplicity

    Each detail in the story should be relevant to experience. Cut out any unnecessary extras. No matter how good a phrase or picture may be, if it doesn’t add value to the overall message, remove it.
  • Emotion

    Bake emotion into the story. Communicate the emotional state of your character throughout their experience.

Step-by-Step Guide to Creating Your Own Storyboard

With so many things to take into account, creating a storyboard might seem like an impossible task. Don’t worry, the following guide will help you turn out a good one:

  1. Grab a pen and paper.

    You don’t have to use special software to leverage storyboards in the design process. Start with a pen or whiteboard marker, and be ready to experiment.
  2. Start with plain text and arrows.

    Break up the story into individual moments, each of which should provide information about the situation, a decision the character makes and the outcome of it, whether a benefit or a problem.
  3. Lay out each story as a sequence of moments.
  4. Bake emotion into the story.

    Next, convey what the character feels during each step. I add emoticons at each step, to give a feeling for what’s going on in the character’s head. You can draw in each emotional state as a simple expression.
    The same sequence of moments but with emoticons added will give the viewer a sense of what's going on with the character's emotional state.
  5. Translate each step into a frame.

    Roughly sketch a thumbnail in each frame of the storyboard to tell the story. Emphasize each moment, and think of how your character feels about it. Visuals are a great way to bring a story to life, so use them wherever possible. You can leave a comment on the back of each frame to give more context. You can also show a character's thinking with thought bubbles.

    Storyboard frames
    Story told in frames (Image: Elena Marinelli) (View large version)
  6. Show it to teammates.

    After you've drawn the storyboard, show it to other team members to make sure it's clear to them.

A Few Notes on Fidelity

High-fidelity storyboards (like the one in the example below) can look gorgeous.

A smile or frown can add emotion to the story and make it come alive for the audience. (Image: Chelsea Hostetter, Austin Center for Design)

However, in most cases, there’s no need for high-fidelity illustration. The level of fidelity will determine how expensive the storyboard will be to create. As I said before, conveying information is what’s important. A more schematic illustration can do that perfectly, while saving a lot of time.

Real-Life Storyboard In Action

Airbnb is a great example of how storyboarding can help a company understand the customer experience and shape a product strategy. To shape the future of Airbnb, CEO Brian Chesky borrowed a strategy from Disney animators. Airbnb created a list of the emotional moments that comprise an Airbnb stay, and it built the most important of those moments into stories. One of the first insights the team gained from storyboarding is that their service isn’t the website — most of the Airbnb experience happens offline, in and around the homes it lists on the website. This understanding steered Airbnb’s next move: to focus on the mobile app as a medium that links online and offline.

(View large version)

Conclusion

Dieter Rams once said, “You cannot understand good design if you do not understand people; design is made for people.” Storyboarding in UX helps you better understand the people you’re designing for. Every bit you can do to understand the user will be tremendously helpful.

This article is part of the UX design series sponsored by Adobe. The Adobe XD tool is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype and share — all in one app. You can check out more inspiring projects created with Adobe XD on Behance, and also sign up for the Adobe experience design newsletter to stay updated and informed.


Smashing Editorial(vf, yk, al, il)

The Front-End Performance Challenge: Make Your Site Blazingly Fast And Win Some Smashing Prizes

Not too long ago, front-end performance was a mere afterthought. Something that was postponed to the end of a project and that didn’t go much beyond minification, asset optimization, and maybe a few adjustments on the server’s config file. But things have changed. We have become more conscious of the impact performance has on the user experience, and the tools and techniques that help us cater for snappy experiences have improved and are widely supported now as well.

It's time for a new challenge! Are you ready?
It’s time for a new challenge! Are you ready?

Time to roll up your sleeves and make use of these possibilities! A while ago, we challenged your coding skills in the CSS Grid Challenge; now we have something new to tickle your brains: The Front-End Performance Challenge. A perfect opportunity to apply everything you've learned about Service Workers, HTTP/2, Brotli and Zopfli, resource hints and other optimization techniques in one project. And, of course, there'll be a smashing prize waiting for one lucky winner in the end.

The Challenge

Show off the performance of your site or your project — use everything you can to make your website perform blazingly fast! Please note that the final visual appearance should be identical before and after (font loading might differ and reflows are acceptable but should be kept to a minimum). You can use this checklist as a guideline and dive into performance optimization for everything from image assets and web fonts delivery to HTTP/2 and Service Workers.

The deadline is the 24th of November, 2017.

Here are a few things you can do to enhance your chances of winning:

  • Optimize as much as you can: We’ll be looking into Lighthouse and WebPageTest as well as the complexity of the site you’re working on.
  • You don’t have to optimize a personal blog: The more advanced the project is, the better chances of winning you have.
  • The most critical metrics are the first meaningful paint and the time to interactive.

So, What Can You Win?

After the deadline has ended, we’ll award a smashing prize to one lucky winner. It has to do with web performance, but see for yourself:

  • A roundtrip flight to London,
  • Full accommodation in a fancy hotel,
  • A ticket to SmashingConf London 2018, a new front-end, performance-focused conference, taking place Feb 7–8, 2018,
  • A Smashing workshop of your choice.

Join In!

Ready to take on the challenge? We’d love to see how you’ll tackle it!

What You Need To Deliver

  • Performance results before and after (using WebPageTest and Lighthouse).
  • A brief description/strategy of the work you did.

Once you have everything together, please send us your entry to challenge@smashingmagazine.com. The deadline is the 24th of November. The winner will be announced on the 4th of December, 2017.
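If you haven't generated Lighthouse reports from the command line before, the sketch below is one way to produce the before/after numbers. The URL and report names are placeholders, and you'll need Node.js and Chrome installed:

# Install the Lighthouse CLI and audit your page, saving an HTML report
npm install -g lighthouse
lighthouse https://yourwebsite.com --output html --output-path ./report-before.html

# Run it again after your optimizations and compare the two reports
lighthouse https://yourwebsite.com --output html --output-path ./report-after.html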

Resources To Get Started

Last but not least, here are some resources to kick-start your performance optimization endeavor. Have fun!

  • Improving Web Fonts Delivery

    Zach Leatherman’s “Comprehensive Guide To Font Loading Strategies” explains the ins and outs of approaches such as FOUT with a Class and Critical FOFT.
  • Improving CSS Delivery

    Dean Hume summarized an easy way to inline critical CSS into the <head> of your pages, even when your site contains different templates.
  • Getting Started With Service Workers

    Lyza Danger Gardner wrote up her gotchas and the bugs she ran into when making a Service Worker. Also be sure to check out her “Pragmatist’s Guide to Service Workers,” a GitHub repository with Service Worker code examples.
  • Dealing With Third-Party Scripts

    Damien Jubeau shares useful tips and techniques to deal with third-party content such as social network widgets, advertising and tracking scripts, and explains their impact on performance.
  • Moving To HTTP/2

    HTTPS is a must for websites. Vladislav Denishev’s complete guide to switching from HTTP to HTTPS helps you master the transition.
  • HTTP/2 Server Push

    Server Push allows you to send site assets to the user before they’ve even asked for them. Jeremy Wagner’s comprehensive guide to Server Push explains everything from how it works to the problems it solves.
  • Progressive Web App

    Progressive Web Apps can replace all of the functions of native apps and websites at once. Ada Rose Edwards summarized do’s and don’ts on how to make them.
  • Brotli/Zopfli Compression

    Do you already use Brotli or Zopfli compression? The lossless data format Brotli appears to be more effective than Gzip and Deflate, while Zopfli is a good solution for resources that don’t change much and are designed to be compressed once and downloaded many times.
  • Resource Hints

    Resource hints such as dns-prefetch, preconnect, prefetch, prerender and preload are a good way to warm up the connection and speed up asset delivery.
  • CDNs Supporting HTTP/2

    Concerns over performance have long been a common excuse to avoid security obligations. In reality, TLS has only one performance problem: It’s not used widely enough. Everything else can be optimized.
  • Responsive Images

    Eric Portis wrote up how to get responsive images right — with <picture> and srcset.
  • Caching

    If you need a little refresher on caching, Ilya Grigorik and Jake Archibald have got you covered.
  • Optimizing For First Meaningful Paint

    First Meaningful Paint is the paint after which the biggest above-the-fold layout change has happened. The lower its score, the faster the page appears to display its primary content.

With all of this, you should be well-equipped for the challenge. Need a checklist? Here we go. We’re already looking forward to your submissions and to hearing your optimization stories!

Smashing Editorial(il)

We’ve Got A Lil’ Announcement To Make: Rachel Andrew Is SmashingMag’s New Editor-In-Chief

Sometimes things evolve faster than you think. Something that started as a simple WordPress blog back in September 2006, has evolved into a little Smashing universe — with books, eBooks, conferences, workshops, consultancy, job board and, most recently, 56 fancy cats (upcoming, also known as Smashing Membership). We have a wonderful team making it all happen, but every project requires attention and focus and every project desperately needs time to evolve and flourish and improve.

After more than 11 years of being editor-in-chief at Smashing Magazine, I’ve been struggling a lot to find that perfect balance between all of our projects, often focusing on exciting ideas and neglecting good old ones. I would jump to writing, or teaching, or coding, or designing, or working with conference speakers, instead of reviewing and editing articles, often leaving Smashing Magazine running on the side.

It’s time for a change. It’s not an easy decision for me to make, but I sincerely believe that it’s an important one. Smashing Magazine has been the heart of everything we’ve been working on throughout all this time, and with many new Smashing adventures scheduled for 2018, it deserves a stronger focus and support: a guidance stronger than the one I was providing throughout the last years. Most importantly, it needs much more care and attention.

With this in mind, I can’t be more happy and honored to welcome the one-and-only Rachel Andrew (yep, you got it right) as the new editor-in-chief of Smashing Magazine. Rachel will be helping us bring the focus back to the core of this little Smashing universe — this very magazine that you are reading right now. Rachel doesn’t really need an introduction, and her work for the community speaks for herself. There is one thing worth mentioning though: with Rachel, I’m happy to have a reliable and extremely knowledgeable editor on our side, the one I could only dream of. I’m not going anywhere, of course, but I’ll be spending more time writing and teaching and working on other Smashing projects instead. This is Rachel’s spot to take now.

About Rachel Andrew

For those of you who may have not heard about Rachel, here are a few things to know about her.

Rachel Andrew

Rachel Andrew lives in Bristol, England. She is one half of web development company edgeofmyseat.com, the company behind Perch CMS. Her day to day work can include anything from product development to devops to CSS, and she writes about all of these subjects on her blog at rachelandrew.co.uk.

Rachel has been working on the web since 1996 and writing about the web for almost as long. She is the author or co-author of 22 books including The New CSS Layout, and a regular contributor to a number of publications both on and offline. She is a Google Developer Expert for Web Technologies and a W3C Invited Expert to the CSS Working Group. Rachel is a frequent speaker at web development and design events including An Event Apart, Smashing Conference, and Web Directions Code.

Rachel is a keen distance runner and likes to encourage people to come for a run when attending conferences, with varying degrees of success. She is also a student pilot and aviation geek. You can find her on Twitter as @rachelandrew and find out what she is up to now.

Exciting times indeed! Let’s shape the future together — I can’t wait to see what’s coming up next. So please make Rachel feel welcome — and here’s one for the next adventures!

Smashing Editorial(vf)

How I would explain a decade of web development to a time traveler from 2007

Go to the profile of Ivan Zarea

Hello friend! I hope you like this new world of ours. It’s a lot different than the world of 2007. Quick tip: if you just got a mortgage, go back and cancel it. Trust me.

I’m glad that you’re still interested in computers! Today we have many more of them than we did 10 years ago, and that comes with new challenges. We wear computers on our wrists and faces, keep them in our pockets, and have them in our fridges and kettles. The cars are driving themselves pretty well, and we’ve taught programs to be better than humans at pretty much every game out there — except maybe drinking.

(Web) Apps

You might have seen the release of the iPhone just before you stepped into the time booth. Apple is the biggest and richest tech company, mostly due to the iPhone and its operating system, iOS. Google has this competing thing called Android, and Microsoft tried to get a slice of the ever-growing pie with Windows Phone. It didn’t work out.

Left: a hand holding an iPhone 3GS from 2008. Right: a similarly-sized hand holding a larger iPhone X from 2017. We also got to practice doing over-the-shoulder shots. Left: iMore, right: BusinessInsider

We started calling programs apps, and some websites are calling themselves web apps. In 2008, Google released a new browser called “Chrome.” Nine years later it’s the most popular way to get on the Web.

The Chrome team invested a lot in working with JavaScript, and the code gets better every month. Web apps are written using a lot of JavaScript, and they resemble the desktop interfaces of your time.

Companies have also invested in JavaScript to make it better—it now supports classes and modules. We use languages that compile to JavaScript, like TypeScript (from Microsoft, they’re cool now) or Flow.

We write a lot of JavaScript these days, since nobody supports Flash anymore. We even run JavaScript on the server, instead of Perl, using a thing called Node. It sounds easier than it is.

A responsive design: the same website shows differently on multiple devices. We’re still bad at it, but we have found pretty ways to show it off. Source: 10twelve.

Remember Swing, SWT and the likes of wxWidgets? We had to reinvent them for the browser world. Several new UI programming models emerged, which mostly focused on components.

We had to find a way to design, build, and test apps while keeping them responsive (a term we use to describe a website that doesn’t look like crap on a mobile phone). We also needed to keep it slim — not everybody has a fast connection, but everybody has a browser in their pockets.

To help with all this, there are now component frameworks. The term is vague, since it includes the likes of Angular by Google, React by Facebook, and Vue by the community. But it’s the best term we have.

By the way, I’m not sure you remember Facebook from 2007. It was getting big in the US around that time, and now it’s bigger than huge. Boasting more than a billion users, it’s also one of the largest codebases in the world.

The Facebook development team writes a lot of great code and publishes it online. They have their own conference, F8. Most big companies have their own conferences.

CSS also had to evolve, since the new apps require more intricate layouts. We don’t use tables with images anymore. Frames are gone as well. Instead, we have created new standards, like CSS Floats, Flexbox, and CSS Grid.

People had to iterate on these standards, and they've built libraries to make things look consistent, like Bootstrap, Foundation and many more. Similar to JavaScript, we have created languages that compile to CSS. They make up for some of the things that CSS misses, like variables or modules. It's still hard.

It’s okay to be lost

Don’t feel bad if you’re confused. The truth is that we’re all a little confused — and it’s okay to be so. There are many more developers on the planet now, and tech companies are becoming more successful. For a while we used the term “startup” to describe companies that grew quickly and didn’t know what to do. But even this term has become old.

Data

There are more programmers, more programs, and more devices. We have more data now. Computers had to grow powerful enough to process it all, and we have developed several techniques to turn that data into insight.

First, we created a field called Data Science, which aims to learn about and extract information from data.

For example, a startup called Waze let people install an app on their phones that would track their movements while they were in their cars. Because many people installed the app, Waze got a lot of data about how cars move. They used it to develop programs that understood where traffic jams were.

Now, when you open Waze on your phone, you see traffic jams on the map in real time and choose another route.

Waze has since been bought by Google. This happens a lot with startups.

Somebody using Waze to get to somewhere. The other Waze users are shown as funny icons. Source: The waze blog.

There were three main challenges with Data Science — storing data, understanding data, and acting on data. We’ve improved in all of these areas. Let’s look at each one.

Storage

We now need to store a lot more information and then find out which part is important. We needed to invent new databases. The likes of MySQL and PostgreSQL weren’t fit to store terabytes of data (we called it Big Data).

Big, internet-first companies typically faced these challenges, and so they were on the forefront of developing the technologies. Most of the time, technologies were first used internally and then open-sourced.

There was a movement we called NoSQL. This new class of databases took some of the conventions of traditional Relational databases and turned them around.

There’s Hadoop, which deals with how the data is stored on many hard computers. It defines a way of processing the data called MapReduce (inspired by a paper from Google — big companies write good scientific papers these days).

Then there’s Cassandra, which looks at data not as tables, but as sets of keys and columns which can be stored on different computers. It also makes sure that any of these computers can go offline without causing data loss.

And we have MongoDB, a database that is easy to install and use for prototyping apps. In 2017, we’re treating technologies the same way we treated pop stars ten years ago — we zealously defend some of them and vehemently hate others. MongoDB — like the band Nickelback — belongs to the latter group.

Learning

A dog photographed through Prisma, an app that uses machine learning to make ordinary pictures look like famous works of art. No more Photoshop Plastic Wrap. Source: cultofmac.

In the “understanding data” camp, most of the focus has been in an area called Machine Learning. There have been many new techniques, from naive classification to deep learning, that are now in every Data Scientist’s toolbox. They mostly write Python and work alongside developers to put machine learning pretty much everywhere.

For example, with the help of Data Scientists, a lot of web apps use A/B testing. This technique serves two slightly different versions of the app to different, but similar, groups of users. It is used to see which version leads quicker to our desired goal, whether that’s a sign-up or a purchase.

A lot of big companies like Airbnb (pronounced air-bee-en-bee), Uber, and Netflix are running hundreds and thousands of A/B tests at the same time to make sure their users get the best experience. Netflix is an app where people can binge-watch TV shows on all their devices. ¯\_(ツ)_/¯

Microservices and The Cloud

Companies like Netflix are enormous. Because they serve a lot of people, they have to make sure they are up and running at all times. That means they have to manage their computers pretty well. They can add hundreds of new servers when they’re needed.

This is difficult to achieve in a traditional data center, so the amazing engineers at Netflix use virtual machines. Remember Amazon Web Services, which launched back in 2006? Back then, they started offering Elastic Cloud Compute, known as EC2, to help people get virtual computers in Amazon’s data centers.

Today, they have almost 80 similar services, all built to help companies grow quickly. We used to have a trendy name for that — “The Cloud” — but this term is as difficult to define as NoSQL.

Daily Blogging and an Ode to RSS

One of my favorite Twitter lists which I follow regularly in Flipboard is "Gigaom Vets." The list includes people who previously worked for GigaOm magazine, which unexpectedly fired all its staff in March 2015 but has since been rebooted. They were an amazing group of tech and tech culture journalists, and they still are, except now they all work for different companies. Thanks to the power of Twitter and Flipboard, however, I still read their aggregated ideas via a "single pane of glass" (my Flipboard subscription to their Twitter list) and am regularly both educated and inspired by the thoughts they share.

Om Malik (@om) was Gigaom magazine’s founder, and he’s on my Gigaom Vets Twitter list as well. Last night before bed, I saw the article he shared, “Seth Godin Explains Why You Should Blog Daily,” by CJ Chilvers (@cjchilvers).

Seth Godin Explains Why You Should Blog Daily https://t.co/ZMmZ2xgqPL

— OM (@om) October 22, 2017

CJ’s post is not only an encouragement for all of us to blog daily, because of the inherent value of generously sharing reflections about what we notice around us daily, but also the first shout out I’ve read in quite awhile to my old friend RSS. Ah, RSS. Twitter streams and the Facebook news feed have largely eclipsed your name and fame, but I still use you via my Feedly account (at least weekly) and acknowledge your latent power. A few of my older posts here still testify to your greatness:

  1. March 2010: Favorite iPhone / iPod Touch News and RSS / update applications
  2. July 2006: RSS: Connecting Ideas and Knowledge (shout out to @willrich45)
  3. November 2005: Blogs & RSS: Tapping into the global conversation (shout out to @dwarlick)

It’s quite easy to become depressed by the ways Facebook was cleverly used to subvert democratic processes in our last Presidential election and even now, our sitting President uses Twitter to discredit mainstream media sources as “fake news” and obfuscates rather than clarifies truth for many. Our needs for media literacy and the “crap detector” of Neil Postman are as great as ever, as Jason Neiffer (@techsavvyteach) discussed on last week’s EdTech Situation Room podcast.

The last couple of years, since I became an independent school technology director but perhaps even before that, I've fallen into a pattern of blogging where I write much longer posts but share MUCH less frequently. I started blogging in 2003, and have shared 6,088 posts here since that time. At one point, I was blogging daily. My routines have changed, but CJ Chilvers and Om Malik have me rethinking those today.

RSS is a free information subscription technology, which is an open standard and is supported (still, despite Google’s painful abandonment of Google Reader in 2013) by multiple applications and platforms. Podcasting is alive and well, in fact thriving far more today in 2017 than it was at the dawn of the podcasting age around 2005 when I started. Blogs like this WordPress-powered website, thousands of Blogger blogs, and others continue to create RSS / ATOM feeds, which permit free subscriptions unfiltered and lacking the black-box modification of secret algorithms like the Facebook news feed.

I remember you, RSS, and have not forgotten your power! I’m podcasting weekly via @edtechSR, and have been now for about 70 weeks, but I also resolve to return to my “short share” blogging roots. Long live the open web, RSS, blogs, podcasts, and information streams unfiltered by corporate (and monetized) secret algorithms.

I will keep noticing ideas of significance in our world, and sharing short reflections about them here on “Moving at the Speed of Creativity.” I encourage you, as well, to consider or reconsider a commitment to regular blogging. We live in an emerging surveillance state, and our understanding of those dynamics should temper our personal sharing stream, but they should not chill or silence our capacity to be inspired and share our inspirations with each other on the social web.

Long live RSS!


Combining Graphical And Voice Interfaces For A Better User Experience

With the appearance of voice user interfaces, AI and chatbots, what is the future of graphical user interfaces (GUIs)? Don’t worry: Despite some dark predictions, GUIs will stay around for many years to come. Let me share my personal, humble predictions and introduce multi-modal interfaces as a more human way of communication between user and machine.

What Are Our Primary Sensors?

The old wisdom that a picture is worth a thousand words is still true today. Our brain is an incredible image-processing machine. We can understand complex information faster when we see it visually. According to studies, even when we talk with someone else, nonverbal communication represents two thirds of the conversation. According to other studies, we absorb most information from our sight (83% sight, 11% hearing, 3% smell, 2% touch and 1% taste). In short, our eyes are our primary sensors.

Our ears are the second-most important sensors we have, and in some situations, voice conversation is a very effective communication channel. Imagine for a moment a simple shopping experience. Ordering your favorite pizza is much easier if you pick up the phone and order it, instead of going through all of the different offers on a website. But in a more complex situation, relying just on verbal communication is not enough. For example, would you buy a shoe without seeing it first? Of course not.

Even traditionally text-based messaging platforms have started introducing visual elements. It's no coincidence that visual UI snippets were the first thing Facebook implemented when it created its chatbot platform. Some information is just easier to understand when we see it.

Text-only and voice-only interfaces can do a good job in some use cases, but today it’s clear they are not optimal for everything. As long as visual image-processing remains people’s main information source, and we are able to process complex information faster visually, the GUI is here to stay. On the other hand, more traditional GUI patterns cannot survive in their current form either. So, instead of radical predictions, I suggest another idea: User interfaces will adapt to our sensors even more.

Designing Voice Experiences

A new interface does not mean that we have to disregard everything we have successfully applied to previous interfaces; we will need to adapt our process for the nuances of voice-driven interfaces, including conversational interactions and the lack of a screen.

Adaptive Multi-Modal Interfaces

Humans have different input and output devices, just like computers. Our eyes and ears are our main input sensors. We are very good at pattern recognition and at processing images. This means we can process complex information faster visually. On the other hand, our reaction time to sound is faster, so voice is a good option for warnings.

We have output devices, too: we can talk, and we can gesture. Our mouth is the most effective output device we have, because obviously most people can talk faster than they type, write or make signs.

Because humans are good at combining different channels, I predict that machines will follow and that they will use multi-modal interfaces to adapt to humans' capabilities. These interfaces will use different channels for input and output, and different mediums for different information types (for example, asking short questions versus presenting complex information).

Interfaces will adapt to humans by using the medium and message format that is most convenient to humans in the given situation. Let’s look at some examples, including the ones we explored at UX Studio, as well as some established commercial products.

Chatbots Are Getting More And More Visual

Nuru is a chatbot concept that helps with day-to-day problems in Africa. Starting to design it as a pure chat application, we soon discovered the limits of text-only conversational interfaces.

For basic communication, chat is more effective than traditional user interfaces (UIs). In Africa, for example, chat can be used to boost local commerce. Sellers and buyers can find each other and negotiate different deals. In this case, chat is optimal because of the one-on-one communication. But when it comes to more sophisticated interaction, like comparing many different job postings, we need a more advanced UI. In this case, we added cards to the chat interface, which users can swipe through.


Some other companies, such as China's Tencent, went even further and let developers build mini-apps that run within its chat app, WeChat. This inspired Western designers to imagine a conversational interface in which every single message could contain a different app, each with its own rich interface. For example, you could play little games together with your chat partner, like we did 15 years ago in MSN Messenger. This is also an attempt to enhance the simple conversational interface that people love with rich UI functions.


Self-Driving Cars With Mixed Interfaces

A year ago, our team imagined the interface of a self-driving car as a pure exercise in multi-modal design. We imagined the whole process and tried to optimize the interaction at each step.

To order a car, you would push a button on your phone. This is the simplest possible interaction, and it's enough to order a car. Obviously, there's no need to talk on the phone if just pushing a button is enough.

Then, once you enter the car, you would spend some time getting comfortable, placing your belongings and fastening your seatbelt. After that, verbal communication would be easier, so the car asks you where to go. It is also faster to say the place than to type the location on a touchscreen. For this to work properly, the car would have to understand even ambiguous instructions you give it.

Trust is an important issue in self-driving cars. When we are on the road, we want to see whether we are headed in the right direction and whether our self-driving car is aware of the bicycle in front of us. Having to ask the car every time for its status would be impractical, especially if you’re travelling with others. A tablet-like interface, visible to all occupants, would solve this issue. It would always show what the car detects in its surroundings, as well as your position on the map. The fact that it’s always there would build trust. And, of course, showing map information would be easier visually than in any conversational form.

In this example, you could order a car using a touchscreen, give voice commands, receive auditory feedback, as well as check the status on a screen. The car always uses the most convenient medium.

Home Entertainment And Digital Assistants

The Xbox console with the Kinect controller is another example of a mixed interface. You can control its GUI with both voice and hand gestures. In the video below, you can see that the gesture-recognition technology is not perfect yet, but it will certainly get better in the future. The voice recognition is also a bit awkward because you always have to say the magic word, “Xbox,” before every command.

Despite the technical flaws, it is a good example of how a machine can give continual visual feedback to voice and gesture commands. When you use your hand as a control, you can see a small hand on the screen as a cursor, and as you move it above different content tiles, it always highlights the current one below your cursor, to show which one you are about to activate. When you say the word “Xbox” to give a command, the console displays a command word on each tile in green, so that you know what to say to select an item.

Of course, the goal here is to help you voice-control an interface that wasn't really designed for voice in the first place. In the future, more accurate voice recognition and language processing will help people to say commands in their own words. That is an important and necessary step to make mixed interfaces more mainstream.

Amazon is without a doubt one of the great pioneers of voice interfaces and “no GUI” interfaces. But even it added a screen to the new generation of its Echo device, after an arguably failed attempt to push the GUI into an app on the user's phone.

The freedom that a voice UI gives you is truly fascinating, especially the first time you try it. For example, standing in the kitchen and saying “play Red Hot Chili Peppers” is easier than scrolling through Spotify albums with dirty hands.

But after a while, when you want to use it for more advanced tasks, it just doesn’t work. In one video review, a user pointed out how weird it is that once you start a kitchen timer, you have to ask the device for the status, because no screen exists. Now, with the Echo Show, you can see multiple timers on the same dashboard.

And what’s more important for Amazon than shopping? With the old Echo, you could add things to your shopping list, but then you had to open up the mobile app to actually purchase something. Hearing Alexa read out long product names and descriptions from the Amazon store was just a terrible experience. Now, you can handle these tasks on the Echo easily, because it shows you products and you can choose the ones you like.


Unlike the Xbox with the Kinect, the Echo Show is a voice-first device. Its home screen is not loaded with app icons. But when you issue an initial voice command, the screen shows you all related information. It is very simple: When you need to know more, you just look at the screen. It’s a bit like how a person works in the kitchen: We can maintain a basic conversation while we focus on cooking, but when an important or complex question arises, we stop and look at our partner’s face. This is why the Echo Show’s direction towards a multi-modal interface is more natural.


Here's another design detail. On the home screen, the Echo will display a news headline and highlight a word in the headline in bold, making it the command word you would say if you wanted to hear the full story. In this way, the capabilities of the product are clear, and it's obvious how you would use it. The Echo effectively sets expectations and gives tips through its visual interface.

One of the main advantages of Google Home, Echo's main competitor, is that you can ask follow-up questions. After asking, “How many people live in Budapest?”, you could also ask, “What's the weather like there?” Google Home will know that you're talking about the same place. Context-awareness is a great feature and will be a must-have in future products.

When we’re designing an interface, if we know the context, we can remove friction. Will the product be used in the kitchen when the user’s hands are full? Use voice control; it’s easier than a touchscreen. Will they use it on a crowded train? Then touching a screen would feel far less awkward than talking to a voice assistant. Will they need a simple answer to a simple question? Use a conversational interface. Will they have to see images or understand complex data? Put it on a screen. To improve interaction, we can ask questions, such as which screen is closer to them, or which one would be more convenient to use given the situation.

One thing that is still missing from Google Home is multiuser support. Devices like this will be used by many different people, bringing us back to the shared computer phenomenon of the early PC age. Switching between users seamlessly will be a tough challenge. Security and UX are not easy to align. Imagine that at one moment you are talking to your virtual assistant, with access to all of your apps and data, then a second later someone else enters the room and does the same.

Both Amazon Echo and Google Home give nice visual feedback when they are listening to you or searching for an answer. They use LED animation. For multi-modal interfaces, keeping the voice and visual outputs in sync is essential; otherwise, people will get easily confused. For instance, when talking to someone, we can easily look at their face to see if they are getting the message. We would probably want to be able to do the same when talking to a product.

Healthcare Products

PD Measure is an app to measure pupillary distance for people who wear prescription glasses. It is a good example of syncing and combining visual and voice interfaces.

Any customer needs to know their pupillary distance in order to purchase glasses online. If they don't know it, they'd have to go to a retail store and have it measured there. A measurement tool that is available to anyone at home would open up a huge market for online optics.

With PD Measure, the customer stands in front of a mirror and takes a photo of themselves, keeping their phone in a particular position, following precise instructions. The app then automatically calculates their pupillary distance using an advanced internal algorithm. It is precise enough to make ordering glasses online possible.


PD Measure's UI is a combination of animated illustrations on the screen, which show you how to hold your phone, and voice instructions, which tell you what to do. The user has to move their hands to the right position, and the app uses its sensors to give feedback when they are there. When the app finally takes the right image, it provides the user with auditory feedback (a bell rings). This way, the user gets used to the confirmation sound and will take each subsequent measurement more efficiently.

During the prototyping phase, we conducted a lot of user tests, and it turned out that people are more likely to follow voice instructions than visual ones.

In this example, visual and voice interfaces work together: The animated illustrations show you how to hold the phone, while the voice instruction helps you to get in the perfect position.

Examples From Publishing

Back in 2013, a company named Volio experimented with mixed interfaces. One of its flagship clients was Esquire magazine, which created an interactive experience in which people could talk with Esquire’s columnists. As you can see in the video below, this was a series of videos, and you could choose the next one based on the answer you gave to the question in the current video. Of course, you could just choose from a few predefined answers, but the interaction still felt like a live conversation. It also had a good combination of media: voice as input for commands and the screen to display the content.

Many people think of today's multi-screen world as separate output channels for our content. Mixed interfaces will be much more than that. People will be able to use your app on different devices simultaneously (for example, using Alexa for voice input while seeing the data on their tablet).

Combining voice and GUI is not even necessary in every case. A sports-streaming app we designed recently enables people to comment on a football game and talk with other fans while watching the match live on their smart TV. The two screens perfectly complement each other.

Such advanced interfaces offer functionality available through many different devices and media simultaneously. This is redundant, which programmers and designers don’t really like. But it also has advantages, because it gives people backup options, in case the main option is not available. It also helps disabled people who can’t use voice or visual interfaces.

How To Choose The Primary Mode?

Having discussed trends and some current products, let’s summarize when to use voice and when to use a visual user interface.

Visual user interfaces work better with:

  • lists with many items (where reading all items out loud would take too long);
  • complex information (graphs, diagrams and data with many attributes);
  • things you have to compare or things you have to choose from;
  • products you would want to see before buying;
  • status information that you would want to quietly check from time to time (the time, a timer, your speed, a map, etc.).

Voice user interfaces work better for:

  • commands (i.e. any situation in which you know exactly what you want, so you can skip the navigation and just dictate your command);
  • user instructions, because people tend to follow voice instructions better than written instructions;
  • audio feedback for success and error situations, with different signals;
  • warnings and notifications (because the reaction time to voice is faster);
  • simple questions that need relatively simple answers.

What’s Next?

When I asked my designer friends what mixed interfaces they know about, some of them mentioned the legendary MIT Media Lab video from 1979, “Put-That-There.” Nostalgia aside, it is shocking that this technology had a working prototype 38 years ago. Is our super-fast progress just an illusion?

Voice recognition still has some obvious challenges today, and just a few major players provide platforms for products based on voice recognition, including apps such as WeChat and hardware devices such as the Amazon Echo.

A good start would be to develop a mini-app or bot that integrates with these systems. Here are some tips from our own experience of working with multi-modal interfaces:

  • Speed and accuracy are deal-breakers.
  • Sync voice and visual interfaces. Always have visual feedback of what’s happening.
  • Show visual indicators when the device is listening or thinking about an answer.
  • Highlight voice-command words in the graphical interface.
  • Set the right expectations with users about the interface’s capabilities, and make sure the product explains how it works.
  • The product should be aware of the physical and social context of the device and the conversation, and should respond accordingly.
  • Think about the context of the user, and identify which medium and device would reduce friction and make it easier to perform a task.
  • Give users options to access a function through alternative devices or media. This will help in situations where something breaks, and it will also make your product more accessible to disabled people.
  • Don’t ignore security and privacy. Enable people to turn off components (for example, the microphone), and build trust by being transparent. Don’t be too pushy, or else you will frighten everyone away (for example, voice spam is very annoying).
  • Don’t read out long audio monologues. If it cannot be summarized in a few words, display it on a screen instead.
  • Take time to understand the specifics of each platform, and choose the right one to build on.

Before starting out, though, keep in mind that, compared to other digital designs, multi-modal interfaces are still quite an unexplored area.

First, we don’t really have a general-purpose language or programming framework to describe mixed interfaces. Such a language could make it possible to define voice and GUI elements in one coherent code base, making it easier to design and develop these interfaces. It would also support multiple output and input options, enabling us to design omni-channel, multi-screen or multi-device experiences.

Secondly, designers have to come up with new design patterns to support the special needs of multi-modal interfaces. (For example, how would you give visual and audio feedback at the same time?)

Although the future looks exciting, and it will happen fast, we still need to reach the tipping point in voice recognition and language processing, where the usability of the voice medium reaches a level of quality that indeed makes it the best option in a range of applications. We will also need better tools to design and code multi-modal interfaces.

Once we accomplish these goals, then nothing will be holding these natural interfaces back, and they will become mainstream.

History Repeats Itself: Be A Part Of It

Humans have multiple senses. Technology and interfaces that use more than just one have a better chance of facilitating strong human-computer interaction.

A similar multi-modal evolution happened before. Radio and silent movies were combined into the talkies, which were further enhanced with 3D and so on. I'm positive that this process will happen in the interactive digital world, too. Exciting times, indeed.

Confessions Of An Impostor

Five years ago, when, for the first time ever, I was invited to speak at one of the best front-end conferences in Europe, I had quite a mixture of feelings. Obviously, I was incredibly proud and happy: I had never had a chance to do this before for a diverse audience of people with different skillsets. But the other feelings I had were quite destructive.

I sincerely could not understand how I could be interesting to anyone: Even though I had been working in front-end for many years by then, I was very silent in the community. I hadn’t contributed to popular frameworks or libraries. I was just average. So, the feeling of a mistake having been made, that I did not deserve to be at that conference, was very strong, and I could not believe that I would indeed be speaking until I had bought my plane ticket.

But a plane ticket won’t guarantee that you won’t collapse on stage from pressure, so things got even worse. The line-up of speakers was so fantastic that during the final weeks before the conference, and more so after meeting in person all of those famous people whose books and articles I had been learning from, the only thing I could think of was, “They are gonna find out. All of these great people will find out that I am here by mistake, because I know nothing. It will be the end of my career and the worst embarrassment I could ever have in my professional life.”

Back then, in 2012, I had heard nothing about impostor syndrome. I didn't even know that those feelings of mine had a name! The only thing I knew was that I had to fake it till I made it. Some years later, I read a lot of articles and research on this phenomenon and, critically, gradually found out how to deal with it in my professional life. Only now is the topic emerging in our industry and getting its deserved acknowledgement.

Impostor syndrome is about not feeling like the person whom others believe you to be.

So, it’s time to shed some light on what impostor syndrome is, how we suffer from it day to day in our jobs, why it happens and what we can do about it. This article will, hopefully, guide you through some seldom-spoken aspects of this phenomenon in our industry.

But first things first: What is impostor syndrome? Let’s find out.

Imposter Syndrome Is Real And We All Have It

How many hours do you spend coding or learning about code outside of work? Front-end fatigue is very real, but thankfully there are a number of ways to help your head from exploding. Read a related article →

What Is Impostor Syndrome?

Simply put, impostor syndrome is the feeling of being a fraud, despite all evidence to the contrary. It’s an inability to internalize your own achievements, which results in a feeling of being less competent than the rest of the world believes you to be.

The term “impostor syndrome” (or “impostor phenomenon,” or sometimes “impostrism”) was coined by Pauline Clance and Suzanne Imes in 1978 in their work on high-achieving women in academics. That’s right: For years, the scientific community believed that this phenomenon was largely confined to women. But many of those same researchers are beginning to realize that the experience is more universal and that it might be even more problematic for men — simply because it is naturally much harder for men to admit to feeling insecure or incompetent. As a result, men hide their fears, unable to unburden themselves or seek help.

For years, impostor syndrome was largely thought to be confined to women in academics. But the feeling is much more prevalent.

There is a difference, though, between impostor syndrome and a simple feeling of insecurity. Insecurity might make you hold on to a position that you have overgrown for some years simply because you don’t feel comfortable with taking action. Someone with impostor syndrome, on the other hand, feels compelled to constantly take action and to be better at whatever they are doing. Hence, people who suffer from it will go further in their career but will be in constant self-doubt about whether they deserve to be where they are. To a large extent, one of the main motivating forces of impostor syndrome is a wish to be successful, to be among the best. That’s why, ironically enough, impostor syndrome is most prevalent among high performers. Research shows that two out of five successful people constantly suffer from it, and up to 70% of the general population has experienced it for at least some part of their career.

Every year, charisma coach and persuasion expert Olivia Fox Cabane asks the incoming class at Stanford Business School, “How many of you in here feel that you are the one mistake that the admissions committee made?” Every year, two thirds of the class instantly raise their hands. How can Stanford students, having passed such an intensive admissions process, having been selected from among thousands of applicants, and with a long list of documented achievements and accomplishments behind them, possibly feel that somehow they don't belong there? The answer is impostor syndrome. Let's take a closer look at its main characteristics.

What are the signs of impostor syndrome?
  • Superwoman/superman

    Self-criticism, arising from a tendency towards perfectionism, is one of the most common obstacles to great performance in any field. Ever felt like something you worked on could be improved even after having gotten a lot of praise?
  • Dissatisfaction caused by comparison

    Dissatisfaction arises when one constantly compares oneself to others. Nothing is wrong with wanting to be the best — that is evolution at work. But impostors are far from getting a kick out of this competition. Have you ever thought that the majority around you are smarter than you are, or felt like you don't belong where you are?
  • Fear of failure

    Have you ever feared that somebody will find out that you are not as skilled as everyone thinks you are? Fear of failure is an underlying motivation of most “impostors.” Therefore, to reduce the risk of failure, impostors tend to overwork.
  • Denial of competence and praise

    Do you relate to the feeling that your success is a result of luck, timing or forces other than your talent, hard work and intelligence? Do you shudder when someone says you’re an expert? According to Pauline Rose Clance, impostors not only discount positive feedback and objective evidence of success, but also focus on evidence or develop arguments to show that they do not deserve praise or credit for their achievements.

If these feelings are familiar to you, then welcome to the club.

Of course, impostor syndrome is not simply a matter of psychological discomfort. Underestimation and deprecation of your own achievements can have a real impact on you and your professional life.

Nature And Impact Of Impostor Syndrome

We probably agree by now — especially if you suffer from it — that impostor syndrome is a rather uncomfortable feeling. I wouldn't suggest that it doesn't affect one's private life, but the feeling of insecurity has a definite effect on one's professional achievements. So, what happens (or doesn't happen) in your professional life when you ignore these feelings or simply are not aware of the syndrome?

It might keep you from asking for a well-deserved raise. You might shy away from applying for a job unless you meet every single requirement. In the office, you might be regarded as a private person because you don’t dare share your achievements or even discuss technology with colleagues, because you think they know everything while you’re a fraud. It might even stop you from asking to speak at a conference that you’ve dreamed of speaking at simply because you always think you are not good enough. Truth be told, those who suffer from impostor syndrome and who really, really want to achieve any of the things mentioned here usually do overcome these obstacles (recall the difference between imposter syndrome and insecurity). Impostor syndrome can be highly motivating, spurring us to work harder than anyone else. But at what cost?

In our community, impostor syndrome causes us to criticize ourselves constantly, because a lot of the problems we try to solve for ourselves have already been solved by others. In environments like that, it’s easy to feel that you aren’t smart enough. This feeds the syndrome and compels us to try to catch up on everything going on in our industry, so that we feel competent in whatever we’re doing. And we all know how much information there is to catch up on: This feeling is well known to all of us.

Only a couple of years ago, I had several reading applications on my phone, such as Flipboard, Pocket and Instapaper. I constantly saved the latest news from the world of development to read later. I followed several online magazines (like the one you’re reading right now) for the latest tutorials, how-to’s and developments within the industry. Then, there’s Twitter. Reading Twitter can make things even worse: Seeing a lot of talented people bragging about their achievements does not soothe impostor syndrome at all. But my story doesn’t end there.

Information overload is a side effect of impostor syndrome in our industry.

There were also RSS feeds, email subscriptions (such as to HTML Weekly and Javascript Weekly), videos from recent conferences. I tried to consume most new articles and videos. Obviously, reading everything was impossible: In this flow of information, I also had to find time to do work that paid the bills. Sound familiar?

At some point, I realized that I wasn’t reading the saved articles anymore. On the best of days, I would quickly look through the titles, pick some, and those would usually lie untouched in my browser for days. Clearly, I didn’t feel more competent or skilled after consuming all of that information.

The reason is that it was not me, really, who was interested in all of that information. It was the “impostor,” pushing me to catch up on everything going on in the community, so that I wouldn't feel like an incompetent fraud. Rather than pushing us to learn more of what we really want, to apply it in our work, to enjoy and be better in our profession and to feel competent, impostor syndrome pushes us into a state of frustration.

How To Deal With Impostor Syndrome

If you have ever experienced this, I have good news. One of imposter syndrome’s frustrating ironies is that actual frauds rarely seem to experience this phenomenon. English philosopher Bertrand Russell put it more poetically: “The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt.” It’s great to know that those who suffer from this syndrome are intelligent; nevertheless, it is an uncomfortable psychological problem that we have to do something about. Let’s see how we can deal with this feeling.

Below is a list of solutions that could work separately or in combination. Try them to see what works for you.

Embrace It

Pacific Standard magazine once wrote, “Impostor syndrome is, for many people, a natural symptom of gaining expertise.” This makes total sense: In gaining expertise, we enhance our knowledge. And as we expand the boundary of what we know, we become more and more exposed to what we don’t. So, the next time you suffer an attack, do not rush for new information. Instead, stop and enjoy. Most probably, this is a sign that you are gaining experience and gaining the wisdom to accept that there is much more in the industry, and in the world in general, for you to discover.

To fight impostor syndrome, start embracing it.

I deliberately said “most probably” above because some confuse foolish bravery with expertise. However, such people would count as edge cases, suffering from the Dunning-Kruger effect, which essentially means that they cannot recognize their own ignorance.

Reframe Your Understanding of Failure

It would be naive to believe that as you progress in your professional life, you will not make any mistakes. It is OK to be occasionally wrong, to fail or not to know everything. That's perfectly normal; it doesn't make you fake or undeserving. Even the best of us make mistakes — we are human, after all. Even Brazil's football team lost to Norway in the World Cup once (a remarkable thing for anyone living in Norway, since the Norwegians weren't even on skis). Try to reframe failure as an opportunity to learn. There is even a global conference dedicated to failure, called FailCon, which was once held in Silicon Valley, home to the biggest names in the industry. Recognize that failure is simply the path to success, and failing quickly is the surest way to learn what works and what doesn't and to grow even more.

Some things are not really failures. Incidents, maybe? Instead, think of them as a way to learn.

Measure Yourself by Your Own Rule

It’s easy to feel overwhelmed by other people’s talents, but comparing yourself to others is a game that is impossible to win. Instead, try competing with yourself. Where were you a year ago? Six months ago? Can you measure your improvement over time? I am sure this will give you a much better perspective of your own progress.

Compete with yourself, not with others.

Communicate Your Fears and Feelings

This might sound even more frightening, but bear with me. Don’t be afraid to talk about your feelings. The funny thing is that most people who experience impostor syndrome are unaware that others around them feel inadequate as well. This happens simply because impostor syndrome can be hard to spot in others. As mentioned earlier, those who experience it generally do very well in their jobs. But award-winning writer Neil Gaiman has the perfect anecdote. He shares a funny story about attending a gathering of acknowledged figures, and recognizing that he and Neil Armstrong felt exactly the same discomfort because neither thought they deserved to be at the gathering. Communicating these feelings made a big difference to him: “And I felt a bit better. Because if Neil Armstrong felt like an impostor, maybe everyone did.”

Communicate your fears and feelings. You will be surprised by how many people around you feel the same.

So, the next time you start to feel like a fraud at work or are afraid that your colleagues might suspect that you don’t know as much as they thought you did, seek comfort in the knowledge that some of even the most accomplished among us feel similarly. Maybe even your boss.

Conclusion

Impostor syndrome is not a mental disorder, even though it is on the radar of many psychologists and has been extensively researched in recent years. Nevertheless, it is a real psychological issue, rooted deeply in many of us. If we do not pay attention to its symptoms, if we blindly follow its triggers, then we can get into real psychological trouble. The good news is that, even though there is no pill for it, we can change our attitude towards it. Simply acknowledging the feeling can help to neutralize its effect.


I hope you’re now better aware of impostor syndrome, because if you spot the symptoms early enough and try to overcome the effects using the approaches mentioned above, then the practices you integrate will help you to live a more fulfilling life.

P.S. These days, instead of constantly monitoring what's going on in our industry and diving into each and every piece of news, I dedicate only 20 minutes every morning to it. And let me tell you, that is more than enough time to get what is really important. Stay healthy.


Naming Things In CSS Grid Layout

When first learning how to use Grid Layout, you might begin by addressing positions on the grid by their line number. This requires that you keep track of where various lines are on the grid.

Built on top of this system of lines, however, are methods that enable the naming of lines and even grid areas. Using these methods enables easier placement of items by name rather than number, but also brings additional possibilities when creating systems for layout. In this article, I’ll take an in-depth look at the various ways to name lines and areas in CSS Grid Layout, and some of the interesting possibilities this creates.

Naming Lines

We can make a start by naming the lines on a grid layout. If you take the example below, we have a grid with six explicit column tracks and one explicit row track. Items are placed on this grid by way of line numbers.

.grid {
  display: grid;
  grid-gap: 20px;
  grid-template-rows: 20vh;
  grid-template-columns: 1fr 2fr 1fr 2fr 1fr 2fr;
}

.header {
  grid-row: 1;
  grid-column: 1 / -1;
}

.sidebar {
  grid-row: 2;
  grid-column: 1 / 3;
}

.content {
  grid-row: 2;
  grid-column: 3 / -1;
}

If we want to name the lines, we do so inside square brackets in the track listing. The key thing here is to remember that you are naming the line, not the track that follows. Having named the lines you can swap line numbers for names when positioning items.

.grid {
  display: grid;
  grid-gap: 20px;
  grid-template-rows: [header-start] 20vh [header-end];
  grid-template-columns: [sidebar-start] 1fr 2fr [sidebar-end] 1fr 2fr 1fr 2fr;
}

.header {
  grid-row: header-start;
  grid-column: 1 / -1;
}

.sidebar {
  grid-row: 2;
  grid-column: sidebar-start / sidebar-end;
}

You can name lines anything you like other than the span keyword. For reasons which you will discover later in this article, it is a good idea to name lines with the suffix -start for start lines (whether row or column) and -end for end lines. You might have main-start and main-end, or sidebar-start and sidebar-end.

Quite often, the end line of one part of your grid coincides with the start line of another, and this is not a problem, as lines can have multiple names. Create multiple names by adding them, separated by a space, inside a single set of square brackets.

.grid {
  display: grid;
  grid-gap: 20px;
  grid-template-rows: [header-start] 20vh [header-end];
  grid-template-columns: [full-start sidebar-start] 1fr 2fr [sidebar-end main-start] 1fr 2fr 1fr 2fr [main-end full-end];
}

.header {
  grid-row: header-start;
  grid-column: full-start / full-end;
}

.sidebar {
  grid-row: 2;
  grid-column: sidebar-start / sidebar-end;
}

.content {
  grid-row: 2;
  grid-column: main-start / main-end;
}

This example also demonstrates that you don’t need to name every single line of the grid, and you always still have numbers to use in addition to names.

See the Pen 1. Naming things: line names by rachelandrew (@rachelandrew) on CodePen.

Lines With The Same Name

We have seen how lines can have multiple names, but you can also have multiple lines with the same name. This will happen if you use repeat notation and include named lines in the track listing. The next example creates six named lines, alternately named col-a-start and col-b-start.

.grid {
  display: grid;
  grid-gap: 20px;
  grid-template-columns: repeat(3, [col-a-start] 1fr [col-b-start] 2fr);
}

If you place an item using col-a-start, it will be placed against the first instance of col-a-start (in this example, that would be the first line of the grid). If you place it against col-b-start, it will be positioned against the second line of the grid.

To target later lines, add a number after the line name to indicate which instance of that line you are targeting. The following CSS will place the item starting on the second line named col-a-start and finishing on the third line named col-b-start.

.box3 { grid-row: 2; grid-column: col-a-start 2 / col-b-start 3;}

See the Pen 2. Naming things: multiple lines with the same name by rachelandrew (@rachelandrew) on CodePen.

The specification describes this behaviour as “creating a named set of grid lines” which can be a helpful way of looking at the grid you have created with multiple lines of the same name. By adding the number you are then selecting which line of the set you wish to target.

Maintaining Line Names While Redefining A Responsive Grid

Whether you choose to use line numbers or named lines is completely down to you. In general, named lines are useful where you wish to change the grid definition within media queries. Rather than needing to keep track of which line number you are placing things against at different breakpoints, you can have consistently named lines. Only the definition then needs to change, not the positioning of items.

In the following simple example, I define my grid columns for narrow widths, then redefine them at a width of 550 pixels. The positioned items continue to place themselves against the same named line, despite the fact that the location of the line on the grid has changed.
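
Here is a minimal sketch of that idea. The content-start/content-end line names, the 550-pixel breakpoint and the track sizes are illustrative assumptions, not code taken from the pen below:

.grid {
  display: grid;
  grid-gap: 20px;
  /* Narrow widths: the content area spans the single column. */
  grid-template-columns: [content-start] 1fr [content-end];
}

@media (min-width: 550px) {
  .grid {
    /* Wider widths: the named lines move, but the placement rule below stays the same. */
    grid-template-columns: 1fr [content-start] 2fr 2fr [content-end] 1fr;
  }
}

.content {
  grid-column: content-start / content-end;
}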

See the Pen 3. Naming things: redefining the position of named lines by rachelandrew (@rachelandrew) on CodePen.

Named Areas

We have so far had a good look at named lines; however, there is another way of naming things on the grid. We can name grid areas.

A grid area is a rectangular area consisting of one or more grid cells. The area is defined by four grid lines marking out the start and end lines for columns and rows.

A grid area covering six cells of the defined grid.

We name areas of the grid using the grid-template-areas property. This property takes a somewhat unusual value (a set of strings, one for each row) which describes our layout in ASCII-art style.

.grid {
  display: grid;
  grid-template-columns: repeat(3, 1fr 2fr);
  grid-template-areas:
    "head head head head head head"
    "side side main main main main"
    "foot foot foot foot foot foot";
}

The names we use in the strings for grid-template-areas are assigned to the direct child elements of the grid using the grid-area property. The value of this property when used to assign a name is what is known as a custom identifier, so it should not be quoted.

.header { grid-area: head; }
.sidebar { grid-area: side; }
.content { grid-area: main; }
.footer { grid-area: foot; }

When describing our layout as the value of grid-template-areas, we cause an area to cover more than one cell of the grid by repeating the ident along the row or down the column. The area created must be a complete rectangle — no L- or T-shaped areas. You may also only create one rectangular area per name — disconnected areas are not possible. The specification does note that:

“Non-rectangular or disconnected regions may be permitted in a future version of this module.”

The grid-template-areas Property

When creating our grid description, we also need to create a complete representation of our grid, otherwise the whole declaration is thrown away as invalid. That means that every cell of the grid needs to be filled.

grid-template-areas: "head head head head head head""side side main main main main""foot foot foot foot foot foot";}

As you might want to leave some cells empty in your design, the spec defines a full-stop character (.), or a sequence of them such as .... with no white space in between, as a null cell token.

 grid-template-areas: "head head head head head head""side side main main main main""....... ....... foot foot foot foot";}

See the Pen 4. Naming things: Grid Template Areas by rachelandrew (@rachelandrew) on CodePen.

If you haven’t already downloaded Firefox Nightly in order to benefit from all the newest features of the Firefox DevTools Grid Inspector, I can recommend doing so when working with named areas.

The Grid Inspector in Firefox demonstrating named areas.

From Named Lines Come Areas

Now we come to an interesting part of all of this naming fun. You might remember that when we looked at naming lines, I suggested you use the convention of ending the line which begins an area with -start and the line which ends it with -end. The reason for this is that if you name lines like this, you will get a named area with the main name used, into which you can place an item by giving it that name with grid-area, in the same way that you position items defined in grid-template-areas by assigning the ident using grid-area.

In this next example, I am naming lines for both rows and columns panel-start and panel-end. This will give me a named area called panel. If I assign that as the value of grid-area to an element on my page it will be placed into the area defined by those lines.

.grid {
  display: grid;
  grid-gap: 20px;
  grid-template-columns: 1fr [panel-start] 2fr 1fr 2fr 1fr [panel-end] 2fr;
  grid-template-rows: 10vh [panel-start] minmax(200px, auto) 10vh [panel-end];
  grid-template-areas:
    "head head head head head head"
    "side side main main main main"
    ".... .... foot foot foot foot";
}

.panel {
  grid-area: panel;
}

See the Pen 5. Naming things: From named lines come named areas by rachelandrew (@rachelandrew) on CodePen.

From Named Areas Come Lines

We can also do the reverse of the above, and use lines created from our named areas. Each area creates four named lines using the same -start and -end convention. If you have a named area called main, then you have row lines main-start and main-end for the start and end row lines, and column lines main-start and main-end for the start and end column lines. You can then position an item using line-based placement and the named lines.

In this example, I am positioning the overlay panel using these created named lines.

.grid {
  display: grid;
  grid-template-areas:
    "head head head head head head"
    "side side main main main main"
    ".... .... foot foot foot foot";
}

.panel {
  grid-column: main-start / main-end;
  grid-row: head-start / foot-end;
}

See the Pen 6. Naming things: From named areas come named lines by rachelandrew (@rachelandrew) on CodePen.

Line Names Equivalent To The Area Name

In addition to line names for start and end, a line with the main name of any named grid area is also created. Therefore, if you have an area called main, you could use the ident main as a value for grid-row-start or grid-column-start, and the content would start at the start line of that area. If you used the value for grid-row-end or grid-column-end, then the end line of that area is chosen. In the example below, I am stretching the overlay panel from the start of main to the end of main for columns, and from the start of main to the end of foot for rows.

.panel { grid-column: main; grid-row: main / foot ;}

See the Pen 7. Naming things: Line names equivalent to area name by rachelandrew (@rachelandrew) on CodePen.

The Grid-Area Property Explained

To wrap up all of this magical line business, it is also useful to know something about grid-area. Essentially what we are doing when using grid-area with an ident like main is defining all four lines of the area. A valid value for grid-area is also to use line numbers.

.main { grid-area: 2 / 1 / 4 / 3;}

This would be the same as writing:

.main { grid-row-start: 2; grid-column-start: 1; grid-row-end: 4; grid-column-end: 3;}

When you set:

.main { grid-area: main;}

This is really:

.main { grid-area: main / main / main / main ;}

The grid-area property behaves a little differently when you use a custom ident rather than a number. If you use line numbers for the start values in grid-area, any end line number you do not set will be set to auto, and grid auto-placement will be used to work out where to put your item.

If, however, you use a custom ident and omit some of the lines, then the missing lines are set as follows.

Set Three Line Names

.main { grid-area: main / main / main ;}

If you set three line names, then you are essentially missing grid-column-end. If grid-column-start is a custom ident then grid-column-end is also set to that ident. As we have already seen, an -end property will use the end edge of the area when using the main name to set a line, so with grid-column-start and grid-column-end set to the same name, the content stretches over the columns of that area.

Set Two Line Names

.main { grid-area: main / main ;}

With only two names set, you are setting the row and column start lines. If grid-column-start is a custom ident, then grid-column-end is also set to that ident. If grid-row-start is a custom ident, then grid-row-end will be set to that ident.

Set One Line Name

.main { grid-area: main ;}

Setting one line name is what you do when you set grid-area to the main name of your area. In this case, all four lines are set to that value.

Note: This works for grid-column and grid-row, too.

This technique essentially means you can target a set of columns or rows on the grid to place items. Just as the -end values of grid-area are set to the same value as the start values when they are omitted, so too are the end values of grid-column and grid-row. This means you can place an item between the start and end column lines of main by using:

.main { grid-column: main ;}

In the blog post Breaking Out with CSS Grid explained, I showed how this capability is used to create a useful design pattern of full-width areas breaking out of a constrained content area.
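
The core of that pattern looks something like the sketch below. The full and main line names and the track sizes here are illustrative assumptions rather than code lifted from that post:

.grid {
  display: grid;
  grid-template-columns:
    [full-start] minmax(1em, 1fr)
    [main-start] minmax(0, 40em)
    [main-end] minmax(1em, 1fr)
    [full-end];
}

/* Regular content sits between the main lines. */
.grid > * {
  grid-column: main;
}

/* A full-width item breaks out to the full lines. */
.full-bleed {
  grid-column: full;
}

Because .full-bleed also matches .grid > * with equal specificity, it has to come later in the stylesheet for the breakout rule to win.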

A Grid Can Have A Lot Of Named Lines!

What all of the above means is that a grid can end up with a huge number of named lines. In most cases, you don’t need to worry about this. Pick the ones you want to use and ignore the fact there are others. They will just sit quietly with their names, causing you and your layout no problems at all.

Naming And The grid And grid-template Shorthands

CSS Grid Layout has two shorthands which enable the use of many grid properties in one compact syntax. Personally, I find this quite hard to read. Opinion is divided when I discuss this with other developers – some people love it, and others would rather use the individual properties. Have a look and see which camp you fall into! As with all shorthands, the key thing to remember is that properties you do not use will be reset when you use the shorthand.

The grid-template Shorthand: Creating The Explicit Grid

You can use the grid-template shorthand to set all of the explicit grid properties at once.

  • grid-template-columns
  • grid-template-rows
  • grid-template-areas

This means that you can define named lines and named areas in one. To create the syntax combining named areas and lines, I would first define your grid-template-areas value as in the section above.

Then you might want to add row names. These are placed at the beginning and end of each string – remember that a string represents a row. The row name or names need to be inside square brackets, just as when you named lines in grid-template-rows, and should be outside of the quotes wrapping the string that defines the row.

I have named two row lines in the code example: panel-start comes after the header line (row line 2 of the grid), while panel-end comes after the end footer line (line 4 of our three row track grid). I have also defined the row track sizing for named and un-named rows, adding the value after the string for that row.

.grid {
  display: grid;
  grid-gap: 20px;
  grid-template:
    "head head head head head head" 10vh [panel-start]
    "side side main main main main" minmax(200px, auto)
    ".... .... foot foot foot foot" 10vh [panel-end];
}

If we also want to name columns, we can’t do this inside the string so we need to add a / separator and then define our column track listing. We name the lines in the same way we would if this listing were the value of grid-template-columns.

.grid {
  display: grid;
  grid-gap: 20px;
  grid-template:
    "head head head head head head" 10vh [panel-start]
    "side side main main main main" minmax(200px, auto)
    ".... .... foot foot foot foot" 10vh [panel-end]
    / [full-start] 1fr [panel-start] 2fr 1fr 2fr 1fr [panel-end] 2fr [full-end];
}

In this example, which you can see in the codepen below, I am creating an additional set of lines for rows and columns, these lines define an area named panel as I have used the panel-start and panel-end syntax. So I can place an item by giving it a grid-area value of panel.

This looks pretty obscure at first glance; however, what we are doing here is creating a column listing that lines up with our ASCII-art definition above. You could carefully add white space between everything in order to make the template-areas and template-columns definitions align, if you wanted.

See the Pen 8. Naming things: The grid-template shorthand by rachelandrew (@rachelandrew) on CodePen.

The grid Shorthand: The Explicit And Implicit Grid

The specification suggests that, unless you want to define the implicit grid separately, you should use the grid rather than the grid-template shorthand. The grid shorthand will reset all of the implicit values that you do not set. So this shorthand allows the setting of, and resets, the following properties:

  • grid-template-columns
  • grid-template-rows
  • grid-template-areas
  • grid-auto-columns
  • grid-auto-rows
  • grid-auto-flow

For our purposes, using the grid shorthand would look identical to using the grid-template shorthand, as we are creating an explicit grid definition. The only difference would be the resetting of the grid-auto-* properties. The grid shorthand can be used either to set the explicit grid (resetting the implicit properties) or the implicit grid (resetting the explicit properties). Doing both at once doesn't make much sense!

.grid {
  display: grid;
  grid-gap: 20px;
  grid:
    "head head head head head head" 10vh [panel-start]
    "side side main main main main" minmax(200px, auto)
    ".... .... foot foot foot foot" 10vh [panel-end]
    / [full-start] 1fr [panel-start] 2fr 1fr 2fr 1fr [panel-end] 2fr [full-end];
}

Note: In the initial Candidate Recommendation of the CSS Grid spec, this shorthand also resets the gutter properties grid-column-gap and grid-row-gap. However, this has been changed. Browsers are updating their implementations, but at the time of writing you may find the gap properties being reset to 0 when using this shorthand, so you would need to define them afterwards.

Which Method Should I Use?

Which of all of these different methods, you might be wondering, is the best to use for any given task? Well, there are no hard and fast rules. Speaking personally, I love using grid-template-areas for components while working in my pattern library. It is nice to see the shape of the component right there in the CSS, and grid-template-areas makes it easy to try out different layouts as I test a component. Something I have discovered is that, because it is so easy to move things around, it is really important to check that you haven't disconnected the visual and logical order of your component. A user navigating your site with a keyboard, tabbing between items, will be following the order of elements as defined in the source. Make sure that you do not forget to rearrange the source once you have worked out the best way to display your content. For more information about this issue, I advise you to read CSS Grid Layout and Accessibility.

I have been finding that I tend to use named lines for the larger sections of the layout, on the main page grid where I may well be placing different types of components for different layouts. With that said, I’m still exploring how best to use grid in production — probably just like everyone else. I’ve been creating small examples and playing with the ideas for several years, yet it has only been recently that I could use the techniques on real websites. Try not to get hung up on what is “right” or “wrong”. If you find a method confusing, or it doesn’t seem to work in your context, simply don’t use it. The beauty of this is that we can choose the ways that make the most sense for the projects we are working on. And, if you do come up with some guidelines for your own projects based on experience, write them up. I’m really keen to see what is working well for people in the real world of production grid layouts.

Quick Rules When Naming Things

To round up this article, here are some quick rules to remember when naming lines or areas of your grid:

When Naming Lines

  1. You can use almost any name you like (other than the word span); however, if you want a named area to be created from your lines, name them ending in -start and -end.
  2. Lines can have multiple names, space separated inside a single set of square brackets.
  3. Multiple lines can have the same name; just add the number of the line instance that you want to target after the line name.

When Creating Named Areas

  1. When defining an area using grid-template-areas, the shape must be a complete rectangle.
  2. Each row of your grid needs to be wrapped in quotes inside the value of grid-template-areas. You are creating a collection of strings, i.e. one string per grid row.
  3. Each cell needs to be filled. If your design requires some cells to be left empty, then use a full stop (.) or a sequence of full stops with no white space, such as ...., to indicate this.
  4. Your named areas create lines with the same name as the area, plus lines named with the area name and -start and -end appended. You can use these to place items.


Dynamic Shape Overlays with SVG

Some ideas for multi-layered SVG shape overlays that get generated dynamically with adjustable properties for a variety of effects.

Today we’d like to share another way of achieving morphing page transitions. This time, we’ll generate multiple SVG curves with JavaScript, making many different looking shapes possible. By controlling the individual coordinates of the several layers of SVG paths, the curved shapes animate to a rectangle (the overlay) with a gooey motion. We use some nice easing functions from glsl-easings and by tuning the curve, speed and the delay value, we can generate many interesting effects.

Attention: We use some new CSS properties in the demos; please view them with a modern browser.

This demo is kindly sponsored by HelloSign API: The dev friendly eSign.

Building the SVG

Let’s have a look at the SVG which we will use to insert the path coordinates dynamically.

First, we’ll make sure that the whole SVG and the overlay paths are stretched to the size of the screen. For that, we’ll set the preserveAspectRatio attribute to none. Depending on how many layers we want, we’ll create that amount of paths:

<svg viewBox="0 0 100 100" preserveAspectRatio="none">
  <path></path>
  <path></path>
  <path></path>
</svg>

The styles that allow the SVG to match the size of the browser window look as follows:

.shape-overlays {
  width: 100vw;
  height: 100vh;
  position: fixed;
  top: 0;
  left: 0;
}

Each path element corresponds to a layer of the overlay. We’ll specify the fill for each of these layers in our CSS. The last path element is the background that stays after the overlay expansion:

.shape-overlays path:nth-of-type(1) { fill: #c4dbea; }
.shape-overlays path:nth-of-type(2) { fill: #4c688b; }
.shape-overlays path:nth-of-type(3) { fill: #2e496a; }

Note that in our demos, we make use of CSS custom properties to set the path colors.
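
As a rough sketch of that setup (the custom property names --path-color-1 to --path-color-3 are our assumption here, not necessarily the names used in the demo code):

:root {
  --path-color-1: #c4dbea;
  --path-color-2: #4c688b;
  --path-color-3: #2e496a;
}

.shape-overlays path:nth-of-type(1) { fill: var(--path-color-1); }
.shape-overlays path:nth-of-type(2) { fill: var(--path-color-2); }
.shape-overlays path:nth-of-type(3) { fill: var(--path-color-3); }

Changing the three custom properties is then enough to recolor all of the overlay layers at once.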

The JavaScript

For our demos, we define an overlay control class that allows us to set and control a couple of things. By changing each value, you can create unique looking shapes and effects:

class ShapeOverlays {
  constructor(elm) {
    this.elm = elm; // Parent SVG element.
    this.path = elm.querySelectorAll('path'); // Path elements in parent SVG. These are the layers of the overlay.
    this.numPoints = 18; // Number of control points for Bezier Curve.
    this.duration = 600; // Animation duration of one path element.
    this.delayPointsArray = []; // Array of control points for Bezier Curve.
    this.delayPointsMax = 300; // Max of delay value in all control points.
    this.delayPerPath = 60; // Delay value per path.
    this.timeStart = Date.now();
    this.isOpened = false;
  }
  ...
}

const elmOverlay = document.querySelector('.shape-overlays');
const overlay = new ShapeOverlays(elmOverlay);

Further methods that determine the appearance of the overlay are the ShapeOverlays.toggle() method and the ShapeOverlays.updatePath() method.

The ShapeOverlays.toggle() method opens and closes the overlay, and also sets the delay value of each control point every time the overlay opens or closes. Setting the delay value every time is not strictly necessary, but varying it creates some nice randomness.

The ShapeOverlays.updatePath() controls the animation by specifying the easing function.

For example, in demo 1, the same easing function is used for all control points, and the delay value is set like a fine wave using trigonometric functions, so that we get a “melting” appearance.

toggle() {
  const range = 4 * Math.random() + 6;
  for (var i = 0; i < this.numPoints; i++) {
    const radian = i / (this.numPoints - 1) * Math.PI;
    this.delayPointsArray[i] = (Math.sin(-radian) + Math.sin(-radian * range) + 2) / 4 * this.delayPointsMax;
  }
  ...
}

updatePath(time) {
  const points = [];
  for (var i = 0; i < this.numPoints; i++) {
    points[i] = ease.cubicInOut(Math.min(Math.max(time - this.delayPointsArray[i], 0) / this.duration, 1)) * 100;
  }
  ...
}
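The elided parts above take care of the actual rendering. As a rough, simplified sketch of how the pieces could fit together, the computed point values can be written into each path’s d attribute on every animation frame. The method names render(), createPath() and renderLoop() are illustrative, and the straight line segments below stand in for the smooth curves the real demo generates:

render() {
  const time = Date.now() - this.timeStart;
  for (let i = 0; i < this.path.length; i++) {
    // Each layer starts a little later than the previous one.
    this.path[i].setAttribute('d', this.createPath(time - i * this.delayPerPath));
  }
}

createPath(time) {
  // One eased y-value (0 to 100) per control point, spread evenly along the x-axis.
  const points = [];
  for (let i = 0; i < this.numPoints; i++) {
    points[i] = ease.cubicInOut(Math.min(Math.max(time - this.delayPointsArray[i], 0) / this.duration, 1)) * 100;
  }
  let str = `M 0 ${points[0]} `;
  for (let i = 1; i < this.numPoints; i++) {
    str += `L ${i / (this.numPoints - 1) * 100} ${points[i]} `;
  }
  str += 'L 100 100 L 0 100 Z'; // Close the shape along the bottom of the viewBox.
  return str;
}

renderLoop() {
  this.render();
  // Simplified: a complete implementation would stop once all delays and durations have elapsed.
  requestAnimationFrame(() => this.renderLoop());
}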

In our demos we use this effect to create an overlay in order to show a menu at the end of the animation. But it could also be used for other kinds of transitions, like page transitions or scroll effects. Your imagination is the limit.

Here are a couple of screenshots:

[Screenshots: ShapeOverlays_01, ShapeOverlays_02, ShapeOverlays_03, ShapeOverlays_04]

We hope you enjoyed this effect and find it useful!

Credits

  • glsl-easings by glslify. The easing functions used in the demos are based on the code of the glsl-easings module.

Monthly Web Development Update 10/2017: CSS Grid, CAA Pitfalls, And Image Optimization

Editor’s Note: Welcome to this month’s web development update. Anselm has summarized the most important happenings in the web community that have taken place over the past few weeks in one handy list for you. Enjoy!

As web developers, we’re working in a very diverse environment: We have countless options to specialize in, but it’s impossible to keep up with everything. This week I read an article by a developer who realized that even though he has been building stuff for the web for over seven years, sometimes he just doesn’t understand what’s going on: “I’m slamming my keyboard in frustration as another mysterious error appears in my build script,” he writes. For him, writing JavaScript isn’t fun anymore. The toolchain has become too complex, the workflows are built mainly for developer convenience, and many things that exist in the language itself are reinvented in external libraries.

Now when I look at the articles I collected for you this month, I can relate to the kind of frustration he’s feeling. Soon we won’t be able to use .dev domains anymore, HTTPS CAA checks don’t work with private network interfaces, and when I look at an (admittedly great) tutorial on how we can replace scroll events with IntersectionObserver, I see code that might perform better but is also more complex than what we used to do with EventListener.
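For comparison, the IntersectionObserver version of a classic “reveal on scroll” pattern looks something like the following sketch; the class names are invented for the example:

// Reveal elements once they scroll into view, without listening to scroll events.
const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      entry.target.classList.add('is-visible');
      observer.unobserve(entry.target); // Each element only needs to be revealed once.
    }
  });
}, { threshold: 0.25 }); // Trigger once a quarter of the element is visible.

document.querySelectorAll('.reveal-on-scroll').forEach((el) => observer.observe(el));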

The web is developing and changing so fast, and we need to acknowledge that we as individuals can’t know and understand everything. And that’s fine. Choose what you want to do, set your priorities, and, most important of all, don’t hesitate to hire someone else for the things you can’t do on your own.

News

  • Mattias Geniar reminds us that Chrome, according to a recent commit in Chromium, will very soon preload .dev domains as HTTPS via preloaded HSTS. Google bought the domain name, and they now want it to be accessible only via HTTPS. So if you use a .dev name in your projects (which often is the case on your local machine, registered manually via the hosts file), you should switch to the reserved .test domain name now or consider using localhost instead. Once the patch lands in Chrome, you’ll not be able to access your projects anymore without a valid TLS certificate in place.
  • HTTP Immutable Responses are now an official Internet standard, and they are already available in most browsers.
  • React 16 is out now, released under a full MIT license, which finally ends the debate about the previously used patent-clause license. The new version comes with a rewritten core, better error handling, custom DOM attributes, and render can now return fragments and strings (so no more useless wrapper span elements). Also, its footprint has decreased by 30%.
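To illustrate the new return types: with React 16, a component’s render method can return an array of elements (each with a key) or a plain string, so a wrapper element is no longer needed. The component below is a made-up example:

import React from 'react';

class TableColumns extends React.Component {
  render() {
    // React 16: returning an array instead of a single wrapper element.
    return [
      <td key="first">Hello</td>,
      <td key="second">World</td>,
    ];
  }
}

export default TableColumns;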

Tooling

  • Infusion is an inclusive, accessible documentation builder.
  • Sketch 47 is out with two major new features: libraries and smooth corners. Libraries in particular are a huge step forward, as they allow us to sync, share, and update symbols from any Sketch document, even in collaboration with other people.

Web Performance

  • “Essential Image Optimization” by Addy Osmani is a free eBook that explains almost everything you can and should know about image optimization for the web. Be sure to take a look at it.
  • News from Cloudflare: You’ll soon be able to deploy JavaScript to Cloudflare’s edge, written against an API similar to Service Workers. Sounds pretty amazing; a rough sketch of what that could look like follows below the image.
Essential Image Optimization
Images are still the number one bloat on the web. Addy Osmani’s new eBook explains how you can change that by compressing your images efficiently. (Image credit)
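Since the API is described as being similar to Service Workers, an edge script might look roughly like the following sketch. The details are assumptions based on that comparison, not on published documentation:

// Intercept requests at the edge and add a response header before passing the response on.
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const response = await fetch(request);
  const modified = new Response(response.body, response); // Copy the response so its headers can be changed.
  modified.headers.set('x-served-from', 'edge');
  return modified;
}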

CSS

Slack Grid
The Slack engineering team lets us sneak a peek behind the scenes of their recent CSS Grid powered website redesign. (Image credit)

JavaScript

Accessibility

Accessible tabbed interfaces
Heydon Pickering explains how to make tabbed interfaces accessible. (Image credit)

Security

Privacy

Work & Life

Improving Focus
Get more out of your work week without working more hours. Ivan Mir shares how he did it. (Image credit)

Going Beyond…

“Perhaps I didn’t get to email the people who’re truly responsible here; and what they do with my requests, I don’t know, either.

But the point is that reaching out is one of the few options we have at our disposal; and if even one small thing changes and improves, it may be a success. And as such I believe more people should reach out. Instead of waiting for politicians or law enforcement to act, let’s act ourselves, let’s make ourselves heard. Constructive action always helps.”

We hope you enjoyed this Web Development Update. The next one is scheduled for November 17th. Stay tuned!