Collective #323
Vue Tetris * BEM by Example * Bojler * Creative Portfolios * Topol.io * Spellbook of Modern Web Dev * Dropdown…
Tisp is a “Time is Space” functional programming language that aims to be simple, canonical, and practical.
We are very happy to share an exclusive illustration set with you to celebrate the EGO icons launch! These unique illustrations have an iconic and angular look like the EGO icons and they can be used for articles, websites and other design projects. We hope you like them!
This set of six unique and modern vector illustrations comes in AI, SVG and Sketch formats and is easy to customize. The illustrations depict various hot topics and are composed of meaningful details. Their unique EGO look will elevate any article or web design and give it a futuristic touch.
The illustration set contains:
Have a look at all 6 illustrations:
EGO is the latest icon set from Webalys, the creator of Streamline and Nova Icons. The EGO icons took two years to design, and the result is a massive set of 3,600 unique and versatile icons with personality, spanning 100 categories and two customizable styles. The icons have a futuristic look and can bring a distinctive touch to any project.
All icons are provided in .SVG, .PDF, .AI, .SKETCH, .EPS, and .iconjar formats, so you can fire ‘em up in your favorite graphics software. They come in two modern styles: a monoline style, drawn with a single minimal stroke, and a duotone style, with shaded parts that bring more depth.
Icon colors can be changed in seconds. Using the “Shared Styles” in Sketch, or the “Global Colors” in Illustrator, quickly apply a new color scheme to all icons!
The EGO icon pack is fresh and forward-facing — perfect for making your apps, web interfaces, and UI designs stand out in the crowd. Preview the complete collection with 3,600 fresh-to-death vector icons here:
There is a 30% launch discount until the end of the weekend, so get them quickly!
Want to try them first? Get 100 icons of the EGO icon set for free and fire ’em up in your app, web interface, or UI design project.
You can download the ZIP file of the icon set here:
We hope you enjoy this freebie and find it useful!
If you’d like to contribute and publish your exclusive freebie on Codrops just drop us a line.
Hardbin is an encrypted pastebin, with the decryption key passed in the URL fragment, and the code and data served securely with IPFS.
When did you take your last vacation? For many of us, it was probably a long time ago. However, for quite a while now I’ve been stumbling across more and more stories about companies that take unusual steps vacation-wise: companies giving their employees a day off each week in summer, or going on vacation together as a team-building event instead of traveling somewhere just to work.
But while there’s a new generation building their dream work environments, a lot of people still suffer from very bad working conditions. They work long hours and are discriminated against or harassed by colleagues or their managers. And just this week, I heard that many company owners are desperate because “Generation Y” doesn’t want to work long hours anymore.
As with every major generational break, I think it’s good to have people standing up for their rights and enjoying their life and their work. But we also need to talk with executives who are under pressure to meet deadlines. Only if we show them evidence that working less can benefit their employees’ health and productivity can we convince them to let us work more freely.
The new webpack CLI brings init and migrate commands to help ease the setup of webpack.
Thanks for reading this. If you like it, consider supporting my work.
—Anselm
Fuse is a toolkit for creating apps that run on both iOS and Android devices. It enables you to create apps using UX Markup, an XML-based language. But unlike the components in React Native and NativeScript, Fuse is not only used to describe the UI and layout; you can also use it to add effects and animation.
Styles are described by adding attributes such as Color and Margin to the various elements. Business logic is written using JavaScript. Later on, we’ll see how all of these components are combined to build a truly native app.
In this article, you will learn what Fuse is all about. We’ll see how it works and how it compares to other platforms such as React Native and NativeScript. In the second half of the article, you will create your first Fuse app. Specifically, you will create a weather app that shows the weather based on the user’s current location. Here’s what the output will look like:
In creating the app, you will learn how to use some of Fuse’s built-in UI components and learn how to access native device functionality such as geolocation. Towards the end of the article, you will consolidate your learning by looking at the advantages and disadvantages of using Fuse for your next mobile app project.
I’d like to describe how Fuse works using the following diagram:
On the top layer are the UX Markup and JavaScript. This is where we will spend most of our time when working with Fuse. On the middle layer are the libraries that are packaged with Fuse. This includes the JavaScript APIs that allow access to native device features such as geolocation and the camera. Lastly, on the bottom layer is the Uno compiler, which is responsible for translating the UX Markup into pure native code (Objective-C for iOS and C++ for Android). Once the app runs, all of the UI that you will see will be native UI for that particular platform. JavaScript code is executed via a virtual machine on a separate thread. This makes the UI really snappy because JavaScript won’t affect the UI’s performance.
Before we create an app with Fuse, one of the important questions that needs to be answered is how it stacks up against existing tools that do the same job. In this section, we’ll learn about the features and tools available in Fuse compared to those of React Native and NativeScript, as well as how things are done on each platform. Specifically, we’ll compare the following areas:
On all platforms, the UI can be built using an XML-based language. Common UI components such as text fields, switches and sliders are available on each platform.
React Native has the most components of the three, although some are not unified, which means there can be up to two ways to use a particular component: one that works on both platforms, and one for a specific platform only. A few components, such as the ProgressBar, are also implemented differently on each platform, which means that it’s not totally “write once, run everywhere.”
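For example, here’s a rough sketch of handling one of those platform-specific cases, using the ProgressBarAndroid and ProgressViewIOS components that React Native shipped at the time of writing:

import { Platform, ProgressBarAndroid, ProgressViewIOS } from 'react-native';

// Pick the appropriate native progress component at runtime.
const Progress = (props) =>
  Platform.OS === 'android'
    ? <ProgressBarAndroid {...props} />
    : <ProgressViewIOS {...props} />;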
On the other hand, NativeScript has a unified way of implementing the different UI components on each platform. For every component, there’s an equivalent native component for both Android and iOS.
Fuse has a decent number of UI components that will cover the requirements of most projects. One component that’s not built into either React Native or NativeScript is the Video component, which can be used to play local videos and even videos from the Internet. The only component that is currently missing is the date-picker, which is especially useful during user registration, though you can always create your own using the components that are already available in Fuse.
In React Native, layout is done with Flexbox. In a nutshell, Flexbox enables you to specify how content should flow through the available space. For example, you can set flex to 1 and flexDirection to row in a container element in order to equally divide the available space among the children and to arrange them horizontally.
<View style={{flex: 1, flexDirection: 'row'}}>
  <View style={{flex: 1, backgroundColor: 'powderblue'}} />
  <View style={{flex: 1, backgroundColor: 'skyblue'}} />
  <View style={{flex: 1, backgroundColor: 'steelblue'}} />
</View>
In NativeScript, layout is achieved using layout containers, the most basic one being StackLayout, which puts all elements on top of each other, just like in the example below. In a horizontal orientation, they’re placed side by side.
<StackLayout orientation="vertical">
  <Image src="assets/images/dog.png" />
  <Image src="assets/images/cat.png" />
  <Image src="assets/images/gorilla.png" />
</StackLayout>
Similarly, Fuse achieves layout by using a combination of the different elements in UX Markup, the most common ones being StackPanel, Grid and DockPanel. StackPanel works similarly to StackLayout in NativeScript. Here’s an example:
<StackPanel Orientation="Vertical">
  <Panel Height="100" Background="Red" />
  <Panel Height="100" Background="White" />
  <Panel Height="100" Background="Blue" />
</StackPanel>
All of the platforms cover all of the basics with JavaScript APIs. Things like camera functionality, platform information, geolocation, push notifications, HTTP requests and local storage can be done on all platforms. However, looking at the documentation for each platform, you could say that React Native has the most JavaScript APIs that bridge the gap between native and “JavaScript native” features. There’s no official name yet for platforms such as React Native, NativeScript and Fuse, so let’s just stick with “JavaScript native” for now, because they all use JavaScript to write code and they all offer native-like performance.
If you need access to specific device features that don’t expose a JavaScript API yet, each platform also provides ways for developers to tap into native APIs for Android and iOS.
NativeScript gives you access to all of the native APIs of the underlying platform through JavaScript. This means you don’t have to touch any Swift, Objective-C or Java code in order to make use of the native APIs. The only requirement is that you know how the native APIs work.
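As a rough sketch of what this looks like on Android (the global android object is exposed by the NativeScript runtime; the logged values are just examples):

// Call native Android APIs directly from JavaScript.
var model = android.os.Build.MODEL;         // e.g. "Nexus 5X"
var sdk = android.os.Build.VERSION.SDK_INT; // e.g. 24
console.log('Running on ' + model + ', API level ' + sdk);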
React Native falls a bit short in accessing native APIs because you’ll have to know the native language in order to extend native functionality. This is done by creating a native module (an Objective-C class for iOS or a Java class for Android), exposing your desired public methods to JavaScript, then importing it into your project.
Fuse allows you to extend functionality through a feature that it refers to as “foreign code.” This allows you to call native code on each platform through the Uno language. The Uno language is the core technology of Fuse. It’s what makes Fuse work behind the scenes. Making use of native features that aren’t supported by the core Fuse library is done by creating an Uno class. Inside the Uno class, you can write the Objective-C or Java code that implements the functionality you want and have it exposed as JavaScript code, which you can then call from your project.
Both React Native and NativeScript support the use of all npm packages that don’t have dependencies on the browser model. This means you can use libraries such as lodash and moment simply by executing npm install {package-name} in your project directory and then importing them in any of your project files, just like in a normal JavaScript project.
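For instance, here’s a minimal sketch combining the two after running npm install lodash moment:

var _ = require('lodash');
var moment = require('moment');

// Build a list of the next seven weekday names and group them in pairs.
var days = _.range(7).map(function (i) {
  return moment().add(i, 'days').format('ddd');
});
console.log(_.chunk(days, 2)); // e.g. [['Mon', 'Tue'], ['Wed', 'Thu'], ...]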
Fuse, on the other hand, is currently lacking in this regard. Usage of existing JavaScript libraries is mostly not possible; only a short list of libraries is known to work. The good news is that the developers are constantly working on polyfills to improve compatibility with existing libraries.
Another important part of the UX is animation. In React Native, animation is implemented via its Animated API. With it, you can customize the animation a lot. For example, you can specify how long an animation takes or how fast it runs. But this comes with the downside of not being beginner-friendly. Even simple animation such as scaling a particular element requires a lot of code. The good thing is that libraries such as React Native Animatable make it easier to work with animation. Here’s sample code for implementing a fadeIn animation using the Animatable library:
<Animatable.View animation="fadeIn">Fade me in!</Animatable.View>
NativeScript animations can be implemented in two ways: via the CSS3 animations API or the JavaScript API. Here’s an example of scaling an element with a class of el:
.el {
  animation-name: scale;
  animation-duration: 1;
}

@keyframes scale {
  from { transform: scale(1, 1); }
  to { transform: scale(1.5, 1.5); }
}
And here’s the JavaScript equivalent:
var view = page.getViewById('box'); // must have an element with an ID of box in the markup
view.animate({
  scale: { x: 1.5, y: 1.5 },
  duration: 1000
});
Animation in Fuse is implemented via triggers and animators. Triggers are used to detect whether something is happening in the app, whereas animators are used to respond to those events. For example, to make something bigger when pressed, you would have this:
<Rectangle Width="50" Height="50" Fill="#ccc">
  <WhilePressed>
    <Scale Factor="2" />
  </WhilePressed>
</Rectangle>
In this case, <WhilePressed> is the trigger and <Scale> is the animator.
When it comes to community, React Native is the clear winner. Just the fact that it was created by Facebook is a big deal. Because the main technology used to create apps is React, React Native taps into that community as well. This means that a lot of projects can help you to develop apps. For example, you can reuse existing React components for your React Native project. And because many people use it, you can expect to quickly get help when you get stuck, because you can just search for an answer on Stack Overflow. React Native is also open-source, and the source code is available on GitHub. This makes development really fast because the maintainers can accept help from developers outside of the organization.
NativeScript, meanwhile, was created by Telerik. The project has a decent-sized community behind it. If you look at its GitHub page, currently over 10,000 people have starred the project. It has been forked 700 times, so one can assume that the project is getting a lot of contributions from the community. There are also a lot of NativeScript packages on npm and questions on Stack Overflow, so expect that you won’t have to implement custom functionality from scratch or be left alone looking for answers if you get stuck.
Fuse is the lesser known among the three. It doesn’t have a big company backing it up, and Fuse is basically the company itself. Even so, the project comes complete with documentation, a forum, a Slack channel, sample apps, sample code and video tutorials, which make it very beginner-friendly. The Fuse core is not yet open-source, but the developers will be making the code open-source soon.
With React Native and NativeScript, you need to have an actual mobile device or an emulator if you want to view changes while you’re developing the app. Both platforms also support live reloading, so every time you make a change to the source files, it automatically gets reflected in the app — although there’s a slight delay, especially if your machine isn’t that powerful.
Fuse, on the other hand, allows you to preview the app both locally and on any number of devices currently connected to your network. This means that both designers and developers can work at the same time and be able to preview changes in real time. This is helpful to the designer because they can immediately see what the app looks like with real data supplied by the developer’s code.
When it comes to debugging, both React Native and NativeScript tap into Chrome’s Developer Tools. If you’re coming from a web development background, the debugging workflow should make sense to you. That being said, not all features that you’re used to when inspecting and debugging web projects are available. For example, both platforms allow you to debug JavaScript code but don’t allow you to inspect the UI elements in the app. React Native has a built-in inspector that is the closest thing to the element inspector in Chrome’s Developer Tools. NativeScript currently doesn’t have this feature.
On the other hand, Fuse uses the Debugging Protocol in Google’s V8 engine to debug JavaScript code. This allows you to do things like add breakpoints to your code and inspect what each object contains at each part in the execution of the code. The Fuse team encourages the use of the Visual Studio Code text editor for this, but any text editor or IDE that supports V8’s Debugging Protocol should work. If you want to inspect and visually edit the UI elements, Fuse also includes an inspector — although it allows you to adjust only a handful of properties at the moment: things like widths, heights, margins, padding and colors.
Now you’re ready to create a simple weather app with Fuse. It will get the user’s location via the GeoLocation API and will use the OpenWeatherMap API to determine the weather in the user’s location and then display it on the screen. You can find the full source code of the app in the GitHub repository.
To start, go to the OpenWeatherMap website and sign up for an account. Once you’re done signing up, it should provide you with an API key, which you can use to make a request to its API later on.
Next, visit the Fuse downloads page, enter your email address, download the Fuse installer for your platform, and then install it. Once it’s installed, launch the Fuse dashboard and click on “New Project”. This will open another window that will allow you to select the path to your project and enter the project’s name.
Do that and then click on the “Create” button to create your project. If you’re using Sublime Text 3, you can click on the “Open in Sublime Text 3” button to open a new Sublime Text instance with the Fuse project already loaded. Once you’re in there, the first thing you’ll want to do is install the Fuse package. This includes code completion, “Goto definition,” previewing the app from Sublime and viewing the build.
Once the Fuse plugin is installed, open the MainView.ux file. This is the main file that we will be working with in this project. By default, it includes sample code for you to play with. Feel free to remove all of the contents of the file once you’re done inspecting it.
When you create an app with Fuse, you always start with the <App> tag. This tells Fuse that you want to create a new page.
<App></App>
Fuse allows you to reuse icon fonts that are commonly used for the web. Here, we’re using Weather Icons. Use the <Font> tag to specify the location of the web font file in your app directory via the File attribute. For this project, it’s in the fonts folder at the root directory of the project. We also need to give it a ux:Global attribute, which will serve as its ID when you want to use this icon font later on.
<Font File="fonts/weather-icons/font/weathericons-regular-webfont.ttf" ux:Global="wi" />
Next, we have the JavaScript code. We can include JavaScript code anywhere in UX Markup by using the <JavaScript> tag. Inside the tag will be the JavaScript code to be executed.
<JavaScript></JavaScript>
In the <JavaScript> tag, require two built-in Fuse libraries: Observable and GeoLocation. Observable allows you to implement data-binding in Fuse. This makes it possible to change the value of the variable via JavaScript code and have it automatically reflected in the UI of the app. Data-binding in Fuse is also two-way; so, if a change is made to a value via the UI, then the value stored in the variable will also be updated, and vice versa.
var Observable = require('FuseJS/Observable');
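To illustrate the two-way idea with a hypothetical city variable (bound with braces in the markup, as shown later in this article):

var city = Observable('Manila');
city.value = 'London'; // any UI element bound to {city} updates automatically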
GeoLocation allows you to get location information from the user’s device.
var Geolocation = require('FuseJS/GeoLocation');
Create an object containing the hex code for each of the weather icons that we want to use. You can find the hex codes on the GitHub page of the icon font.
var icons = {
  'clear': '\uF00d',
  'clouds': '\uF002',
  'drizzle': '\uF009',
  'rain': '\uF008',
  'thunderstorm': '\uF010',
  'snow': '\uF00a',
  'mist': '\uF0b6',
  'fog': '\uF003',
  'temp': '\uF055'
};
Create a function to convert Kelvin to Celsius. We need it because the OpenWeatherMap API returns temperatures in Kelvin.
function kelvinToCelsius(kelvin) {
  return kelvin - 273.15;
}
Determine whether it’s currently day or night based on the time on the user’s device. We’ll use orange as the background color for the app if it’s day, and purple if it’s nighttime.
var hour = (new Date()).getHours();
var color = '#7417C0';
if (hour >= 5 && hour <= 18) {
  color = '#f38844';
}
Add the OpenWeatherMap API key that you got earlier and create an observable variable that will contain the weather data.
var api_key = 'YOUR OPENWEATHERMAP API KEY';
var weather_data = Observable();
Get the location information:
var loc = Geolocation.location;
This will return an object containing the latitude, longitude and accuracy of the location. However, Fuse currently has a problem with getting location information on Android: if the location setting is disabled on the device, the app won’t ask you to enable it when you open it. So, as a workaround, you’ll need to enable location first before launching the app.
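For instance, you can quickly inspect what comes back (the values here are made up):

console.log(loc.latitude + ', ' + loc.longitude + ' (accuracy: ' + loc.accuracy + ')');
// e.g. 14.6, 120.98 (accuracy: 30)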
Make a request to the OpenWeatherMap API using the fetch() function. This function is available in Fuse’s global scope, so you can call it from anywhere without including any additional libraries. It works the same way as the fetch() function available in modern browsers: it returns a promise that you need to listen to using the then() function. When the supplied callback function is executed, the raw response is passed in as an argument. You can’t really use this yet, since it contains the whole response object. To extract the data that the API actually returned, you need to call the json() function on the response object. This will return another promise, so you need to use then() one more time to extract the actual data. The data is then assigned as the value of the observable that we created earlier.
var req_url = 'http://api.openweathermap.org/data/2.5/weather?lat=' + loc.latitude + '&lon=' + loc.longitude + '&apikey=' + api_key;
fetch(req_url)
  .then(function (response) {
    return response.json();
  })
  .then(function (responseObject) {
    weather_data.value = {
      name: responseObject.name,
      icon: icons[responseObject.weather[0].main.toLowerCase()],
      weather: responseObject.weather[0],
      temperature: kelvinToCelsius(responseObject.main.temp) + ' °C'
    };
  });
For your reference, here’s a sample response returned by the API:
{"coord":{"lon":120.98,"lat":14.6},"weather":[{"id":803,"main":"Clouds","description":"broken clouds","icon":"04d"}],"base":"stations","main":{"temp":304.15,"pressure":1009,"humidity":74,"temp_min":304.15,"temp_max":304.15},"visibility":10000,"wind":{"speed":7.2,"deg":260},"clouds":{"all":75},"dt":1473051600,"sys":{"type":1,"id":7706,"message":0.0115,"country":"PH","sunrise":1473025458,"sunset":1473069890},"id":1701668,"name":"Manila","cod":200}
Export the variables so that they become available in the UI.
module.exports = {
  weather_data: weather_data,
  icons: icons,
  color: color
};
Because this project is very small, I’ve decided to put everything in one file. But for real projects, the JavaScript code and the UX Markup should be separated. This is because the designers are the ones who normally work with UX Markup, and the developers are the ones who touch the JavaScript code. Separating the two allows the designer and the developer to work on the same page at the same time. You can separate the JavaScript code by creating a new JavaScript file in the project folder and then link it in your markup, like so:
<JavaScript File="js/weather.js" />
Finally, add the actual UI of the app. Here, we’re using <DockPanel> to wrap all of the elements. By default, <DockPanel> has a Dock property that is set to Fill, so it’s the perfect container for filling the entire screen with content. Note that we didn’t need to set that property below because it’s implicitly added. Below, we have only assigned a Color attribute, which allows us to set the background color using the color that we exported earlier.
<DockPanel Color="{color}">
</DockPanel>
Inside <DockPanel> is <StatusBarBackground>, which we’ll dock to the top of the screen. This allows us to show and customize the status bar on the user’s device. If you don’t use this component, <DockPanel> will consume the entirety of the screen, including the status bar. Simply setting this component will make the status bar visible. We don’t really want to customize it, so we’ll just leave the defaults.
<StatusBarBackground Dock="Top" />
Below <StatusBarBackground> is the actual content. Here, we’re wrapping everything in a <ScrollView> to enable the user to scroll vertically if the content goes over the available space. Inside is a <StackPanel> containing all of the weather data that we want to display. This includes the name of the location, the icon representing the current weather, the weather description and the temperature. You can display the variables that we exported earlier by wrapping them in braces. For objects, individual properties are accessed just like you would in JavaScript.
<ScrollView>
  <StackPanel Alignment="Center">
    <Text Value="{weather_data.name}" FontSize="30" Margin="0,20,0,0" Alignment="Center" TextColor="#fff" />
    <Text Value="{weather_data.icon}" Alignment="Center" Font="wi" FontSize="150" TextColor="#fff" />
    <Text Value="{weather_data.weather.description}" FontSize="30" Alignment="Center" TextColor="#fff" />
    <StackPanel Orientation="Horizontal" Alignment="Center">
      <Text Value="{icons.temp}" Font="wi" FontSize="20" TextColor="#fff" />
      <Text Value="{weather_data.temperature}" Margin="10,0,0,0" FontSize="20" TextColor="#fff" />
    </StackPanel>
  </StackPanel>
</ScrollView>
You might also notice that all attributes and their values are capitalized; this is the standard in Fuse, and lowercase names won’t work. Also, notice that Alignment="Center" and TextColor="#fff" are repeated a few times. This is because Fuse doesn’t have the concept of inheritance when it comes to styling properties, so setting TextColor or Alignment in a parent component won’t actually affect the nested components. This means we need to repeat them for each component. This can be mitigated by creating components and then simply reusing them without specifying the same style properties again, but this isn’t really flexible enough, especially if you need a different combination of styles for each component.
The last thing you’ll need to do is open the {your project name}.unoproj file at the root of your project folder. This is the Uno project file. By default, it contains the following:
{"RootNamespace":"","Packages":["Fuse","FuseJS"],"Includes":["*"]}
This file specifies what packages and files to include in the app’s build. By default, it includes the Fuse and FuseJS packages and all of the files in the project directory. If you don’t want to include all of the files, edit the items in the Includes array, and use a glob pattern to target specific files:
"Includes":["*.ux","js/*.js"]
You can also use Excludes to blacklist files:
"Excludes":["node_modules/"]
Going back to the Packages: Fuse and FuseJS allow you to use Fuse-specific libraries. This includes utility functions such as getting the environment in which Fuse is currently running:
var env = require('FuseJS/Environment');
if (env.mobile) {
  debug_log("There's geo here!");
}
To keep things lightweight, Fuse includes only the very basics. So, you’ll need to import things like geolocation as separate packages:
"Packages":["Fuse","FuseJS","Fuse.GeoLocation"],
Once Fuse.GeoLocation has been added, Fuse will add the necessary libraries and permissions to the app when you compile the project.
You can run the app via the Fuse dashboard by selecting the project and clicking on the “Preview” button.
This lets you pick whether to run on Android, iOS or locally. (Note that there is no iOS option in the screenshot because I’m running on Windows.) Select “Local” for now, and then click on “Start.” This should show you a blank screen, because geolocation won’t really work in a local preview. What you can do is close the preview, then update the req_url to use the following instead, which allows you to specify a place instead of the coordinates:
var req_url = 'http://api.openweathermap.org/data/2.5/weather?q=london,uk&apikey=' + api_key;
You’ll also need to comment out all of the code that uses geolocation:
// var Geolocation = require('FuseJS/GeoLocation');
// var loc = Geolocation.location;
// var req_url = 'http://api.openweathermap.org/data/2.5/weather?lat=' + loc.latitude + '&lon=' + loc.longitude + '&apikey=' + api_key;
Run the app again, and it should show you something similar to the screenshot at the beginning of the article.
If you want to run on a real device, please check “Preview and Export” in the documentation. It contains detailed information on how to deploy your app to both Android and iOS devices.
Now that you have tested the waters, it’s time to look at some of the pros and cons of using Fuse for your next mobile app project. As you have seen so far, Fuse is both developer- and designer-friendly, because of its real-time updates and multi-device preview feature, which enables developers and designers to work at the same time. Combine that with the native UX and access to device features, and you’ve got yourself a complete platform for building cross-platform apps. This section will drive home the point on why you should (or shouldn’t) use Fuse for your next mobile app project. First, let’s look at the advantages.
Fuse is developer-friendly because it uses JavaScript for the business logic. This makes it a very approachable platform for creating apps, especially for web developers and people who have some JavaScript experience. In addition, it plays nice with JavaScript transpilers such as Babel. This means that developers can use new ECMAScript 6 features to create Fuse apps.
At the same time, Fuse is designer-friendly because it allows you to import assets from tools such as Sketch, and it will automatically take care of slicing and exporting the pieces for you.
Aside from that, Fuse clearly separates the business logic and presentation code. The structure, styles and animations are all done in UX Markup. This means that business-logic code can be placed in a separate file and simply linked from the app page. The designer can then focus on designing the user experience. Being able to implement animations using UX Markup makes things simpler and easier for the designer.
Fuse makes it very easy for designers and developers to collaborate in real time. It allows for simultaneous previewing of the app on multiple devices. You only need USB the first time you connect the device. Once the device has been connected, all you need to do is connect the device to the same Wi-Fi network as your development machine, and all your changes will be automatically reflected on all devices where the app is open. The sweetest part is that changes get pushed to all the devices almost instantly. And it works not just on code changes: Any change you make on any linked asset (such as images) will trigger the app to reload as well.
Fuse also comes with a preview feature that allows you to test changes without a real device. It’s like an emulator but a lot faster. In “design mode,” you can edit the appearance of the app using the graphical user interface. Developers will also benefit from the logging feature, which allows them to easily debug the app if there are any errors.
If you need functionality not already provided by the Fuse libraries, Fuse also allows you to implement the functionality yourself using Uno. Uno is a language created by the Fuse team itself. It’s a sub-language of C# that compiles to C++. This is Fuse’s way of letting you access the native APIs of each platform (Android and iOS).
UX Markup is converted to the native UI equivalent at compile time. This makes the UI really snappy and is comparable to native performance. And because animations are also written declaratively using UX Markup, animations are done natively as well. Behind the scenes, Fuse uses OpenGL ES acceleration to make things fast.
No tool is perfect, and Fuse is no exception. Here are a few things to consider before picking Fuse.
We’ve learned about Fuse, a newcomer in the world of JavaScript native app development. From what I’ve seen so far, I can say that this project has a lot of potential. It really shines in multi-device support and animation. And the fact that it’s both designer- and developer-friendly makes it a great tool for developing cross-platform apps.
(da, vf, yk, al, il)
There’s a lot of hype about WebAssembly in JavaScript circles today. People talk about how blazingly fast it is, and how it’s going to revolutionize web development. But most conversations don’t go into the details of why it’s fast. In this article, I want to help you understand what exactly it is about WebAssembly that makes it fast.
But first, what is it? WebAssembly is a way of taking code written in programming languages other than JavaScript and running that code in the browser.
When you’re talking about WebAssembly, the apples to apples comparison is with JavaScript. Now, I don’t want to imply that it’s an either/or situation — that you’re either using WebAssembly or using JavaScript. In fact, we expect that developers will use WebAssembly and JavaScript hand-in-hand, in the same application. But it is useful to compare the two, so you can understand the potential impact that WebAssembly will have.
JavaScript was created in 1995. It wasn’t designed to be fast, and for the first decade, it wasn’t fast.
Then the browsers started getting more competitive.
In 2008, a period that people call the performance wars began. Multiple browsers added just-in-time compilers, also called JITs. As JavaScript was running, the JIT could see patterns and make the code run faster based on those patterns.
The introduction of these JITs led to an inflection point in the performance of code running in the browser. All of a sudden, JavaScript was running 10x faster.
With this improved performance, JavaScript started being used for things no one ever expected, like applications built with Node.js and Electron.
We may be at another one of those inflection points now with WebAssembly.
Before we can understand the differences in performance between JavaScript and WebAssembly, we need to understand the work that the JS engine does.
When you as a developer add JavaScript to the page, you have a goal and a problem.
You speak a human language, and the computer speaks a machine language. Even if you don’t think about JavaScript or other high-level programming languages as human languages, they really are. They’ve been designed for human cognition, not for machine cognition.
So the job of the JavaScript engine is to take your human language and turn it into something the machine understands.
I think of this like the movie Arrival, where you have humans and aliens who are trying to talk to each other.
In that movie, the humans and aliens can’t just translate from one language to the other, word-for-word. The two groups have different ways of thinking about the world, which is reflected in their language. And that’s true of humans and machines too.
So how does the translation happen?
In programming, there are generally two ways of translating to machine language. You can use an interpreter or a compiler.
With an interpreter, this translation happens pretty much line-by-line, on the fly.
A compiler on the other hand works ahead of time, writing down the translation.
There are pros and cons to each of these ways of handling the translation.
Interpreters are quick to get code up and running. You don’t have to go through that whole compilation step before you can start running your code. Because of this, an interpreter seems like a natural fit for something like JavaScript. It’s important for a web developer to be able to have that immediate feedback loop.
And that’s part of why browsers used JavaScript interpreters in the beginning.
But the con of using an interpreter comes when you’re running the same code more than once. For example, if you’re in a loop. Then you have to do the same translation over and over and over again.
The compiler has the opposite trade-offs. It takes a little bit more time to start up because it has to go through that compilation step at the beginning. But then code in loops runs faster, because it doesn’t need to repeat the translation for each pass through that loop.
As a way of getting rid of the interpreter’s inefficiency — where the interpreter has to keep retranslating the code every time it goes through the loop — browsers started mixing compilers in.
Different browsers do this in slightly different ways, but the basic idea is the same. They added a new part to the JavaScript engine, called a monitor (aka a profiler). That monitor watches the code as it runs, and makes a note of how many times it is run and what types are used.
If the same lines of code are run a few times, that segment of code is called warm. If it’s run a lot, then it’s called hot. Warm code is put through a baseline compiler, which speeds it up a bit. Hot code is put through an optimizing compiler, which speeds it up more.
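As a schematic example (not tied to any particular engine), a numeric loop like this is a typical candidate:

// Called many times with arrays of numbers, sum() becomes "hot":
// the optimizing compiler emits machine code specialized for numbers.
function sum(arr) {
  var total = 0;
  for (var i = 0; i < arr.length; i++) {
    total += arr[i]; // same types on every pass, so the translation can be reused
  }
  return total;
}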
To learn more, read the full article on just-in-time compiling.
This diagram gives a rough picture of what the start-up performance of an application might look like today, now that JIT compilers are common in browsers.
This diagram shows where the JS engine spends its time for a hypothetical app. This isn’t showing an average. The time that the JS engine spends doing any one of these tasks depends on the kind of work the JavaScript on the page is doing. But we can use this diagram to build a mental model.
Each bar shows the time spent doing a particular task.
One important thing to note: these tasks don’t happen in discrete chunks or in a particular sequence. Instead, they will be interleaved. A little bit of parsing will happen, then some execution, then some compiling, then some more parsing, then some more execution, etc.
This performance breakdown is a big improvement from the early days of JavaScript, which would have looked more like this:
In the beginning, when it was just an interpreter running the JavaScript, execution was pretty slow. When JITs were introduced, it drastically sped up execution time.
The tradeoff is the overhead of monitoring and compiling the code. If JavaScript developers kept writing JavaScript in the same way that they did then, the parse and compile times would be tiny. But the improved performance led developers to create larger JavaScript applications.
This means there’s still room for improvement.
Here’s an approximation of how WebAssembly would compare for a typical web application.
There are slight variations between browsers’ JS engines. I’m basing this on SpiderMonkey.
This isn’t shown in the diagram, but one thing that takes up time is simply fetching the file from the server.
It takes less time to download WebAssembly than it does the equivalent JavaScript, because it’s more compact. WebAssembly was designed to be compact, and it can be expressed in a binary form.
Even though gzipped JavaScript is pretty small, the equivalent code in WebAssembly is still likely to be smaller.
This means it takes less time to transfer it between the server and the client. This is especially true over slow networks.
Once it reaches the browser, JavaScript source gets parsed into an Abstract Syntax Tree.
Browsers often do this lazily, only parsing what they really need to at first and just creating stubs for functions which haven’t been called yet.
From there, the AST is converted to an intermediate representation (called bytecode) that is specific to that JS engine.
In contrast, WebAssembly doesn’t need to go through this transformation because it is already a bytecode. It just needs to be decoded and validated to make sure there aren’t any errors in it.
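You can even see this step from JavaScript: the standard WebAssembly.validate() function exposes the same decode-and-check pass (the file name here is hypothetical):

fetch('module.wasm')
  .then(function (response) { return response.arrayBuffer(); })
  .then(function (bytes) {
    console.log(WebAssembly.validate(bytes)); // true if the bytes decode and validate cleanly
  });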
As I explained before, JavaScript is compiled during the execution of the code. Because types in JavaScript are dynamic, multiple versions of the same code may need to be compiled for different types. This takes time.
In contrast, WebAssembly starts off much closer to machine code. For example, the types are part of the program. This is faster for a few reasons:
Sometimes the JIT has to throw out an optimized version of the code and retry it.
This happens when assumptions that the JIT makes based on running code turn out to be incorrect. For example, deoptimization happens when the variables coming into a loop are different than they were in previous iterations, or when a new function is inserted in the prototype chain.
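A schematic example of how running code can break those assumptions:

function add(a, b) {
  return a + b;
}

add(1, 2);     // the JIT compiles a version of add() specialized for numbers
add('a', 'b'); // a string sneaks in: the optimized version is thrown out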
In WebAssembly, things like types are explicit, so the JIT doesn’t need to make assumptions about types based on data it gathers during runtime. This means it doesn’t have to go through reoptimization cycles.
It is possible to write JavaScript that executes performantly. To do it, you need to know about the optimizations that the JIT makes.
However, most developers don’t know about JIT internals. Even for those developers who do know about JIT internals, it can be hard to hit the sweet spot. Many coding patterns that people use to make their code more readable (such as abstracting common tasks into functions that work across types) get in the way of the compiler when it’s trying to optimize the code.
Because of this, executing code in WebAssembly is generally faster. Many of the optimizations that JITs make to JavaScript just aren’t necessary with WebAssembly.
In addition, WebAssembly was designed as a compiler target. This means it was designed for compilers to generate, and not for human programmers to write.
Since human programmers don’t need to program it directly, WebAssembly can provide a set of instructions that are more ideal for machines. Depending on what kind of work your code is doing, these instructions run anywhere from 10% to 800% faster.
In JavaScript, the developer doesn’t have to worry about clearing out old variables from memory when they aren’t needed anymore. Instead, the JS engine does that automatically using something called a garbage collector.
This can be a problem if you want predictable performance, though. You don’t control when the garbage collector does its work, so it may come at an inconvenient time.
For now, WebAssembly does not support garbage collection at all. Memory is managed manually (as it is in languages like C and C++). While this can make programming more difficult for the developer, it does also make performance more consistent.
Taken together, these are all reasons why in many cases, WebAssembly will outperform JavaScript when doing the same task.
There are some cases where WebAssembly doesn’t perform as well as expected, and there are also some changes on the horizon that will make it faster. I have covered these future features in more depth in another article.
Now that you understand why developers are excited about WebAssembly, let’s look at how it works.
When I was talking about JITs above, I talked about how communicating with the machine is like communicating with an alien.
I want to take a look now at how that alien brain works — how the machine’s brain parses and understands the communication coming in to it.
There’s a part of this brain that’s dedicated to thinking, e.g. arithmetic and logic. There’s also a part of the brain near that which provides short-term memory, and another part that provides longer-term memory.
These different parts have names.
The sentences in machine code are called instructions.
What happens when one of these instructions comes into the brain? It gets split up into different parts that mean different things.
The way that this instruction is split up is specific to the wiring of this brain.
For example, this brain might always take bits 4–10 and send them to the ALU. The ALU will figure out, based on the location of ones and zeros, that it needs to add two things together.
This chunk is called the “opcode”, or operation code, because it tells the ALU what operation to perform.
Then this brain would take the next two chunks to determine which two numbers it should add. These would be addresses of the registers.
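Here’s a toy sketch of that splitting, using a made-up 16-bit encoding rather than any real instruction set:

var instruction = 0x1230;               // hypothetical instruction word
var opcode = (instruction >> 12) & 0xF; // which operation to perform, e.g. ADD
var reg1 = (instruction >> 8) & 0xF;    // address of the first register
var reg2 = (instruction >> 4) & 0xF;    // address of the second register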
Note the annotations I’ve added above the machine code here, which make it easier for us to understand what’s going on. This is what assembly is. It’s called symbolic machine code. It’s a way for humans to make sense of the machine code.
You can see here there is a pretty direct relationship between the assembly and the machine code for this machine. When you have a different architecture inside of a machine, it is likely to require its own dialect of assembly.
So we don’t just have one target for our translation. Instead, we target many different kinds of machine code. Just as we speak different languages as people, machines speak different languages.
You want to be able to translate any one of these high-level programming languages down to any one of these assembly languages. One way to do this would be to create a whole bunch of different translators that can go from each language to each assembly.
That’s going to be pretty inefficient. To solve this, most compilers put at least one layer in between. The compiler will take this high-level programming language and translate it into something that’s not quite as high level, but also isn’t working at the level of machine code. And that’s called an intermediate representation (IR).
This means the compiler can take any one of these higher-level languages and translate it to the one IR language. From there, another part of the compiler can take that IR and compile it down to something specific to the target architecture.
The compiler’s front-end translates the higher-level programming language to the IR. The compiler’s backend goes from IR to the target architecture’s assembly code.
You might think of WebAssembly as just another one of the target assembly languages. That is kind of true, except that each one of those languages (x86, ARM, etc) corresponds to a particular machine architecture.
When you’re delivering code to be executed on the user’s machine across the web, you don’t know what target architecture the code will be running on.
So WebAssembly is a little bit different than other kinds of assembly. It’s a machine language for a conceptual machine, not an actual, physical machine.
Because of this, WebAssembly instructions are sometimes called virtual instructions. They have a much more direct mapping to machine code than JavaScript source code, but they don’t directly correspond to the particular machine code of one specific hardware.
The browser downloads the WebAssembly. Then, it can make the short hop from WebAssembly to that target machine’s assembly code.
To add WebAssembly to your web page, you need to compile it into a .wasm file.
The compiler tool chain that currently has the most support for WebAssembly is called LLVM. There are a number of different front-ends and back-ends that can be plugged into LLVM.
Note: Most WebAssembly module developers will code in languages like C and Rust and then compile to WebAssembly, but there are other ways to create a WebAssembly module. For example, there is an experimental tool that helps you build a WebAssembly module using TypeScript, or you can code in the text representation of WebAssembly directly.
Let’s say that we wanted to go from C to WebAssembly. We could use the clang front-end to go from C to the LLVM intermediate representation. Once it’s in LLVM’s IR, LLVM understands it, so LLVM can perform some optimizations.
To go from LLVM’s IR to WebAssembly, we need a back-end. There is one that’s currently in progress in the LLVM project. That back-end is most of the way there and should be finalized soon. However, it can be tricky to get it working today.
There’s another tool called Emscripten, which is a bit easier to use. It also optionally provides helpful libraries, such as a filesystem backed by IndexedDB.
Regardless of the toolchain you’ve used, the end result is a file that ends in .wasm. Let’s look at how you can use it in your web page.
The .wasm file is the WebAssembly module, and it can be loaded in JavaScript. As of this moment, the loading process is a little bit complicated.
function fetchAndInstantiate(url, importObject) {
  return fetch(url).then(response =>
    response.arrayBuffer()
  ).then(bytes =>
    WebAssembly.instantiate(bytes, importObject)
  ).then(results =>
    results.instance
  );
}
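Using it might look like this (the module file name and its exported add() function are hypothetical):

fetchAndInstantiate('module.wasm', {}).then(function (instance) {
  console.log(instance.exports.add(1, 2)); // call a function the module exports
});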
You can see this in more depth in our docs.
We’re working on making this process easier. We expect to make improvements to the toolchain and integrate with existing module bundlers like webpack or loaders like SystemJS. We believe that loading WebAssembly modules can be as easy as loading JavaScript ones.
There is a major difference between WebAssembly modules and JS modules, though. Currently, functions in WebAssembly can only use WebAssembly types (integers or floating point numbers) as parameters or return values.
For any data types that are more complex, like strings, you have to use the WebAssembly module’s memory.
If you’ve mostly worked with JavaScript, having direct access to memory is unfamiliar. More performant languages like C, C++, and Rust, tend to have manual memory management. The WebAssembly module’s memory simulates the heap that you would find in those languages.
To do this, it uses something in JavaScript called an ArrayBuffer. The array buffer is an array of bytes. So the indexes of the array serve as memory addresses.
If you want to pass a string between the JavaScript and the WebAssembly, you convert the characters to their character code equivalent. Then you write that into the memory array. Since indexes are integers, an index can be passed in to the WebAssembly function. Thus, the index of the first character of the string can be used as a pointer.
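Here’s a minimal sketch of that round trip, assuming the module exports its linear memory under the name memory:

function writeString(instance, str, offset) {
  var bytes = new Uint8Array(instance.exports.memory.buffer);
  for (var i = 0; i < str.length; i++) {
    bytes[offset + i] = str.charCodeAt(i); // copy character codes into linear memory
  }
  bytes[offset + str.length] = 0; // C-style null terminator
  return offset;                  // this index doubles as the pointer the module receives
}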
It’s likely that anybody who’s developing a WebAssembly module to be used by web developers is going to create a wrapper around that module. That way, you as a consumer of the module don’t need to know about memory management.
I’ve explained more about working with WebAssembly modules in another article.
On February 28, the four major browsers announced their consensus that the MVP of WebAssembly is complete. Firefox turned WebAssembly support on by default about a week after that, and Chrome followed the next week. It is also available in preview versions of Edge and Safari.
This provides a stable initial version that browsers can start shipping.
This core doesn’t contain all of the features that the community group is planning. Even in the initial release, WebAssembly will be fast. But it should get even faster in the future, through a combination of fixes and new features. I detail some of these features in another article.
With WebAssembly, it is possible to run code on the web faster. There are a number of reasons why WebAssembly code runs faster than its JavaScript equivalent.
What’s currently in browsers is the MVP, which is already fast. It will get even faster over the next few years, as the browsers improve their engines and new features are added to the spec. No one can say for sure what kinds of applications these performance improvements could enable. But if the past is any indication, we can expect to be surprised.
(rb, ms, cm, il)
This article has been republished from Medium.
Fair Analytics is an open, transparent, distributed and fair Google analytics alternative. By Alessandro Arnodo.
Once someone starts using your app, they need to know where to go and how to get there at any point. Good navigation is a vehicle that takes users where they want to go. But establishing good navigation is a challenge on mobile due to the limitations of the small screen and the need to prioritize content over chrome.
Different navigation patterns have been devised to solve this challenge in different ways, but they all suffer from a variety of usability problems. In this article, we’ll examine five basic navigation patterns for mobile apps and describe the strengths and weaknesses of each of them. If you’d like to add some patterns and spice up your designs, you can download and test Adobe XD for free and get started right away.
Screen space is a precious commodity on mobile, and the hamburger menu (or side drawer) is one of the most popular mobile navigation patterns for helping you save it. The drawer panel allows you to hide the navigation beyond the left edge of the screen and reveal it only upon a user’s action.
The main downside of the hamburger menu is its low discoverability, and it’s not recommended as the main navigation menu. However, this pattern might be an appropriate solution for secondary navigation options. Secondary navigation options are destinations or features that are important for users only in certain circumstances. Being secondary, they can be relegated to less prominent visual placement, as long as users can quickly find a utility when they need it. By hiding these options behind the hamburger icon, designers avoid overwhelming users with too many options.
Uber uses a hamburger icon for this purpose. Because everything about the main screen of the Uber app is focused on requesting a car, there’s no need to display secondary options such as “Payment,” “History” or “Settings.” The normal user flow doesn’t include these actions, and so they can be hidden behind the hamburger icon.
The tab bar pattern is inherited from desktop design. It usually contains relatively few destinations, and those destinations are of similar importance and require direct access from anywhere in the app.
Tabbed navigation is a great solution for apps with relatively few top-level navigation options (up to five). The tab bar makes the main pieces of core functionality available with one tap, allowing rapid switching between features.
The “Priority+” pattern was coined by Michael Scharnagl to describe navigation that exposes what’s deemed to be the most important navigation elements and hides away less important items behind a “more” button.
This pattern might be a good solution for content-heavy apps and websites with a lot of different sections and pages (such as a news website or a large retailer’s store). The Guardian makes use of the priority+ pattern for its section navigation. Less important items are revealed when the user hits the “All” button.
Shaped like a circled icon floating above the UI, the floating action button changes color upon focus and lifts upon selection. It’s well known by all Android users and is a distinct element of material design. Floating above the interface of an app, it promotes user action, says Google.
The design of the floating action button hinges on the premise that users will perform a certain action most of the time. You can make this “hero” action even more heroic by reinforcing the sense that it is a core use case of your app. For example, a music app might have a floating action button that represents “play.”
The button is a natural cue to users for what to do next. In user research, Google found that users understand it as a wayfinding tool. When faced with an unfamiliar screen, as many users are regularly (like when running an app for the first time), they will use the floating action button to navigate. Thus, it is a way to prioritize the most important action you want users to take.
While with other patterns mentioned in this article, you’d be struggling to minimize the space that the navigation systems take up, the full-screen pattern takes the exact opposite approach. This navigation approach usually devotes the home page exclusively to navigation. Users incrementally tap or swipe to reveal additional menu options as they scroll up and down.
This pattern works well in task-based and direction-based websites and apps, especially when users tend to limit themselves to only one branch of the navigation hierarchy during a single session. Funnelling users from broad overview pages to detail pages helps them to home in on what they’re looking for and to focus on content within an individual section.
29 June 2007 was a game changer. From the moment Apple launched the first fully touchscreen smartphone on the market, mobile devices have been dominated by touchscreen interaction.
Gestures immediately became popular among designers, and many apps were designed around experimenting with gesture controls.
In today’s world, the success of a mobile app can largely depend on how well gestures are implemented in the user experience.
This pattern is good when users want to explore the details of particular content easily and intuitively. Users will spend more time with content than they will with navigation menus. So, one of the reasons to use in-context gestures instead of a standard menu is that it’s more engaging. For example, as users view page content, they can tap on a card to learn more.
3D Touch is a subtle touch mechanism that was first introduced in Apple’s iPhone 6s and 6s Plus. It allows for some new interactions, which Apple defines in two main categories:
Using 3D Touch, you can make the most frequent actions the most accessible. Think of 3D Touch like keyboard shortcuts on a desktop computer: They enable people to do repeated tasks more quickly. You can use 3D Touch to help users skip a few steps or to avoid unnecessary steps altogether.
However, just like keyboard shortcuts, essential features shouldn’t be exclusive to 3D Touch. Users must be able to operate your app normally without it.
People are shifting to larger-screen phones, and large smartphones no longer surprise anyone. But the bigger the display is, the less easily accessible most of the screen is, and the more necessary it is to adapt the design (and navigation in particular) to improve the user experience.
To solve this problem, designers are forced to look for new solutions to navigation. A couple of interesting innovative solutions can be found in the recently published article “Bottom Navigation Interface.” One solution can be found in a health app named Ada. This app’s interface layout is a mirror image of a basic interface with a hamburger menu: Everything that’s usually at the top is conveniently at the bottom, in the easy-to-access zone.
The second solution is a concept for a calling app that applies one-handed navigation principles. The method feels good for calling and messaging apps because users tend to use one hand for dialing and texting.
Helping users navigate should be a high priority for every app designer. Both new and returning users should be able to figure out how to move through your app with ease. The easier your product is for them to use, the more likely they’ll use it.
This article is part of the UX design series sponsored by Adobe. The newly introduced Adobe Experience Design CC (Beta) tool is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype and share — all in one app.
You can check out more inspiring projects created with Adobe XD on Behance, and also visit the Adobe XD blog to stay updated and informed. Adobe XD is being updated with new features frequently, and since it’s in public Beta, you can download and test it for free.
(ms, vf, al, yk, il)
In 2015, Google announced that mobile searches surpassed desktop searches in at least 10 countries. 56% of traffic on major websites comes from mobile. In light of this, Google’s decision to improve the mobile user experience by various means, such as AMP pages and a dedicated mobile index, comes across as a sound business move.
More than half of the 2 trillion searches Google processes each year come from mobile devices. Mobile devices have changed the way we approach search, ushering in new types of habits such as local search, voice search and more. These consumer habits have greatly affected the way search engine providers think about user search intent.
Google has rolled out a new type of mobile improvement: the search giant has declared war on interstitials that ruin the user experience. As a result, any website found guilty of showing intrusive popups, banners or overlays (called interstitials) will see its content demoted in Google’s mobile search results. We’ll be looking at what’s being penalized, what’s allowed and some workarounds to help you cope with this new penalty.
Two years ago, Google introduced a mobile-friendly label as it began prioritizing mobile websites. The aim was to push websites that offer a substandard user experience to improve their design, code and content. These guidelines are updated quite often as Google aims to keep the mobile user experience center stage:
With a dramatically levelled playing field, various elements are fighting to be displayed on one small screen. Coupons, offers, newsletter registration, ads, text, photos, social sharing buttons, and live chat are all vying for some valuable mobile screen space. Often, the visitor pays the price as their mobile experience becomes a confusing mess. Google’s latest penalty aims to change how we think about mobile advertising.
One recent change made this year is the intrusive interstitial update. This algorithm update has one mission: to penalize intrusive popup ads that affect the mobile user experience. The update rolled out earlier this year, and certain websites saw their mobile rankings in Google drop.
First and foremost, it’s a mobile popup penalty. Secondly, it’s an intrusive popup penalty. Google wants to make sure that ads and popups do not interfere with the user experience on a mobile screen. The intent behind devaluing pages with popups and overlays that take up most of the screen is to help users access the content they’ve requested in the first place.
Google will no longer consider pages with intrusive interstitials as being mobile-friendly. Bottom line: Obstructing content on mobile with ads or other popups is against Google’s guidelines. The main target of this update is overlays that gray out the content, making the content inaccessible for a few seconds or until the user miraculously manages to tap the minuscule “X” with their sausage fingers to dismiss the ads. Ads displayed over content are no longer acceptable on mobile. This includes popups that appear when you load the page from Google or when you scroll down. For many websites, playing according to Google’s rules means removing all barriers that could prevent the user from reading the content on the page at any time.
Interstitial spaces can be defined as a type of “web page” that appears before or after the page (in a website or an app) expected by the user. Essentially, it is something that blocks out the website or app’s main content. These spaces have served as a promotional tool for many online marketers. They can contain advertisements, surveys, newsletter signup forms, coupons and other various calls to action.
Intrusive interstitials tend to block most or all of a page, leading to a frustrating experience for desktop and mobile users alike. These ads hinder the experience because they are unexpected and block the content that users seek. On the desktop, they’re annoying at best, but on mobile, intrusive interstitials can ruin the entire experience. Ever had to deal with popups gone rogue on mobile? Google has been relatively straightforward in its definition of what constitutes an intrusive interstitial. A few types of interstitials are problematic, as defined by the new guidelines:
If your popups cover the main content shown on the screen, pop up without user interaction or require a dismissal before disappearing, chances are that they will trigger an algorithmic penalty.
Popups that prompt users to choose their country or their preferred language are also targets, because they fit the description above of what constitutes an intrusive interstitial. They bother readers and are not a necessity.
Not all interstitials are targeted by this penalty. Some exceptions exist, such as interstitials that are in place for legal or ethical reasons. The following popups do not require a change of size, design or position within a page:
The official guidelines provided by Google are pretty vague at times. The company’s definition of what constitutes an intrusive interstitial is not clear-cut. Many elements are not addressed, such as:
The jury is still out on whether these elements will be targeted by the algorithm update, even if they comply with the guidelines on interstitials. Navigating this gray area can be tricky. If you want to come out on top, we recommend designing these elements in a way that ensures they do not cover any of the page’s actual content. Making sure these notifications do not hinder the user experience can go a long way towards avoiding a penalty.
Make no mistake, as of 10 January 2017, interstitials are a new ranking signal for mobile SEO. It doesn’t necessarily mean that all types of popups should be banned.
This penalty targets mobile issues. Overlays, popups and other types of interstitials will live another day on desktop.
Here is John Mueller of Google with advice on interstitials triggered by exit intent:
At the moment, those wouldn’t count. What we’re looking for is, really, interstitials that show up on the interaction between the search click and going through the page and seeing the content. So, that’s kind of the place we’re looking for those interstitials. What you do afterwards, like if someone clicks on stuff within your website or closes the tab or something like that, then that’s kind of between you and the user.
Google is not looking to penalize all pages with interstitials, only the ones that searchers could land on from search results. It’s still OK to display an interstitial when a user navigates from one of your pages to another. If a user can find a page by searching for something on Google, then you’ll need to ensure that this page has no intrusive interstitials. If after landing on this page, the user decides to navigate to another page on the website that is not listed by Google, then interstitials would be allowed on that page. Sounds a bit convoluted, but this has been confirmed by Google.
Google’s main objective is to make the web more accessible, intuitive and usable, especially for mobile users. If your popups serve a real purpose that add value for the user, chances are you should be OK.
This mobile interstitial penalty is rolling out on a recrawling basis. Basically, the next time Google’s bot crawls your website after 10 January 2017, it will evaluate the interstitials present on mobile. The penalty could hurt, but so far the reported impact has been minimal.
Auditing your popups is a great way to determine whether your mobile website is compliant. Start by reviewing the mobile popups, because desktop popups are not going to be penalized. Prefer small messages such as banners, inlines and slide-ins for mobile pages.
Google has stated that it will not release an official tool to test websites. However, there is a tool out there, Interstitial Penalty Check, that uses image recognition to identify potential interstitials on your pages. It measures how much space popups take on the screen to flag the ones at risk.
Any banners that take up a “reasonable amount of space” will not be targeted. That’s great news! However, no exact guidelines have been communicated by Google. The recommended size is 15% or less, even in landscape mode, because it still allows readers to access several lines of text.
Popups aren’t being banned, but marketing efforts will have to adapt to these new restrictions. Let’s rephrase that: Marketing efforts will have to respect the user’s screen space. Try to redesign any interstitials that were previously considered essential to marketing efforts. You can replace them with a link to a separate page or an inline ad, for example.
For instance, language-selection interstitials can be phased out in favor of a banner on the website. But first evaluate whether such an element is worth that much screen space: users who come from Google already arrive with a linguistic context.
If your website suffers a significant drop-off in conversion following the removal of some interstitials, then you’ll have to work around the issue. Luckily for you, only pages listed in Google’s search results are being targeted. If a user navigates to another page on your website, then a popup placed there would not be a problem for Google. The entry page is the issue for the search engine. This isn’t particularly recommended, but if interstitials remain the best way to convert users for your website, then consider keeping them on the website (and away from Google, by placing a noindex tag in your code). Keep in mind that this would mean cutting off organic traffic from Google to that page.
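For reference, the standard robots meta tag that keeps a page out of Google’s index goes in the page’s head and looks like this:

<meta name="robots" content="noindex">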
Some websites have been hit by Google’s mobile interstitial penalty even though they were adhering to the guidelines. Other websites that should have been affected have not encountered any negative impact yet. So, what do you do if you’ve been hit unfairly? Head on over to the Google Webmaster Help forum to give feedback to the Google team.
If your traffic has suffered because of this penalty, we recommend auditing your popups and removing the ones deemed intrusive. Once that’s done, you’ll have to wait until Google recrawls your pages. You can also submit your pages via the Google Search Console’s fetch and submit tool to kickstart the recrawling effort a bit. This should remove the demotion in mobile ranking inflicted on your website.
Google’s John Mueller has confirmed that the demotion is removed once the pages are recrawled and reindexed. The penalty is applied and lifted in real time, meaning that recovery depends on how frequently Google crawls your pages. For most websites, though, “real time” means waiting until each individual page is crawled by Google and the demotion is removed.
Yes, popups are frustrating, but companies use them because they are effective. SumoMe shares recent findings that establish the interstitial’s track record: the average popup converts at 3.09%, while high-performing popups convert at an average of 9.28%.
Removing revenue-generating interstitials could hurt websites at first, but losing organic search traffic could hurt their revenues even more. Designers, developers and marketers must work together to find new, non-intrusive ways to generate revenue. A sound strategy would be to leverage content marketing to educate audiences and guide them through the buying process.
This means moving away from interruption marketing towards permission marketing. The former is the annoying, traditional way of doing marketing via advertising, promotion and sales, whereas the latter is about promoting and selling a product to a potential client who has explicitly agreed to receive marketing information. Google is aware that many websites are supported by ads.
Its AdSense is based in part on this revenue model. There are pros and cons to this new update. It could mean more time on site, more page views and a lower bounce rate, but it could still harm revenue in the short to medium term. Google representatives have stated publicly that they don’t want confusing ads — so, if there’s any question about what is an ad and what is the main content, you could have a problem. Therefore, label your ads as sponsored content, make them unobtrusive, and you should be fine.
In the meantime, audit your popups, cookie notifications, overlays and big banners to make sure they comply with Google’s new guidelines. Make sure they don’t take up more than 15% of the screen to avoid a penalty. If you do get penalized, don’t panic; fix the issues, and wait for Google’s bot to visit your website again. The bot should notice your efforts and remove the penalty.
(da, vf, yk, al, il)
Since WebGL is getting more and more popular, we’re starting to see a bunch of websites using it for amazing creative designs. The brand Fornasetti recently published a website that uses the power of WebGL for a very nice effect: an animation that lets us seemingly travel through a tunnel with a changing pattern. The most interesting part of this experience is that the motion through the tunnel is controlled by the movement of the mouse. Today we’ll share some similar Three.js powered tunnel traveling experiments with you.
For this demo we are using Three.js, a useful library for creating WebGL experiences without getting your hands dirty. Before generating the tube, we first need to set up a renderer, a camera and an empty scene.
If you don’t feel comfortable using Three.js, I suggest you first read a tutorial to get started with it. Rachel Smith wrote a good one in three parts over here.
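As a reference, a minimal setup might look like the following sketch (the variable names and camera values here are our assumptions, not taken from the demo code):

// Create the renderer and add its canvas to the page
var renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// A perspective camera placed near the start of the future tube
var camera = new THREE.PerspectiveCamera(15, window.innerWidth / window.innerHeight, 0.01, 100);
camera.position.z = 0.35;

// An empty scene that the tube will be added to later
var scene = new THREE.Scene();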
Once we have our scene ready we will proceed with these steps:
Thanks to Three.js we have a useful function that allows us to create a curve based on a set of points. We first need to calculate the positions of those points. Once it’s done we can create our curve like this:
// Create an empty array to store the points
var points = [];

// Define points along the Z axis
for (var i = 0; i < 5; i += 1) {
  points.push(new THREE.Vector3(0, 0, 2.5 * (i / 4)));
}

// Create a curve based on the points
var curve = new THREE.CatmullRomCurve3(points);
This curve will be updated later to modify the shape of the tube in real time. As you can see, all the points have the same X and Y positions. At the moment, the curve is a single straight line.
Now that we have a curve (that is not curved at all) we can create a tube using Three.js. To do so we need three parts: a geometry (the shape of the tube), a material (how it looks) and finally a mesh (the combination of the geometry and the material):
// Create the geometry of the tube based on the curve
// The other values are, respectively:
// 70: the number of segments along the tube
// 0.02: its radius (yeah, it's a tiny tube)
// 50: the number of segments that make up the cross-section
// false: whether the tube is closed or not
var tubeGeometry = new THREE.TubeGeometry(curve, 70, 0.02, 50, false);

// Define a material for the tube with a JPG as texture instead of a plain color
var tubeMaterial = new THREE.MeshStandardMaterial({
  side: THREE.BackSide, // Since the camera will be inside the tube we need to reverse the faces
  map: rockPattern // rockPattern is a texture loaded previously
});

// Repeat the pattern to prevent the texture from being stretched
tubeMaterial.map.wrapS = THREE.RepeatWrapping;
tubeMaterial.map.wrapT = THREE.RepeatWrapping;
tubeMaterial.map.repeat.set(30, 6);

// Create a mesh based on tubeGeometry and tubeMaterial
var tube = new THREE.Mesh(tubeGeometry, tubeMaterial);

// Add the tube to the scene
scene.add(tube);
This part is my favorite, because the animation doesn’t work the way I initially believed. I first thought that the tube was actually moving towards the camera. Then I assumed that the camera was moving inside the tube. But both ideas were wrong!
The actual solution is really clever: nothing is “physically” moving forward in the scene beside the texture that is translated along the tube.
To do so, we need to define an offset for the texture on every frame to create the illusion of motion.
function updateMaterialOffset() {
  // Update the offset of the material with the defined speed
  tubeMaterial.map.offset.x += speed;
}
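For completeness, here is a sketch of the render loop that would call this function on every frame (the speed value and the loop wiring are our assumptions):

var speed = 0.01; // How far the texture slides on each frame

function render() {
  updateMaterialOffset(); // Slide the texture along the tube
  renderer.render(scene, camera);
  window.requestAnimationFrame(render);
}
window.requestAnimationFrame(render);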
The demo wouldn’t be that good if we didn’t implement some user interaction. When you move your mouse in the browser you can control the shape of the tube. The trick here is to update the points of the curve we created in the first step. Once the curve has been changed, we can update the tube accordingly with some transition.
// Update the third point of the curve on the X and Y axes
curve.points[2].x = -mouse.position.x * 0.1;
curve.points[2].y = mouse.position.y * 0.1;

// Update the X position of the last point
curve.points[4].x = -mouse.position.x * 0.1;
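One possible way to apply the modified curve is to simply rebuild the tube’s geometry from it, as in this sketch (the actual demos may smooth the transition differently):

function updateTubeGeometry() {
  // Dispose of the old geometry and build a new one from the updated curve
  tube.geometry.dispose();
  tube.geometry = new THREE.TubeGeometry(curve, 70, 0.02, 50, false);
}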
Well, sadly no, the code is a bit more complex than what I’ve explained in this article. But if you understood the steps above, you should have a global idea of how the demo works. If you want to understand even more, check the source code of the first demo; I’ve included a lot of comments. If you still have questions, do not hesitate to poke me on Twitter 🙂
Once you have a basic tube, you can improve it with many different options. Check out all the demos to give you some ideas!
Thank you for reading this article!
If you are excited by the demos in this article, please share your joy in the comments 🙂
Why write requirements? Well, let’s imagine you want to produce a mobile app, but you don’t have the programming skills. So, you find a developer who can build the app for you, and you describe the idea to him. Surprisingly, when he showcases the app for the first time, you see that it is not exactly what you want. Why? Because you didn’t provide enough detail when describing the idea.
To prevent this from happening, you need to formalize the idea, shape it into something less vague. The best way to do that is to write a requirements document and share it with the developer. A requirements document describes how you see the result of the development process, thus making sure that you and the developer are on the same page.
In this article, we will outline the most common approaches to writing requirements documents. You will learn the basic steps of writing mobile application requirements and what a good requirements document looks like.
A carefully crafted requirements document eliminates ambiguity, thus ensuring that the developer does exactly what needs to be done. In addition, the document gives a clear picture of the scope of the work, enabling the developer to better assess the time and effort required. But how do we create a good document? Below are some tips that our mobile team at Polecat follows when crafting requirements.
We believe that a proper description of the idea should fit in one sentence. The sentence may include a core feature of the application, so that the reader understands instantly what the app is about. For a calorie-tracking mobile application, it could be, “An app to track calorie consumption to help those who care about their weight.”
Hint: Gua Tabidze shares a few models that others use to describe an idea.
Study basic navigation patterns, and describe your application in the same sequence that users would experience while exploring it. Once the idea part is done, describe the first steps of the application, such as the onboarding screens and user registration.
Then, move on to what goes next, such as the application’s home screen. This approach will give the reader a sense of what the user’s journey would look like.
At the end, don’t forget about basic features and screens such as the privacy policy and the “forgot password” feature.
Review existing applications in Apple’s App Store and Google Play, and refer to them when describing your app. If you like how the “forgot password” feature works in applications A and B, put it in the requirements document.
Focus on the features of the application, and skip details such as the color of a button. Most app users do not care about such details. What they do care about is whether your application helps to solve their problem. So, when writing requirements, concentrate on things that the user should be able to do in the app.
Convey which features are more important than others, so that the developer knows what to focus on first. We usually follow the MoSCoW method, marking items with “Must,” “Should,” “Could” and “Won’t” levels of priority.
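For instance, a prioritized feature list for a calorie-tracking app might look like this (the items are our illustration):

- Must: log a meal with its calorie count.
- Should: show daily calorie totals on a calendar.
- Could: export the meal history as a PDF.
- Won’t (this release): sync with fitness trackers.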
Create wireframes of the screens of the application to accompany your textual description of them. If you have more than four wireframe screens, then drawing a screen map makes sense. We’ll show a screen map later in this article.
Now that you know how to write the requirements, you’ll need to choose an appropriate format for the document. There are a few basic formats for writing the requirements for a mobile app, such as a functional specification document (FSD), user stories and wireframes.
An FSD is probably the default format in the software development industry. It consists of a standard list of items that cover what the product should do and how it should do it.
Let’s take a simple calculator application and describe its features as an FSD:
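A fragment of such a spec might read like this (our illustration, not an original document):

- The application shall allow the user to enter two numbers and select one of four operations: addition, subtraction, multiplication or division.
- The result shall be displayed immediately after an operation is selected.
- If the user attempts to divide by zero, the application shall display the message “Cannot divide by zero.”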
As you can see, this format requires quite a detailed description of the product because the description will be used by both the business and the developers. It ensures that all participants are on the same page.
The person who composes the FSD should have strong experience in software development and should know the specifics of the mobile or other platform for which you are building. Also, because of the high level of detail required, creating and polishing such a document usually takes a decent amount of time.
A user story is less formal than an FSD yet still very powerful. It lists the things that the user can do in the application and is described from the user’s perspective. The document could also briefly explain why the user would want to do it, if that’s not obvious.
Let’s take our calculator example and add a few other features, describing them as a user story:
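For illustration (ours, not the author’s original list), such stories might read:

- As a user, I want to add, subtract, multiply and divide two numbers, so that I can do quick everyday math.
- As a user, I want to see a history of my recent calculations, so that I can verify my work.
- As a user, I want to export my calculation history as a PDF, so that I can share it with others.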
Because of the explanation, such a format provides not only a technical overview of the requirements, but also a good business case for them. Thus, if a feature is identified that is not critical to the business, you could decide either to completely remove it from the scope or to postpone it to a future release.
Using this format, you can easily split one story into multiple sub-stories to provide more detail. For example, we could split the PDF-exporting story into the following sub-stories:
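For example (again, our illustration):

- As a user, I want to select which calculations are included in the PDF, so that I share only what’s relevant.
- As a user, I want to email the exported PDF directly from the app, so that I don’t have to switch apps.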
Because of the simplicity and non-technical nature of user stories, in most cases, a manager cannot simply ask a developer to implement a particular user story. Turning a story into a task that can be added to a task tracker requires further discussion and detailing between the manager and technical leader.
User stories have become one of the most convenient and popular formats because of their simplicity and flexibility.
Another way to outline an application’s requirements is to visualize them in sketches or wireframes. With iOS development, around 70% of development time is spent on interface implementation, so having all of the screens in front of you would give you a good sense of what needs to be done and the scope of the work.
Creating a relevant set of wireframes for a mobile application requires you to know the basics of the user experience: how screens can be linked with each other, which states each screen can have, and how the application will behave when it is opened from a push notification.
Don’t be afraid to mix formats. By doing this properly, you take advantage of the strengths of each format. In our experience, mixing user stories and wireframes makes the most sense. While the user stories describe the features of the application and provide a business case for them, the wireframes show how these features would appear on the screens of the app. In addition, putting together user stories and wireframes would take you less time than writing an FSD, with all of its accompanying detail and descriptions of the interactions.
Start by sketching out wireframes for the application. Once the wireframes are done, add two or more user stories for each screen, describing what the user can do on that screen. We’ve found this approach to be the most appropriate for mobile application development, so we use it a lot.
I’ll take our What I Eat application as an example. I’ll compose the requirements document as if we were developing the application from scratch.
First, let’s formalize the idea using Steve Blank’s XYZ pattern: “We help X do Y by doing Z.” The premise of the application is to enable users to take control of what they eat during the day and of their calorie intake. According to the XYZ method: “What I Eat helps those who care about their weight to track calorie consumption by providing functionality for a simple meal log.”
As mentioned, mixing user stories and wireframes works best for us, so why not use them here?
The next step is to describe the What I Eat app as user stories, screen by screen. We’ll begin with the application’s start and home screen:
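For instance (our illustration, based on the app’s premise of a simple meal log):

- As a user, I want to see what I’ve eaten today and my total calorie count, so that I can track my consumption at a glance.
- As a user, I want to add a new meal with its calorie count, so that my daily log stays up to date.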
To avoid any ambiguity, we’ll create a wireframe for this screen.
As you can see, we weren’t able to put the “Add new meal” functionality on the home screen. Instead, we added a button to navigate to another screen that presents this feature. Now, we need to put together user stories for this new screen:
The home screen has a button that opens the calendar. Because there are many other calendar apps, checking their designs first makes sense. We like the iPhone’s default calendar app, so we will use it as a reference.
We will also put a piece of the iPhone calendar’s user interface in the wireframe.
Finally, we need to add some settings to the app.
Phew! Almost done. The final step is to put the wireframes and user stories together in one document, with each wireframe and its respective story on its own page.
In addition, we can draw a map to visualize how the screens are connected to each other. We’ll use RealtimeBoard for that.
While drawing the screen map, we realize that there is no button to go to the settings screen, so we’ll add one to the home screen.
We have created two documents: a PDF with user stories and wireframes, and a screen map that complements the PDF. Together, they describe in detail what features the application should have. We can go ahead and send them to our developer. This time, the application he delivers will match your vision.
Generally speaking, writing a requirements document is mostly about conveying your vision to the rest of the team. Don’t limit yourself to the methodology described above. Feel free to experiment and find the solution that works best for you.
Delve deeper into the subject with the following resources:
We’d love to hear about your approach to creating requirements documents. Please share your thoughts in the comments.
(da, vf, yk, al, il)
Hendrik Mans explores JavaScript, the language that he has been mocking for the last decade or two, and shares his experience.
In a world between building accessible interfaces, optimizing experiences for users, and big businesses profiting from this, we need to find a way to use our knowledge meaningfully. When we read that even the engineers who built it don’t know how their autonomous car algorithm works, or that the biggest library of books mankind has ever seen is in the hands of a single company and not accessible to anyone, we might lose faith in what we do as developers.
But then, on the other hand, we stumble across stories about accessible smart cities or about companies that embrace full honesty in their culture. There are amazing examples of how we can pursue meaningful work and build a better future. Let’s not let negative news get us down, but embrace it as a reason to change for the better.
And with that, I’ll close for this week. If you like what I write each week, please support me with a donation or share this resource with other people. You can learn more about the costs of the project here. It’s available via email, RSS and online.
— Anselm
Can you believe it is May already? Time flies! Here in Belgium, spring has arrived and has brought along its bright colors, the delicate odours of blooming flowers, as well as the cheerful chirping of birds. I try to soak it all in as this is my favorite time of the year.
On a related note, if we only looked closer, we would find gems of inspiration in the things around us. For me, nature is my personal and biggest gem. What’s yours?
Great color palette and love all the gradient mesh work in this illustration.
Still shot of short film. Beautiful perspective.
Perfect morning light at the Dark Hedges, Northern Ireland.
One part of a series of illustrations for the inaugural set of artwork launched for Facebook Event themes. Special color combination.
Beautiful colours that are being used here. The eyes draw you in.
Lovely combination of type and illustrations. Beautiful extrude 3D effect too. It adds this perfect contrast.
Adoring the simplicity of all this. Just the basics so that your mind fills in the rest.
The special style of Sami Viljanto is an eye-catcher.
Such a great vibe due to the way the fills are done with this particular pencil stroke effect.
Magnificent reproduction of the 1940s advertisement vibe. The entire piece was hand-painted using digital paint brushes on a Cintiq tablet, and then a texture, created from an old, scratched-up sheet of metal and a grain filter, was applied.
Packaging illustration for seven flavour designs for Smashmallow: all-natural, gluten-free marshmallow snacks made with organic sugar. The others are lovely too. Make sure to check them out!
Very nice capture! A scenery that I always hope to encounter too when I’m out on my bicycle.
What a great book cover! Brilliant usage of lines that translate the title so well.
A warm atmosphere in this illustration. Great patterns on the dresses and brilliant colors that suit the mood. The threat is hidden in the shape of the shadow. Very clever.
Inspired by the film of Stephen King’s best seller with the same name. Nice detail that the hat is found on the heart. Such perfect soft blue tones to create the shapes of the ice.
Love this subtly absurd and boldly colored illustration.
The little details that make this shine are the eyes that you see in the binoculars.
Beautiful palette! Great textures too.
All the plants are so nicely executed. Also loving the female running and how the waves are drawn.
So much greatness in this.
Nice editorial image. The line drawings in the back are clever. Love this combo of styles.
Great job on the light effect by using (I assume?) the moon as the light source and the use of white and black for the shadows. This high contrast effect is just perfect.
Maria’s intricately patterned illustrations, made with a blend of colour pencils, markers, acrylic and watercolour on paper, are what made me pick this one. The remarkable part is that she goes straight to paper without sketches. “I usually draw pretty quickly, I never prepare sketches, I go straight to the paper. I like it to be intuitive, to feel that the hand is almost possessed and drawing by itself, not letting the head think about every movement.”
Great for studying the texture and patterns usage. Beautiful retro style too!
Great idea to fill the figure with a square grid. Fits perfectly with all the other ‘sharp’ elements.
So much energy in this illustration. Great colors as well.
Wouldn’t mind having a go at this pinball machine created by Ellen Schofield. Such beautiful details!
Another lovely poster from Scott Hansen. Many would proudly hang this one on their wall I think.
One morning you wake up and think, what if I draw a chubby panda bear riding a bike. I’m sure it would look funny. It does 🙂 This is so well executed.
Amazing sunset at Byron Bay. Such rich colors!
Away from all traffic congestion, just focusing on that magical moment when the sun comes up. Great times, especially with friends!
Illustration for The Wall Street Journal in a piece about the trouble with too much tech. Great use of mid-century furniture to create a wonderful composition.
Nice colors and great font pairings. Beautiful inspiring stuff!
Nice set of colors to celebrate our beautiful planet.
Great looking characters.
Love the clean lines, forms and the smooth perspective.
Inspiring usage of vivid colors without being too overwhelming.
Very abstract and modern. Love the subtle gradients that create this subtle depth effect.
The colorful style of the Australian agency WBYK. Very eye-catching!
I’ve always admired those vintage illustrated ads. This one is from 1957, for the Northwest Airlines fleet.
It has been a while since I last visited ‘The Windows of New York’. Here’s the one from 164 WEST 136TH ST. Harlem.
A dramatic sky over Derwentwater in the Lake District National Park in North West England. Thankfully Chris Upton was on hand with his camera.
Lovely combination of 2D and 3D. Beautiful composition.
Love the merge/melt effect.
This photo of Phantom Manor, taken in Disneyland Paris, could be directly used for a horror movie poster.
Wonderful set of colors being used in this one. Bonus points for those that think “hmmm, this title rings a bell”.
Editorial illustrations about self-employment. Two characters from white and blue collar jobs look at each other within a mirror. I can keep looking at this one for a while. Love all those little details, objects etc.
The colours and details are spectacular!
Illustration made just to practice working with a restricted color palette. Nicely done I would say.
Remember that I showed a sneak peek from this? Well here’s the finished version which is great as usual. Look at those details and how the (few) colors are applied. Just perfect!
A meter-based to-do list app where a progress meter fills as you complete tasks. Built by Cassidy Williams with Electron, React, Redux, and LESS.
Voice-based interfaces are becoming commonplace. Voice assistants such as Siri and Cortana have been around for a few years, but this past holiday season, voice-driven devices from Amazon and Google made their way into millions of homes.
Recent analysis from VoiceLabs estimates that 24.5 million voice-driven devices will be shipped this year, almost four times as many as last year. As experience designers, we now have the opportunity to design voice experiences and interfaces!
A new interface does not mean that we have to disregard everything we have successfully applied to previous interfaces; we will need to adapt our process for the nuances of voice-driven interfaces, including conversational interactions and the lack of a screen. We will look at how a typical genie in a bottle works, discuss the steps involved in designing voice experiences, and illustrate these steps by designing a voice app for Alexa (or Skill, as Amazon calls it).
Just as mobile apps run on an OS and a device, three layers have to work together to enable voice interactions:
Each layer uses the one below and supports the one above it. The voice interface lies in the upper two layers, both of which reside in the cloud, not on the device itself.
Let’s peek under the hood to see how these layers work together, using the Alexa Jeopardy! Skill as an example.
Voice-driven devices such as the Amazon Echo and Google Home are constantly listening, waiting for a wake word (“Alexa…” or “OK, Google…”) to jump into action. Once activated, the device sends the audio that follows to the AI platform in the cloud (“… play Jeopardy!”). The platform uses a combination of automatic speech recognition (ASR) and natural language understanding (NLU) to decipher the user’s intent (to start a trivia game) and send it to the supporting app (the Jeopardy! J6 Skill on Alexa). The app processes the request and responds through text (and a visual if applicable). The platform converts the text to speech and plays it through the device (“Welcome to Jeopardy J6. Here are today’s clues…”). All this in a matter of seconds.
Last year, Mark Zuckerberg took on a personal challenge to build a simple AI to run his home. He did, called it Jarvis and gave it the voice of Morgan Freeman.
The rest of us, who don’t have the ability or resources to do the same, can get away with building voice apps that run on complex AI platforms that have already been built. This frees us to worry only about the design and development of the voice app, with a simplified development process to boot. Amazon and Google have provided open access to templates, code and detailed step-by-step instructions for building different types of voice apps, to the point that even non-developers could build an app in around an hour!
Their investment in simplifying app development is paying off, with thousands of new voice apps being launched every month. The growth in voice apps brings back memories of the ’90s web gold rush, as well as the explosion of mobile apps that followed the launch of app stores.
In a crowded voice marketplace, good design is what will differentiate your voice app from the hundreds of other similar apps.
Designing a good voice user experience is a five-step process that should take place before starting development. Though jumping straight into development might be tempting, the time spent getting the design right is time well spent.
We will discuss and apply each step to design a voice app, which could easily be developed using one of many Skill templates for Alexa.
The design journey begins with the question, “How will this voice app provide value to my users?” This question applies whether you are developing a standalone voice app (like our example) or whether your voice app is just one of many touchpoints for your customers. Take into consideration why and where people use voice apps. People use voice interfaces because of the benefits of hands-free interaction, the speed of interaction and the ease of use, primarily using them at home or in the car, as shown in Mary Meeker’s 2016 Internet Trends Report.
The key is to find consistent user needs that are easier or more convenient through a voice app rather than a phone or a computer. Some examples include banks providing account information or a moviegoer finding new movies playing nearby.
If you have competitors who already have voice apps, take into consideration what they are doing and the reviews and feedback their apps have received in the app marketplace (such as Amazon’s Alexa Skill Store). The aim is not to blindly imitate, but to be aware of the capabilities bar that has been set, as well as user expectations.
(At the time of writing, there were over 1,500 “knowledge and trivia” Alexa Skills, making it one of the most crowded Skill categories on Amazon. However, there wasn’t a single trivia Skill catering to the area of user experience. To illustrate the voice design process, we will create a UX design Skill, for our readers to test their knowledge or maybe even to learn something new.)
During this step, we will define the personality of our app and the capabilities it will have.
When designing voice interfaces, we don’t have access to many of the visual elements we use in web and mobile interfaces to show a personality. The personality has to come through the voice and tone of verbal interactions. And unlike Zuckerberg, who hears Freeman’s soothing voice, we are constrained to hearing the default voice of the device. That makes tone and wording crucial in conveying the personality we want to convey.
The good news is that most of the groundwork in this area should have already been completed and documented in a corporate brand guide or website style guide (hint: look for the “tone of voice” section). Leverage those guidelines for your voice app, as well to maintain a consistent personality across channels and touchpoints.
When I think of personality and tone, the Virgin Group immediately comes to mind. They clearly define who they are and how they convey that to users. For Virgin America, the ideal tone is “hip, easygoing, informal, playful and tongue in cheek,” and it comes across clearly in all their communication.
If you’ve ever asked Alexa to sing or tried any of the numerous Alexa Easter eggs, then you’ll know she has a personality of her own. Curious, I reached out to the team responsible for her personality, and here’s what they had to say:
When architecting Alexa’s voice, we tried to give her a personality that reflects the attributes we value most at Amazon. We wanted her to feel helpful, humble and smart, while still maintaining a sense of fun. This is an ongoing process, and we expect the voice of Alexa will evolve as more developers focus on making her smarter.
Personality can also be reflected in the app’s name, icon and description that are displayed to users in the app directory listing, as well as in the name used to invoke the app (the invocation name). So, make sure it shines through while publishing your app.
For our UX design Skill, we could take a straightforward or a funny approach, and that would be reflected in the wording of our quiz’s Q&A options.
An example of a normal tone would be:
Which UX design principle favors simplicity over complexity?
- Occam’s Razor
- Hick’s Law
- Aesthetic-usability effect
- Satisficing
And an example of a funny tone would be:
Apparently, there’s a UX design principle that favors simplicity over complexity. Really! Can you guess what it’s called?
- Occam’s Razor: The best a UX guy can get.
- Hick’s Law: Sounds like something a UX bumpkin would come up with.
- Aesthetic-usability effect: That’s some fancy UX jargon.
- Satisficing: I can’t get no satisficing… apologies to the Rolling Stones.
Yeah, let’s stick with normal.
This is where you carefully think about the functionality that will be valuable to your voice app’s users. Revisit your work from the first step to identify the capabilities that are core or related to your business. Sometimes offering core capabilities is a no-brainer — such as a bank offering information on balance, transactions and due dates. Others offer value in the form of related features, such as Tide’s stain-removal guide voice app, or Glad’s (makers of food storage and trash bags) voice apps: one helps users remember where they stored their leftovers, and the other lets users check which items should be recycled or disposed of in the trash.
If you did a similar exercise when going from web to mobile, that can serve as a starting point. For voice capabilities, consider what capabilities would benefit your users on a voice-driven device in a shared space. If a Skill has security or privacy implications, consider adding a level of protection (the Capital One Alexa Skill allows users to create a personal key for account access). While you may end up with a laundry list of functionality that would work over voice, start with one to five core capabilities, and use voice analytics to update and improve after launch.
The core capabilities of a UX design Skill could be:
Because we are building this UX design Skill using Amazon’s Skill templates, our choices are currently restricted to either the first (fact skill template) or third (trivia skill template) option above. Assuming that our research has shown that our users would find a quiz more valuable than just hearing a UX principle recited, our core capability will be to quiz the user on UX principles and to keep score.
Now that you have shortlisted the capabilities of your voice app, start focusing on the detailed conversation flow that the app will have with its users. Human conversation is complex; it often has many twists and turns and may pivot anytime, with people often jumping from one topic to another. Voice AI platforms still have a long way to go to match that level of complexity, so you have to teach your Skill how to respond to users.
Your voice app can only support the capabilities you have defined in the previous step, but users always have the ability to ask the app anything and in any format. Detailing a conversation flow allows you to respond to the user, or to drive the conversation towards what the app can do for the user.
For each capability that the voice app will support, start creating conversational dialogues between the user and the app, similar to dialogues in a screenplay. As you write these dialogues, remember the personality as well as voice and tone characteristics. Start creating and curating the actual content for your voice app; for our quiz, this would mean building the list of quiz questions.
Begin with the “happy path” — a conversational flow in which the voice app can respond to the user’s request without any exceptions or errors. Then, move on to detailing the conversational flow for exceptions (in which the user does not provide complete information) and errors (in which the voice app does not understand or cannot do what the user is asking).
Because the conversation will be heard and not read, a good practice is to read it out loud to see if it sounds like a natural spoken conversation, and to check that it conveys the tone of voice you’ve intended.
If your voice app needs to supplement the conversation with content displayed on the phone app, design these interactions together, so that they appear seamless to the user. For instance, Tide’s stain-removal Skill informs the user that they could also refer to the stain-removal steps in the Alexa app, in addition to hearing the instructions. This may soon be required if the rumors of a touchscreen on the new Echo are true.
Here is a sample dialogue for the happy path our UX design Skill’s core capability:
User: “Alexa, start the UX design quiz.”
Alexa: “I will ask you five questions, with multiple choice answers. Try to get as many right as you can. Just say the number of the answer. Let’s begin. Question 1…”
User: [responds correctly]
Alexa: “That’s correct! Your score is 1. Here’s question 2…”
User: [responds incorrectly]
Alexa: “Oops, that’s the wrong answer. The correct answer is [correct answer]. Your score is 1. Here’s question 3…”
…
Alexa (at the end of five questions): “That’s correct! You got four out of five questions correct. Thank you for playing!”
People don’t always use the same words to say the same thing, and voice apps need to be taught that. Phrase-mapping is an exercise to teach voice apps to accommodate variation in the way users phrase their requests.
For each conversational path that you detailed in the previous step, think about the different ways users could word those requests. Then break down the wording of each request, and identify word variations and synonyms that they might use, taking into account any regional variations and dialects. You will have your hands full if your voice app deals with sweetened carbonated beverages (soda, pop, coke, tonic, soft drink, fizzy drink), long sandwiches (sub, grinder, hoagie, hero, poor boy, bomber, Italian sandwich, baguette) or athletic footwear (sneakers, shoes, gym shoes, sand shoes, jumpers, tennis shoes, running shoes, runners, trainers).
Make this list of variations as complete and exhaustive as possible, so that your voice app can understand user requests. Alexa needs these variations in the form of “utterances,” and Amazon recommends providing “… as many representative phrases as possible.” Depending on the capabilities of your voice app, the number of utterances could easily run into the hundreds, but there are ways to simplify their generation.
Here’s a sample phrase-mapping for a capability of our UX design quiz. Alexa’s AI platform does a good job of translating user intent for Skills based on their templates. However, if you make changes (like we changed “trivia game” to “quiz”), then these phrases will have to be added.
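For a custom Skill, sample utterances can be listed one per line, each starting with the intent it maps to. A sketch for our quiz might look like this (the intent names here are our assumption):

StartQuizIntent start the UX design quiz
StartQuizIntent begin the UX design quiz
StartQuizIntent play the UX design quiz
AnswerIntent {Answer}
AnswerIntent the answer is {Answer}
AnswerIntent I think it is {Answer}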
The final step in the design process is to validate and refine the voice application before spending time and effort on development. During the “detail” step, reading the conversation flows aloud helped to make sure that they sounded natural and conversational. The current step involves testing the voice interface with users.
The simplest way to test is using the Wizard of Oz technique, with a person playing the role of the voice-driven device and responding to the user based on the voice interface script. Another option is to use prototyping software such as Sayspring to create and test interactive prototypes.
If your voice app is being built using code templates (as our app is), then it might be easier to create the app and test it using testing tools provided by Amazon and Google within the Skill development area (as shown below), or in test mode on an actual device.
This testing will give you a good feel for the voice experience in the real world, including handling of errors, repetitive responses, and unnatural, forced or machine-like replies.
Now that the voice experience has been designed, it is time to move to the build-test-submit phase. Each platform has detailed guides and tutorials to help anyone build and test skills, including the Alexa Skills Kit, Develop Actions for Google, and Cortana, which offers to reuse your custom Alexa Skill code!
Think about your feedback loop and the analytics that will help you to understand usage of your voice app. You can get Skill metrics (users, sessions, utterances, intents) within your developer account without any additional coding, but advanced analytics are available through free services such as VoiceLabs (I could not get it to work, probably due to my lack of coding skills or the lack of a VoiceLabs for Dummies setup guide).
After you finish building and testing your voice app, the last step is a streamlined submission process. Because the Alexa Skill marketplace has rapidly grown, discovering new and useful apps is getting difficult. Until Amazon improves this, use visible elements of your voice app listing to help users find and try your Skill, including a catchy and relevant skill icon, name and description.
The companion skill that was built as an illustration can be taken for a test drive on the Amazon Alexa Skill store: UX Design Quiz.
Here are a few guiding principles for designing voice experiences. More principles and detailed do’s and don’ts are offered by Amazon and Google.
Introduce the app and the ways the user can engage with it.
Welcome to UX Design Quiz. I will ask you five questions about UX design and see how many you get right. You can ask me to repeat a question or pause if you need to. Would you like to start a new quiz?
With a voice user interface, the user has to use their short-term memory while interacting with the voice app. So, keep it short and sweet.
Alexa: “This principle is attributed to a 14th-century logician and Franciscan friar and is named after the village in the English county of Surrey where he was born. In a nutshell, it states that simplicity is better than complexity. This problem-solving principle can easily be applied to user experience design, by going for the simpler design solution. What is this principle called?
- Your first option is Occam’s Razor, sometimes known as Ockham’s razor, or the law of parsimony.
- Your next option is Hick’s Law, also known as the Hick-Hyman Law.
- Your next option is the aesthetic-usability effect.
- Your last option is called “satisficing,” not to be confused with “satisfying” or “sacrificing.”
Please say A, B, C, or D to make your selection.”
User: “Huh?! Alexa, repeat. On second thought, end quiz!”
Instruction: “Please say your date of birth in the month/day/year format.”
Example: “Please say your date of birth, like April 15, 1990.”
This is a balancing act. Too much and it gets tiresome quickly.
If you ask Alexa to turn off the lights, you can see it happen and do not need a verbal confirmation, although she sometimes confirms with a short “OK.”
Things will go wrong: design for those situations. Examples include unintelligible questions or information, incomplete information, silence or requests that cannot be handled. Acknowledge, and give the user options to recover.
Anytime you’re dealing with trying to interact with a human, you have to think of humans as very advanced operating systems. Your highest goal is to try to emulate them.
– K.K. Barrett, production designer of the movie Her, Wired, 2014
If you haven’t seen the movie Her, take a couple of hours to watch this futuristic movie about a lonely writer who develops a relationship with an operating system. While it is science fiction, in today’s world, voice experiences are increasing with the adoption of standalone voice-driven devices, such as the Amazon Echo family and Google Home. Developing a voice app is a relatively simple, template-driven process, with IKEA-like instructions provided by Amazon and Google in an attempt to establish their platforms. Though jumping into development may be tempting, a good voice user experience doesn’t just happen; it has to be designed, going through the steps described in this article.
Please use the comments area to share any other feedback, tips and resources with other readers.
(da, vf, yk, al, il)
In case you’re wondering what OAuth2 is, it’s the protocol that enables anyone to log in with their Facebook account. It powers the “Log in with Facebook” button in apps and on websites everywhere.
This article shows you how “Log in with Facebook” works and explains the protocol behind it all. You’ll learn why you’d want to log in with Facebook, Google, Microsoft or one of the many other companies that support OAuth2.
We’ll look at two examples: why Spotify uses Facebook to let you log into the Spotify mobile app, and why Quora uses Google and Facebook to let you log into its website.
OAuth2 won a standards battle a few years ago. It’s the only authentication protocol supported by the major vendors. Google recommends OAuth2 for all of its APIs, and Facebook’s Graph API only supports OAuth2.
The best way to understand OAuth2 is to look at what came before it and why we needed something different. It all started with Basic Auth.
Authentication schemes focus on two key questions: Who are you? And can you prove it?
The most common way to ask these two questions is with a username and password. The username says who you are, and the password proves it.
Basic Auth was the first web authentication scheme. It sounds funny, but “Basic authentication” was its actual name in the specification, first published in 1999.
Basic Auth allows web servers to ask for these credentials in a way that browsers understand. The server returns an HTTP response code of 401 (which means that authentication is required) and adds a special header to the response, named WWW-Authenticate, with a special value of Basic.
When the browser sees this response code and this header, it shows a popup log-in dialog:
The great part about Basic Auth is its simplicity. You don’t have to write a log-in screen. The browser handles all of that and just sends the username and password to the server. It also gives the browser a chance to specially handle the password, whether by remembering it for the user, getting it from a third-party plugin or taking the user’s credentials from their operating system.
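To make the exchange concrete, here's a minimal sketch of a server issuing the Basic Auth challenge and decoding the credentials the browser sends back, using Node's built-in http module (the realm and the hard-coded credential check are illustrative assumptions, not production practice):

```typescript
// A minimal sketch of the Basic Auth handshake with Node's built-in
// http module. The realm and the hard-coded credentials are made up.
import * as http from 'http';

const server = http.createServer((req, res) => {
  const auth = req.headers.authorization;
  if (!auth || !auth.startsWith('Basic ')) {
    // No credentials yet: answer 401 with the WWW-Authenticate header,
    // which is what makes the browser show its built-in log-in dialog.
    res.writeHead(401, { 'WWW-Authenticate': 'Basic realm="example"' });
    res.end('Authentication required');
    return;
  }

  // The browser replies with "Authorization: Basic base64(username:password)".
  const decoded = Buffer.from(auth.slice('Basic '.length), 'base64').toString('utf8');
  const [username, password] = decoded.split(':');

  if (username === 'alice' && password === 'secret') {
    res.writeHead(200);
    res.end(`Hello, ${username}`);
  } else {
    res.writeHead(401, { 'WWW-Authenticate': 'Basic realm="example"' });
    res.end('Wrong credentials');
  }
});

server.listen(8080);
```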
The downside is that you don’t get any control over the look and feel of the log-in screen. That means you can’t style it or add new functionality, such as a “Forgot password?” link or an option to create a new account. If you want more customization, you’d have to write a custom log-in form.
Custom log-in forms give you all the control you could want. You write an HTML form and prompt for the credentials. You then submit the form and handle the log-in any way you want. You get total control: You can style it, ask for more details or add more links.
Some websites, such as WordPress, use a simple form for the log-in screen:
LinkedIn lets users log in or create an account on the same page, without having to go to another part of the website:
Form-based log-in is very popular, but it has a fundamental problem: Users have to tell the website their password.
In security circles, we call a password a secret. It’s a piece of information that only you have and proves that you’re you. The secret can also be more than just a password; we’ll talk more about that a little later.
A website can take all the security measures in the world, but if the user shares their password, then that security is gone. Hackers breached the Gawker website in 2010, exposing many users’ passwords. While this was a problem for Gawker, the problem didn’t stop there. Most people reuse passwords, so hackers took the leaked data from Gawker and tried to log into more critical websites, such as Gmail, Facebook and eBay. Anyone who used a Gawker password for more important things lost a lot more than the latest gossip about Hulk Hogan’s sex tape.
Making sure your users don’t reuse a password for multiple accounts is the first half of the problem — and it’s impossible. As long as people have to create different accounts all over the Internet, they will reuse their passwords.
The second half of the problem is storing the passwords securely.
When someone logs into your app, you need to verify their password, and that means you need a copy to verify it against. You could store all usernames and passwords in a database somewhere, but now you risk losing those passwords or getting hacked. The best practice is to use a hash function, such as one of the SHA-2 functions. A hash function scrambles data in a way that can never be reversed, but the result is repeatable: “my password” will hash to something like bb14292d91c6d0920a5536bb41f3a50f66351b7b9d94c804dfce8a96ca1051f2 every time.
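For example, with Node's built-in crypto module (a minimal sketch; real password storage would add a salt and a deliberately slow hash, as noted below):

```typescript
// A quick sketch of a SHA-256 hash using Node's built-in crypto module.
// For real password storage you'd use a salted, slow hash such as
// bcrypt or scrypt, not a bare SHA-256.
import { createHash } from 'crypto';

const digest = createHash('sha256')
  .update('my password')
  .digest('hex');

console.log(digest); // the same input always produces the same digest
```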
And now we’re off in the tall grass: I’m telling you how to implement cryptographic protocols. Next, I’ll have to explain how to add a salt to your data and which textbooks to read on man-in-the-middle attacks. All you wanted to do was write an app, and now you have to become a security expert. We need to step back.
You probably aren’t a security expert. Even if you are, I still wouldn’t trust you with my password. OAuth2 gives you a better way.
As an example, I use Spotify on my iPad. I pay the company $10 a month to listen to music. Spotify gives me access on only three devices, so I need a password to make sure that nobody else uses my account. My Spotify account isn’t a big security concern. Getting hacked wouldn’t be the end of the world, but the company does have my credit card, so I want to make sure that I’m secure.
I hardly ever log into Spotify, so I don’t want to create another account and have to remember another password. Spotify gives me a better option:
I can use my Facebook account to log in. When I tap that button, Spotify sends me over to facebook.com, and I log in there. This might seem like a small detail, but it’s the most important step of the whole process.
Spotify’s programmers could have written a log-in form themselves and then sent my username and password to Facebook with a back-end API, but there are two big reasons why I don’t want them to do that:
There are also two big reasons why Spotify doesn’t want to do that:
I’m not in a Mission Impossible movie, but in the real world, many companies use two-factor authentication, such as a password plus something else. The most common method is to use your phone. When you want to log in, the company sends you a text with a special code that lasts for a few minutes; you then type in the code or use an app to input it.
Now the company is sure that nobody can log into your account without your phone. If someone steals your password, they still can’t log in. As long as you don’t lose your phone, everything is secure.
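The mechanics behind those texted codes are simple enough to sketch. Here's a rough illustration, assuming a hypothetical in-memory store; the code length and the five-minute lifetime are made-up values:

```typescript
// A rough sketch of the texted-code pattern: issue a short-lived
// one-time code, then check it. The store, code length, and lifetime
// are illustrative assumptions.
import { randomInt } from 'crypto';

const codes = new Map<string, { code: string; expires: number }>();

function issueCode(userId: string): string {
  const code = String(randomInt(0, 1_000_000)).padStart(6, '0');
  codes.set(userId, { code, expires: Date.now() + 5 * 60 * 1000 }); // 5 minutes
  // In a real system you would now text the code to the user's phone.
  return code;
}

function checkCode(userId: string, attempt: string): boolean {
  const entry = codes.get(userId);
  codes.delete(userId); // one-time: a code can only be tried once
  return !!entry && entry.expires > Date.now() && entry.code === attempt;
}
```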
Facebook isn’t the only OAuth2 provider. When I log into Quora with my Google account, Google tells me what Quora would like to do and asks if that’s OK:
I might be fine with allowing Quora to view my email address and my basic profile data, but I don’t want it to manage my contacts. OAuth2 shows me all of the access that Quora wants, allowing me to pick and choose what I grant access to.
So, those are the advantages of OAuth2. Let’s see how it works.
Facebook, Google and most of the other OAuth2 providers treat native clients differently from web clients. Native clients are considered more secure, and they get tokens and refresh tokens that can last for months. Web clients get much shorter tokens, which typically time out when the user closes the browser or hasn’t clicked on the website for a while.
In both cases, the log-in process is the same. The difference is in how often the user needs to go through it.
OAuth2 log-in follows these general steps:

1. Your app opens a browser window (or embedded web view) pointing at the provider’s log-in page.
2. The provider asks the user for whatever credentials it requires.
3. The provider redirects back to your app with a token, or a code that can be exchanged for one.
4. Your app uses that token to make API calls on the user’s behalf.
Opening a new browser window for the OAuth2 provider is a crucial step. That’s what allows providers to show their own log-in forms and to ask each user for whatever log-in information they need. Most apps do this with an embedded web view.
Along with the provider’s log-in URL, you need to send some URL parameters that tell the provider who you are and what you want to do:
- client_id identifies your app to the provider; it’s the ID you received when you registered your app.
- redirect_uri tells the provider where to send the user, along with the results, once log-in is done.
- response_type tells the provider what kind of response you want: typically token, to indicate that you want an access token, or code, to indicate that you want an access code. Providers may also extend this value to provide other types of data.
- scope lists the access rights your app is asking for, such as the user’s email address or basic profile.
There are additional fields that can add more security or help with caching. Certain providers also get to add more fields, but these four are the important ones.
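Put together, the log-in request is just a URL with those parameters attached. A small sketch, using a made-up provider endpoint, client ID, and scope values:

```typescript
// A sketch of building the provider's log-in URL. The endpoint,
// client ID, redirect URI, and scopes are made up for illustration.
const params = new URLSearchParams({
  client_id: 'YOUR_CLIENT_ID',                    // who you are (from registration)
  redirect_uri: 'https://myapp.example/callback', // where the results are sent
  response_type: 'token',                         // ask for an access token directly
  scope: 'email public_profile',                  // the access you're requesting
});

const loginUrl = `https://provider.example/oauth2/authorize?${params}`;
// Open loginUrl in a browser window or embedded web view.
```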
Once your app opens the web view, the provider takes over. They might just ask for a simple username and password, or they might present multiple screens requesting anything from the name of your favorite teacher to your mother’s maiden name. That’s all up to them. The important part is that, when the provider is done, they will redirect back to you and give you a token.
When the process completes, the provider will give you a token and a token type. There are two types of tokens: access tokens and refresh tokens. The type of client you have will determine which types of tokens you’re allowed to ask for.
When I log into my Spotify app, I can stay logged in for months, because the assumption is that my phone is used only by me. Facebook trusts the Spotify app to manage the tokens, and I trust the Spotify app not to lose the tokens.
When the access token times out (typically, in one to two hours), Spotify can use the refresh token to get a new one.
The refresh token lasts for months. That way, I only have to log in on my phone a few times a year. The downside is that if I lose that refresh token, someone else could use my account for months. The refresh token is so important that iOS provides a keychain for tokens, where it makes sure to encrypt and store them safely.
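The refresh itself is a single HTTPS request, standardized in the OAuth2 spec as the refresh_token grant. A sketch, with a hypothetical token endpoint and client ID:

```typescript
// A sketch of the standard refresh_token grant (RFC 6749, section 6).
// The token endpoint and client_id are hypothetical.
async function refreshAccessToken(refreshToken: string) {
  const response = await fetch('https://provider.example/oauth2/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'refresh_token',
      refresh_token: refreshToken,
      client_id: 'YOUR_CLIENT_ID',
    }),
  });
  // The provider answers with a fresh access token (and sometimes a new
  // refresh token, which should replace the stored one).
  return response.json() as Promise<{ access_token: string; expires_in: number }>;
}
```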
Using OAuth2 in a web application works the same way. Instead of using a web view, you can open up the OAuth2 log-in request in a frame, an iframe or a separate window. You can also open it on the current page, but this would cause you to lose all JavaScript application state whenever someone needs to log in.
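When the provider redirects back, a browser client can read the access token straight out of the URL fragment. A sketch, assuming the response_type of token from earlier:

```typescript
// A sketch of picking the access token out of the redirect URL's
// fragment after an OAuth2 response_type=token log-in. Runs in the
// browser page that redirect_uri points at.
function tokenFromFragment(): string | null {
  // The provider redirects to something like
  // https://myapp.example/callback#access_token=...&token_type=bearer
  const fragment = new URLSearchParams(window.location.hash.slice(1));
  return fragment.get('access_token');
}

const accessToken = tokenFromFragment();
if (accessToken) {
  sessionStorage.setItem('access_token', accessToken); // short-lived, per tab
}
```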
When I log into Quora with my web browser, I don’t get a refresh token. They want the token to time out and prompt me to log in again when I quit my browser or even just go away for lunch. Untrusted clients can’t refresh the token because they can’t be trusted to hold on to the important refresh token. It’s more secure but less convenient, because they will prompt you to log in again much more frequently.
Now you know how OAuth2 works, but you probably don’t want to implement your own OAuth2 client. You could go read the whole 75-page OAuth 2.0 specification if you’re having trouble sleeping, but you don’t need to. Some great libraries are out there for you to use.
iOS has built-in support for OAuth2. Corrina Krych has a very helpful tutorial on using OAuth 2.0 with Swift. It walks you through how to get a token, how to integrate the views in your app and where to store your tokens.
Android also has built-in support for OAuth2. I must admit that I’m not as familiar with it because I focus on iOS, but there are some good sections in the documentation to show you examples and some open-source libraries to make it even easier.
JavaScript doesn’t have built-in support for OAuth2, but there are clients for all of the major JavaScript frameworks. React fully supports OAuth2. AngularJS has third-party OAuth2 support through many projects; I even wrote one of them.
Once you have an OAuth2 client, you’ll need to choose a provider.
The big assumption here is that I trust Facebook more than Spotify. I have no good reason for that. Facebook doesn’t make its internal security public, and there’s no good way for me to audit it. Neither does Spotify. There’s no Consumer Reports for OAuth2 security. I’m basically trusting Facebook because it’s bigger. I trust Facebook because other people do.
I’m also trusting Facebook more every time I click the “Log in with Facebook” button. If Facebook loses my password, then hackers will get access not just to my Facebook account, but also to my Spotify account and to any other service I’ve logged into with my Facebook account. The upside is that there is only one place I have to reset my password in order to fix the problem.
I don’t have to trust Facebook, but I have to trust someone. Somebody has to authenticate me. I need to choose the provider I trust.
Wikipedia maintains a list of OAuth providers, but you wouldn’t care about most of them. The big ones are Facebook and Google. You might also want to look at Amazon or Microsoft.
All four of them are big and easy to integrate with. Facebook provides instructions for registering an app. Google has similar steps. The basic idea is that you create a developer account and then create an app ID. The provider then gives you a client ID that you can use to make requests.
You can also choose multiple providers. Quora allows you to log in with Facebook or Google; because they both use OAuth2, you may use the same code for both.
OAuth2 does a very good job of solving a complex problem, but it is missing a couple of things:

- There’s no standard way to integrate with providers. Each interprets the specification differently, and small details differ from one to the next, so code that supports more than one provider fills up with provider-specific if statements. They also all have different ideas about what scopes to provide. Using a library to integrate with OAuth2 helps a lot with this problem, but it will never be 100% transparent in your app’s code.
- There’s no standard way to invalidate tokens. There is a separate specification for invalidating OAuth2 tokens, but it wasn’t picked up by many of the major providers. OAuth2 doesn’t provide a way to recover if a hacker gets your refresh token; even though you can delete your local copy of the token, the hacker will still have it. Many providers give you a way to suspend your account, but there’s no standard way to do it.
In defence of OAuth2, this is a difficult problem, because many providers use public-key cryptography to create stateless tokens. This means that the server doesn’t remember the tokens it has created, so it can’t forget them later.
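You can see that statelessness in code: checking a signed token is a pure signature verification, with no server-side record that could be deleted. A small sketch with Node's crypto module and an Ed25519 key pair (the token format here is made up for illustration):

```typescript
// A sketch of why stateless tokens can't be "forgotten": validity is
// just a signature check, with no server-side record to delete.
// The token payload format is made up for illustration.
import { generateKeyPairSync, sign, verify } from 'crypto';

const { publicKey, privateKey } = generateKeyPairSync('ed25519');

// Issuing: sign the token's payload. The server keeps no copy of it.
const payload = Buffer.from(JSON.stringify({ user: 'alice', exp: Date.now() + 3600_000 }));
const signature = sign(null, payload, privateKey);

// Verifying: anyone with the public key can check it, statelessly.
// Until the expiry passes, there is no way to revoke this token.
const valid = verify(null, payload, publicKey, signature);
console.log(valid); // true
```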
The other major problem with OAuth2 is that you are dependent on your provider. When Facebook goes down, so does the “Log in with Facebook” button in your app. If Google decides to start charging you to support OAuth2 or demands that you share your profit with it, there’s nothing you can do. This is the double-edged sword of trusting a provider: They are doing a lot for you, but they have control over your users.
Even with a couple of missing features and a big dependency, OAuth2 is still an excellent choice. It makes it easy for users to log into your app, spares them from remembering a password for every website, and lets them lean on security they already trust. And it dominates the industry: no other security protocol comes close to OAuth2’s adoption.
Now you know where OAuth2 comes from and how it works. Go make smart choices about who to trust, stop reading articles about safely storing encrypted passwords, and spend more of your time writing your amazing app.
(da, il, al)
We always try our best to challenge your creativity and get you out of your comfort zone. A great occasion to do so is our monthly wallpapers challenge, which has been going on for eight years already.
It’s an opportunity to let your ideas run wild and try something new, to indulge in a little project just for fun. Whatever technique you fancy, whatever story you want to tell with your wallpaper, the submissions to this challenge make a beautiful, unique bouquet of community artworks each month anew. Artworks that adorn desktops and, who knows, maybe even spark new ideas.
This post features desktop wallpapers for May 2017, created by designers and artists from all across the globe. Each wallpaper comes in versions with and without a calendar and can be downloaded for free. Time to freshen up your desktop!
Please note that:
“Edwin Way Teale once said that ‘[t]he world’s favorite season is the spring. All things seem possible in May.’ Now that all of nature is clothed with grass and branches full of blossoms that will grow into fruit, we cannot help going out and enjoying every scent, every sound, every joyful movement of nature’s creatures. Make this May the best so far!” — Designed by PopArt Studio from Serbia.
“Someone who wakes up early morning, cooks you healthy and tasty meals, does your dishes, washes your clothes, sees you off to school, sits by your side and cuddles you when you are down with fever and cold, and hugs you to cheer you up when you have lost all hope. Have you ever asked your mother to promise you never to leave you? No. We never did that because we are never insecure and our relationship with our mothers is never uncertain. We have sketched out this beautiful design to cherish the awesomeness of motherhood. Wishing all a happy Mother’s Day!” — Designed by Acodez IT Solutions from India.
“We don’t usually count the breaths we take, but observing nature in May, we can’t count our breaths being taken away.” — Designed by Ana Masnikosa from Belgrade, Serbia.
“May your wishes come true.” — Designed by Dan Di from Italy.
“May is National Bike Month! So, instead of hopping in your car, grab your bike and go. Our whole family loves that we live in our bike-friendly community. So, bike to work, to school, to the store, or to the park – sometimes it is faster. Not only is it good for the environment, but it is great exercise!” — Designed by Karen Frolo from the United States.
“A maypole is a tall wooden pole erected as a part of various European folk festivals, around which a maypole dance often takes place. P.S. I love Super Mario.” — Designed by Jonny Jordan from Northern Ireland.
“In May nature is always beautiful, so I designed a wallpaper with flowers and leaves to celebrate the month of May. I always feel so happy in spring, so I wanted to give everyone a good feeling with this joyful wallpaper.” — Designed by Melissa Bogemans from Belgium.
“Spring is here! Flowers, grass… All of this is greener. But the sea is prepared for spring, too. Do you want to discover it with me?” — Designed by Verónica Valenzuela from Spain.
“Summer means happy times and good sunshine. It means going to the beach, having fun.” — Designed by Suman Sil from India.
“Wizard of Oz is a classic! May 15th marks Lyman Frank Baum’s birthday. To honour this legendary author’s birthday, I created this wallpaper with some of the characters from Wizard of Oz.” — Designed by Safia Begum from the United Kingdom.
Designed by James Mitchell from the United Kingdom.
“Winter is nearly here in my part of the world and I think rainy days should be spent at home with a good book!” — Designed by Tazi Design from Australia.
Designed by Doud from Belgium.
“Biking is truly the best way to get around Washington D.C. in spring. Every day, I ride past the U.S. Capitol building, the National Mall with its Smithsonian Museums (free for all!), gardens and people, and I smile that I get to live in this beautiful city. I want to tell all the people trapped in cars and trains to get out and enjoy the weather! Ride a bike!” — Designed by The Hannon Group from Washington D.C.
“It’s May and it’s mango season! I belong to the coastal part of Western India, which produces the finest Alphonso mangoes in the world. As May arrives, everyone eagerly waits for the first batch of ripe mangoes in India. It’s not a fruit, it’s an obsession! I wish everyone a happy may-n-go season!” — Designed by Hemangi Rane from Gainesville, Florida.
“Lixia is the 7th solar term according to the traditional East Asian calendars, which divide a year into 24 solar terms. It signifies the beginning of summer in East Asian cultures. It usually begins around May 5 and ends around May 21.” — Designed by Hong, Zi-Cing from Taiwan.
“In May I think of flowers, and I especially think of my favorite flowering plant, Echeveria. I created ‘May’ using a vibrant Echeveria pattern.” — Designed by Cheryl Ferrell from San Francisco.
“Labourers are the cogs in the wheel of society. They are the ones who are keeping this kafkaesque machine going. Do we recognize that as a fact?” — Designed by Dipanjan Karmakar from India.
Please note that we respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experiences through their works. This is also why the themes of the wallpapers weren’t influenced by us in any way, but rather designed from scratch by the artists themselves.
A big thank you to all designers for their participation. Join in next month!
What’s your favorite theme or wallpaper for this month? Please let us know in the comment section below.
A great portfolio design with some smooth, angled animations. Our pick this week.