Posted by Jeevan Jayaprakash


In this issue, we look at the UK launch of Amazon's Dash Buttons, a senior figure at Google confirms that the Google Brain is something physical, and a UX designer from Microsoft outlines three key mobile UI/UX trends that are beginning to permeate the next generation of mobile apps.

We love user feedback, so if our digest is too long, short, amazing or boring — just let us know at hello@himumsaiddad.com.

Hi Mum! Said Dad


Amazon Dash Buttons come to the UK

Keeping with the Internet of Things theme from last week, Amazon have launched their Dash Buttons in the UK for Amazon Prime customers. Each button is tied to a particular brand (there are 40 unique Dash Buttons at launch) and allows customers to place an order for their favourite product at the push of a button.

How does it work?

The Dash Button is synced via WiFi to the user's Amazon app on their smartphone. Before using the button for the first time, the user sets up their delivery preferences, as well as the product to be ordered, in the app. When the button is pressed, an order is triggered in the app, and the user receives a notification confirming that the order has been placed successfully.
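For the technically curious, here is a purely illustrative Python sketch of that press-to-order flow. Every name below is hypothetical; none of it comes from Amazon's actual API.

```python
# A purely illustrative sketch of the press-to-order flow described above.
# None of these names come from Amazon's actual API; they are hypothetical
# stand-ins for the retailer's backend, the app and the button itself.

class FakeStore:
    """Hypothetical stand-in for the retailer's backend."""

    def place_order(self, product_id, delivery_prefs):
        # Pretend to create an order and return a reference.
        return f"ORDER-{product_id.upper()}"

    def notify(self, message):
        # Stands in for the push notification sent to the user's phone.
        print(message)


class DashButton:
    def __init__(self, store, product_id, delivery_prefs):
        # Product and delivery preferences are configured once, in the app,
        # before the button is ever pressed.
        self.store = store
        self.product_id = product_id
        self.delivery_prefs = delivery_prefs

    def press(self):
        # A single press places the saved order and confirms it to the user.
        order_id = self.store.place_order(self.product_id, self.delivery_prefs)
        self.store.notify(f"Order {order_id} placed successfully")


button = DashButton(FakeStore(), "ariel-pods", {"address": "home"})
button.press()  # prints: Order ORDER-ARIEL-PODS placed successfully
```

The key design point is that all the decisions (which product, where to deliver it) happen once, up front, so the press itself carries no choices at all.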

Amazon first launched the Dash Button in the US last year, where uptake was initially quite slow. However, growth has since picked up, and Amazon has claimed that orders through the Dash Button have grown threefold in the past couple of months. It is probably safe to say that Amazon has high hopes for the UK market. The buttons cost £4.99 each but are effectively free, as customers receive £4.99 off their first order.

Here at Hi Mum! Said Dad, we believe certain barriers exist around the Amazon Dash Button, on both the consumer side and the brand side. There will be more on our thoughts on the Amazon Dash debate, as well as on what we have been up to in the Internet of Things space, in next week's issue.

Ariel’s Amazon Dash button — conveniently placed for instant reordering. Source: Amazon

The Google Brain is real + open sourced AI

The fact that Google has a team of machine learning researchers called Google Brain is no secret. However, the senior vice president of Google's Cloud business has confirmed that Google Brain is in fact also something tangible and, rather surprisingly, admitted that she has only ever seen it behind a set of “double doors”. The Google Brain is said to be where machine learning is done with the help of special processors. Much of this machine learning technology is then made available to others via the Google Cloud.
 
The same Google Brain team was also behind TensorFlow, the open-source machine intelligence library released by Google last year. That isn't all on the AI front either: only last week, Facebook also open sourced its image recognition tools (similar to the technology used to recognise and describe photos to blind users on Facebook).
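To give a flavour of what Google released, here is a minimal sketch in TensorFlow's original graph-and-session style (the 1.x API of the era): you describe a graph of operations first, then run it with concrete inputs.

```python
# A minimal TensorFlow sketch in the original 1.x graph-and-session style.
# Nothing here is Google Brain's internal code; it simply shows the
# describe-a-graph-then-run-it model that the open-source library exposes.
import tensorflow as tf

# Placeholders are inputs that will be fed in at run time.
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
total = a * b + 2.0  # a tiny computation graph

with tf.Session() as sess:
    # The graph only executes when run inside a session.
    print(sess.run(total, feed_dict={a: 3.0, b: 4.0}))  # prints 14.0
```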

So, what do Google and Facebook gain by open sourcing, you ask?

Well, to give a succinct answer, similar open-source tools already exist, so the likes of Google and Facebook gain no real exclusivity by keeping their software proprietary. Also, the value lies not necessarily in the technology itself but rather in what can be built from it. What the likes of Facebook and Google would probably like to achieve, therefore, is traction within the research community as well as within their own companies. Having TensorFlow, for example, become the standard among developers building the products of the future effectively gives a company like Google: 1) a new source of ideas, and 2) a quicker route (relative to its competitors) to commercialising interesting ideas built in TensorFlow.

A computer sees pixels as a series of number values corresponding to changes in colour. Source: FAIR

The 3 mobile UI/UX trends set to permeate the next generation of apps

This piece by Joanna Ngai, a UX Designer at Microsoft, outlines three key mobile UI/UX trends that are making their way into mobile apps:

1. Tone of voice — a unique, amusing tone of voice can bring a smile to a user’s face and make your product a real pleasure to use.

Examples: Poncho (the weather app) and Mailchimp both have a playful tone of voice that is extremely popular amongst users.

2. Navigation — moving away from the hamburger menu towards circularity and tabs/sliders (an integral part of Google’s Material Design guidelines), which allow users to swipe through tabs with their thumbs rather than having to reach for the top of the screen (not an easy feat on today’s phablets).

Examples: The YouTube app and Google Cast apps on Android — both, unsurprisingly, Google apps.

3. Digestible content — another Google-inspired trend. A card limits the amount of information on screen and promotes “progressive disclosure”, which reduces clutter and gives the user a sense of control over what they see. Cards need not be limited to text and can contain images and graphics too.

Examples: Quora and Pinterest — the former’s cards are text-focused, whereas the latter’s show how both text and images can be treated.

Meet Poncho, your own cat-cum-weatherman. Source: Android Authority

Originally written as part of Hi Mum! Said Dad’s Weekly Digest.

If you would like to receive our Weekly Digest straight to your inbox, please drop us an email at hello@himumsaiddad.com.