Visual search: The rise of ‘See-Snap-Buy’ and the data headache that ensues



They say a picture is worth a thousand words, and it is no wonder: according to MIT, 90 per cent of information transmitted to the human brain is visual. It is therefore unsurprising that the visual search market is estimated to surpass $14,727m by 2023, growing at a CAGR of more than 9 per cent over the forecast period.
Whilst visual search is a long way from overtaking traditional text-based search, it is growing in popularity. Google Images is the second largest player in the search engine market (behind only traditional Google), with 21 per cent of searches starting there, and Pinterest reports that its users carry out more than 600 million monthly searches using Pinterest Lens. Moreover, according to a study by GlobalData, 20 per cent of app users make use of visual search when the feature is available. In fact, almost three quarters (74 per cent) of consumers say that text-based keyword searches are inefficient at helping them find the right product online when shopping, according to a study by Slyce.

There are many applications of visual search – for instance snapping a landmark to find out where you are on a map, or taking a photo of a plant to identify what it is – but the most mainstream application is as a shopping enabler. Increasingly, customers are looking to perform what has been coined 'see, snap, buy'. For instance, whilst walking down the street you see someone wearing a pair of shoes that you like. You take a photo of them, upload the photo to a visual search application such as Google Lens, Google Images, Pinterest Lens or Asos Style Match, and wait for the results to be returned – many of which provide you with the ability to buy the same shoes, or ones deemed similar.

Visual search is powered by computer vision and trained by machine learning. Computer vision essentially enables computers to see and, crucially, understand what they are seeing, so that they can do something useful with that knowledge. Machine learning then adds the nuances, such as enabling the computer to tell the difference between slight variations – i.e. returning the images of a similar pair of shoes.
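To make that similarity matching a little more concrete, here is a minimal sketch in Python. It assumes each product image has already been reduced to a feature vector by a trained model (a step not shown here), and it uses cosine similarity to rank a small hypothetical catalogue against a query photo's vector. The catalogue items and vector values are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical catalogue: product name -> feature vector.
# A real system would produce these embeddings with a trained neural network.
catalogue = {
    "red trainers":   [0.9, 0.1, 0.3],
    "brown brogues":  [0.1, 0.8, 0.6],
    "red high heels": [0.8, 0.2, 0.5],
}

def find_similar(query_vector, catalogue, top_n=2):
    """Return the top_n catalogue items most similar to the query vector."""
    ranked = sorted(
        catalogue.items(),
        key=lambda item: cosine_similarity(query_vector, item[1]),
        reverse=True,
    )
    return [name for name, _ in ranked[:top_n]]

# Feature vector for the shopper's photo of a pair of red shoes.
query = [0.88, 0.12, 0.32]
print(find_similar(query, catalogue))  # ['red trainers', 'red high heels']
```

The exact items returned depend entirely on how the embeddings are produced; the point is simply that "similar" becomes a measurable distance between vectors, which is what lets the engine return near matches rather than only exact ones.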

Clearly, 'see, snap, buy' adds another layer to the customer journey, and retail brands in particular must add visual search to their omnichannel journey or risk missing out on significant incremental revenue. This means understanding how to optimise images so they are returned by visual search engines, considering visual ads on platforms such as Pinterest, and identifying the impact of this on their customer data platform – such as the integration of a computer vision platform – not to mention the issue of managing the customer data that ensues.

Along with voice search, visual search is widening the search parameters, and consumers now expect to be able to search in the manner most appropriate to the situation in which they find themselves. For retail brands it opens up a wealth of opportunity, but with it comes a number of data-based headaches that need to be resolved.

Dave Gurney
Dave is the CEO of Alchemetrics and co-founder of Rubicon Insight, which help brands build deeper, more valuable connections with customers.

