Optimizing the Survey Experience for Mobile Respondents

More people in today’s world have smartphones than have toothbrushes. In many developed countries, smartphone penetration is at or above 70 percent. We live in a mobile world. The rapid adoption of mobile technologies has impacted everyday life in many ways and is changing the way businesses communicate with their customers. The CX industry is no exception; customer feedback collection methods must evolve.

Analysis of MaritzCX survey data from 2011-2015 shows that respondents are increasingly using mobile devices to access web surveys. Thirty-nine percent of all survey ‘starts’ were on mobile devices in the second quarter of 2015, with 31 percent on mobile phones and 8 percent on tablets. MaritzCX anticipates that close to half of all customer experience (CX) surveys will be started on mobile devices by the end of calendar year 2015.

The same survey on mobile and desktop devices.

While most surveys can be opened and viewed on mobile devices, the vast majority are designed to be taken on a computer, and will not work as well for the respondent on the small screen of a mobile device. Respondents who try to complete traditionally-designed, non-mobile-optimized web surveys are often presented with small text that is not easily readable without zooming in to increase the font size. Some platforms present the question or answer text outside of the initial vertical screen display on small-screen devices.

Navigation issues may come up when the “next” button is not prominently displayed on a small screen. When questions aren’t visible in their entirety without scrolling, mobile respondents may have difficulty understanding how to activate or respond to certain question types such as drop-down lists or ranking questions. To make matters even more complicated, some question types display differently depending on the browser a respondent is using.

Drop-downs by browser

All of these issues have an impact on respondents. If researchers are lucky, a poorly-displayed survey will only result in fatigue and frustration among respondents who are willing to put in extra effort to provide feedback. More typically, however, respondents put less effort into the survey process, either abandoning surveys or selecting the answer options that are easiest for them to register.

From a sampling perspective, a survey that is not designed for mobile users is likely to under-represent heavy mobile users, a group concentrated among younger audiences and lower-income populations. Younger audiences in the 18-25 age range are known to have the lowest response rates of any demographic for most traditional data collection methodologies (mail, phone, and desktop-centric web surveys), but are willing to provide feedback on studies that are easy to complete on their mobile devices.

Best Practices for Mobile Survey Design

Businesses that want to improve the survey experience for a growing mobile audience should consider three key areas when designing surveys for mobile devices:

  • Programming standards
  • Question presentation
  • Survey length

Programming Standards

Web programming standards should start with an adaptive design (e.g., device type is identified at the time the survey link is clicked, with respondents instantaneously directed to either a mobile or non-mobile template). MaritzCX has designed mobile templates that are responsive, meaning the applied style sheets stretch text to fit the dimensions of the mobile screen using the largest font possible. This device-agnostic approach uses one template to cover all mobile devices rather than individual templates for each of the 8,000+ mobile devices on the market.
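As a rough illustration of the adaptive approach, the sketch below routes a respondent to a mobile or desktop survey template based on the user-agent string at the moment the link is opened. The template URLs, survey ID, and detection pattern are hypothetical, not MaritzCX's implementation.

```typescript
// Minimal sketch: route a respondent to a mobile or desktop survey template
// based on the user-agent string at the time the survey link is opened.
// The template URLs and the detection regex are illustrative assumptions.

const MOBILE_UA = /Android|iPhone|iPad|iPod|Windows Phone|Mobile/i;

function surveyTemplateUrl(surveyId: string, userAgent: string): string {
  const isMobile = MOBILE_UA.test(userAgent);
  // One responsive mobile template covers all mobile devices (device-agnostic),
  // rather than one template per device model.
  return isMobile
    ? `https://survey.example.com/m/${surveyId}`
    : `https://survey.example.com/d/${surveyId}`;
}

// Example (browser context): redirect when the link is clicked.
// window.location.href = surveyTemplateUrl("cx-2015-q2", navigator.userAgent);
```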

The MaritzCX design focuses on an initial text display that does not extend beyond the horizontal boundaries of the screen but does allow for vertical scrolling, as testing has determined that respondents do not generally have navigation issues with vertical scrolling. Respondent fatigue (measured through abandon rates) is higher on surveys that require horizontal, back-and-forth scrolling.

Horizontal (bad) and vertical (good) scrolling on mobile surveys.

Many surveys use graphics and brand images in the survey background to tie the survey experience to the overall brand experience. While this approach works well in the non-mobile environment it was originally designed for, consumer testing indicated that standard images take much longer to load on mobile devices, leading to higher abandon rates.

Research also revealed that text and logo alignment is often altered on smaller screens, sometimes displaying question text over a dark logo and making it difficult for the respondent to read. To overcome these issues, surveys can leverage enhanced, mobile-optimized images that allow the same logo to be displayed more crisply on a smaller screen. These mobile-optimized images can be sized so that they do not interfere with the readability of the question text, and can be given a smaller file size than a standard image so that page-load time is not affected.
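One simple way this could be handled is to select a lighter, mobile-optimized logo asset on small screens, as in the sketch below. The asset paths and the breakpoint are assumptions for illustration, not a description of any particular platform's image handling.

```typescript
// Minimal sketch: serve a lighter, mobile-optimized logo on small screens so
// page load stays fast and the image does not compete with question text.
// Asset paths and the width breakpoint are illustrative assumptions.

interface LogoAsset {
  src: string;
  approxKb: number; // smaller file "weight" for the mobile variant
}

const DESKTOP_LOGO: LogoAsset = { src: "/img/brand-logo-full.png", approxKb: 180 };
const MOBILE_LOGO: LogoAsset = { src: "/img/brand-logo-mobile.png", approxKb: 25 };

function pickLogo(viewportWidthPx: number): LogoAsset {
  // Below a typical tablet breakpoint, use the lighter asset.
  return viewportWidthPx < 768 ? MOBILE_LOGO : DESKTOP_LOGO;
}

// Example (browser context): pickLogo(window.innerWidth).src can be assigned
// to the background or logo <img> element when the page renders.
```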

Question Presentation

Even after mobile programming standards are applied, some text-heavy questions may not format ideally on smaller screens; if the question takes up the whole screen, for example, then the survey-taker cannot look back and forth between the question and the answers. There may also be problems with the way that mobile browsers display questions designed for desktop environments.

Multi-attribute grids are a good example of a common question-presentation issue tied to screen size. In most grids, attribute text is displayed in the same row as the radio-button answer choices, resulting in either a very small font or a wrap-around effect that creates too much space between response lines. Most grids also display scale labels only once, above the top row of the grid. Though this presentation works fine on large screens where the entire grid can be viewed at once, it presents problems for mobile respondents, who scroll down the grid and lose sight of the scale's labels.

A better way to present grids to mobile respondents is to move the attribute text above each scale measure (creating more space versus having everything on the same line) and to place the scale labels inside each response option, so that the respondent can always see the response options regardless of how far down the page he scrolls.

Identical scale display on desktop and mobile using a 10-point scale.
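To make the restructuring concrete, the sketch below converts a desktop-style grid (one shared scale header, one attribute per row) into a stacked mobile layout where every attribute repeats its fully labeled response options. The types and example labels are assumptions for illustration only, not a specific survey platform's schema.

```typescript
// Minimal sketch: restructure a multi-attribute grid for a small screen.
// Instead of one shared scale header, each attribute carries its own fully
// labeled response options, so labels stay visible while scrolling.

interface GridQuestion {
  attributes: string[];   // e.g., "Staff friendliness", "Wait time"
  scaleLabels: string[];  // e.g., ["1 - Poor", ..., "10 - Excellent"]
}

interface MobileItem {
  attribute: string;
  options: string[];      // scale labels repeated under every attribute
}

function toMobileLayout(grid: GridQuestion): MobileItem[] {
  return grid.attributes.map((attribute) => ({
    attribute,                       // attribute text shown above the scale
    options: [...grid.scaleLabels],  // labels placed inside each response set
  }));
}
```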

It is impossible to provide guidance on every question type, as questions can be presented in a virtually unlimited number of ways. That said, we recommend testing all questions on common mobile devices and browsers prior to releasing them into the field. The mobile-first design and testing philosophy is described later in this article.

Survey Length

Some might think that after best practices for programming and question presentation have been applied, mobile survey completion percentages would be the same as non-mobile completion percentages. While that is the goal, it rarely happens.

There are two primary reasons for the divide in completion percentages. First, mobile phones typically have lower bandwidth than non-mobile devices. Location and traffic on carrier networks affect the phone's signal strength, which in turn impacts page-load speeds. Second, mobile respondents, by virtue of being mobile, are able to access a survey anywhere. They could be taking the survey on the train, while waiting for their children after school, or during a work break. Because they are often outside of a controlled environment, there are many distractions that can pull them away from the survey. This becomes an especially big problem when respondents do not have the option to save their responses and return to complete the survey later.
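Where a save-and-return option is feasible, one simple approach is to persist partial answers in the browser so a distracted respondent can pick the survey back up later. The sketch below is an illustration using localStorage under assumed key and answer shapes, not a feature of any specific survey platform.

```typescript
// Minimal sketch: persist partial answers locally so a distracted mobile
// respondent can resume later. The storage key format and the answer shape
// are assumptions for illustration only.

type Answers = Record<string, string | number>;

function saveProgress(surveyId: string, answers: Answers): void {
  localStorage.setItem(`survey:${surveyId}`, JSON.stringify(answers));
}

function loadProgress(surveyId: string): Answers | null {
  const raw = localStorage.getItem(`survey:${surveyId}`);
  return raw ? (JSON.parse(raw) as Answers) : null;
}

// Example: call saveProgress after each page; call loadProgress when the
// respondent reopens the survey link on the same device.
```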

Many people in the research industry have set limits on how long surveys can be (e.g., no more than X minutes or Y questions) when there is the possibility of a mobile audience. Analysis shows, however, that respondents' tolerance for survey length varies greatly from topic to topic and is driven by their relationship with the brand commissioning the research and by their interest in the survey topic. Setting a limit of five minutes on all surveys because a sharp increase in abandon rates was measured at that mark in a survey about a cable company's call-center CX, for instance, could take away 10 minutes of valuable feedback from luxury car purchasers who really want to share what they like and dislike about the features on their new sports car. MaritzCX sees long surveys with high completion rates and short surveys with low completion rates. It all comes down to respondents' interest in the survey topic and to the overall survey design.

What can be done to ward off the distraction factor? It will always be present, but MaritzCX is piloting a technique that helps increase mobile completion rates without surgical removal of questions. Known as “survey chunking,” this technique breaks surveys into pieces. The ideal candidate is a survey where there is evidence that length is limiting participation from mobile respondents, either because the length stated in the invitation discourages starts or because distraction factors keep respondents who have started from finishing. The critical question battery is displayed to all respondents, while the secondary batteries are rotated and only shown to a portion of the mobile population.

For example, assume that a nine-minute survey can be broken into three “chunks.” All respondents see chunk A (which would include the questions that are most important to the research team), but only half of the respondents see chunk B. The half who were not exposed to chunk B are given chunk C. In theory, this approach cuts the average mobile survey length from nine minutes to six minutes and, assuming more respondents now complete the shorter survey, gives a higher level of confidence in the critical battery. It is important to keep some mobile representation on the secondary batteries, as mobile respondents tend to have different behaviors and opinions than non-mobile respondents. While it may seem easiest to show only the primary battery, the secondary batteries should still be measured among the unique mobile audience.
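A minimal sketch of this assignment logic follows: every respondent receives the critical chunk A, while the secondary chunks B and C are rotated so that each reaches roughly half of the mobile audience. The chunk names and the random split are illustrative assumptions.

```typescript
// Minimal sketch of "survey chunking": every mobile respondent receives the
// critical chunk A; secondary chunks B and C are rotated so each is shown to
// roughly half of respondents, keeping mobile representation on both.

type Chunk = "A" | "B" | "C";

function assignChunks(): Chunk[] {
  // Chunk A (critical battery) goes to everyone; B or C is chosen at random.
  const secondary: Chunk = Math.random() < 0.5 ? "B" : "C";
  return ["A", secondary];
}

// Example: a nine-minute survey split into three-minute chunks yields an
// average mobile path of about six minutes (A plus one secondary chunk).
```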


Mobile-First Design Philosophy

Much of the design guidance thus far addresses techniques for improving how surveys designed for big-browser devices (desktops, laptops, etc.) display on mobile devices, but it is important to note that not every question can easily be transitioned from its original design to a mobile design. When such questions cannot be altered or removed from the survey, mobile respondents should be notified that the survey is not compatible with their device and is best taken on a larger screen. This is not an ideal option, as many willing respondents will not take the time to find another device, but it is better than presenting the respondent with a request that he/she cannot complete.

To keep these occurrences to a minimum, adopt a mobile-first design philosophy on new projects. Mobile-first means that surveys are first developed and tested on mobile phones with the understanding that if the survey works well on mobile phones, then it will also display well on larger screens.

It can be challenging for companies to understand how to best communicate with their customers and offer them the services they need, but, for those that are able to continually refine their strategy and optimize the survey experience for mobile users, the rewards are great.

General Mobile Survey Design Best Practices

  • Respondents can deal with vertical scrolling; horizontal scrolling leads to fatigue, higher abandon rates and data quality issues.
  • Do not alter scale presentation from one device to another (a horizontal display on non-mobile and a vertical display on mobile, for example); changes in scale presentation will often result in score differences between device types.
  • Heavy images will cause page-load delays on some mobile devices; when background images are required, they should be mobile-optimized.
  • All else being equal, shorter scales will provide a better respondent experience (larger font and easier for respondent to choose desired response).
  • It is more difficult for respondents to type verbatim responses on mobile phones; new mobile technologies that allow respondents to provide feedback without relying on a small keyboard should improve the quantity and quality of open-ended responses.
  • Most “engaging” survey designs, like drag-and-drops and slider scales, lead to higher abandon rates and more data comparability issues than traditional question displays. Respondents tend to prefer exercises that are familiar and short; in most cases, those exercises are in simple question format.

Republished with author's permission from original post.

Ted Saunders
Ted Saunders is a Digital Solutions Manager for MaritzCX. He is an expert on survey techniques for mobile and digital platforms and helps clients to optimize their customer experience data collection. He is interested in customer insights and behavior; figuring out not only why people do things but the best ways to engage with them to let them tell their stories.
