Top 5 Ways To Deal with Non-Technical Participants in Remote Testing

Article written by Frankie Tam

While remote user experience testing significantly expands researchers’ reach, participants’ technical skills and access to technology can vary greatly. It is therefore important to design and administer remote tests that accommodate this variation. Designing tests with non-technical participants in mind allows more participants to complete your usability tests successfully and yields higher-quality test metrics.

A number of standards and considerations have been proposed by various scholars for guiding the design and implementation of remote UX research (Benfield & Szlemko, 2006; Birnbaum, 2004; Granello & Wheaton, 2004; Reips, 2002). In this article, we’ll go over the five best practices for enhancing the success of your remote testing when addressing the needs of non-technical participants.

1. Keep tasks clear and easy to understand

Throughout your usability tests, use simple and consistent layouts with easy-to-read fonts to enhance navigation and accessibility. In addition, use straightforward, easy-to-understand instructions and descriptions to reduce any misunderstanding of the tasks and test activities. This is especially important in remote unmoderated testing environments due to the reduced or absent interaction with participants. Including a pretest that goes over technical requirements and expectations before the start of the test can also greatly reduce participant misunderstandings and issues.

2. Use established web-based research tools and services

Well-established web-based research tools and services can greatly enhance the reliability and usability of your remote testing. For instance, Userlytics offers two excellent options for allowing non-technical participants to take part in remote testing easily: 1) its No-Download Web Recorder, and 2) its Overlay Web Recorder. Both recorders allow participants to join a test with only an invitation link and a commonly used web browser, from the comfort of their own homes and at a time that suits them.

Below, we’ll go over what makes these two web recorders so accessible and easy to use:

1. No-Download Web Recorder

  • Does not require download of any extensions or plugins
  • Supports Chrome, Firefox, Safari, and Microsoft Edge on all major operating systems, including Windows, macOS, and ChromeOS
  • Allows studies to be conducted with participants in corporate environments (e.g., behind firewalls), further enhancing access to previously difficult-to-reach participants

Please refer to our blog post on The advantages of the No-Download Web Recorder for more details.

2. Overlay Web Recorder

  • Specifically designed for remote user experience testing of websites and prototypes
  • Requires download of a web browser extension (one-click quick process)
  • Supports Google Chrome and Microsoft Edge, with support for Firefox and Safari coming soon
  • High stability (near-zero system crash rate in product testing)
  • Floating tasks window design 

Please refer to our help post on How and When to use our Overlay Web Recorder for more details.

3. Specify technology requirements

The technology variance among participants can sometimes present an issue in remote testing settings. Participants may take part in usability tests using a variety of computers, operating systems, internet connections, and web browsers. It is therefore important to clearly define the technological requirements of your remote usability tests and to ensure all participants meet them at the beginning of each test. This can drastically reduce technical difficulties during tests, which can lead to frustration and ultimately dropout.

Userlytics provides clear technology requirements (e.g., browser, internet connection, video resolution) for participants so that they can ensure they meet the requirements of participating in remote tests. In addition, unmoderated tests taken on Userlytics’ Overlay Web Recorder include a computer configuration test so that participants can make sure their technology meets the required standards. Outlining technology requirements can facilitate a positive remote testing experience by significantly reducing potential technical difficulties.
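As an illustration of this kind of pre-test configuration check, the sketch below shows one way such a check might work. This is a hypothetical example, not Userlytics’ actual implementation: the `ParticipantSetup` and `Requirements` shapes, the `checkSetup` function, and all threshold values are assumptions chosen for the sketch.

```typescript
// Hypothetical pre-test configuration check: verifies that a participant's
// reported setup meets the study's minimum requirements before the test
// begins, so technical problems surface up front rather than mid-task.

interface ParticipantSetup {
  browser: string;        // e.g. "chrome", "firefox"
  browserVersion: number; // major version number
  downloadMbps: number;   // measured connection speed
  screenWidth: number;    // pixels
}

interface Requirements {
  supportedBrowsers: Record<string, number>; // browser -> minimum major version
  minDownloadMbps: number;
  minScreenWidth: number;
}

function checkSetup(setup: ParticipantSetup, req: Requirements): string[] {
  const problems: string[] = [];
  const minVersion = req.supportedBrowsers[setup.browser.toLowerCase()];
  if (minVersion === undefined) {
    problems.push(`Unsupported browser: ${setup.browser}`);
  } else if (setup.browserVersion < minVersion) {
    problems.push(`Please update ${setup.browser} to version ${minVersion}+`);
  }
  if (setup.downloadMbps < req.minDownloadMbps) {
    problems.push(`Connection too slow (need ${req.minDownloadMbps} Mbps)`);
  }
  if (setup.screenWidth < req.minScreenWidth) {
    problems.push(`Screen too narrow (need ${req.minScreenWidth}px)`);
  }
  return problems; // an empty list means the participant is good to go
}

// Example: a participant on an outdated Firefox with a slow connection
const issues = checkSetup(
  { browser: "Firefox", browserVersion: 70, downloadMbps: 2, screenWidth: 1440 },
  { supportedBrowsers: { chrome: 100, firefox: 100, safari: 15, edge: 100 },
    minDownloadMbps: 5, minScreenWidth: 1280 }
);
console.log(issues.length); // two issues: browser version and connection speed
```

Returning a list of human-readable problems, rather than a single pass/fail flag, lets the test tell non-technical participants exactly what to fix before they begin.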

4. Apply special techniques to increase engagement and reduce dropout

Participant dropouts most often occur at the beginning of remote usability experiments. Multiple factors influence participants to drop out, such as loading time, layout design, the functionality of testing materials, and technology requirements. Below, we go over ways to enhance your participants’ motivation and engagement in order to reduce test dropouts:

1. Provide a clear experiment overview

Provide a clear overview of the goals, purposes and procedures of your usability tests at the beginning. This allows participants to understand the context and purposes of the experiments and to mentally and physically prepare themselves for the tasks ahead. This overview can also facilitate participants’ experiences of competence and autonomy, which in turn can enhance intrinsic motivation and engagement.

2. Provide a time estimate and progress updates

Provide a time estimate for the experiment and update participants on their progress throughout the testing process. Doing so allows participants to understand how much time and effort are required to complete the tasks. A progress update can also foster participants’ sense of accomplishment, which can result in more motivation and desire to continue with the ongoing test.
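The advice above can be made concrete with a small sketch of how a progress message might be computed after each completed task. This is an illustrative example, not a Userlytics feature: the `progressMessage` function and the per-task time estimate supplied by the researcher are assumptions of the sketch.

```typescript
// Illustrative sketch: build the progress message shown to a participant
// after each completed task, combining percent complete with a rough
// estimate of the time remaining.

function progressMessage(
  completedTasks: number,
  totalTasks: number,
  minutesPerTask: number // researcher-supplied average time per task
): string {
  const percent = Math.round((completedTasks / totalTasks) * 100);
  const minutesLeft = (totalTasks - completedTasks) * minutesPerTask;
  return `Task ${completedTasks} of ${totalTasks} complete (${percent}%), ` +
         `about ${minutesLeft} minutes remaining`;
}

console.log(progressMessage(3, 10, 2));
// "Task 3 of 10 complete (30%), about 14 minutes remaining"
```

Showing both a percentage and a time estimate addresses the two questions participants actually ask mid-test: how far along am I, and how much longer will this take.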

3. Rewards


Rewards can provide extra incentives for participants to take part in and complete remote usability tests. In addition, non-monetary rewards, such as feedback and appreciation, can help make the whole experience more satisfying from a psychological point of view.

5. Pilot test your experiment

Before fully launching your usability tests, conduct pilot tests to ensure the materials display correctly across different computers, monitors, and browsers. Remote testing platforms like Userlytics are designed to promote compatibility across different forms of technology, with the goal of providing a uniform experience across all platforms and settings. Run pilot tests with a variety of participants to make sure your test instructions are well understood by people of different trades and backgrounds. This is particularly important in unmoderated remote testing environments, where there is no face-to-face interaction with participants. Observe pilot participants closely during the test and, if necessary, interview them afterward to learn more about their experiences.

Concluding Thoughts

Remote usability testing enhances access to a large number of diverse participants across geo-cultural boundaries, technical skill levels, and settings. It is crucial to consider participants’ needs in order to maximize the advantages of remote experiments (e.g., better generalizability and external validity). This post has compiled five best practices for addressing the needs of non-technical participants based on the existing literature, and described how remote testing platforms like Userlytics can help you incorporate these practices into your next remote testing session. Although it can be challenging to design and conduct remote UX experiments, including choosing an appropriate screener, script length, and branching logic, Userlytics offers the assistance of senior user experience researchers to design the optimal test plan for your goals.

References

Benfield, J. A., & Szlemko, W. J. (2006). Internet-based data collection: Promises and realities. Journal of Research Practice, 2(2), D1-D1.

Birnbaum, M. H. (2004). Human research and data collection via the Internet. Annual Review of Psychology, 55, 803-832.

Granello, D. H., & Wheaton, J. E. (2004). Online data collection: Strategies for research. Journal of Counseling & Development, 82(4), 387-393.

Reips, U. D. (2002). Standards for Internet-based experimenting. Experimental Psychology, 49(4), 243.

 
