
How to Pen a Rock Solid User Testing Script

 By Userlytics
 Aug 21, 2018

From initial concept to finished draft, tips on how to craft an effective unmoderated user study

A user test is a little bit like cake: you know what a great one looks like, but you’re not quite sure how to go from individual ingredients to culinary masterpiece. We’ve all been there.

After you’ve made the decision to dive into user testing, and you’ve done your due diligence around understanding the user’s needs (through personas and user journeys) and your own needs (through drafting effective and realistic goals), it’s time to write the testing script.

But a good script is more than just a collection of tasks with some questions sprinkled on top. With a little planning, you can craft just the right script to make your test a success.

Prepare the Ingredients

[Image: wireframe sketches – the “ingredients” of a user testing script]

Before you bake a cake, you gather all the ingredients. Sure, you may forget a few things – but get the basics established straight away. In preparation for user testing, you should gather together the stated goals of the study. Begin an outline, highlighting all the various parts of the app, website or interface you plan to test. You’ll refine it later.

A general guideline: don’t make the test so broad that users become lost, bored or fatigued, which can skew your results, but don’t make it so short that you lose out on useful feedback. Try to keep the test to around 20-30 minutes from the introduction to the final comments, which typically works out to no more than 20 tasks (introduction and final comments included).

Next, use the stated goals and the outline to decide which tasks participants will perform. Some good rules of thumb around tasks:

  • Make sure they’re relatively simple to read, remember and act upon. Focus on a core user story to gauge the system’s effectiveness in enabling users to complete the objective.
  • Tasks that are too complex, or include too many steps, run the risk of being misinterpreted, misunderstood or ignored. One tip is to have every action or click be its own task. Yes, it probably seems redundant to have one task be “Please click back to the home screen,” but doing so ensures nothing is lost in transition and sometimes even these small steps offer good insights themselves.
  • Use language that everyday users can understand; avoid technical or industry jargon unless it’s appropriate for your audience. Pay extra attention to acronyms or shorthand names that may be clear to stakeholders, but ambiguous or confusing to users.
  • Remember that users should (and should be reminded to) read each task aloud and speak as they perform it. If the words are difficult to read or understand, the human tendency is to skip over them, often to the detriment of gathering useful results. Our QA process will not accept tests where the user was not speaking out loud, and reminding participants to voice their thoughts encourages them to say more, which is always better.

Depending on the task, it may help to prepare the user with a short scenario (e.g. “Imagine you’re shopping for a new car. Using this website, how would you…”). Other times, it’s safe to forgo the scenario, especially if the new task builds upon the previous one.

Brevity is important, because a user’s attention is divided between the instructions, the interface they are using, and any external distractions, such as family members, sirens or computer issues. Keep sentences short, and consider using bullet points when expressing multiple ideas or asking more than a single question.

An example of a task might be:

Task 2: Imagine you’re getting ready to buy a car and you’d like to find out what information (example website) has to help you get the lowest price. Using the site, find tools that will help you shop for the best deal.

Please speak aloud as you work, describing what you are doing and why. When you feel you’ve completed this task, please click Next.

You could simply copy and paste this into our intuitive test builder, where it looks like this:

[Image: the task and its questions in the Userlytics test builder]

In addition to having users perform a task, you can ask them to describe what they see, what they expect will happen if they take a particular action, or how they would improve something in the interface. You can also build some smarts into the test, enabling it to react to a user’s decisions and responses. That’s where conditional logic comes into play.

It’s Only Logical

Conditional logic allows you to customize the test session based on a user’s actions and feedback. It gives you the ability to ask follow-up questions, or give users different tasks, depending on their answers to earlier questions.

For example, you could ask users whether they found a page of sales information on a site. If they did, the test can skip ahead to the next task. If they did not, you can ask additional questions to discover what they found confusing or difficult, and what they would do to make it work better.

[Image: branching logic diagram]

Conditional logic also enables you to “skip” to a different section of a test if, for example, there are some questions or tasks that are not relevant to that participant. The dynamic nature of this logic combines the best aspects of moderated and unmoderated studies, making them more responsive and better able to deliver useful insights.
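
To make the idea concrete, here is a minimal sketch of how a branched script can be modeled. The step IDs, prompts and answer values below are hypothetical; in practice you would configure this visually in your testing platform’s builder rather than in code:

    # A tiny, hypothetical model of conditional logic in a test script.
    # Each step names the next step to jump to for each possible answer.
    script = {
        "q1": {
            "prompt": "Did you find the sales information page?",
            "next": {"yes": "task3", "no": "q1_followup"},
        },
        "q1_followup": {
            "prompt": "What was confusing, and what would make it work better?",
            "next": {"any": "task3"},  # rejoin the main flow afterwards
        },
        "task3": {"prompt": "Next task...", "next": {}},
    }

    def next_step(current_id: str, answer: str) -> str:
        """Return the ID of the next step, given the participant's answer."""
        branches = script[current_id]["next"]
        return branches.get(answer, branches.get("any", ""))

    print(next_step("q1", "no"))   # -> "q1_followup"
    print(next_step("q1", "yes"))  # -> "task3"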

Numbers Matter, Too

User testing can help stakeholders collect a lot more than just feedback. It can also establish benchmarks such as the time spent on task or the success/failure rate of critical site functions.

[Image: success/failure rate diagram]
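
As a rough sketch of what those benchmarks look like once sessions come in, here is how success rate and average time on task might be computed; the per-participant numbers below are hypothetical:

    # Hypothetical per-participant results for a single task.
    results = [
        {"success": True,  "seconds": 48},
        {"success": True,  "seconds": 62},
        {"success": False, "seconds": 121},
        {"success": True,  "seconds": 55},
        {"success": False, "seconds": 97},
    ]

    success_rate = sum(r["success"] for r in results) / len(results)
    avg_time = sum(r["seconds"] for r in results) / len(results)

    print(f"Success rate: {success_rate:.0%}")         # 60%
    print(f"Average time on task: {avg_time:.0f} s")   # 77 s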

You can also include quantitative questions, such as the Net Promoter Score, the System Usability Scale and survey questions, to create a more well-rounded picture. These can be used to establish a baseline, or to compare an interface against an earlier iteration or a competitor’s offering. Here is an example of our automatically calculated Net Promoter Score feature in our Metrics section:

[Image: automatically calculated Net Promoter Score in the Metrics section]
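
If you’d like to sanity-check the number yourself, the calculation behind the score is simple; the ratings below are a hypothetical sample:

    # NPS from "How likely are you to recommend..." ratings on a 0-10
    # scale. 9-10 = promoters, 0-6 = detractors, 7-8 = passives (they
    # count in the total, but neither add nor subtract).
    ratings = [10, 9, 8, 6, 10, 7, 9, 4, 9, 10]

    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    nps = 100 * (promoters - detractors) / len(ratings)

    print(f"NPS: {nps:+.0f}")  # 6 promoters, 2 detractors -> +40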

The System Usability Scale is particularly helpful in determining how users experienced your interface overall. The SUS is a series of ten alternating positively and negatively worded statements that, when scored together, give you a good idea of how well your app or website performs. You can then make changes, test again, and track how this score evolves over time.

[Image: System Usability Scale statements]
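
For reference, the standard SUS scoring rule works like this; the responses below are one hypothetical participant’s answers:

    # Standard SUS scoring: ten statements rated 1-5. Odd-numbered items
    # are positively worded (score = response - 1); even-numbered items
    # are negatively worded (score = 5 - response). The sum of the ten
    # contributions is multiplied by 2.5, giving a score from 0 to 100.
    def sus_score(responses: list[int]) -> float:
        assert len(responses) == 10
        total = 0
        for i, r in enumerate(responses, start=1):
            total += (r - 1) if i % 2 == 1 else (5 - r)
        return total * 2.5

    print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0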

Test Early, Test Often

Once the script is in progress, consider your plan for recruiting participants. You will want to test with a minimum of 5 users. More can sometimes be helpful, especially if you’re trying to collect quantitative data. However, studies have shown that 5 users are sufficient to uncover around 85% of the usability issues in your interface.
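
That 85% figure comes from the Nielsen-Landauer problem-discovery model, which you can check with a few lines of arithmetic; here L is the probability (about 0.31 in Nielsen’s published data) that a single participant uncovers any given issue:

    # Nielsen-Landauer model: the share of issues found by n participants
    # is 1 - (1 - L)**n, with L ~= 0.31 in Nielsen's published data.
    L = 0.31
    for n in (1, 3, 5, 10, 15):
        print(f"{n:2d} users -> {1 - (1 - L)**n:.0%} of issues found")
    # 5 users -> 84%; the returns diminish quickly beyond that.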

So why test with more? As Nielsen Norman Group points out in its great write-up on testing, you will get much more ROI from doing several rounds of tests with 5 participants each than from a single test with the same total number of participants.

Studying each new iteration will give your team a chance to gather feedback quickly, iterate and test again to determine if you’ve fixed the issues or created any new ones. Fewer participants spread over smaller tests allow you to spin up studies and gather insight more quickly, and even perform valuable A/B testing to help steer your teams toward the final product.

Conclusion

One final step before launching your test: take it yourself. Some platforms, including Userlytics, allow you to preview your test before releasing it into the wild. Ask colleagues to help you check for any show-stopping issues in the flow, like confusing questions, typos or broken logic.

Then, sit back and wait for those insights to pour in!

