Help! I'm having trouble finding anyone (or the right people) to participate in my studies!

One of the most challenging and time-consuming parts of UX research is not necessarily the research itself, but the process of recruiting qualified and eager users to participate - all before the end of the next sprint. Gathering, communicating with, and scheduling possible participants for a study can be a daunting task that requires a significant amount of time and energy to complete. However, it's a necessary means to an end; no participants means no research. A project without research carries a plethora of unnecessary risks that can undermine its success, especially because the designers and developers are building products based on assumptions and opinions rather than data.

Luckily, with a little preparation and the right recruiting tools, UX researchers can get access to the research participants they need within nearly any timeframe or budget. Throughout this series of posts, we'll identify common recruiting challenges and ways to overcome them.

Problem 1: Your survey isn't getting the responses you hoped for

There's nothing worse than sending your precious survey link out to 100+ customers and waiting for the responses to roll in, only to check back a week later and find that only 16 people have completed your survey. There could be a few reasons for this right off the bat:

  1. The screener is too exclusive.

The more specific the screener, the fewer participants you have to choose from, which likely increases the time it takes to find qualified participants for your research. This makes it difficult to find enough qualified participants at all, much less under a time constraint.

  2. The screener wasn't sent to enough people.

Let's say you planned for 100 responses to your survey, and you sent the link to 150 people... so you'll get the responses you need, right? Probably not. 100 responses out of 150 survey recipients would require a response rate of 66%, which is highly unlikely. Most researchers are lucky to achieve a response rate of 10-20%, which, in this case, means you should plan to send the survey to at least 500 people (assuming the optimistic 20% rate), and possibly more depending on your timeline and incentive, in order to reach your goal of 100 responses.
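As a quick planning aid, here is a minimal sketch of that arithmetic in Python (the function name and the rates are illustrative, not from any particular tool): divide the number of completed responses you need by the response rate you expect, then round up to estimate how many invitations to send.

```python
import math

def invites_needed(target_completes: int, expected_response_rate: float) -> int:
    """Estimate how many survey invitations to send to hit a target number of completes."""
    if not 0 < expected_response_rate <= 1:
        raise ValueError("expected_response_rate must be between 0 and 1")
    # Round up -- you can't invite a fraction of a person.
    return math.ceil(target_completes / expected_response_rate)

# The example above: 100 completes at an optimistic 20% response rate -> 500 invites
print(invites_needed(100, 0.20))
# At a more typical 10% response rate -> 1,000 invites
print(invites_needed(100, 0.10))
```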

Solution: Determine the project's target users and set the right sample size expectations for the research method

First things first: identify exactly which users you need feedback from. This may seem obvious, but defining this group, or groups, explicitly up front keeps the research and the project moving in the same direction.

Does your research need feedback from existing customers? Prospective customers? Where do they live within your product's customer journey? Identify your target users and a few important identifying characteristics: possibly technical ability, age, geographic location, current or prospective customer needs, motivations, etc.

Then choose a set of characteristics that will get you the targeted, relevant feedback you need without narrowing the group down so aggressively that there isn't a large enough sample of people to pull from for your research. It's tough to recruit that "perfect" participant, so scale back on the details that can be more flexible. From there, you can create a screener (a list of qualifying questions) for your personas that will eliminate any participants who may not be a good fit for your study.

Once the target group is defined, understand that the number of responses you need ultimately depends on the type of research being conducted. Is the study quantitative, qualitative, or both? Jakob Nielsen breaks down the number of participants needed by research method:

Research method | Number of participants needed | Number of participants to recruit or invite
Usability testing | 5 per user group | 8-10 if they are warm leads or internal; 10+ with back-ups if they are cold leads
Quantitative studies (e.g., surveys, first-click tests) | 20+ depending on confidence intervals | 150+ if you are looking for 20 completes at a 10-20% response rate
Card sorting | 15-20 per user group | 50+ for warm leads or internal; 100+ with back-ups for cold leads
Eye tracking | 39 users for stable heatmaps | 50+

Some of these numbers may seem low; however, more responses do not necessarily mean more or deeper findings. There are a few reasons to test with larger groups:

  1. To be safe and plan for no-shows, consider recruiting a few more participants than you need. For example, if you need five users for a usability study, schedule at least eight to ten (see the sketch after this list).
  2. You have multiple target audiences to test with, and the differences in their expected behaviors or opinions are significant enough to warrant testing with more users [1].
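To make that no-show buffer concrete, here is a minimal sketch in Python; the 35% no-show allowance is an assumed figure for illustration, not a number from the article.

```python
import math

def schedule_count(needed: int, no_show_rate: float = 0.35) -> int:
    """Estimate how many participants to schedule so that `needed` actually show up.

    The 35% default no-show allowance is an assumption for illustration;
    adjust it to whatever you observe with your own participant pool.
    """
    if not 0 <= no_show_rate < 1:
        raise ValueError("no_show_rate must be between 0 and 1")
    return math.ceil(needed / (1 - no_show_rate))

# Need 5 usability-test participants to actually show up -> schedule 8
print(schedule_count(5))
```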

This is part 1 of 3 in a series on research recruiting challenges and ways to overcome them.

Part 2: Incentives and Why They Matter

Part 3: Better Tools and Better Panels for Your Research