Where does your market intelligence actually come from? And how credible is it? The data and insights that drive your business decisions can come from many different sources – and they’re not all created equal.
For most companies, expert networks and panels provide the bulk of their market intelligence. These networks are databases of individuals who are recruited once and then incentivized to participate in multiple surveys and interviews.
With these pre-recruited participants, research providers can offer speedy results at low prices. At face value – it’s a compelling offer. The problem is, the actual expertise in these panels is often fragmented, out of date or misaligned.
Expert Networks Recruit, Reuse & Recycle
Let’s start with the business model. By design, these providers profit by recruiting members once and then enrolling them in as many studies as possible. This effectively spreads the cost of recruiting and onboarding each individual participant across multiple paying clients. The model is profitable because it focuses on quantity over quality.
The problem is, encouraging cross-survey participation creates an environment of opportunity for ‘professional survey takers’ who often lack genuine expertise specific to your research needs. With a limited participant pool, survey takers quickly grow accustomed to receiving many invitations that are, at best, loosely matched to their experience. Motivated by the compensation, their goal becomes to qualify for and complete as many paid surveys as possible.
It’s a junk-in, junk-out scenario: the quality of your research – and how much you can trust it – depends directly on where, and from whom, the data comes in the first place.
‘Expert’ Can Be a Broad Label
By definition, an expert is someone who has a deep and comprehensive understanding of a specific field or area of study. The term – and the meaning behind it – is easily recognizable so it’s also easy to assume that everyone uses the same basic criteria to identify experts. But that’s not the case.
With many expert networks, the study’s vetting process can create an illusion of expertise.
Candidates are fed an almost complete list of qualifiers that enable them to game the system to participate in a study. This includes things like:
- 10+ Years of Experience in the [Fill-in-the-Blank] Industry
- Works at a Company with At Least 50,000 Employees
- Decision Maker for Software Purchases
- Familiar with ‘X’ Brand
- Located in the United States
Panel respondents can receive dozens of surveys each week with these relatively specific qualifiers. In other words, networks cast a wide net, wanting respondents to qualify for as many surveys as possible because that’s more profitable.
This approach leaves you with many respondents but few with true expertise in the areas you’re interested in – resulting in data you can’t trust and research you throw away.
Poor Practices Compromise Your Data
What happens when you need a larger sample size than a research provider’s database can provide? If you’re working with an expert network that relies on a list of pre-recruited participants, the options are truly limited.
To increase participants, expert networks typically:
- Compromise on qualifications: Send you individuals who lack relevant experience or have outdated knowledge, like professionals who retired and left the industry years ago.
- Recruit externally: Source participants from other panels and expert networks to cover their blind spots, introducing duplication, unknown data skews and ultimately compromising data integrity.
- Overlook quality control: Lower the bar on suspicious activity or cheaters, allowing low-quality or fabricated information to taint the final data set.
When a provider’s existing database is light, the criteria for being an ‘expert’ become more flexible. As a result, your research is skewed by lax quality control and other poor practices that favor quantity over quality. These shortcuts on the front end can be detrimental to the data and insights you rely on to steer your business strategy and make your most critical decisions.
Pinpointing the Problem: Expert Networks Are Flawed by Design
It all goes back to who the respondents are and how you find them for your study. These issues run deep in the fabric of how expert networks and panels manage and deliver respondents to studies.
Even with the recent trend of AI-assisted respondent vetting and validation, the network model still creates ideal conditions for participants to game or cheat their way into your study – ultimately hurting the quality of your research data. Networks and panels continue to source respondents from the same limited pools of individuals, however new AI may make them look. To avoid these issues, it’s essential to rethink the process altogether – especially how and where you find your respondents.
Quality Data Starts with Quality Respondents
At Azurite, we understand that precision and quality are critical to reliable market intelligence, which is one reason we never source respondents from expert networks or panels for our studies. Instead, we recruit fresh, relevant participants directly from the market for every project – specific to that project.
Our vetting process ensures every respondent holds the necessary credentials and offers insightful, up-to-date experience. No professional research participants, no recycling of respondents.
In a world where data can make or break a strategy or investment, choosing the right research partner is crucial. Prioritizing quality ensures your findings are grounded in trustworthy information – leading to better strategies and better outcomes.