But there can be some speed bumps that arise from this connected future – specifically, how to get the most from the tools while providing the best possible experience to your end users. One key way to roll over these speed bumps is to adopt a testing mindset with a dash of agile thinking.

Here, I’ll dig into a simple framework for testing, some common pitfalls, and a few use cases for you to take away as part of your Connected Recruiting strategy.

A simple framework

A test-and-learn approach starts with an idea, then puts that idea into play and learns from the results.

To start, there are only five things you need to do to kick off a test:

  1. Identify what you want to get out of the test.
  2. Identify who will be part of the test.
  3. Identify what a successful test looks like.
  4. Create a simple plan and execute.
  5. Learn. Adjust. Learn. Adjust. And repeat.

1. Identify what you want to get out of the test

I like to break this up into two parts. First, nail down what you want to influence by testing. Some examples:

  • Drive more applications through our website
  • Increase referral leads this quarter
  • Improve internal recruiter efficiency

Next, assign a number to the thing you want to influence. So, the second example above becomes:

  • Increase referral leads by 15% this quarter

From there, you could break it down further, but in my experience, that can cause you to overthink things or, in some cases, halt the process. Keep it simple.

2. Identify who will be part of the test

This one is pretty straightforward, but it is something you need to think about before the test begins. Your audience could be candidates, clients, internal recruiting and sales teams, or others. Knowing whom you’ll be targeting helps you focus on what you’ll be testing.

Tip: Break bigger audiences into smaller test groups. This minimizes the scope and potential impact while you’re optimizing your approach. Then, once you’ve decided a test is successful, you can roll it out across the full audience.
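
For example, here’s a minimal Python sketch of how you might carve a small pilot group out of a larger candidate list before rolling a change out to everyone. The candidate IDs and the 10% pilot share are purely illustrative assumptions:

  import random

  def split_audience(candidates, pilot_share=0.10, seed=42):
      # Randomly split a candidate list into a small pilot group and a holdout.
      # A fixed seed keeps the split repeatable if you need to re-run it.
      rng = random.Random(seed)
      shuffled = list(candidates)
      rng.shuffle(shuffled)
      cutoff = max(1, int(len(shuffled) * pilot_share))
      return shuffled[:cutoff], shuffled[cutoff:]

  # Hypothetical usage: run the new approach on the pilot group only,
  # keep the holdout on the current approach, then compare results.
  pilot, holdout = split_audience(["cand_001", "cand_002", "cand_003", "cand_004"])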

3. Identify what a successful test looks like

If you’re just starting out with testing, what success looks like may simply be your best guess. For example, “we want to increase website applications by 5%” may be a good starting point.

The key takeaway here is that you need to know the metric you’re assessing and what the results will mean. For example, if your test produces a 4% improvement, is that a good or bad result? It’s better to figure this out before the test than after.

4. Create a simple plan and execute

Now it’s time to perform the test. If you have an established tech stack, this could be a matter of tweaking outbound messaging, internal workflows, or settings. Otherwise, it may include purchasing a new tool or piece of software.

Let’s look at a couple of examples:

  • We want to improve open rates by 3% for candidates within the IT category. Since open rates are driven largely by subject lines, this is a simple A/B test of subject lines sent to candidates. If you’re using Bullhorn Automation, you can easily perform this test. Otherwise, you can break your audience into two buckets, test a different subject line on each, analyze the results, and pick a winner (there’s a simple sketch of this scoring step after these examples).
  • We want to increase referral leads by 20%. This could be a scenario where purpose-built software scratches the itch. Since referrals are historically difficult to track and manage, buying versus building a software program could be an approach to consider. The key thing to note here is your before/after metric. If you had 100 referrals a year before, then the number you need to hit for success is 120.
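
To make the first example concrete, here’s a minimal Python sketch of how you might score a two-bucket subject-line test once the sends are done. The open counts, send counts, and the 3% target below are placeholder assumptions, not real campaign data:

  # Compare open rates for two subject-line buckets against a pre-set target.
  # All numbers are placeholders, not real results.

  def open_rate(opens, sends):
      return opens / sends

  baseline = open_rate(opens=180, sends=1000)   # bucket A: current subject line
  variant = open_rate(opens=215, sends=1000)    # bucket B: new subject line

  lift = variant - baseline                     # absolute improvement in open rate
  target = 0.03                                 # the 3% goal set before the test

  print(f"Baseline {baseline:.1%}, variant {variant:.1%}, lift {lift:.1%}")
  print("Winner: new subject line" if lift >= target else "Keep testing")

With buckets this small, you’d also want to sanity-check that the difference isn’t just noise, but the larger point stands: the success threshold is written down before the results come in.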

5. Learn. Adjust. Learn. Adjust. And repeat.

Little bets and bites are my preferred approach to testing. I’d rather test one variable, optimize it, and then test another. The key thing to remember is that testing isn’t a one-time activity: improvements to experiences or business outcomes usually take time and repeated adjustment.

Pitfalls to look out for

Like anything, there is a dark side to testing that can skew results or, worse, make them irrelevant.

  • Bias: Sometimes, a personal bias can skew a result. For example, your team designs a new homepage to improve website job applications. You like Design #1 best. But, through testing, you find Design #2 performs better by 15%. Still, you decide that you really like Design #1 and declare it the winner. Don’t do this.
  • Opinion: Similar to bias, basing a decision on opinion versus the hard numbers will lead to inaccurate results. Remember that the audience isn’t you. What they think is more important than any opinion you may carry.
  • Unrealistic expectations: Your goal might be to increase recruiter productivity by 1000%. While that would be extraordinarily awesome, it’s unrealistic, and any marginal improvements will be overshadowed by the big number that wasn’t hit.

Practical use cases

Let’s look at five things you could start testing today to help you create lasting success with the Connected Recruiting strategy:

  • Improve candidate email open rate by X%
  • Increase referral applicants by X%
  • Improve recruiter note additions by X per day
  • Shorten placed candidate feedback turnaround by X days
  • Reduce unqualified candidate conversations by X%

Embracing a testing mindset will help you uncover inefficiencies and can be applied across your organization. How you approach it is up to you, but I hope you’re able to use some of the tips and tricks above to create a sustainable Connected Recruiting practice in your organization. 

Happy testing.