NEDMA Conference Presentation: B2B Lead Generation

At the 2010 NEDMA conference, EMI presented on taking an iterative approach to testing in order to optimize response in a B2B lead generation environment. We used case studies to illustrate two approaches to testing and then refining campaigns through learning, optimization, and subsequent rollouts.

Case Study 1 covered a series of product promotion campaigns, spanning email and direct mail channels, over the course of two years. Each campaign was optimized based on key learnings from preceding campaigns in areas such as audience targeting, positioning, response channels, and incentives. We discussed what we learned from both success and failure, and how, over time, we developed a knowledge base that allows us to target our audience more effectively and efficiently with messages and creative approaches that drive response. The highlight of this case study was the 80% cost per lead (CPL) reduction from the 2008 to the 2009 direct mail campaigns and the identification of the key drivers of email campaign performance: list selection and a clear, simple call to action.

Case Study 2 covered a true 4-cell test. We tested two approaches to messaging (broad versus niche) against two different creative formats (letter package versus self-mailer). Our measure for evaluation was CPL, and a clear winner emerged with a CPL 51% lower than the overall average. Interestingly, the winning cell paired the broader positioning with the format that was more expensive on a per-piece basis. We then reprinted the winning approach and dropped a larger quantity to a larger audience. That rollout produced a 1.8% response rate and a CPL 27% lower than the test CPL.
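To make the CPL arithmetic concrete, here is a minimal sketch of how a 4-cell test might be scored. The costs and lead counts below are made up for illustration; they are not the actual campaign figures.

    # Hypothetical 4-cell test: two messages (broad/niche) x two formats.
    # Costs and lead counts are illustrative, not the actual campaign data.
    cells = {
        ("broad", "letter"):      {"cost": 12000, "leads": 60},
        ("broad", "self-mailer"): {"cost": 9000,  "leads": 30},
        ("niche", "letter"):      {"cost": 12000, "leads": 35},
        ("niche", "self-mailer"): {"cost": 9000,  "leads": 25},
    }

    for data in cells.values():
        data["cpl"] = data["cost"] / data["leads"]  # cost per lead

    overall_cpl = (sum(d["cost"] for d in cells.values())
                   / sum(d["leads"] for d in cells.values()))
    winner = min(cells, key=lambda c: cells[c]["cpl"])

    print(f"Overall CPL: ${overall_cpl:.2f}")
    print(f"Winner: {winner}, CPL ${cells[winner]['cpl']:.2f} "
          f"({1 - cells[winner]['cpl'] / overall_cpl:.0%} below average)")

Evaluating every cell against the blended average, rather than only head to head, is what makes the rollout decision defensible: the winner has to beat the portfolio, not just its neighbor.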

At the end of the presentation, we took questions from an audience of industry professionals, and we want to share a couple that raised important considerations.

Q.  Do you ever find a piece of content that works so well you keep going back to it?

A.   Yes, though there are some restrictions to this approach. The content has to be “evergreen”, meaning that the topic stays relevant even if market conditions change. Even with content that fits this requirement, the audience overlap among publication email lists and direct mail lists means you eventually hit a saturation point where response inevitably begins to decline. When this happens, you need to retire that go-to campaign and replace it with something new.

Q.  How do you determine list quality when you’re deciding which publication lists to rent?

A.   Testing. While some lists can be ruled out based on available demographic information or the publication’s reputation, in our experience list quality and price do not always go hand in hand. We have gotten very strong response and high-quality leads from inexpensive lists, and very poor response from some of the most expensive lists we’ve used. Each list’s audience also tends to prefer certain types of content over others, so testing lists with a variety of content is important for uncovering those preferences.

Iterate to success in B2B lead generation

Note: EMI will be presenting on this theme at the 2010 NEDMA conference.

There may be a world in which direct marketing lead generation budgets are expansive and resources are readily available, but it’s not the world in which most B2B direct marketers live. In the B2B world, significant constraints on budget, time, and resources limit the opportunity to conduct lead generation campaign tests with a multitude of cells. Yet testing must be conducted for B2B direct marketing to realize its full value with respect to lead generation.

To make that happen, tests must be structured as an iterative process of continuous improvement rather than as discrete campaigns. The iterative process, simply put, is one in which smaller-scope tests are run on an ongoing basis with high frequency. Each test delivers learnings that can be applied to the next test to ensure continuous refinement.
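In code, that loop might look something like the following sketch: each round is a small champion-versus-challenger test, and the winner carries forward as the control. The variants and the simulated CPL results are hypothetical stand-ins for real campaign drops and analysis.

    import random

    # Each round is one small, focused test; its winner becomes the
    # control (champion) for the next round. Variants and CPLs are
    # simulated stand-ins for real campaign results.
    variants = ["list A + offer 1", "list A + offer 2",
                "list B + offer 1", "list B + offer 2"]
    control = variants.pop(0)

    while variants:
        challenger = variants.pop(0)
        results = {v: random.uniform(50, 400) for v in (control, challenger)}
        winner = min(results, key=results.get)
        print(f"{control} vs. {challenger}: "
              f"{winner} wins at ${results[winner]:.2f} CPL")
        control = winner  # the learning carried into the next test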

To be successful, an iterative testing process requires discipline and patience. Specifically, it demands adherence to the following concepts:

  • Good testing = Focused testing. If you don’t have the resources, time, audience quantity, or budget to launch a 25-cell test, you need to keep your tests focused. Focus means limiting the variables being tested, so that you can accurately attribute results to a specific test variable, and testing only things that can be leveraged in future campaigns.
  • Data don’t tell lies (but we have to be committed to listening). In iterative testing, you must be open to finding insights in places you weren’t expecting to find them. Rigorous analysis of results across a variety of segmentation schema will often lead to the discovery of the needle in the haystack—a discovery that will save you the cost of searching for the needle later.
  • Optimization is a journey. You must accept the fact that you may never get to the point where you have all—or even most—of the answers. By the time you iterate through most of the relevant testing parameters, the audience and message elements may have shifted enough that you need to start testing all over again.

These three pillars of iterative lead generation campaign testing ensure that the approach produces both improved knowledge and increased lead volume. Without these pillars and the iterative approach, your lead generation structure is likely to crumble.

Use of Incentives: Proceed with Caution

Effective use of incentives like coffee cards and gas cards requires both an understanding of the strategic context and a feel for customer behavior. A simple high/moderate/low assessment of a few key variables will provide a clear picture of whether a campaign incentive is appropriate and, if so, how large it should be.

The three most important variables (and their assessment scale) when weighing the value of incentives are:

  • The strategic value of the action to the company (a “high” assessment supports incentive use)
  • The perceived benefit of the action to the respondent (“low” supports incentive use)
  • The barrier(s) to desired action (“high” supports incentive use)

For example, a campaign designed to drive responses to a web-based market research survey has:

  • Moderate strategic value since it’s several steps removed from revenue generation
  • Low perceived benefit to the respondent
  • A high barrier to action assuming the survey is more than a few questions long

Together, these ratings point to this being a solid use of incentives.

On the other hand, a poor application of an incentive would be using one simply to drive someone to visit a website or respond to an email: the strategic value is low (and providing an incentive trains the customer/prospect to respond to incentives rather than content), the perceived benefit to the respondent is moderate, and the barrier to action is low.
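As a rough illustration, the rubric can be encoded as a simple scoring function. The point values and variable names here are a hypothetical encoding of the assessment above, not a formula from the presentation.

    # Direction of each rating that supports incentive use, per the
    # rubric above. The numeric scoring scheme is a hypothetical encoding.
    SUPPORTS = {
        "strategic_value": "high",    # high value to the company
        "perceived_benefit": "low",   # low benefit to the respondent
        "barrier_to_action": "high",  # high barrier to the desired action
    }
    SCALE = {"high": 1.0, "moderate": 0.5, "low": 0.0}

    def incentive_score(ratings):
        """Score from 0 to 3; higher = stronger case for an incentive."""
        total = 0.0
        for variable, supporting in SUPPORTS.items():
            value = SCALE[ratings[variable]]
            total += value if supporting == "high" else 1.0 - value
        return total

    # Survey example: moderate value, low benefit, high barrier -> 2.5
    print(incentive_score({"strategic_value": "moderate",
                           "perceived_benefit": "low",
                           "barrier_to_action": "high"}))
    # Website-visit example: low value, moderate benefit, low barrier -> 0.5
    print(incentive_score({"strategic_value": "low",
                           "perceived_benefit": "moderate",
                           "barrier_to_action": "low"}))

Under this encoding, the survey scores 2.5 of 3 (a solid case for an incentive) while the website visit scores 0.5 (a weak one), matching the qualitative assessments above.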