Note: EMI will be presenting on this theme at the 2010 NEDMA conference.
There may be a world in which direct marketing lead generation budgets are expansive and resources are readily available, but it’s not the world in which most B2B direct marketers live. In the B2B world, there are significant constraints—in the form of budget, time, and resource limitations—on the opportunity to conduct lead generation campaign tests with a multitude of cells. Yet testing is essential if B2B direct marketing is to realize its full lead generation value.
To make that happen, tests must be structured as an iterative process of continuous improvement rather than as discrete campaigns. The iterative process, simply put, is one in which smaller-scope tests are run on an ongoing basis with high frequency. Each test delivers learnings that can be applied to the next test to ensure continuous refinement.
To be successful, an iterative testing process requires discipline and patience. Specifically, it demands adherence to the following concepts:
- Good testing = Focused testing. If you don’t have the resources, time, audience quantity, or budget to launch a 25-cell test, you need to keep your tests focused. Focus means two things: limiting the variables being tested so that results can be accurately ascribed to a single test criterion, and testing only elements that can be leveraged in future campaigns.
- Data don’t tell lies (but we have to be committed to listening). In iterative testing, you must be open to finding insights in places you weren’t expecting to find them. Rigorous analysis of results across a variety of segmentation schemes will often lead to the discovery of the needle in the haystack—a discovery that will save you the cost of searching for the needle later.
- Optimization is a journey. You must accept the fact that you may never get to the point where you have all—or even most—of the answers. By the time you iterate through most of the relevant testing parameters, the audience and message elements may have shifted enough that you need to start testing all over again.
These three pillars of iterative lead generation campaign testing ensure that the approach produces both improved knowledge and increased lead volume. Without these pillars and the iterative approach, your lead generation structure is likely to crumble.
Effective use of incentives like coffee cards and gas cards requires both an understanding of the strategic context and a feel for customer behavior. A simple assessment (high/moderate/low) of key variables will provide a clear picture of both the applicability of a campaign incentive and its appropriate magnitude.
The three most important variables (and their assessment scale) when weighing the value of incentives are:
- The strategic value of the action to the company (a “high” assessment supports incentive use)
- The perceived benefit of the action to the respondent (“low” supports incentive use)
- The barrier(s) to desired action (“high” supports incentive use)
For example, compelling responses to a web-based market research survey have:
- Moderate strategic value since it’s several steps removed from revenue generation
- Low perceived benefit to the respondent
- A high barrier to action assuming the survey is more than a few questions long
Together, these ratings point to this being a solid use of incentives.
On the other hand, a poor application of an incentive would be driving someone simply to visit a website or respond to an email: the strategic value of the action is low, the perceived benefit to the respondent is moderate, and the barrier to action is low. Worse, offering an incentive here trains the customer/prospect to respond to incentives rather than to content.
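The rubric above can be sketched as a simple scoring function. This is an illustrative translation of the article’s high/moderate/low assessment, not a formula the author provides; the numeric scale and threshold interpretation are assumptions.

```python
# Illustrative sketch (assumed, not from the article): scoring incentive
# applicability from the three variables on the high/moderate/low scale.
RATING = {"low": 0, "moderate": 1, "high": 2}

def incentive_score(strategic_value, perceived_benefit, barrier):
    """Higher score = stronger case for offering an incentive.

    Per the rubric: high strategic value supports incentive use, LOW perceived
    benefit to the respondent supports it, and a high barrier supports it.
    """
    return (RATING[strategic_value]
            + (2 - RATING[perceived_benefit])  # inverted: low benefit -> high score
            + RATING[barrier])

# The web survey example (moderate / low / high) scores near the top of the
# 0-6 range, while the site-visit example (low / moderate / low) scores low.
survey_case = incentive_score("moderate", "low", "high")   # 1 + 2 + 2 = 5
visit_case = incentive_score("low", "moderate", "low")     # 0 + 1 + 0 = 1
print(survey_case, visit_case)
```

The point of the inversion on perceived benefit is worth noting: an incentive compensates the respondent for value they are not otherwise receiving, so the less inherent benefit the action offers them, the stronger the case for sweetening it.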
The increased attention on marketing ROI and the customer experience has produced an increased interest in the development and use of dashboards in the marketing/CRM environment. The article “Dashboards: No Longer a Luxury” from 1-to-1 Magazine clearly points that out. The fact that everyone wants dashboards is positive, since measurement drives more informed—and therefore better—decisions, but the reality is that few succeed in creating dashboards that are truly valuable management tools. The pitfalls of most dashboards are that they measure too much, creating information overload, and/or they measure the wrong things.
To be effective, dashboards should comprise no more than five or six measurements, which should be a blend of results metrics (e.g., sales, leads) and operational performance metrics (e.g., call resolution statistics, outbound call volume).
- Results measurement should focus on the two or three numbers that will provide a “snapshot” of the health of the business/functional area. The numbers should be able to provide either an early warning of issues requiring intervention or reassurance that things are on track.
- Operational measurements should focus on quantifying the operational activities that drive positive results. For example, if you can correlate customer training attendance to customer satisfaction and repeat business, you should be monitoring training attendance rates.
Dashboards can and should be a valuable tool for management to understand at a glance the state of the business and the progress towards goals, but only if they focus on the right measurements.