Email Re-Engagement Strategy #2: The Definition of Insanity

A recent EMI blog post discussed the growing importance of email engagement and the role of preferences in re-engaging customers. In this post, we reference the famous definition of insanity: doing the same thing over and over while expecting different results. If you have a segment of email recipients who are not responding to your emails, why would you continue to send them the same emails on the same day and at the same time and expect them to respond? While content and frequency preferences may well re-engage some of the non-responders, it is important to try new emailing approaches to see whether the standard delivery method itself is responsible for the non-response.

For example, if the “typical” mass-deployed email prominently features images and/or other graphics, it would be worth trying an email without images. Image-rich emails can be flagged as spam by corporate mail servers or by email applications (e.g., Outlook) and routed to Junk Mail folders, never to be seen again. Additionally, some recipients may be discouraged from interacting because they rely on smartphones to check their email, and graphic-heavy emails often don’t render as well on a mobile device as they do on a desktop machine.
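For teams that deploy email programmatically, one way to test a lighter creative is a multipart message that leads with a plain-text part, so clients that block or mangle image-heavy HTML still show something readable. Below is a minimal sketch using Python's standard library; the addresses and copy are purely illustrative:

    from email.message import EmailMessage

    # Build a multipart/alternative message: spam filters and mobile
    # clients that handle image-heavy HTML poorly can fall back to the
    # plain-text part. All addresses and copy here are placeholders.
    msg = EmailMessage()
    msg["Subject"] = "Would a different format work better for you?"
    msg["From"] = "news@example.com"
    msg["To"] = "recipient@example.com"

    # Plain-text body first: what text-only and many mobile clients render.
    msg.set_content(
        "Hi,\n\nWe noticed you haven't opened our recent updates.\n"
        "Would a shorter, text-only digest work better for you?\n"
    )

    # Lightweight HTML alternative with no images or heavy graphics.
    msg.add_alternative(
        "<p>Hi,</p><p>We noticed you haven't opened our recent updates. "
        "Would a shorter, text-only digest work better for you?</p>",
        subtype="html",
    )

    # Hand msg off to your existing SMTP server or ESP for delivery.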

Similarly, if you maintain a best-practice-driven schedule of deploying emails during working hours on Tuesday through Thursday, you may succeed in reaching some non-responders by testing different deployment days and times. In some industries, reaching people before the business day starts gives them the few extra seconds of email viewing that you need to capture their attention. For some target markets—especially SOHOs (small office/home offices), in which the potential buyer is wearing many hats all week—sending on Fridays or even on the weekend increases the chances of response.
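As a sketch of how such a deployment-time test might be organized, the snippet below randomly assigns non-responders to alternative send windows; the cells and addresses are hypothetical:

    import random

    # Hypothetical send windows to test against the usual
    # Tuesday-Thursday, business-hours deployment.
    SEND_WINDOWS = [
        ("Tuesday", "10:00"),   # control: current best-practice slot
        ("Tuesday", "06:30"),   # before the business day starts
        ("Friday", "10:00"),    # late-week slot for SOHO audiences
        ("Saturday", "09:00"),  # weekend slot
    ]

    def assign_send_windows(non_responders, seed=42):
        """Randomly split non-responders across the test cells."""
        rng = random.Random(seed)
        return {email: rng.choice(SEND_WINDOWS) for email in non_responders}

    # Example with dummy addresses:
    print(assign_send_windows(["a@example.com", "b@example.com"]))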

As with any marketing initiative, success depends on intersecting with your target audience at a moment when it is receptive to your message. By testing new creative and deployment times, you create more vectors for intersection with previously unresponsive segments.

Email Re-Engagement Strategy #1: Selectively Explicit and Implicit Preferences

In a recent survey of business executives, increasing subscriber engagement was the most frequently cited top priority—ahead of segmentation and social media integration. The focus on subscriber engagement has been rising over the last several years, driven by the growing role engagement plays in email deliverability and by the recognition that one has to work harder to cut through an increasingly crowded inbox to reach the target audience. If recipients aren’t reading your emails, they’re not getting your message. Moreover, your emailing reputation will suffer and fewer of your emails will reach their intended inboxes.

Most “best practice” discussions around this topic advocate strongly for asking recipients to define their email preferences—the kinds of topics they’re interested in and the frequency with which they’d like to receive emails. Though this should absolutely be part of the email engagement approach, the reality is that for many B2B companies with small email lists, the decision to give people who are currently receiving emails (albeit not reading them) the option to refuse certain emails is an extremely difficult one to make. A compromise approach is one in which only those most at risk of eternal inactivity are “invited” to define their preferences.

A complement to the explicit solicitation of preference definition is an approach that focuses on understanding what the recipient has responded to rather than on the fact that he/she hasn’t responded recently. For example, analyzing customer response data could reveal that a segment of “inactive” recipients used to respond with some frequency to a monthly newsletter; a reasonable hypothesis would be that they stopped paying attention to the newsletter because they couldn’t differentiate it from all the other emails they receive. In this case, testing the efficacy of sending them only the monthly newsletter would make sense. Likewise, looking back at how the contact got on the email list in the first place can yield some potential avenues for re-engagement: if they signed up to receive a whitepaper, it may be worth trying to limit them to only those emails offering a whitepaper download.
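As an illustration of this implicit-preference analysis, the sketch below (using pandas, with invented column and campaign names) flags inactive recipients whose historical opens were concentrated in the monthly newsletter, making them candidates for a newsletter-only test:

    import pandas as pd

    # Hypothetical response log: one row per recorded email open.
    log = pd.DataFrame({
        "email":    ["a@example.com", "a@example.com",
                     "b@example.com", "c@example.com"],
        "campaign": ["newsletter", "newsletter", "promo", "newsletter"],
        "opened_at": pd.to_datetime(
            ["2009-01-05", "2009-03-02", "2009-02-10", "2009-01-12"]),
    })

    cutoff = pd.Timestamp("2010-01-01")

    # Recipients with no opens after the cutoff count as inactive.
    last_open = log.groupby("email")["opened_at"].max()
    inactive = last_open[last_open < cutoff].index

    # Of those, keep the ones whose past opens were mostly the newsletter.
    hist = log[log["email"].isin(inactive)]
    newsletter_share = (
        hist["campaign"].eq("newsletter").groupby(hist["email"]).mean()
    )
    candidates = newsletter_share[newsletter_share >= 0.5].index.tolist()
    print(candidates)  # recipients to test with a newsletter-only cadence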

The point is clear: your non-responsive recipients need to be re-engaged. Asking them what they want (explicit preferences) and responding to what they have actually done (implicit preferences) are two good options for resetting the communications relationship and winning back their attention.

NEDMA Conference Presentation: B2B Lead Generation

At the 2010 NEDMA conference, EMI gave a presentation on taking an iterative approach to testing to optimize response in a B2B lead generation environment. We used case studies to illustrate two approaches to testing and then refining campaigns through learning, optimization, and subsequent rollouts.

Case Study 1 covered a series of product promotion campaigns, spanning email and direct mail channels, over the course of two years. Each campaign was optimized based on key learnings from preceding campaigns (e.g., audience targeting, positioning, response channels, and incentives). We discussed what we learned from both success and failure, and how, over time, we developed a knowledge base that allows us to more effectively and efficiently target our audience with messages and creative approaches that drive response. The highlight of this case study was the 80% reduction in cost per lead (CPL) from the 2008 to the 2009 direct mail campaigns, along with the identification of the key drivers of email campaign performance: list selection and a clear, simple call to action.

Case Study 2 covered a true 4-cell test. We tested two approaches to messaging (broad versus niche) with two different creative formats (letter package versus self mailer). Our measure for evaluation was CPL, and a clear winner emerged with a CPL 51% lower than the overall average. Interestingly, the winning cell paired the format that was more expensive on a per-piece basis with the broader positioning. We then reprinted and dropped a larger quantity of the winning approach to a larger audience. That effort resulted in a 1.8% response rate and a CPL 27% lower than the CPL for the original test.
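For reference, CPL is simply total campaign cost divided by the number of leads generated. A minimal sketch of scoring a 4-cell test on that basis (with invented costs and lead counts, not the actual case study figures) might look like this:

    # Hypothetical results for a 2x2 test: messaging (broad vs. niche)
    # crossed with format (letter package vs. self mailer).
    cells = {
        ("broad", "letter"):      {"cost": 6000.0, "leads": 48},
        ("broad", "self_mailer"): {"cost": 4000.0, "leads": 20},
        ("niche", "letter"):      {"cost": 6000.0, "leads": 24},
        ("niche", "self_mailer"): {"cost": 4000.0, "leads": 18},
    }

    # Cost per lead (CPL) = total cell cost / leads generated by the cell.
    cpl = {cell: r["cost"] / r["leads"] for cell, r in cells.items()}

    overall_cpl = (sum(r["cost"] for r in cells.values())
                   / sum(r["leads"] for r in cells.values()))

    winner = min(cpl, key=cpl.get)
    print(f"overall CPL: ${overall_cpl:.2f}")
    print(f"winner: {winner} at ${cpl[winner]:.2f} per lead")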

At the end of the presentation, we took questions from the audience, which consisted of a range of industry professionals, and we want to share a couple that raised important considerations.

Q.  Do you ever find a piece of content that works so well you keep going back to it?

A.   Yes. There are some restrictions to this approach, however. The content has to be “evergreen”, meaning that the topic remains relevant even if market conditions change. Even with content fitting this requirement, given the audience overlap among publication email lists and direct mail lists, you eventually hit a saturation point where response inevitably begins to decline. When this happens, you need to retire that go-to campaign and replace it with something new.

Q.  How do you determine list quality when you’re deciding which publication lists to rent?

A.   Testing. While some lists can be ruled out based on available demographic information or the reputation of the publication, in our experience list quality and price do not always go hand in hand. We have gotten very strong response and high-quality leads from inexpensive lists, and very poor response from some of the most expensive lists we’ve used. Each list’s audience also tends to prefer certain types of content over others, so testing lists with various pieces of content is important for uncovering those preferences.