
14 Ways Brands Can Put One-To-One Marketing Back To Work


Of the various types of behavioral data, transaction data is the most reliable, as it records the exchange of hard currency for chosen products and services. There really is no stronger indicator of product affinity than repeat purchases.

But there are other types of behavioral data. Online marketers routinely employ click-stream data, tracking opens, click-throughs, browsing, shopping-basket additions, check-outs, etc. These may not be as powerful as actual transaction records, but they certainly provide good directional guidance. In fact, the desire to utilize such click-stream data ignited the Big Data movement, as data gets really big really fast when you collect “every move they make.”


Before we dive into such behavioral data, though, let’s remember that opens and clicks are “responses” to stimuli that marketers have created. If we really want to understand what elements would properly nudge our customers in the right direction, we must figure out both the “action” (what marketers did) and the “reaction” (what recipients did in response) sides of the equation.

Indeed, one of the most important questions that all marketers must answer is “What element really worked?” If we don’t understand what led to sales, we may end up repeating the same old marketing tricks from the last century. It may sound harsh, but you’re not a database marketer if you don’t try to measure the effectiveness of everything that you tried.

However, I encounter too many companies where “nothing” is properly classified.

Getting organized

Discovering thousands of lines of “free-form” campaign descriptions is very common, regardless of the size and sophistication level of the organization. And such uncategorized metadata won’t be useful for marketing analytics, no matter how much data has been collected thus far.

So, in the interest of conducting proper response analyses for future targeting and personalization efforts, allow me to share the list of elements marketers must consistently maintain (all before diving into customer response data).

Let’s break down “what marketers did” into the following 14 elements.


Channel

What channel and/or media (email, DM, social media, affiliate program, etc.) did you employ? Classify and record specific ad servers and publishers as necessary.

Campaign type

Promotions, welcome series, reminders, PR communications, etc.—if more specific classification is required, add “campaign category” (as a subcategory) to distinguish product promotions, surveys, thank you letters, announcements, etc.

Featured product

What product and service were pushed? You don’t need SKU level details here, but try to maintain consistent product categories. If you have diverse product lines, create multi-level categories to separate divisions and business units.

Offer type

What did you offer? Without diving into gory details, try to maintain 10-20 major categories of offers, such as $ discount, % discount, free gift, free shipping, cash reward, no-payment-until, buy-one-get-one-free, lowest price guarantee, etc.

Actual offer

If it was a reward program, for example, how much was the reward? Put down “$50 reward”, “20% discount”, “$25 off”, description of gifts, etc.

Offer condition

Some offers come with conditions. I’ve seen uncontrollably long lists of offer types, as “offer type” and “condition” were combined in a single column. Consistently separate out conditions such as limited time offer, 24-hour flash sale, with purchase over $100, available at selected location/site, etc.


Seasonality

Wherever applicable, record seasonal promotions, such as Black Friday, Christmas, New Year, Presidents’ Day, spring, Labor Day, 4th of July, Memorial Day, back-to-school, Super Bowl, etc.

Campaign wave/drop

If a campaign was dropped in waves or in a series, capture the serial count (e.g., drop two of six).

Drop date/time

Try to capture the local time of the recipients as much as possible, as daypart becomes an important parameter in analytics later.

Creative version

If there are multiple creative versions used in a campaign, keep short descriptions or codes for each.

Test version

If A/B testing is conducted in a campaign, clearly maintain the A/B version code.

Data source

Where did you obtain the campaign list? What were the data sources? This is very important, as data cost isn’t negligible, especially for DM lists.


Segment

If distinct segments or model groups were used, maintain the segment names and/or descriptions. Obviously, winning combinations should be employed again in the future.

Selection rules

How did you select targets for the final contact list? By rules or by model scores? If by model groups, how deep did you go?
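To make the exercise concrete, the 14 elements above could be captured as one structured record per campaign. Here is a minimal sketch; the field names, types, and example values are illustrative assumptions, not a prescribed standard:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Illustrative schema for the campaign-metadata elements discussed above.
# Field names and types are assumptions for this sketch, not a standard.
@dataclass
class CampaignRecord:
    channel: str                              # e.g., "Email", "DM", "Social"
    campaign_type: str                        # e.g., "Promotion", "Welcome series"
    campaign_category: Optional[str] = None   # optional subcategory
    featured_product: Optional[str] = None    # consistent category, not SKU
    offer_type: Optional[str] = None          # e.g., "% discount", "Free shipping"
    actual_offer: Optional[str] = None        # e.g., "20% discount", "$50 reward"
    offer_condition: Optional[str] = None     # e.g., "With purchase over $100"
    seasonality: Optional[str] = None         # e.g., "Black Friday"
    wave: Optional[str] = None                # e.g., "2 of 6"
    drop_datetime: Optional[datetime] = None  # recipient's local time if possible
    creative_version: Optional[str] = None    # short code per creative
    test_version: Optional[str] = None        # A/B cell code
    data_source: Optional[str] = None         # where the contact list came from
    segment: Optional[str] = None             # segment/model group name
    selection_rule: Optional[str] = None      # rules used, or model-score depth
```

Keeping each element in its own column (rather than mashed into one free-form description) is what makes later response analysis by channel, offer type, or season a simple group-by instead of a text-mining project.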

While this list may seem overwhelming, I guarantee that campaign analysis will go to the next level with this exercise. Besides, you don’t have to tick off every bullet point on this checklist.

In the beginning, the uncategorized campaign list may run to a few thousand lines (I’ve seen worse). In such a case, just cover the past 12-month history to get it moving. If you categorize 100 campaigns a day, it will take two work weeks to finish 1,000.
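One way to speed up that backlog is a simple keyword-rule pass over the free-form descriptions, leaving only the leftovers for manual review. A hypothetical sketch, where the rules and category names are made up for illustration:

```python
# Hypothetical first-pass categorizer for free-form campaign descriptions.
# Keyword rules and category names are illustrative assumptions only.
RULES = {
    "offer_type": {
        "% discount": ["% off", "percent off"],
        "$ discount": ["$ off", "dollars off"],
        "free shipping": ["free shipping"],
        "free gift": ["free gift", "gift with purchase"],
    },
    "seasonality": {
        "Black Friday": ["black friday"],
        "Back-to-school": ["back to school", "back-to-school"],
    },
}

def categorize(description: str) -> dict:
    """Map one free-form description to {element: category} matches."""
    text = description.lower()
    tags = {}
    for element, categories in RULES.items():
        for category, keywords in categories.items():
            if any(kw in text for kw in keywords):
                tags[element] = category
                break  # first matching category wins per element
    return tags
```

For example, a line like “Black Friday blast - 20% off sitewide” would pick up both an offer type and a seasonality tag in one pass; anything the rules miss simply comes back with empty or partial tags and goes to the manual queue.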

Once you are all caught up, just do the right thing during the campaign planning stage, and don’t drop the ball. You won’t regret it when you get to have clear answers about what worked.

Credit: Ad Week
