One of the earliest database applications was the modeling of ZIP code data for use in customer acquisition promotions, direct mail or telemarketing. So it’s a little surprising to still hear direct marketers complain that their ZIP code models validated well but didn’t hold up after repeated use.
There are a number of potential reasons why the modeling effort failed:
- The modeler forgot to weight each observation – each ZIP code promoted – by the number of pieces mailed or the number of calls completed.
- The modeler used dollars per name mailed as the dependent variable, rather than building separate models for response and back-end performance.
- The modeler “cherry picked” individual rows of the frequency distributions that make up the census data associated with a ZIP code, as opposed to modeling the entire distribution.
- The modeler failed to build historical response and performance indices at the ZIP, SCF or commercial cluster level.
- The models were not applied on a list-by-list basis, where seasonality is taken into account.
Each of these items could cause a ZIP code model to fail. When two or more of these factors come into play, it’s easy for a ZIP code model to “stop working,” assuming it ever started to work in the first place. Let’s go through these items one at a time.
- Not weighting for the number of names mailed or called. This is just a simple mistake. It should be obvious that a ZIP code that receives 10,000 pieces of mail and has a 2% response rate should in some way count for more than a ZIP that receives only 1,000 pieces. The recommended weighting scheme is to multiply the number of pieces mailed by the response rate and then by the non-response rate. The formula to remember is N*p*q, where N stands for the number of pieces mailed, p is the response rate and q is equal to 1-p.
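The N*p*q formula above can be sketched in a few lines; the function name and the figures are illustrative:

```python
# Weight each ZIP by N*p*q, where N is the number of pieces mailed,
# p is the response rate and q = 1 - p. Heavily mailed ZIPs thus
# count for more than thinly mailed ones at the same response rate.
def npq_weight(pieces_mailed, response_rate):
    p = response_rate
    q = 1.0 - p
    return pieces_mailed * p * q

# The ZIP mailed 10,000 pieces at a 2% response rate carries ten
# times the weight of the ZIP mailed 1,000 pieces at the same rate.
big = npq_weight(10_000, 0.02)   # 10,000 * 0.02 * 0.98 = 196.0
small = npq_weight(1_000, 0.02)  #  1,000 * 0.02 * 0.98 =  19.6
```

This is the familiar binomial variance weight, which is why it stabilizes regressions run across ZIPs with very different mail quantities.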
- Not using separate models for response and performance. One could argue that there is nothing theoretically wrong with modeling dollars per name mailed or called. But it’s been our experience that the modeling exercise produces more information and more strategic insights if response and performance are modeled separately. Then, if you wish to calculate dollars per name mailed, you can do so by multiplying the ZIP code’s expected response rate by a measure of the ZIP’s expected revenue per responder – be it sales, payments or contributions.
The intuitive reason for separate models is that a variable such as income may be negatively correlated with response and positively correlated with performance. If dollars per name mailed is chosen as the dependent variable, income’s impact may be missing from the model because of the potential canceling-out effect.
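The two-model combination described above reduces to a one-line product; the function name and the sample figures are hypothetical:

```python
# Combine the outputs of two separate models -- an expected response
# rate and an expected revenue per responder -- into dollars per
# name mailed, rather than modeling dollars per name directly.
def dollars_per_name(expected_response_rate, expected_revenue_per_responder):
    return expected_response_rate * expected_revenue_per_responder

# e.g., a ZIP with a 2% expected response rate and $80 expected
# revenue per responder is worth $1.60 per name mailed.
value = dollars_per_name(0.02, 80.0)
```

Because income can push the two factors in opposite directions, modeling them separately preserves information that a single dollars-per-name model would cancel out.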
- Not modeling complete distributions of census data. Census data comes to us in the form of frequency distributions. For example, income may be expressed as a frequency distribution with as many as 18 rows of data. The percent of the population earning between $24,000 and $35,000 may be one such row.
Clearly, two ZIP codes might include the same percentage of the population earning that amount, yet be complete opposites: in one ZIP most residents earn less than $24,000, while in the other most earn more than $35,000.
To eliminate the possibility of this happening and to build stronger, more stable models, we recommend using principal components analysis to model each ZIP code’s distribution against the distribution of the average ZIP in the population being promoted.
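A minimal sketch of this approach, assuming a small illustrative matrix in which each row is a ZIP and each column is an income bracket’s share of the population (the data and bracket counts are hypothetical):

```python
import numpy as np

def pca_scores(dist_matrix, n_components=2):
    """Center each ZIP's distribution on the average ZIP's
    distribution, then project onto the leading principal
    components (computed via SVD). The scores, not the raw
    rows, become the modeling variables."""
    X = np.asarray(dist_matrix, dtype=float)
    centered = X - X.mean(axis=0)          # deviation from the average ZIP
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T  # one score vector per ZIP

# Four hypothetical ZIPs over three income brackets (shares sum to 1).
# The first two share the same middle-bracket share but are opposites,
# so their first-component scores land on opposite sides of zero.
dists = [
    [0.70, 0.20, 0.10],   # mostly under $24,000
    [0.10, 0.20, 0.70],   # mostly over $35,000
    [0.30, 0.40, 0.30],
    [0.25, 0.35, 0.40],
]
scores = pca_scores(dists, n_components=1)
```

The point is that the component scores summarize the whole distribution, so two ZIPs with one matching row can no longer masquerade as similar.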
- Not using historical indices. For companies that have a great deal of promotion history, it’s useful to create historical response and performance indices calculated at the ZIP code level and summarized at the sectional center facility (SCF) level. Indices summarized at a commercial cluster level, such as Prizm or MicroVision, are even better. (Each ZIP is associated with a cluster level, so it’s easy to map ZIPs to clusters.)
In our experience, this strategy has produced some important and stable variables. Sometimes the ZIP code indices won’t work on their own because coverage is too thin, but the SCF and cluster segmentations usually will.
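Rolling ZIP-level history up to the SCF level can be sketched as follows, assuming (as is standard) that the first three digits of a ZIP identify its sectional center facility; the index here is a segment’s response rate relative to the overall rate, so 1.0 means average:

```python
from collections import defaultdict

def scf_response_index(history):
    """history: iterable of (zip_code, pieces_mailed, responses).
    Returns {scf: response index}, where the index is the SCF's
    response rate divided by the overall response rate."""
    mailed = defaultdict(int)
    responded = defaultdict(int)
    total_mailed = total_resp = 0
    for zip_code, n, r in history:
        scf = zip_code[:3]          # SCF = first three ZIP digits
        mailed[scf] += n
        responded[scf] += r
        total_mailed += n
        total_resp += r
    overall = total_resp / total_mailed
    return {scf: (responded[scf] / mailed[scf]) / overall for scf in mailed}

# Hypothetical promotion history: two Manhattan ZIPs and one Chicago ZIP.
history = [("10001", 5000, 120), ("10002", 3000, 60), ("60601", 4000, 40)]
idx = scf_response_index(history)   # SCF "100" indexes above 1, "606" below
```

The same roll-up applies at the cluster level: replace `zip_code[:3]` with a lookup from ZIP to its Prizm or MicroVision cluster.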
- Not implementing list-by-list. The worst mistake of all is to forget to implement on a list-by-list basis. This means the first step in the implementation or rollout process is to estimate the expected response rate and performance for each list segment considered for inclusion in the promotion. After this is done, response and performance models can be applied to each list.
What you’ll find is that some lists – your very best ones – can still be mailed in their entirety; other good lists will drop 10% to 30% of the names that would otherwise be mailed or called; and the top 10% to 30% of your marginal lists – those that might not otherwise be used at all – can now be tried.
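The list-by-list logic above can be sketched as a simple selection rule; the function name, the breakeven comparison and the exact cut points are illustrative assumptions, not a prescription:

```python
# A sketch of list-by-list rollout: estimate each list's expected
# response rate first, then decide what share of its model-scored
# names to mail. Thresholds below are hypothetical.
def names_to_mail(list_scores, expected_response_rate, breakeven_rate):
    """list_scores: model scores for one list's names.
    Returns the scores of the names worth mailing, best first."""
    ranked = sorted(list_scores, reverse=True)
    if expected_response_rate >= 1.5 * breakeven_rate:
        keep = len(ranked)                 # best lists: mail in full
    elif expected_response_rate >= breakeven_rate:
        keep = int(len(ranked) * 0.8)      # good lists: drop the bottom 20%
    else:
        keep = int(len(ranked) * 0.2)      # marginal lists: try the top 20%
    return ranked[:keep]
```

Run against real scores, the strong lists mail whole, the good lists shed their weakest names, and the marginal lists contribute only their best segments.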