Live From NCDM: Tales of Database Buffoonery

Posted on by Chief Marketer Staff

How many data miners does it take to build a model? More than you would think, judging by the stories told by Sam Koslowsky.

Koslowsky, the vice president of Modeling Solutions for Harte-Hanks CRM Analytics, recounted several career-destroying database blunders. “Of course, I was never involved in any of these errors,” he said, speaking at the National Center for Database Marketing conference in Orlando.

First there was the credit card company that planned to offer air flight insurance to travelers. It created a model predicting who was most likely to respond. And it worked. The person who built the model should have gotten “a pat on the back and maybe a raise,” Koslowsky said.

But there was one small problem: There would be no insurance sales unless customers actually flew. “A lot of people responded but didn’t fly,” Koslowsky said. “They did a good job of predicting response, and a not-so-great job predicting spending. They didn’t think it through.”

Then there was the firm that modeled people likely to respond to an electronics catalog. It found them, but the top responders returned 29% of all their purchases. Koslowsky’s grim conclusion? “They should have built a model on net response. But they built it on gross response. It was a faulty objective.”

Want more? Koslowsky told the story of the Northeastern utility that wanted to sell ancillary services to new customer segments. “This I did work on,” he said with pride.

It was a great success. Eight clusters were identified, but the utility “had no money to take it to the next step, so it was a theoretical exercise,” Koslowsky continued.

Then there was the telephone company that wanted to identify future defectors. It built a model, using behavior from the 2002-03 timeframe, and used it to predict dropouts in 2004.

But that’s not quite how it turned out. Instead, the data miners did a great job of IDing people who had already left.

They not only based their results on the new period, they relied on “a flag that used attrition to predict attrition,” he said.

The lesson? “Freeze your file. Make sure the predictions come from the right time period.”
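The “freeze your file” lesson is, in modeling terms, a warning about target leakage: if any feature is derived from the outcome window itself, the model simply rediscovers the outcome. A minimal sketch of the distinction, using a toy pandas frame (all column names and values here are illustrative assumptions, not from the actual telephone company's data):

```python
import pandas as pd

# Toy customer file: behavior frozen at the end of the 2002-03 window,
# with attrition observed later, in 2004.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "calls_2002_03": [120, 5, 60, 200],  # behavior from the frozen window
    "left_in_2004": [0, 1, 0, 1],        # outcome from the later window
})

# WRONG: a feature computed from the outcome period itself --
# "a flag that used attrition to predict attrition". Any model trained
# on this will score perfectly while predicting nothing about the future.
leaky = customers.assign(already_left_flag=customers["left_in_2004"])

# RIGHT: features come only from the frozen 2002-03 snapshot;
# the 2004 outcome is used only as the training label.
features = customers[["calls_2002_03"]]
labels = customers["left_in_2004"]
```

The point of the split is purely temporal: everything on the feature side must have been knowable before the prediction window opened.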
