When the internet and the mass adoption of desktop computers arrived in the late ’90s, things were supposed to change. And they have. Better access to customer and market information should have led to smarter management decisions across the board, and faster access to that information was supposed to let companies adjust production and inventory on the fly.
But if either of those things had been true, then in theory the 2008 crash shouldn’t have happened, and we wouldn’t be in a situation where U.S. consumer confidence is slipping and almost half of consumers feel the economy is still in a recession. Why didn’t our incredible access to technology and consumer data help us avoid these results? It’s not a simple problem with a simple answer, but one cause was highlighted recently by Vic Crain via the MRA blog: bad data.
Technology gives decision makers the nearly limitless access to data they crave. But if the data collected is wrong, the decisions based on it will be too. So where does bad data come from?
- Lazy data entry
- Programming errors
- Managers and salespeople who fudge or hide numbers to meet monthly or quarterly targets
- Flawed statistical analysis (for example, segmentation)
- Wrong customer data
- Bad customer satisfaction and market research data
- Changes in government data collection
Bad research data itself has several common sources: (a) asking the wrong questions, (b) response bias, unqualified respondents, and fakery, and (c) biased data collectors who really don’t want to collect negative information.
Government adds to the problem. Public health inspectors who call ahead to make appointments for “surprise inspections” add to the problem. Eliminating data from the U.S. Census reduces the accuracy of the information businesses rely on.
Blind acceptance of data as fact is a problem. Sophisticated statistical models of business operations or consumer demand built on bad data will be of questionable reliability. “The Smell Test” — a subjective tool based on experience and judgment — remains one of the best tools for assessing the quality of data and of the models built on it.