We have all been contacted by an organisation asking us to update our information, answering a “million and one” questions.
But consider visiting a company’s website only to find its contact information is incorrect. Now imagine an organisation needs to contact you urgently about a change that will affect you, but the address, telephone numbers and e-mail addresses it has for you are incorrect or no longer relevant.
Over the years, with the speed of technology advancements and increased regulatory requirements, the data captured on customers has increased significantly. Hence the mandate to know your customer (KYC).
Then vs now
In the 1990s, supplying just your name, address, and home and work telephone numbers was sufficient and allowed for easy contact.
Legacy systems used to capture customers’ details may not have grown as fast as the need to know your customer, or may not be dynamic enough to keep pace. As a result, customer service representatives (CSRs) entered as much customer data as possible in whatever space was available – even when the data wasn’t relevant to the field in which it was placed.
This is how the term “dirty data” arose with reference to databases. “Dirty data” can contain errors such as spelling or punctuation mistakes, data entered in the wrong field, incomplete or outdated records, or even duplicated entries.
With every generation of CSRs and of data captured, changes – whether driven by regulations or by the population as a whole – make the interpretation of “dirty” data even more difficult. It creates non-value-added work that is time-consuming and expensive. According to Harvard Business Review, “Even a very low overall error rate of three per cent adds nearly 30 per cent non-value-added costs. Numbers such as these make clear that the best way to reduce costs may well be to improve data quality.”
With the rising importance of data and the realisation of the costs of poor data quality, organisations have begun to implement data governance. Governance reduces the intangible costs of lost trust and lost opportunities. Once enforced, it also reduces the direct and indirect costs of “dirty data” – the allowances built into the business-as-usual activities of decision-makers, team leads, knowledge workers, data stewards and other team members to accommodate this non-value-added work.
A 2011 Gartner study revealed:
• Poor data quality is a primary reason for 40 per cent of all business initiatives failing to achieve their targeted benefits.
• Data quality affects overall labour productivity by as much as 20 per cent.
A survey performed by the Data Warehouse Institute produced similar results, with respondents claiming that poor data quality has led to lost revenue (54 per cent), extra costs (72 per cent), and decreases in customer satisfaction (67 per cent).
The use of data may take the simple form of knowing how many customers an organisation has, or of reviewing customer transactions to analyse business decisions.
Data-enabled processes reduce inefficiencies, assist in resource allocation, measure performance and support research and development. This holds for new products and services, new locations or expansion, and even for choosing channels of communication and personalised outreach.
It stands to reason that “good” data is essential for “good” information, eventually making for valuable business decisions.
Organisations must develop a culture that makes data quality a high priority. To realise the true value of “good” information, an organisation must treat its data as an asset and enforce data governance.
The TT Chamber of Industry and Commerce thanks the Unit Trust Corporation, a platinum signature event sponsor, for contributing this article.