
Data cleansing

Data cleansing (or data cleaning) is the process of identifying and fixing incorrect, incomplete, duplicate, unneeded, or invalid data in a data set.

What is data cleansing?

Data cleansing (also called data cleaning or data scrubbing) involves identifying and fixing incorrect, incomplete, duplicate, unneeded, or otherwise erroneous data in a data set. This normally takes place during the data preparation phase, and ensures that data is accurate and actionable when shared, analyzed, and reused.

It is separate from data transformation, in which data is enriched with additional information and datasets, such as by adding geographical information. Data cleansing normally occurs before data transformation in the data preparation phase.

Data cleansing can be carried out manually, by updating and correcting records one by one, or with data cleansing tools that automate all or part of the process.
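As a rough illustration of the automated side of that process, here is a minimal sketch using Python's pandas library; the records, column names and correction rules are hypothetical:

```python
import pandas as pd

# Hypothetical customer records containing common quality issues
df = pd.DataFrame({
    "name": ["Ada Lovelace", "ada lovelace ", "Grace Hopper", None],
    "email": ["ada@example.com", "ada@example.com",
              "grace@example.com", "grace@example.com"],
})

# Automated fixes: trim whitespace, normalize casing, drop exact duplicates
df["name"] = df["name"].str.strip().str.title()
df = df.drop_duplicates()

# Records that cannot be fixed automatically are flagged for manual review
needs_review = df[df["name"].isna()]
print(f"{len(needs_review)} record(s) flagged for manual correction")
```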

The terms data cleansing, data cleaning and data scrubbing are normally used interchangeably. However, in some cases, data scrubbing refers solely to removing (rather than identifying) duplicate, bad, unneeded or old data from data sets.

Why is data cleansing important?

Data cleansing is fundamental to ensuring high data quality, so that information is accurate, consistent and can be used with confidence across the organization and beyond. Without effective data cleansing, business decisions may rely on inaccurate data, preventing organizations from becoming data-driven. As the saying goes when it comes to data: “garbage in, garbage out.”

What are the benefits of data cleansing?

With data central to business operations and to transparency across wider society, ensuring that data is accurate and actionable is vital. Data cleansing therefore provides 5 specific benefits:

  • Better decision-making. Being able to make faster, better-informed decisions is essential for business success. If data is not cleansed, errors that undermine the accuracy of decision-making can creep in. This is a particular issue when data is used with AI algorithms without human oversight.
  • Greater confidence and trust. High quality data is at the heart of data democratization. Employees and citizens need to trust that the data they are accessing is accurate, otherwise they simply will not use it.
  • Time savings. Cleansing data at the preparation stage ensures it is accurate when shared and used. This saves time and resources, as data is fixed once, at source.
  • Greater productivity. Higher quality data means that employees are able to focus on decision-making, rather than looking for mistakes in their datasets, increasing their productivity.
  • Reduced data costs. Cleansing data removes duplicates and inaccurate records, shrinking storage requirements and leading to faster processing times for datasets.

What are the characteristics of clean data?

Data quality can be measured using these characteristics:

  • Accuracy. Is the data correctly representing what it is measuring?
  • Consistency. Is data consistent across different datasets? For example, is a customer address the same between CRM and billing systems?
  • Validity. Does the data meet set parameters or rules? For example, is a telephone number in the correct format?
  • Completeness. Are there gaps in the data and can this be fixed with data from other sources?
  • Uniformity. Has the data been collected and represented using the same units and scales? For example, are measurements in inches and feet or meters and centimeters?

Data teams will use data quality metrics to measure these characteristics within datasets, as well as calculating overall error rates in datasets.
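As an illustration of what such metrics can look like in practice, the sketch below computes simple completeness and validity rates with pandas; the columns and the phone-number format rule are assumptions for the example:

```python
import re
import pandas as pd

df = pd.DataFrame({
    "customer": ["Acme", "Globex", None, "Initech"],
    "phone": ["+1-555-0100", "555 0101", "+1-555-0102", "n/a"],
})

# Completeness: share of non-missing values per column
completeness = df.notna().mean()

# Validity: share of phone numbers matching an assumed format rule
pattern = re.compile(r"^\+\d{1,3}-\d{3}-\d{4}$")
valid_phone = df["phone"].dropna().map(lambda p: bool(pattern.match(p))).mean()

print(completeness)
print(f"Valid phone numbers: {valid_phone:.0%}")
```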

What types of errors does data cleansing fix?

Examples of common errors that can be discovered and fixed within the data cleansing process include the following (the sketch after this list shows how several of these checks can be automated):

  • Missing or invalid data – spotting gaps in fields or data that is in the wrong format (such as a numerical value in a text field).
  • Typos – misspellings or other typographical errors.
  • Inconsistencies – common fields (such as addresses or names) that are formatted or described differently between datasets.
  • Duplicates – multiple records relating to the same thing (such as a customer). This often occurs when different datasets are merged.
  • Irrelevant data – data that is not needed by the organization. For example, a municipality may import a state-wide dataset, but only want to use data related to itself.
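A minimal sketch of how several of these checks might look in pandas (the dataset, fields and filter rule are all hypothetical):

```python
import pandas as pd

orders = pd.DataFrame({
    "customer": ["Acme Corp", "acme corp", "Globex", None],
    "state": ["CA", "CA", "NY", "TX"],
    "amount": [100.0, 100.0, 250.0, 75.0],
})

# Missing data: rows with an empty required field
missing = orders[orders["customer"].isna()]

# Inconsistencies: normalize a shared field so records can be compared
orders["customer"] = orders["customer"].str.strip().str.title()

# Duplicates: multiple records describing the same thing
duplicates = orders[orders.duplicated(subset=["customer", "amount"], keep=False)]

# Irrelevant data: keep only the records the organization needs (assumed: CA)
relevant = orders[orders["state"] == "CA"]

print(f"{len(missing)} missing, {len(duplicates)} duplicated, {len(relevant)} relevant")
```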

How does the data cleansing process work?

While the process will vary depending on the organization, the tools used and the data itself, it normally covers these 5 steps:

1. Data auditing to inspect data and identify anomalies and issues, which are then dealt with in the order below

2. The removal of duplicate or irrelevant data/records

3. Fixing structural errors, such as inconsistencies between fields

4. Handling any missing pieces of data, such as by comparing with other data sources

5. Verification to check that all errors have been removed and that the data meets internal data quality standards

Depending on the size and complexity of datasets, the data cleansing process will use a combination of automated tools and manual, human oversight and input.
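To make those steps concrete, here is a minimal sketch of such a pipeline in pandas; the audit checks, the assumed 'id' and 'city' columns, the reference dataset used to fill gaps and the final quality standard are all assumptions for illustration:

```python
import pandas as pd

def cleanse(df: pd.DataFrame, reference: pd.DataFrame) -> pd.DataFrame:
    # 1. Audit: inspect the data and count anomalies before changing anything
    print(f"Audit: {int(df.isna().sum().sum())} missing values, "
          f"{int(df.duplicated().sum())} duplicate rows")

    # 2. Remove duplicate records
    df = df.drop_duplicates()

    # 3. Fix structural errors, e.g. inconsistent formatting in an assumed 'city' field
    df["city"] = df["city"].str.strip().str.title()

    # 4. Handle missing data by filling gaps from another (assumed) source
    df = df.set_index("id").combine_first(reference.set_index("id")).reset_index()

    # 5. Verify the result against an internal quality standard
    assert df.notna().all().all(), "Data still fails the quality standard"
    return df
```

In practice, step 4 might instead impute values or route the affected records for manual review, depending on the dataset and the quality standard in force.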
