Data cleansing
Data cleansing, data cleaning, or data scrubbing is the process of detecting and correcting (or removing) corrupt or inaccurate records from a record set, table, or database. Used mainly in databases, the term refers to identifying incomplete, incorrect, inaccurate, or irrelevant parts of the data and then replacing, modifying, or deleting this dirty data.

After cleansing, a data set will be consistent with other similar data sets in the system. The inconsistencies detected or removed may have been originally caused by user entry errors, by corruption in transmission or storage, or by different data dictionary definitions of similar entities in different stores.

Data cleansing differs from data validation in that validation almost invariably means data is rejected from the system at entry and is performed at entry time, rather than on batches of data.

The actual process of data cleansing may involve removing typographical errors or validating and correcting values against a known list of entities. The validation may be strict (such as rejecting any address that does not have a valid postal code) or fuzzy (such as correcting records that partially match existing, known records).
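
As an illustration of the difference between the two approaches (not part of the original article), the following Python sketch uses a hypothetical five-digit postal-code rule for the strict check and the standard library's difflib for the fuzzy match; the reference city list and the similarity cutoff are assumptions.

  import re
  import difflib

  # Hypothetical list of known, trusted city names used for fuzzy correction.
  KNOWN_CITIES = ["Springfield", "Shelbyville", "Capital City"]

  def strict_postal_check(code):
      # Strict validation: accept only exactly five digits, reject everything else.
      return re.fullmatch(r"\d{5}", code) is not None

  def fuzzy_city_correct(city, cutoff=0.8):
      # Fuzzy correction: return the closest known city name, or None if nothing
      # is similar enough to the input.
      matches = difflib.get_close_matches(city, KNOWN_CITIES, n=1, cutoff=cutoff)
      return matches[0] if matches else None

  print(strict_postal_check("90210"))       # True  -> record accepted
  print(strict_postal_check("9021O"))       # False -> record rejected outright
  print(fuzzy_city_correct("Springfeild"))  # "Springfield" -> corrected by partial match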

Motivation

Administratively, incorrect or inconsistent data can lead to false conclusions and misdirected investments on both public and private scales. For instance, the government may want to analyze population census figures to decide which regions require further spending and investment on infrastructure and services. In this case, it will be important to have access to reliable data to avoid erroneous fiscal decisions.

In the business world, incorrect data can be costly. Many companies use customer information databases that record data like contact information, addresses, and preferences. For instance, if the addresses are inconsistent, the company will suffer the cost of resending mail or even losing customers.

Data quality

High-quality data needs to pass a set of quality criteria. Those include:
  • Accuracy: an aggregated value over the criteria of integrity, consistency, and density
  • Integrity: an aggregated value over the criteria of completeness and validity
  • Completeness: achieved by correcting data containing anomalies
  • Validity: approximated by the amount of data satisfying integrity constraints
  • Consistency: concerns contradictions and syntactical anomalies
  • Uniformity: directly related to irregularities; the data should comply with the specified units of measure
  • Density: the quotient of missing values in the data and the number of total values that ought to be known (a way of computing this is sketched after this list)
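
The density and completeness criteria above lend themselves to a direct computation. The following is a minimal sketch (an illustration, not part of the original article), assuming a small record set in which None marks a value that ought to be known but is missing.

  # Hypothetical records; None marks a missing value that ought to be known.
  records = [
      {"name": "Alice", "postal_code": "90210", "phone": None},
      {"name": "Bob",   "postal_code": None,    "phone": "555-0100"},
      {"name": None,    "postal_code": "10001", "phone": "555-0101"},
  ]

  total_values = sum(len(r) for r in records)                           # 9 values ought to be known
  missing_values = sum(v is None for r in records for v in r.values())  # 3 of them are missing

  density = missing_values / total_values   # the quotient described above: 3/9
  completeness = 1 - density                # share of values actually present

  print(f"density = {density:.2f}, completeness = {completeness:.2f}")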

The process of data cleansing

  • Data auditing: The data is audited with the use of statistical methods to detect anomalies and contradictions. This eventually gives an indication of the characteristics of the anomalies and their locations.

  • Workflow specification: The detection and removal of anomalies is performed by a sequence of operations on the data known as the workflow. It is specified after the process of auditing the data and is crucial in achieving the end product of high-quality data. In order to achieve a proper workflow, the causes of the anomalies and errors in the data have to be closely considered. For instance, if we find that an anomaly is a result of typing errors in data input stages, the layout of the keyboard can suggest possible corrections.

  • Workflow execution: In this stage, the workflow is executed after its specification is complete and its correctness is verified. The implementation of the workflow should be efficient, even on large sets of data, which inevitably poses a trade-off because the execution of a data-cleansing operation can be computationally expensive.

  • Post-processing and controlling: After executing the cleansing workflow, the results are inspected to verify correctness. Data that could not be corrected during execution of the workflow is manually corrected, if possible. The result is a new cycle in the data-cleansing process where the data is audited again to allow the specification of an additional workflow to further cleanse the data by automatic processing. A minimal sketch of these four stages follows this list.
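
The following Python sketch (an illustration, not part of the original article) strings the four stages together for a single hypothetical anomaly; the record set, the two-standard-deviation audit rule, and the decision to drop flagged records are all assumptions.

  from statistics import mean, stdev

  # Hypothetical record set: ages captured from a data-entry form;
  # 290 is presumably a typing error.
  records = [{"age": 34}, {"age": 29}, {"age": 41}, {"age": 38},
             {"age": 27}, {"age": 33}, {"age": 290}]

  def audit(recs):
      # Data auditing: flag records more than two standard deviations from the mean.
      ages = [r["age"] for r in recs]
      mu, sigma = mean(ages), stdev(ages)
      return [i for i, a in enumerate(ages) if abs(a - mu) > 2 * sigma]

  def specify_workflow(anomalies):
      # Workflow specification: choose operations based on the audit findings.
      # Here we assume the anomalies stem from typing errors and simply drop them.
      return [("drop", i) for i in anomalies]

  def execute_workflow(recs, workflow):
      # Workflow execution: apply the specified operations to the data.
      to_drop = {i for op, i in workflow if op == "drop"}
      return [r for i, r in enumerate(recs) if i not in to_drop]

  # Post-processing and controlling: re-audit the cleansed data; a non-empty
  # result would trigger another cleansing cycle.
  cleaned = execute_workflow(records, specify_workflow(audit(records)))
  print(cleaned)          # the record with age 290 has been removed
  print(audit(cleaned))   # [] -> no further cycle needed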

Popular methods used

  • Parsing: Parsing in data cleansing is performed for the detection of syntax errors. A parser decides whether a string of data is acceptable within the allowed data specification. This is similar to the way a parser works with grammars and languages.

  • Data transformation: Data transformation allows the mapping of the data from its given format into the format expected by the appropriate application. This includes value conversions or translation functions, as well as normalizing numeric values to conform to minimum and maximum values.

  • Duplicate elimination: Duplicate detection requires an algorithm for determining whether data contains duplicate representations of the same entity. Usually, data is sorted by a key that would bring duplicate entries closer together for faster identification.

  • Statistical methods: By analyzing the data using the values of mean, standard deviation, range, or clustering algorithms, it is possible for an expert to find values that are unexpected and thus erroneous. Although the correction of such data is difficult since the true value is not known, it can be resolved by setting the values to an average or other statistical value. Statistical methods can also be used to handle missing values, which can be replaced by one or more plausible values, usually obtained by extensive data augmentation algorithms. Minimal sketches of several of these methods follow this list.
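
The following Python sketch (an illustration, not part of the original article) shows minimal versions of the parsing, data transformation, and duplicate elimination methods; statistical outlier detection is already sketched in the workflow example above. The phone-number rule, the date formats, and the sort key are all assumptions.

  import re
  from datetime import datetime

  # Parsing: a small "parser" that decides whether a phone-number string
  # conforms to a hypothetical NNN-NNNN specification.
  def parse_phone(value):
      return re.fullmatch(r"\d{3}-\d{4}", value) is not None

  # Data transformation: map dates from a given format (DD/MM/YYYY) into the
  # format expected by the target application (ISO 8601).
  def transform_date(value):
      return datetime.strptime(value, "%d/%m/%Y").strftime("%Y-%m-%d")

  # Duplicate elimination: sort by a key so that duplicate representations of
  # the same entity end up next to each other, then keep the first of each run.
  def deduplicate(records, key):
      ordered = sorted(records, key=key)
      result = []
      for rec in ordered:
          if not result or key(rec) != key(result[-1]):
              result.append(rec)
      return result

  print(parse_phone("555-0100"))       # True  -> syntactically valid
  print(parse_phone("5550100"))        # False -> syntax error detected
  print(transform_date("31/01/2011"))  # "2011-01-31"

  people = [{"name": "Ann Lee"}, {"name": "ann lee"}, {"name": "Bo Chu"}]
  print(deduplicate(people, key=lambda r: r["name"].lower()))
  # [{'name': 'Ann Lee'}, {'name': 'Bo Chu'}] -> case-insensitive duplicate collapsed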

Existing tools

Before computer automation, data about individuals or organizations was maintained and secured as paper records, dispersed in separate business or organizational units. Information systems concentrate data in computer files that can potentially be accessed by large numbers of people and by groups outside the organization.

Criticism of existing tools and processes

Data-quality and data-cleansing initiatives are essential to improving overall operational and IT effectiveness. However, many such efforts stall before they really get off the ground.

The main reasons cited are:
  • Project costs: costs typically in the hundreds of thousands of dollars
  • Time: lack of enough time to deal with large-scale data-cleansing software
  • Security: concerns over sharing information, giving an application access across systems, and effects on legacy systems

Challenges and problems

  • Error correction and loss of information: The most challenging problem within data cleansing remains the correction of values to remove duplicates and invalid entries. In many cases, the available information on such anomalies is limited and insufficient to determine the necessary transformations or corrections, leaving the deletion of such entries as the only plausible solution. The deletion of data, though, leads to loss of information; this loss can be particularly costly if there is a large amount of deleted data.

  • Maintenance of cleansed data: Data cleansing is an expensive and time-consuming process. So after having performed data cleansing and achieved an error-free data collection, one would want to avoid re-cleansing the data in its entirety after some values in the collection change. The process should only be repeated on values that have changed; this means that a cleansing lineage would need to be kept, which would require efficient data collection and management techniques (a minimal sketch of such a lineage appears after this list).

  • Data cleansing in virtually integrated environments: In virtually integrated sources like IBM's DiscoveryLink, the cleansing of data has to be performed every time the data is accessed, which considerably decreases the response time and efficiency.

  • Data-cleansing framework: In many cases, it will not be possible to derive a complete data-cleansing graph to guide the process in advance. This makes data cleansing an iterative process involving significant exploration and interaction, which may require a framework in the form of a collection of methods for error detection and elimination in addition to data auditing. This can be integrated with other data-processing stages like integration and maintenance.
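
As a rough illustration of the cleansing-lineage idea mentioned above (not part of the original article), the following Python sketch re-cleanses only records whose values have changed since the last run; the fingerprinting scheme, the record layout, and the whitespace-trimming cleansing step are all assumptions.

  import hashlib
  import json

  def record_hash(record):
      # Stable fingerprint of a record's current values.
      return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

  def incremental_cleanse(records, lineage, cleanse_one):
      # Re-run the cleansing function only on records whose fingerprint differs
      # from the one stored in the lineage at the previous run.
      for key, record in records.items():
          if lineage.get(key) != record_hash(record):
              records[key] = cleanse_one(record)
              lineage[key] = record_hash(records[key])
      return records, lineage

  # Hypothetical per-record cleansing step: trim stray whitespace from strings.
  def cleanse_one(record):
      return {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}

  records = {1: {"name": " Alice "}, 2: {"name": "Bob"}}
  lineage = {}
  records, lineage = incremental_cleanse(records, lineage, cleanse_one)  # cleanses both records
  records[2]["name"] = "Bob  "                                           # only record 2 changes
  records, lineage = incremental_cleanse(records, lineage, cleanse_one)  # re-cleanses only record 2
  print(records)  # {1: {'name': 'Alice'}, 2: {'name': 'Bob'}}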

See also

  • Extract, transform, load (ETL)
  • Data mining
  • Data quality
  • Data quality assurance
  • Record linkage
