Pythonic Data Cleaning With Pandas and NumPy
Data scientists spend a large amount of their time cleaning datasets and getting them down to a form with which they can work. In fact, a lot of data scientists argue that the initial steps of obtaining and cleaning data constitute 80% of the job.
Therefore, if you are just stepping into this field or planning to step into this field, it is important to be able to deal with messy data, whether that means missing values, inconsistent formatting, malformed records, or nonsensical outliers.
In this tutorial, we'll leverage Python's Pandas and NumPy libraries to clean data.
We'll cover the following:
Dropping unnecessary columns in a DataFrame
Changing the index of a DataFrame
Using .str() methods to clean columns
Using the DataFrame.applymap() function to clean the entire dataset, element-wise
Renaming columns to a more recognizable set of labels
Skipping unnecessary rows in a CSV file
Here are the datasets that we will be using:
BL-Flickr-Images-Book.csv: a CSV file containing information about books from the British Library
university_towns.txt: a text file containing names of college towns in every US state
olympics.csv: a CSV file summarizing the participation of all countries in the Summer and Winter Olympics
You can download the datasets from Real Python's GitHub repository in order to follow the examples here.
Note: I recommend using Jupyter Notebooks to follow along.
This tutorial assumes a basic understanding of the Pandas and NumPy libraries, including Pandas' workhorse Series and DataFrame objects, common methods that can be applied to these objects, and familiarity with NumPy's NaN values.
Let's import the required modules and get started!
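A minimal setup, assuming both libraries are installed:

>>> import pandas as pd
>>> import numpy as np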
Dropping Columns in a DataFrame
Often, you'll find that not all the categories of data in a dataset are useful to you. For example, you might have a dataset containing student information (name, grade, standard, parents' names, and address) but want to focus on analyzing student grades.
In this case, the address or parents' names categories are not important to you. Retaining these unneeded categories will take up unnecessary space and potentially also bog down runtime.
Pandas provides a handy way of removing unwanted columns or rows from a DataFrame with the drop() function. Let's look at a simple example where we drop a number of columns from a DataFrame.
First, let's create a DataFrame out of the CSV file 'BL-Flickr-Images-Book.csv'. In the examples below, we pass a relative path to pd.read_csv, meaning that all of the datasets are in a folder named Datasets in our current working directory:
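For example:

>>> df = pd.read_csv('Datasets/BL-Flickr-Images-Book.csv')
>>> df.head()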
When we look at the first five entries using the head() method, we can see that a handful of columns provide ancillary information that would be helpful to the library but isn't very descriptive of the books themselves: Edition Statement, Corporate Author, Corporate Contributors, Former owner, Engraver, Issuance type, and Shelfmarks.
We can drop these columns in the following way:
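One way to write this, using the column names listed above:

>>> to_drop = ['Edition Statement',
...            'Corporate Author',
...            'Corporate Contributors',
...            'Former owner',
...            'Engraver',
...            'Issuance type',
...            'Shelfmarks']
>>> df.drop(to_drop, inplace=True, axis=1)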
Above, we defined a list that contains the names of all the columns we want to drop. Next, we call the drop() function on our object, passing in the inplace parameter as True and the axis parameter as 1. This tells Pandas that we want the changes to be made directly in our object and that it should look for the values to be dropped in the columns of the object.
When we inspect the DataFrame again, we'll see that the unwanted columns have been removed:
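For instance:

>>> df.head()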
Alternatively, we could also remove the columns by passing them to the columns parameter directly instead of separately specifying the labels to be removed and the axis where Pandas should look for the labels:
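Like so:

>>> df.drop(columns=to_drop, inplace=True)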
This syntax is more intuitive and readable. What we're trying to do here is directly apparent.
If you know in advance which columns you'd like to retain, another option is to pass them to the usecols argument of pd.read_csv.
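A sketch, with an illustrative subset of columns:

>>> df = pd.read_csv('Datasets/BL-Flickr-Images-Book.csv',
...                  usecols=['Identifier', 'Title', 'Author'])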
Changing the Index of a DataFrame
A Pandas Index extends the functionality of NumPy arrays to allow for more versatile slicing and labeling. In many cases, it is helpful to use a uniquely valued identifying field of the data as its index.
For example, in the dataset used in the previous section, it can be expected that when a librarian searches for a record, they may input the unique identifier (values in the Identifier column) for a book:
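We can first confirm that the field really is unique:

>>> df['Identifier'].is_unique
True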
Let's replace the existing index with this column using set_index:
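For example:

>>> df = df.set_index('Identifier')
>>> df.head()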
Technical Detail: Unlike primary keys in SQL, a Pandas Index doesn't make any guarantee of being unique, although many indexing and merging operations will notice a speedup in runtime if it is.
We can access each record in a straightforward way with loc[]. Although loc[] may not have all that intuitive of a name, it allows us to do label-based indexing, which is the labeling of a row or record without regard to its position:
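For example:

>>> df.loc[206]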
In other words, 206 is the first label of the index. To access it by position, we could use df.iloc[0], which does position-based indexing.
Technical Detail: .loc[] is technically a class instance and has some special syntax that doesn't conform exactly to most plain-vanilla Python instance methods.
Previously, our index was a RangeIndex: integers starting from 0, analogous to Python's built-in range. By passing a column name to set_index, we have changed the index to the values in Identifier.
You may have noticed that we reassigned the variable to the object returned by the method with df = df.set_index(...). This is because, by default, the method returns a modified copy of our object and does not make the changes directly to the object. We can avoid this by setting the inplace parameter:
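Along these lines:

>>> df.set_index('Identifier', inplace=True)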
Tidying up Fields in the Data
So far, we have removed unnecessary columns and changed the index of our DataFrame to something more sensible. In this section, we will clean specific columns and get them to a uniform format to get a better understanding of the dataset and enforce consistency. In particular, we will be cleaning Date of Publication and Place of Publication.
Upon inspection, all of the data types are currently the object dtype, which is roughly analogous to str in native Python.
It encapsulates any field that can't be neatly fit as numerical or categorical data. This makes sense since we're working with data that is initially a bunch of messy strings:
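One way to check, tallying the columns by dtype:

>>> df.dtypes.value_counts()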
One field where it makes sense to enforce a numeric value is the date of publication so that we can do calculations down the road:
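A peek at some of the raw values (the label bounds in this slice are illustrative):

>>> df.loc[1905:, 'Date of Publication'].head(10)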
A particular book can have only one date of publication. Therefore, we need to do the following:
Remove the extra dates in square brackets, wherever present: 1879 [1878]
Convert date ranges to their 'start date', wherever present: 1860-63; 1839, 38-54
Completely remove the dates we are not certain about and replace them with NumPy's NaN: [1897?]
Convert the string nan to NumPy's NaN value
Synthesizing these patterns, we can actually take advantage of a single regular expression to extract the publication year:
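Here is one such expression:

>>> regex = r'^(\d{4})'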
The regular expression above is meant to find any four digits at the beginning of a string, which suffices for our case. The above is a raw string (meaning that a backslash is no longer an escape character), which is standard practice with regular expressions.
The \d represents any digit, and {4} repeats this rule four times. The ^ character matches the start of a string, and the parentheses denote a capturing group, which signals to Pandas that we want to extract that part of the regex. (We want ^ to avoid cases where [ starts off the string.)
Let's see what happens when we run this regex across our dataset:
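For example:

>>> extr = df['Date of Publication'].str.extract(r'^(\d{4})', expand=False)
>>> extr.head()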
Not familiar with regex? You can inspect the expression above at regex101.com and read more at the Python Regular Expressions HOWTO.
Technically, this column still has object dtype, but we can easily get its numerical version with pd.to_numeric:
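For example:

>>> df['Date of Publication'] = pd.to_numeric(extr)
>>> df['Date of Publication'].dtype
dtype('float64')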
This results in about one in every ten values being missing, which is a small price to pay for now being able to do computations on the remaining valid values:
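We can quantify the missing share like so:

>>> df['Date of Publication'].isnull().sum() / len(df)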
Great! That's done!
Combining str Methods with NumPy to Clean Columns
Above, you may have noticed the use of df['Date of Publication'].str. This attribute is a way to access speedy string operations in Pandas that largely mimic operations on native Python strings or compiled regular expressions, such as .split(), .replace(), and .capitalize().
To clean the Place of Publication field, we can combine Pandas str methods with NumPy's np.where function, which is basically a vectorized form of Excel's IF() macro. It has the following syntax:
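Schematically (then and else here are placeholders, not runnable keywords):

np.where(condition, then, else)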
Here, condition is either an array-like object or a boolean mask. then is the value to be used if condition evaluates to True, and else is the value to be used otherwise.
Essentially, .where() takes each element in the object used for condition, checks whether that particular element evaluates to True in the context of the condition, and returns an ndarray containing then or else, depending on which applies.
It can be nested into a compound if-then statement, allowing us to compute values based on multiple conditions:
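Schematically, with placeholder conditions and values:

np.where(condition1, x1,
    np.where(condition2, x2,
        np.where(condition3, x3, x4)))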
We'll be making use of these two functions to clean Place of Publication since this column has string objects. Here are the contents of the column:
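For example:

>>> df['Place of Publication'].head(10)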
We see that for some rows, the place of publication is surrounded by other unnecessary information. If we were to look at more values, we would see that this is the case for only some rows that have their place of publication as 'London' or 'Oxford'.
Let's take a look at two specific entries:
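For instance (these two identifiers are illustrative picks from the index):

>>> df.loc[4157862]
>>> df.loc[4159587]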
These two books were published in the same place, but one has hyphens in the name of the place while the other does not.
To clean this column in one sweep, we can use str.contains() to get a boolean mask.
We clean the column as follows:
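First, the boolean masks (the variable names here are illustrative):

>>> pub = df['Place of Publication']
>>> london = pub.str.contains('London')
>>> oxford = pub.str.contains('Oxford')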
We combine them with np.where:
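Like so:

>>> df['Place of Publication'] = np.where(london, 'London',
...     np.where(oxford, 'Oxford',
...         pub.str.replace('-', ' ')))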
Here, the np.where function is called in a nested structure, with condition being a Series of booleans obtained with str.contains(). The contains() method works similarly to the built-in in keyword used to find the occurrence of an entity in an iterable (or substring in a string).
The replacement to be used is a string representing our desired place of publication. We also replace hyphens with a space with str.replace() and reassign to the column in our DataFrame.
Although there is more dirty data in this dataset, we will discuss only these two columns for now.
Let's have a look at the first five entries, which look a lot crisper than when we started out:
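For example:

>>> df.head()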
Note: At this point, Place of Publication would be a good candidate for conversion to a Categorical dtype, because we can encode the fairly small unique set of cities with integers. (The memory usage of a Categorical is proportional to the number of categories plus the length of the data; an object dtype is a constant times the length of the data.)
Cleaning the Entire Dataset Using the applymap Function
In certain situations, you will see that the 'dirt' is not localized to one column but is more spread out.
There are some instances where it would be helpful to apply a customized function to each cell or element of a DataFrame. Pandas' .applymap() method is similar to the built-in map() function and simply applies a function to all the elements in a DataFrame.
Let's look at an example. We will create a DataFrame out of the 'university_towns.txt' file:
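A quick peek at the first few lines of the file:

>>> with open('Datasets/university_towns.txt') as file:
...     for _ in range(5):
...         print(file.readline().strip())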
We see that we have periodic state names followed by the university towns in that state: StateA TownA1 TownA2 StateB TownB1 TownB2.... If we look at the way state names are written in the file, we'll see that all of them have the '[edit]' substring in them.
We can take advantage of this pattern by creating a list of (state, city) tuples and wrapping that list in a DataFrame:
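One way to build the list:

>>> university_towns = []
>>> with open('Datasets/university_towns.txt') as file:
...     for line in file:
...         if '[edit]' in line:
...             # Remember this state until the next one appears
...             state = line
...         else:
...             # Otherwise, it's a town; pair it with the last-seen state
...             university_towns.append((state, line))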
We can wrap this list in a DataFrame and set the columns as 'State' and 'RegionName'. Pandas will take each element in the list and set State to the left value and RegionName to the right value.
The resulting DataFrame looks like this:
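For example:

>>> towns_df = pd.DataFrame(university_towns,
...                         columns=['State', 'RegionName'])
>>> towns_df.head()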
While we could have cleaned these strings in the for loop above, Pandas makes it easy. We only need the state name and the town name and can remove everything else. While we could use Pandas' .str() methods again here, we could also use applymap() to map a Python callable to each element of the DataFrame.
We have been using the term element, but what exactly do we mean by it? Consider the following 'toy' DataFrame:
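Something like this (the values are arbitrary placeholders):

>>> pd.DataFrame({0: ['Mock', 'Python', 'Real', 'NumPy'],
...               1: ['Dataset', 'Pandas', 'Python', 'Clean']})
        0        1
0    Mock  Dataset
1  Python   Pandas
2    Real   Python
3   NumPy    Clean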
In this example, each cell ('Mock', 'Dataset', 'Python', 'Pandas', etc.) is an element. Therefore, applymap() will apply a function to each of these independently. Let's define that function:
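A sketch (the function name is illustrative):

>>> def get_citystate(item):
...     if ' (' in item:
...         return item[:item.find(' (')]
...     elif '[' in item:
...         return item[:item.find('[')]
...     else:
...         return item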
Pandas' .applymap() only takes one parameter, which is the function (callable) that should be applied to each element:
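Like so:

>>> towns_df = towns_df.applymap(get_citystate)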
First, we define a Python function that takes an element from the DataFrame as its parameter. Inside the function, checks are performed to determine whether there's a ( or [ in the element or not.
Depending on the check, values are returned accordingly by the function. Finally, the applymap() function is called on our object. Now the DataFrame is much neater:
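For instance:

>>> towns_df.head()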
The applymap() method took each element from the DataFrame, passed it to the function, and the original value was replaced by the returned value. It's that simple!
Technical Detail: While it is a convenient and versatile method, .applymap can have significant runtime for larger datasets, because it maps a Python callable to each individual element. In some cases, it can be more efficient to do vectorized operations that utilize Cython or NumPy (which, in turn, makes calls in C) under the hood.
Renaming Columns and Skipping Rows
Often, the datasets you'll work with will have either column names that are not easy to understand, or unimportant information in the first few and/or last rows, such as definitions of the terms in the dataset, or footnotes.
In that case, we'd want to rename columns and skip certain rows so that we can drill down to necessary information with correct and sensible labels.
To demonstrate how we can go about doing this, let's first take a glance at the initial five rows of the 'olympics.csv' dataset:
Now, we'll read it into a Pandas DataFrame:
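For example:

>>> olympics_df = pd.read_csv('Datasets/olympics.csv')
>>> olympics_df.head()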
This is messy indeed! The columns are the string form of integers indexed at 0. The row which should have been our header (i.e. the one to be used to set the column names) is at olympics_df.iloc[0]. This happened because our CSV file starts with 0, 1, 2, …, 15.
Also, if we were to go to the source of this dataset, we'd see that NaN above should really be something like 'Country', ? Summer is supposed to represent 'Summer Games', 01 ! should be 'Gold', and so on.
Therefore, we need to do two things:
Skip one row and set the header as the first (0-indexed) row
Rename the columns
We can skip rows and set the header while reading the CSV file by passing some parameters to the read_csv() function.
This function takes a lot of optional parameters, but in this case we only need one (header) to remove the 0th row:
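For example:

>>> olympics_df = pd.read_csv('Datasets/olympics.csv', header=1)
>>> olympics_df.head()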
We now have the correct row set as the header and all unnecessary rows removed. Take note of how Pandas has changed the name of the column containing the name of the countries from NaN to Unnamed: 0.
To rename the columns, we will make use of a DataFrame's rename() method, which allows you to relabel an axis based on a mapping (in this case, a dict).
Let's start by defining a dictionary that maps current column names (as keys) to more usable ones (the dictionary's values):
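A sketch of such a mapping, using the raw names discussed above (only a few entries shown; the remaining columns follow the same pattern):

>>> new_names = {'Unnamed: 0': 'Country',
...              '? Summer': 'Summer Games',
...              '01 !': 'Gold',
...              # ...and so on for the remaining columns
...              }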
We call the rename() function on our object:
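Like so:

>>> olympics_df.rename(columns=new_names, inplace=True)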
Setting inplace to True specifies that our changes be made directly to the object. Let's see if this checks out:
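For example:

>>> olympics_df.head()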
Python Data Cleaning: Recap and Resources
In this tutorial, you learned how you can drop unnecessary information from a dataset using the drop() function, as well as how to set an index for your dataset so that items in it can be referenced easily.
Moreover, you learned how to clean object fields with the .str() accessor and how to clean the entire dataset using the applymap() method. Lastly, we explored how to skip rows in a CSV file and rename columns using the rename() method.
Knowing about data cleaning is very important, because it is a big part of data science. You now have a basic understanding of how Pandas and NumPy can be leveraged to clean datasets!
Check out the links below to find additional resources that will help you on your Python data science journey:
The Pandas documentation
The NumPy documentation
Python for Data Analysis by Wes McKinney, the creator of Pandas
Pandas Cookbook by Ted Petrou, a data science trainer and consultant
Reference: https://realpython.com/python-data-cleaning-numpy-pandas/