This is a repository for short and sweet examples and links for useful pandas recipes. We encourage users to add to this documentation.
Adding interesting links and/or inline examples to this section is a great First Pull Request.
Simplified, condensed, new-user-friendly, in-line examples have been inserted where possible to augment the Stack Overflow and GitHub links. Many of the links contain expanded information, beyond what the in-line examples offer.
pandas (pd) and NumPy (np) are the only two abbreviated imported modules. The rest are kept explicitly imported for newer users.
These examples are written for Python 3. Minor tweaks might be necessary for earlier Python versions.
In [74]: cols = pd.MultiIndex.from_tuples([(x, y) for x in ['A', 'B', 'C']
....: for y in ['O', 'I']])
....:
In [75]: df = pd.DataFrame(np.random.randn(2, 6), index=['n', 'm'], columns=cols)
In [76]: df
Out[76]:
A B C
O I O I O I
n 0.469112 -0.282863 -1.509059 -1.135632 1.212112 -0.173215
m 0.119209 -1.044236 -0.861849 -2.104569 -0.494929 1.071804
In [77]: df = df.div(df['C'], level=1)
In [78]: df
Out[78]:
A B C
O I O I O I
n 0.387021 1.633022 -1.244983 6.556214 1.0 1.0
m -0.240860 -0.974279 1.741358 -1.963577 1.0 1.0
In [79]: coords = [('AA', 'one'), ('AA', 'six'), ('BB', 'one'), ('BB', 'two'),
....: ('BB', 'six')]
....:
In [80]: index = pd.MultiIndex.from_tuples(coords)
In [81]: df = pd.DataFrame([11, 22, 33, 44, 55], index, ['MyData'])
In [82]: df
Out[82]:
MyData
AA one 11
six 22
BB one 33
two 44
six 55
To take the cross section of the 1st level and 1st axis of the index:
# Note: level and axis are optional, and default to zero
In [83]: df.xs('BB', level=0, axis=0)
Out[83]:
MyData
one 33
two 44
six 55
…and now the 2nd level of the 1st axis.
In [84]: df.xs('six', level=1, axis=0)
Out[84]:
MyData
AA 22
BB 55
In [85]: import itertools
In [86]: index = list(itertools.product(['Ada', 'Quinn', 'Violet'],
....: ['Comp', 'Math', 'Sci']))
....:
In [87]: headr = list(itertools.product(['Exams', 'Labs'], ['I', 'II']))
In [88]: indx = pd.MultiIndex.from_tuples(index, names=['Student', 'Course'])
In [89]: cols = pd.MultiIndex.from_tuples(headr) # Notice these are unnamed
In [90]: data = [[70 + x + y + (x * y) % 3 for x in range(4)] for y in range(9)]
In [91]: df = pd.DataFrame(data, indx, cols)
In [92]: df
Out[92]:
Exams Labs
I II I II
Student Course
Ada Comp 70 71 72 73
Math 71 73 75 74
Sci 72 75 75 75
Quinn Comp 73 74 75 76
Math 74 76 78 77
Sci 75 78 78 78
Violet Comp 76 77 78 79
Math 77 79 81 80
Sci 78 81 81 81
In [93]: All = slice(None)
In [94]: df.loc['Violet']
Out[94]:
Exams Labs
I II I II
Course
Comp 76 77 78 79
Math 77 79 81 80
Sci 78 81 81 81
In [95]: df.loc[(All, 'Math'), All]
Out[95]:
Exams Labs
I II I II
Student Course
Ada Math 71 73 75 74
Quinn Math 74 76 78 77
Violet Math 77 79 81 80
In [96]: df.loc[(slice('Ada', 'Quinn'), 'Math'), All]
Out[96]:
Exams Labs
I II I II
Student Course
Ada Math 71 73 75 74
Quinn Math 74 76 78 77
In [97]: df.loc[(All, 'Math'), ('Exams')]
Out[97]:
I II
Student Course
Ada Math 71 73
Quinn Math 74 76
Violet Math 77 79
In [98]: df.loc[(All, 'Math'), (All, 'II')]
Out[98]:
Exams Labs
II II
Student Course
Ada Math 73 74
Quinn Math 76 77
Violet Math 79 80
Unlike agg, apply's callable is passed a sub-DataFrame, which gives you access to all the columns.
In [104]: df = pd.DataFrame({'animal': 'cat dog cat fish dog cat cat'.split(),
.....: 'size': list('SSMMMLL'),
.....: 'weight': [8, 10, 11, 1, 20, 12, 12],
.....: 'adult': [False] * 5 + [True] * 2})
.....:
In [105]: df
Out[105]:
animal size weight adult
0 cat S 8 False
1 dog S 10 False
2 cat M 11 False
3 fish M 1 False
4 dog M 20 False
5 cat L 12 True
6 cat L 12 True
# List the size of the animals with the highest weight.
In [106]: df.groupby('animal').apply(lambda subf: subf['size'][subf['weight'].idxmax()])
Out[106]:
animal
cat L
dog M
fish M
dtype: object
In [131]: df = pd.DataFrame({'Color': 'Red Red Red Blue'.split(),
.....: 'Value': [100, 150, 50, 50]})
.....:
In [132]: df
Out[132]:
Color Value
0 Red 100
1 Red 150
2 Red 50
3 Blue 50
In [133]: df['Counts'] = df.groupby(['Color']).transform(len)
In [134]: df
Out[134]:
Color Value Counts
0 Red 100 3
1 Red 150 3
2 Red 50 3
3 Blue 50 1
In [139]: df = pd.DataFrame({'host': ['other', 'other', 'that', 'this', 'this'],
.....: 'service': ['mail', 'web', 'mail', 'mail', 'web'],
.....: 'no': [1, 2, 1, 2, 1]}).set_index(['host', 'service'])
.....:
In [140]: mask = df.groupby(level=0).agg('idxmax')
In [141]: df_count = df.loc[mask['no']].reset_index()
In [142]: df_count
Out[142]:
host service no
0 other web 2
1 that mail 1
2 this mail 2
Create a list of DataFrames, split using a delineation based on logic included in rows (here, rows where Case is 'B' mark the split points).
In [146]: df = pd.DataFrame(data={'Case': ['A', 'A', 'A', 'B', 'A', 'A', 'B', 'A',
.....: 'A'],
.....: 'Data': np.random.randn(9)})
.....:
In [147]: dfs = list(zip(*df.groupby((1 * (df['Case'] == 'B')).cumsum()
.....: .rolling(window=3, min_periods=1).median())))[-1]
.....:
In [148]: dfs[0]
Out[148]:
Case Data
0 A 0.276232
1 A -1.087401
2 A -0.673690
3 B 0.113648
In [149]: dfs[1]
Out[149]:
Case Data
4 A -1.478427
5 A 0.524988
6 B 0.404705
In [150]: dfs[2]
Out[150]:
Case Data
7 A 0.577046
8 A -1.715002
In [159]: df = pd.DataFrame(data={'A': [[2, 4, 8, 16], [100, 200], [10, 20, 30]],
.....: 'B': [['a', 'b', 'c'], ['jj', 'kk'], ['ccc']]},
.....: index=['I', 'II', 'III'])
.....:
In [160]: def SeriesFromSubList(aList):
.....: return pd.Series(aList)
.....:
In [161]: df_orgz = pd.concat({ind: row.apply(SeriesFromSubList)
.....: for ind, row in df.iterrows()})
.....:
In [162]: df_orgz
Out[162]:
0 1 2 3
I A 2 4 8 16.0
B a b c NaN
II A 100 200 NaN NaN
B jj kk NaN NaN
III A 10 20 30 NaN
B ccc NaN NaN NaN
Reading a file that is compressed but not by gzip/bz2 (the native compressed formats that read_csv understands). This example shows a WinZipped file, but it is a general application of opening the file within a context manager and using that handle to read. See here
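As a minimal sketch of that pattern (the archive and member names here are hypothetical, and the archive is assumed to hold a single CSV), the standard-library zipfile module can supply the handle:
import zipfile

with zipfile.ZipFile('data.zip') as zf:
    # open the archived member and hand the file-like object to read_csv
    with zf.open('data.csv') as fh:
        df = pd.read_csv(fh)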
Reading multiple files to create a single DataFrame
The best way to combine multiple files into a single DataFrame is to read the individual frames one by one, put all of the individual frames into a list, and then combine the frames in the list using pd.concat():
In [189]: for i in range(3):
.....: data = pd.DataFrame(np.random.randn(10, 4))
.....: data.to_csv('file_{}.csv'.format(i))
.....:
In [190]: files = ['file_0.csv', 'file_1.csv', 'file_2.csv']
In [191]: result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
You can use the same approach to read all files matching a pattern. Here is an example using glob:
In [192]: import glob
In [193]: import os
In [194]: files = glob.glob('file_*.csv')
In [195]: result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
Finally, this strategy will work with the other pd.read_*(...) functions described in the io docs.
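For instance, assuming a couple of hypothetical JSON files, the same list-then-concat pattern carries over directly:
files = ['part_0.json', 'part_1.json']
result = pd.concat([pd.read_json(f) for f in files], ignore_index=True)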
Parsing date components in multi-columns
Parsing date components spread across multiple columns is faster when a format is specified:
In [196]: i = pd.date_range('20000101', periods=10000)
In [197]: df = pd.DataFrame({'year': i.year, 'month': i.month, 'day': i.day})
In [198]: df.head()
Out[198]:
year month day
0 2000 1 1
1 2000 1 2
2 2000 1 3
3 2000 1 4
4 2000 1 5
In [199]: %timeit pd.to_datetime(df.year * 10000 + df.month * 100 + df.day, format='%Y%m%d')
.....: ds = df.apply(lambda x: "%04d%02d%02d" % (x['year'],
.....: x['month'], x['day']), axis=1)
.....: ds.head()
.....: %timeit pd.to_datetime(ds)
.....:
9.66 ms +- 154 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
2.85 ms +- 98 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
De-duplicating a large store by chunks, essentially a recursive reduction operation. Shows a function for taking in data from a CSV file and creating a store by chunks, with date parsing as well. See here
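A minimal sketch of the chunked-ingest half of that recipe (the file name, store key, and date column are hypothetical): read the CSV in chunks, parse the date column, and append each chunk to a table in the store.
store = pd.HDFStore('big_store.h5')
for chunk in pd.read_csv('big_input.csv', chunksize=100000, parse_dates=['date']):
    # append each chunk to an appendable, queryable table
    store.append('data', chunk, data_columns=['date'])
store.close()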
In [206]: df = pd.DataFrame(np.random.randn(8, 3))
In [207]: store = pd.HDFStore('test.h5')
In [208]: store.put('df', df)
# you can store an arbitrary Python object via pickle
In [209]: store.get_storer('df').attrs.my_attribute = {'A': 10}
In [210]: store.get_storer('df').attrs.my_attribute
Out[210]: {'A': 10}
Binary files
pandas readily accepts NumPy record arrays, if you need to read in a binary file consisting of an array of C structs. For example, given this C program in a file called main.c compiled with gcc main.c -std=gnu99 on a 64-bit machine,
#include <stdio.h>
#include <stdint.h>
typedef struct _Data
{
int32_t count;
double avg;
float scale;
} Data;
int main(int argc, const char *argv[])
{
size_t n = 10;
Data d[n];
for (int i = 0; i < n; ++i)
{
d[i].count = i;
d[i].avg = i + 1.0;
d[i].scale = (float) i + 2.0f;
}
FILE *file = fopen("binary.dat", "wb");
fwrite(&d, sizeof(Data), n, file);
fclose(file);
return 0;
}
the following Python code will read the binary file 'binary.dat' into a pandas DataFrame, where each element of the struct corresponds to a column in the frame:
names = 'count', 'avg', 'scale'
# note that the offsets are larger than the size of the type because of
# struct padding
offsets = 0, 8, 16
formats = 'i4', 'f8', 'f4'
dt = np.dtype({'names': names, 'offsets': offsets, 'formats': formats},
align=True)
df = pd.DataFrame(np.fromfile('binary.dat', dt))
Note
The offsets of the structure elements may be different depending on the architecture of the machine on which the file was created. Using a raw binary file format like this for general data storage is not recommended, as it is not cross-platform. We recommend either HDF5 or parquet, both of which are supported by pandas' IO facilities.
Often it's useful to obtain the lower (or upper) triangular form of a correlation matrix calculated from DataFrame.corr(). This can be achieved by passing a boolean mask to where as follows:
In [211]: df = pd.DataFrame(np.random.random(size=(100, 5)))
In [212]: corr_mat = df.corr()
In [213]: mask = np.tril(np.ones_like(corr_mat, dtype=bool), k=-1)
In [214]: corr_mat.where(mask)
Out[214]:
0 1 2 3 4
0 NaN NaN NaN NaN NaN
1 -0.018923 NaN NaN NaN NaN
2 -0.076296 -0.012464 NaN NaN NaN
3 -0.169941 -0.289416 0.076462 NaN NaN
4 0.064326 0.018759 -0.084140 -0.079859 NaN
The method argument within DataFrame.corr can accept a callable in addition to the named correlation types. Here we compute the distance correlation matrix for a DataFrame object.
In [215]: def distcorr(x, y):
.....: n = len(x)
.....: a = np.zeros(shape=(n, n))
.....: b = np.zeros(shape=(n, n))
.....: for i in range(n):
.....: for j in range(i + 1, n):
.....: a[i, j] = abs(x[i] - x[j])
.....: b[i, j] = abs(y[i] - y[j])
.....: a += a.T
.....: b += b.T
.....: a_bar = np.vstack([np.nanmean(a, axis=0)] * n)
.....: b_bar = np.vstack([np.nanmean(b, axis=0)] * n)
.....: A = a - a_bar - a_bar.T + np.full(shape=(n, n), fill_value=a_bar.mean())
.....: B = b - b_bar - b_bar.T + np.full(shape=(n, n), fill_value=b_bar.mean())
.....: cov_ab = np.sqrt(np.nansum(A * B)) / n
.....: std_a = np.sqrt(np.sqrt(np.nansum(A**2)) / n)
.....: std_b = np.sqrt(np.sqrt(np.nansum(B**2)) / n)
.....: return cov_ab / std_a / std_b
.....:
In [216]: df = pd.DataFrame(np.random.normal(size=(100, 3)))
In [217]: df.corr(method=distcorr)
Out[217]:
0 1 2
0 1.000000 0.199653 0.214871
1 0.199653 1.000000 0.195116
2 0.214871 0.195116 1.000000
In [218]: import datetime
In [219]: s = pd.Series(pd.date_range('2012-1-1', periods=3, freq='D'))
In [220]: s - s.max()
Out[220]:
0 -2 days
1 -1 days
2 0 days
dtype: timedelta64[ns]
In [221]: s.max() - s
Out[221]:
0 2 days
1 1 days
2 0 days
dtype: timedelta64[ns]
In [222]: s - datetime.datetime(2011, 1, 1, 3, 5)
Out[222]:
0 364 days 20:55:00
1 365 days 20:55:00
2 366 days 20:55:00
dtype: timedelta64[ns]
In [223]: s + datetime.timedelta(minutes=5)
Out[223]:
0 2012-01-01 00:05:00
1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
dtype: datetime64[ns]
In [224]: datetime.datetime(2011, 1, 1, 3, 5) - s
Out[224]:
0 -365 days +03:05:00
1 -366 days +03:05:00
2 -367 days +03:05:00
dtype: timedelta64[ns]
In [225]: datetime.timedelta(minutes=5) + s
Out[225]:
0 2012-01-01 00:05:00
1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
dtype: datetime64[ns]
In [226]: deltas = pd.Series([datetime.timedelta(days=i) for i in range(3)])
In [227]: df = pd.DataFrame({'A': s, 'B': deltas})
In [228]: df
Out[228]:
A B
0 2012-01-01 0 days
1 2012-01-02 1 days
2 2012-01-03 2 days
In [229]: df['New Dates'] = df['A'] + df['B']
In [230]: df['Delta'] = df['A'] - df['New Dates']
In [231]: df
Out[231]:
A B New Dates Delta
0 2012-01-01 0 days 2012-01-01 0 days
1 2012-01-02 1 days 2012-01-03 -1 days
2 2012-01-03 2 days 2012-01-05 -2 days
In [232]: df.dtypes
Out[232]:
A datetime64[ns]
B timedelta64[ns]
New Dates datetime64[ns]
Delta timedelta64[ns]
dtype: object
Values can be set to NaT using np.nan, similar to datetime.
In [233]: y = s - s.shift()
In [234]: y
Out[234]:
0 NaT
1 1 days
2 1 days
dtype: timedelta64[ns]
In [235]: y[1] = np.nan
In [236]: y
Out[236]:
0 NaT
1 NaT
2 1 days
dtype: timedelta64[ns]
Aliasing axis names
To globally provide aliases for axis names, one can define these two functions:
In [237]: def set_axis_alias(cls, axis, alias):
.....: if axis not in cls._AXIS_NUMBERS:
.....: raise Exception("invalid axis [%s] for alias [%s]" % (axis, alias))
.....: cls._AXIS_ALIASES[alias] = axis
.....:
In [238]: def clear_axis_alias(cls, axis, alias):
.....: if axis not in cls._AXIS_NUMBERS:
.....: raise Exception("invalid axis [%s] for alias [%s]" % (axis, alias))
.....: cls._AXIS_ALIASES.pop(alias, None)
.....:
In [239]: set_axis_alias(pd.DataFrame, 'columns', 'myaxis2')
In [240]: df2 = pd.DataFrame(np.random.randn(3, 2), columns=['c1', 'c2'],
.....: index=['i1', 'i2', 'i3'])
.....:
In [241]: df2.sum(axis='myaxis2')
Out[241]:
i1 -0.461013
i2 2.040016
i3 0.904681
dtype: float64
In [242]: clear_axis_alias(pd.DataFrame, 'columns', 'myaxis2')
Creating example data
To create a DataFrame from every combination of some given values, like R's expand.grid() function, we can create a dict where the keys are column names and the values are lists of the data values:
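One way to sketch this (the expand_grid helper name and the example columns are illustrative, not part of pandas): take the Cartesian product of the value lists with itertools.product and build the frame from the resulting records.
import itertools

def expand_grid(data_dict):
    # one tuple per combination of the supplied values
    rows = itertools.product(*data_dict.values())
    return pd.DataFrame.from_records(rows, columns=data_dict.keys())

df = expand_grid({'height': [60, 70],
                  'weight': [100, 140, 180],
                  'sex': ['Male', 'Female']})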