Creating a DataFrame by passing a NumPy array, with a datetime index and labeled columns:
In [5]: dates = pd.date_range('20130101', periods=6)

In [6]: dates
Out[6]: 
DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
               '2013-01-05', '2013-01-06'],
              dtype='datetime64[ns]', freq='D')

In [7]: df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))

In [8]: df
Out[8]: 
                   A         B         C         D
2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
2013-01-02  1.212112 -0.173215  0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929  1.071804
2013-01-04  0.721555 -0.706771 -1.039575  0.271860
2013-01-05 -0.424972  0.567020  0.276232 -1.087401
2013-01-06 -0.673690  0.113648 -1.478427  0.524988
Creating a DataFrame by passing a dict of objects that can be converted to a series-like structure:
In [9]: df2 = pd.DataFrame({'A': 1.,
   ...:                     'B': pd.Timestamp('20130102'),
   ...:                     'C': pd.Series(1, index=list(range(4)), dtype='float32'),
   ...:                     'D': np.array([3] * 4, dtype='int32'),
   ...:                     'E': pd.Categorical(["test", "train", "test", "train"]),
   ...:                     'F': 'foo'})
   ...: 

In [10]: df2
Out[10]: 
     A          B    C  D      E    F
0  1.0 2013-01-02  1.0  3   test  foo
1  1.0 2013-01-02  1.0  3  train  foo
2  1.0 2013-01-02  1.0  3   test  foo
3  1.0 2013-01-02  1.0  3  train  foo
The columns of the resulting DataFrame have different dtypes.
In [11]: df2.dtypes
Out[11]: 
A           float64
B    datetime64[ns]
C           float32
D             int32
E          category
F            object
dtype: object
If you're using IPython, tab completion for column names (as well as public attributes) is automatically enabled. Typing df2.<TAB> completes the columns A, B, C, and D automatically; E is there as well, and the rest of the attributes are truncated here for brevity.
Here is how to view the top and bottom rows of the frame:
In [13]: df.head()
Out[13]: 
                   A         B         C         D
2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
2013-01-02  1.212112 -0.173215  0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929  1.071804
2013-01-04  0.721555 -0.706771 -1.039575  0.271860
2013-01-05 -0.424972  0.567020  0.276232 -1.087401

In [14]: df.tail(3)
Out[14]: 
                   A         B         C         D
2013-01-04  0.721555 -0.706771 -1.039575  0.271860
2013-01-05 -0.424972  0.567020  0.276232 -1.087401
2013-01-06 -0.673690  0.113648 -1.478427  0.524988
DataFrame.to_numpy() gives a NumPy representation of the underlying data. Note that this can be an expensive operation when your DataFrame has columns with different data types, which comes down to a fundamental difference between pandas and NumPy: NumPy arrays have one dtype for the entire array, while pandas DataFrames have one dtype per column. When you call DataFrame.to_numpy(), pandas will find the NumPy dtype that can hold all of the dtypes in the DataFrame. This may end up being object, which requires casting every value to a Python object.
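A quick sketch of that cost difference (the frames here are illustrative stand-ins for the df and df2 constructed earlier):

```python
import numpy as np
import pandas as pd

# All-float frame: the NumPy representation keeps a single numeric dtype.
df = pd.DataFrame(np.random.randn(6, 4), columns=list('ABCD'))
print(df.to_numpy().dtype)   # float64

# Mixed dtypes: pandas falls back to the one dtype that can hold everything,
# which here means object -- every value gets boxed as a Python object.
df2 = pd.DataFrame({'A': 1.0, 'B': pd.Timestamp('20130102'), 'F': 'foo'},
                   index=range(4))
print(df2.to_numpy().dtype)  # object
```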
describe() shows a quick statistic summary of your data:
In [19]: df.describe()
Out[19]: 
              A         B         C         D
count  6.000000  6.000000  6.000000  6.000000
mean   0.073711 -0.431125 -0.687758 -0.233103
std    0.843157  0.922818  0.779887  0.973118
min   -0.861849 -2.104569 -1.509059 -1.135632
25%   -0.611510 -0.600794 -1.368714 -1.076610
50%    0.022070 -0.228039 -0.767252 -0.386188
75%    0.658444  0.041933 -0.034326  0.461706
max    1.212112  0.567020  0.276232  1.071804
Sorting by an axis:

In [21]: df.sort_index(axis=1, ascending=False)
Out[21]: 
                   D         C         B         A
2013-01-01 -1.135632 -1.509059 -0.282863  0.469112
2013-01-02 -1.044236  0.119209 -0.173215  1.212112
2013-01-03  1.071804 -0.494929 -2.104569 -0.861849
2013-01-04  0.271860 -1.039575 -0.706771  0.721555
2013-01-05 -1.087401  0.276232  0.567020 -0.424972
2013-01-06  0.524988 -1.478427  0.113648 -0.673690
Sorting by values:
In [22]: df.sort_values(by='B')
Out[22]: 
                   A         B         C         D
2013-01-03 -0.861849 -2.104569 -0.494929  1.071804
2013-01-04  0.721555 -0.706771 -1.039575  0.271860
2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
2013-01-02  1.212112 -0.173215  0.119209 -1.044236
2013-01-06 -0.673690  0.113648 -1.478427  0.524988
2013-01-05 -0.424972  0.567020  0.276232 -1.087401
Selection
Note
While standard Python / NumPy expressions for selecting and setting are intuitive and come in handy for interactive work, for production code we recommend the optimized pandas data access methods, .at, .iat, .loc, and .iloc.
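As an illustration (timings aside), all of the following read the same scalar; .at and .iat are the scalar-optimized variants because they skip the overhead of parsing a general indexer:

```python
import numpy as np
import pandas as pd

dates = pd.date_range('20130101', periods=6)
df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))

v1 = df['A'][dates[0]]      # chained indexing: works, but avoid in production
v2 = df.loc[dates[0], 'A']  # label-based
v3 = df.at[dates[0], 'A']   # label-based, scalar-optimized
v4 = df.iloc[0, 0]          # position-based
v5 = df.iat[0, 0]           # position-based, scalar-optimized
assert v1 == v2 == v3 == v4 == v5
```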
Selecting a single column, which yields a Series, equivalent to df.A:
In [23]: df['A']
Out[23]: 
2013-01-01    0.469112
2013-01-02    1.212112
2013-01-03   -0.861849
2013-01-04    0.721555
2013-01-05   -0.424972
2013-01-06   -0.673690
Freq: D, Name: A, dtype: float64
Selecting via [], which slices the rows.
In [24]: df[0:3]
Out[24]: 
                   A         B         C         D
2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
2013-01-02  1.212112 -0.173215  0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929  1.071804

In [25]: df['20130102':'20130104']
Out[25]: 
                   A         B         C         D
2013-01-02  1.212112 -0.173215  0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929  1.071804
2013-01-04  0.721555 -0.706771 -1.039575  0.271860
For getting a cross section using a label:

In [26]: df.loc[dates[0]]
Out[26]: 
A    0.469112
B   -0.282863
C   -1.509059
D   -1.135632
Name: 2013-01-01 00:00:00, dtype: float64
Selecting on a multi-axis by label:
In [27]: df.loc[:, ['A', 'B']]
Out[27]: 
                   A         B
2013-01-01  0.469112 -0.282863
2013-01-02  1.212112 -0.173215
2013-01-03 -0.861849 -2.104569
2013-01-04  0.721555 -0.706771
2013-01-05 -0.424972  0.567020
2013-01-06 -0.673690  0.113648
Showing label slicing, both endpoints are included:
In [28]: df.loc['20130102':'20130104', ['A', 'B']]
Out[28]: 
                   A         B
2013-01-02  1.212112 -0.173215
2013-01-03 -0.861849 -2.104569
2013-01-04  0.721555 -0.706771
Reduction in the dimensions of the returned object:
In [29]: df.loc['20130102', ['A', 'B']]
Out[29]: 
A    1.212112
B   -0.173215
Name: 2013-01-02 00:00:00, dtype: float64
For getting a scalar value:
In [30]: df.loc[dates[0], 'A']
Out[30]: 0.4691122999071863
For getting fast access to a scalar (equivalent to the prior method):

In [31]: df.at[dates[0], 'A']
Out[31]: 0.4691122999071863

Select via the position of the passed integers:

In [32]: df.iloc[3]
Out[32]: 
A    0.721555
B   -0.706771
C   -1.039575
D    0.271860
Name: 2013-01-04 00:00:00, dtype: float64
By integer slices, acting similarly to NumPy/Python:
In [33]: df.iloc[3:5, 0:2]
Out[33]: 
                   A         B
2013-01-04  0.721555 -0.706771
2013-01-05 -0.424972  0.567020
By lists of integer position locations, similar to the NumPy/Python style:
In [34]: df.iloc[[1, 2, 4], [0, 2]]
Out[34]: 
                   A         C
2013-01-02  1.212112  0.119209
2013-01-03 -0.861849 -0.494929
2013-01-05 -0.424972  0.276232
For slicing rows explicitly:
In [35]: df.iloc[1:3, :]
Out[35]: 
                   A         B         C         D
2013-01-02  1.212112 -0.173215  0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929  1.071804
For slicing columns explicitly:
In [36]: df.iloc[:, 1:3]
Out[36]: 
                   B         C
2013-01-01 -0.282863 -1.509059
2013-01-02 -0.173215  0.119209
2013-01-03 -2.104569 -0.494929
2013-01-04 -0.706771 -1.039575
2013-01-05  0.567020  0.276232
2013-01-06  0.113648 -1.478427
For getting a value explicitly:
In [37]: df.iloc[1, 1]
Out[37]: -0.17321464905330858
For getting fast access to a scalar (equivalent to the prior method):
In [38]: df.iat[1, 1]
Out[38]: -0.17321464905330858
Boolean indexing
Using a single column’s values to select data.
In [39]: df[df['A'] > 0]
Out[39]: 
                   A         B         C         D
2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
2013-01-02  1.212112 -0.173215  0.119209 -1.044236
2013-01-04  0.721555 -0.706771 -1.039575  0.271860
Selecting values from a DataFrame where a boolean condition is met.
In [40]: df[df > 0]
Out[40]: 
                   A         B         C         D
2013-01-01  0.469112       NaN       NaN       NaN
2013-01-02  1.212112       NaN  0.119209       NaN
2013-01-03       NaN       NaN       NaN  1.071804
2013-01-04  0.721555       NaN       NaN  0.271860
2013-01-05       NaN  0.567020  0.276232       NaN
2013-01-06       NaN  0.113648       NaN  0.524988
In [41]: df2 = df.copy()

In [42]: df2['E'] = ['one', 'one', 'two', 'three', 'four', 'three']

In [43]: df2
Out[43]: 
                   A         B         C         D      E
2013-01-01  0.469112 -0.282863 -1.509059 -1.135632    one
2013-01-02  1.212112 -0.173215  0.119209 -1.044236    one
2013-01-03 -0.861849 -2.104569 -0.494929  1.071804    two
2013-01-04  0.721555 -0.706771 -1.039575  0.271860  three
2013-01-05 -0.424972  0.567020  0.276232 -1.087401   four
2013-01-06 -0.673690  0.113648 -1.478427  0.524988  three

Using the isin() method for filtering:

In [44]: df2[df2['E'].isin(['two', 'four'])]
Out[44]: 
                   A         B         C         D     E
2013-01-03 -0.861849 -2.104569 -0.494929  1.071804   two
2013-01-05 -0.424972  0.567020  0.276232 -1.087401  four
Setting
Setting a new column automatically aligns the data by the indexes.
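A minimal sketch of that alignment (the names and values here are illustrative): a Series whose index starts one day later leaves the unmatched first label as NaN, which is how an F column like the one in the outputs below could be produced.

```python
import pandas as pd

dates = pd.date_range('20130101', periods=6)
df = pd.DataFrame(0.0, index=dates, columns=list('ABCD'))

# s1's index starts at 2013-01-02, so 2013-01-01 has no matching label.
s1 = pd.Series([1, 2, 3, 4, 5, 6], index=pd.date_range('20130102', periods=6))
df['F'] = s1

# The first row of F is NaN; s1's 2013-01-07 value is silently dropped.
print(df['F'])
```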
The result of the prior setting operations:

In [51]: df
Out[51]: 
                   A         B         C  D    F
2013-01-01  0.000000  0.000000 -1.509059  5  NaN
2013-01-02  1.212112 -0.173215  0.119209  5  1.0
2013-01-03 -0.861849 -2.104569 -0.494929  5  2.0
2013-01-04  0.721555 -0.706771 -1.039575  5  3.0
2013-01-05 -0.424972  0.567020  0.276232  5  4.0
2013-01-06 -0.673690  0.113648 -1.478427  5  5.0
A where operation with setting.
In [52]: df2 = df.copy()

In [53]: df2[df2 > 0] = -df2

In [54]: df2
Out[54]: 
                   A         B         C  D    F
2013-01-01  0.000000  0.000000 -1.509059 -5  NaN
2013-01-02 -1.212112 -0.173215 -0.119209 -5 -1.0
2013-01-03 -0.861849 -2.104569 -0.494929 -5 -2.0
2013-01-04 -0.721555 -0.706771 -1.039575 -5 -3.0
2013-01-05 -0.424972 -0.567020 -0.276232 -5 -4.0
2013-01-06 -0.673690 -0.113648 -1.478427 -5 -5.0
Missing data
pandas primarily uses the value np.nan to represent missing data. It is by default not included in computations. See the Missing Data section.
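For instance, reductions skip NaN unless told otherwise (a small sketch):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])

print(s.sum())               # NaN excluded: 1.0 + 3.0
print(s.mean())              # mean of the two present values
print(s.mean(skipna=False))  # opt back in to NaN propagation
```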
Reindexing allows you to change/add/delete the index on a specified axis. This returns a copy of the data.
In [55]: df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])

In [56]: df1.loc[dates[0]:dates[1], 'E'] = 1

In [57]: df1
Out[57]: 
                   A         B         C  D    F    E
2013-01-01  0.000000  0.000000 -1.509059  5  NaN  1.0
2013-01-02  1.212112 -0.173215  0.119209  5  1.0  1.0
2013-01-03 -0.861849 -2.104569 -0.494929  5  2.0  NaN
2013-01-04  0.721555 -0.706771 -1.039575  5  3.0  NaN
To drop any rows that have missing data.
In [58]: df1.dropna(how='any')
Out[58]: 
                   A         B         C  D    F    E
2013-01-02  1.212112 -0.173215  0.119209  5  1.0  1.0
Filling missing data.
In [59]: df1.fillna(value=5)
Out[59]: 
                   A         B         C  D    F    E
2013-01-01  0.000000  0.000000 -1.509059  5  5.0  1.0
2013-01-02  1.212112 -0.173215  0.119209  5  1.0  1.0
2013-01-03 -0.861849 -2.104569 -0.494929  5  2.0  5.0
2013-01-04  0.721555 -0.706771 -1.039575  5  3.0  5.0
To get the boolean mask where values are NaN:
In [60]: pd.isna(df1)
Out[60]: 
                A      B      C      D      F      E
2013-01-01  False  False  False  False   True  False
2013-01-02  False  False  False  False  False  False
2013-01-03  False  False  False  False  False   True
2013-01-04  False  False  False  False  False   True
Operating with objects that have different dimensionality and need alignment. In addition, pandas automatically broadcasts along the specified dimension.
In [63]: s = pd.Series([1, 3, 5, np.nan, 6, 8], index=dates).shift(2)

In [64]: s
Out[64]: 
2013-01-01    NaN
2013-01-02    NaN
2013-01-03    1.0
2013-01-04    3.0
2013-01-05    5.0
2013-01-06    NaN
Freq: D, dtype: float64

In [65]: df.sub(s, axis='index')
Out[65]: 
                   A         B         C    D    F
2013-01-01       NaN       NaN       NaN  NaN  NaN
2013-01-02       NaN       NaN       NaN  NaN  NaN
2013-01-03 -1.861849 -3.104569 -1.494929  4.0  1.0
2013-01-04 -2.278445 -3.706771 -4.039575  2.0  0.0
2013-01-05 -5.424972 -4.432980 -4.723768  0.0 -1.0
2013-01-06       NaN       NaN       NaN  NaN  NaN
Apply
Applying functions to the data:
In [66]: df.apply(np.cumsum)
Out[66]: 
                   A         B         C   D     F
2013-01-01  0.000000  0.000000 -1.509059   5   NaN
2013-01-02  1.212112 -0.173215 -1.389850  10   1.0
2013-01-03  0.350263 -2.277784 -1.884779  15   3.0
2013-01-04  1.071818 -2.984555 -2.924354  20   6.0
2013-01-05  0.646846 -2.417535 -2.648122  25  10.0
2013-01-06 -0.026844 -2.303886 -4.126549  30  15.0

In [67]: df.apply(lambda x: x.max() - x.min())
Out[67]: 
A    2.073961
B    2.671590
C    1.785291
D    0.000000
F    4.000000
dtype: float64
Histogramming

In [68]: s = pd.Series(np.random.randint(0, 7, size=10))

In [69]: s
Out[69]: 
0    4
1    2
2    1
3    2
4    6
5    4
6    4
7    6
8    4
9    4
dtype: int64

In [70]: s.value_counts()
Out[70]: 
4    5
6    2
2    2
1    1
dtype: int64
String Methods
Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions by default (and in some cases always uses them). See more at Vectorized String Methods.
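A small sketch of both points, element-wise methods and regex-by-default matching (the data here is illustrative):

```python
import numpy as np
import pandas as pd

s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])

# Element-wise lowercasing; missing values pass through as NaN.
lowered = s.str.lower()
print(lowered.tolist())

# contains() treats its pattern as a regular expression by default.
starts_with_c = lowered.str.contains('^c', regex=True)
print(starts_with_c.tolist())
```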
Merge

pandas provides various facilities for easily combining Series and DataFrame objects, with various kinds of set logic for the indexes and relational-algebra functionality in the case of join / merge-type operations.
Concatenating pandas objects together with concat():
In [73]: df = pd.DataFrame(np.random.randn(10, 4))

In [74]: df
Out[74]: 
          0         1         2         3
0 -0.548702  1.467327 -1.015962 -0.483075
1  1.637550 -1.217659 -0.291519 -1.745505
2 -0.263952  0.991460 -0.919069  0.266046
3 -0.709661  1.669052  1.037882 -1.705775
4 -0.919854 -0.042379  1.247642 -0.009920
5  0.290213  0.495767  0.362949  1.548106
6 -1.131345 -0.089329  0.337863 -0.945867
7 -0.932132  1.956030  0.017587 -0.016692
8 -0.575247  0.254161 -1.143704  0.215897
9  1.193555 -0.077118 -0.408530 -0.862495

# break it into pieces
In [75]: pieces = [df[:3], df[3:7], df[7:]]

In [76]: pd.concat(pieces)
Out[76]: 
          0         1         2         3
0 -0.548702  1.467327 -1.015962 -0.483075
1  1.637550 -1.217659 -0.291519 -1.745505
2 -0.263952  0.991460 -0.919069  0.266046
3 -0.709661  1.669052  1.037882 -1.705775
4 -0.919854 -0.042379  1.247642 -0.009920
5  0.290213  0.495767  0.362949  1.548106
6 -1.131345 -0.089329  0.337863 -0.945867
7 -0.932132  1.956030  0.017587 -0.016692
8 -0.575247  0.254161 -1.143704  0.215897
9  1.193555 -0.077118 -0.408530 -0.862495
Note
Adding a column to a DataFrame is relatively fast. However, adding a row requires a copy, and may be expensive. We recommend passing a pre-built list of records to the DataFrame constructor instead of building a DataFrame by iteratively appending records to it. See Appending to dataframe for more.
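A sketch of the recommended pattern (the record shape is illustrative):

```python
import pandas as pd

# Preferred: collect records first, then build the DataFrame once.
records = [{'A': i, 'B': i ** 2} for i in range(5)]
df = pd.DataFrame(records)
print(df)

# Anti-pattern (for contrast): growing a frame row by row copies data
# on every append, e.g.
#   slow = pd.DataFrame(columns=['A', 'B'])
#   for i in range(5):
#       slow.loc[i] = [i, i ** 2]
```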
Grouping

In [87]: df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
....: 'foo', 'bar', 'foo', 'foo'],
....: 'B': ['one', 'one', 'two', 'three',
....: 'two', 'two', 'one', 'three'],
....: 'C': np.random.randn(8),
....: 'D': np.random.randn(8)})
....:
In [88]: df
Out[88]:
A B C D
0 foo one 1.346061 -1.577585
1 bar one 1.511763 0.396823
2 foo two 1.627081 -0.105381
3 bar three -0.990582 -0.532532
4 foo two -0.441652 1.453749
5 bar two 1.211526 1.208843
6 foo one 0.268520 -0.080952
7 foo three 0.024580 -0.264610
Grouping and then applying the sum() function to the resulting groups.
In [89]: df.groupby('A').sum()
Out[89]:
C D
A
bar 1.732707 1.073134
foo 2.824590 -0.574779
Grouping by multiple columns forms a hierarchical index, and again we can apply the sum function.
In [90]: df.groupby(['A', 'B']).sum()
Out[90]:
C D
A B
bar one 1.511763 0.396823
three -0.990582 -0.532532
two 1.211526 1.208843
foo one 1.614581 -1.658537
three 0.024580 -0.264610
two 1.185429 1.348368
Stack

In [91]: tuples = list(zip(*[['bar', 'bar', 'baz', 'baz',
....: 'foo', 'foo', 'qux', 'qux'],
....: ['one', 'two', 'one', 'two',
....: 'one', 'two', 'one', 'two']]))
....:
In [92]: index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
In [93]: df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=['A', 'B'])
In [94]: df2 = df[:4]
In [95]: df2
Out[95]:
A B
first second
bar one -0.727965 -0.589346
two 0.339969 -0.693205
baz one -0.339355 0.593616
two 0.884345 1.591431
The stack() method “compresses” a level in the DataFrame’s columns.
In [96]: stacked = df2.stack()
In [97]: stacked
Out[97]:
first second
bar one A -0.727965
B -0.589346
two A 0.339969
B -0.693205
baz one A -0.339355
B 0.593616
two A 0.884345
B 1.591431
dtype: float64
With a “stacked” DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack() is unstack(), which by default unstacks the last level:
In [98]: stacked.unstack()
Out[98]:
A B
first second
bar one -0.727965 -0.589346
two 0.339969 -0.693205
baz one -0.339355 0.593616
two 0.884345 1.591431
In [99]: stacked.unstack(1)
Out[99]:
second one two
first
bar A -0.727965 0.339969
B -0.589346 -0.693205
baz A -0.339355 0.884345
B 0.593616 1.591431
In [100]: stacked.unstack(0)
Out[100]:
first bar baz
second
one A -0.727965 -0.339355
B -0.589346 0.593616
two A 0.339969 0.884345
B -0.693205 1.591431
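Since unstack() inverts stack(), a stack/unstack round trip recovers the original frame. A quick check under the same kind of MultiIndex setup (values are random, only the shape matters):

```python
import numpy as np
import pandas as pd

tuples = list(zip(['bar', 'bar', 'baz', 'baz'], ['one', 'two', 'one', 'two']))
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
df2 = pd.DataFrame(np.random.randn(4, 2), index=index, columns=['A', 'B'])

# stack() moves the columns into the last index level; unstack()
# (defaulting to that same last level) moves them back out.
assert df2.stack().unstack().equals(df2)
```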
Pivot tables

In [101]: df = pd.DataFrame({'A': ['one', 'one', 'two', 'three'] * 3,
.....: 'B': ['A', 'B', 'C'] * 4,
.....: 'C': ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
.....: 'D': np.random.randn(12),
.....: 'E': np.random.randn(12)})
.....:
In [102]: df
Out[102]:
A B C D E
0 one A foo -1.202872 0.047609
1 one B foo -1.814470 -0.136473
2 two C foo 1.018601 -0.561757
3 three A bar -0.595447 -1.623033
4 one B bar 1.395433 0.029399
5 one C bar -0.392670 -0.542108
6 two A foo 0.007207 0.282696
7 three B foo 1.928123 -0.087302
8 one C foo -0.055224 -1.575170
9 one A bar 2.395985 1.771208
10 two B bar 1.552825 0.816482
11 three C bar 0.166599 1.100230
We can produce pivot tables from this data very easily:
In [103]: pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'])
Out[103]:
C bar foo
A B
one A 2.395985 -1.202872
B 1.395433 -1.814470
C -0.392670 -0.055224
three A -0.595447 NaN
B NaN 1.928123
C 0.166599 NaN
two A NaN 0.007207
B 1.552825 NaN
C NaN 1.018601
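pivot_table() aggregates with the mean by default, so the table above could equivalently be written as a groupby followed by unstack (a sketch; pass aggfunc to pivot_table for other aggregations):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': ['one', 'one', 'two', 'three'] * 3,
                   'B': ['A', 'B', 'C'] * 4,
                   'C': ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
                   'D': np.random.randn(12),
                   'E': np.random.randn(12)})

# The two spellings produce the same table, NaNs included.
pt = pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'])
gb = df.groupby(['A', 'B', 'C'])['D'].mean().unstack('C')
assert pt.equals(gb)
```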
Time series
pandas has simple, powerful, and efficient functionality for performing resampling operations during frequency conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial applications. See the Time Series section.
In [104]: rng = pd.date_range('1/1/2012', periods=100, freq='S')
In [105]: ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
In [106]: ts.resample('5Min').sum()
Out[106]:
2012-01-01 24182
Freq: 5T, dtype: int64
In [113]: rng = pd.date_range('1/1/2012', periods=5, freq='M')
In [114]: ts = pd.Series(np.random.randn(len(rng)), index=rng)
In [115]: ts
Out[115]:
2012-01-31 -1.475051
2012-02-29 0.722570
2012-03-31 -0.322646
2012-04-30 -1.601631
2012-05-31 0.778033
Freq: M, dtype: float64
In [116]: ps = ts.to_period()
In [117]: ps
Out[117]:
2012-01 -1.475051
2012-02 0.722570
2012-03 -0.322646
2012-04 -1.601631
2012-05 0.778033
Freq: M, dtype: float64
In [118]: ps.to_timestamp()
Out[118]:
2012-01-01 -1.475051
2012-02-01 0.722570
2012-03-01 -0.322646
2012-04-01 -1.601631
2012-05-01 0.778033
Freq: MS, dtype: float64
Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following the quarter end:
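A sketch of that conversion, following the pattern the sentence describes (frequency aliases 'Q-NOV', 'M', and 'H' as used elsewhere in this document):

```python
import numpy as np
import pandas as pd

# Quarterly periods with the fiscal year ending in November.
prng = pd.period_range('1990Q1', '2000Q4', freq='Q-NOV')
ts = pd.Series(np.random.randn(len(prng)), index=prng)

# Quarter end -> its last month; +1 -> the following month;
# asfreq('H', 's') picks that month's first hour; +9 shifts to 9am.
ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9
print(ts.index[0])  # Period('1990-03-01 09:00', 'H')
```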
Categoricals

In [123]: df = pd.DataFrame({"id": [1, 2, 3, 4, 5, 6],
   .....:                    "raw_grade": ['a', 'b', 'b', 'a', 'a', 'e']})
   .....: 

Convert the raw grades to a categorical data type.
In [124]: df["grade"] = df["raw_grade"].astype("category")
In [125]: df["grade"]
Out[125]:
0 a
1 b
2 b
3 a
4 a
5 e
Name: grade, dtype: category
Categories (3, object): [a, b, e]
Rename the categories to more meaningful names (assigning to Series.cat.categories is inplace!).
In [126]: df["grade"].cat.categories = ["very good", "good", "very bad"]
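Because assigning to Series.cat.categories mutates in place, rename_categories() is often the safer alternative: it returns a new Series and leaves the original untouched (a sketch with the same grades):

```python
import pandas as pd

s = pd.Series(['a', 'b', 'b', 'a', 'a', 'e']).astype('category')

# Categories are ['a', 'b', 'e']; the new names map to them positionally.
renamed = s.cat.rename_categories(['very good', 'good', 'very bad'])

print(list(renamed.cat.categories))  # ['very good', 'good', 'very bad']
print(list(s.cat.categories))        # original is unchanged
```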
Reorder the categories and simultaneously add the missing categories (methods under Series .cat return a new Series by default).
In [127]: df["grade"] = df["grade"].cat.set_categories(["very bad", "bad", "medium",
.....: "good", "very good"])
.....:
In [128]: df["grade"]
Out[128]:
0 very good
1 good
2 good
3 very good
4 very good
5 very bad
Name: grade, dtype: category
Categories (5, object): [very bad, bad, medium, good, very good]
Sorting is per order in the categories, not lexical order.
In [129]: df.sort_values(by="grade")
Out[129]:
id raw_grade grade
5 6 e very bad
1 2 b good
2 3 b good
0 1 a very good
3 4 a very good
4 5 a very good
Grouping by a categorical column also shows empty categories.
In [130]: df.groupby("grade").size()
Out[130]:
grade
very bad 1
bad 0
medium 0
good 2
very good 3
dtype: int64
Gotchas

If you attempt to use a Series or DataFrame in a boolean context, you might see an exception like:
>>> if pd.Series([False, True, False]):
...     print("I was true")
Traceback
    ...
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
See Comparisons for an explanation and what to do.
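The usual fix is to state the reduction you mean explicitly, for example:

```python
import pandas as pd

s = pd.Series([False, True, False])

# Say which reduction you want instead of relying on truthiness.
if s.any():
    print("at least one element is True")
if not s.all():
    print("not every element is True")
if not s.empty:
    print("the Series has elements")
```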