<h1>Pandas: Merge two DataFrames (same columns) with a condition. How can I improve this code?</h1>
<p>I'm studying with public data and trying to merge two Excel files under a condition.
I tried a nested-loop approach, but it's too slow.
How can I improve my code?</p>
<h1>Data structure example</h1>
<p>Old data (entire_file.xlsx):</p>
<pre><code> KeyCode Date Something
0 aaa 2020-01-01 00:00:00 adaf
1 bbb 2020-02-01 00:00:00 awd
2 ccc 2020-03-01 00:00:00 feq
...
6000 aewi 2020-03-03 00:00:00 awefeaw
</code></pre>
<p>New data (file2.xlsx):</p>
<pre><code> KeyCode Date Something
1 bbb 2020-06-01 20:00:00 aafewfaewfaw
2 ccc 2020-06-01 20:00:00 dfqefqe
3 new 2020-06-01 20:00:00 newrow
</code></pre>
<p>Desired result (file3.xlsx):</p>
<pre><code> KeyCode Date Something
0 aaa 2020-01-01 00:00:00 adaf
1 bbb 2020-06-01 20:00:00 aafewfaewfaw
2 ccc 2020-06-01 20:00:00 dfqefqe
...
6000 aewi 2020-03-03 00:00:00 awefeaw
6001 new 2020-06-01 20:00:00 newrow
</code></pre>
<p><strong>Code:</strong></p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
%matplotlib notebook
import matplotlib.pyplot as plt

data = pd.read_excel('fulldata_01_01_01_P_병원.xlsx', index_col='번호')
tmp = pd.read_excel('(20200601~20200607)_01_01_01_P_병원.xlsx', index_col='번호')
print('{} is tmp rows count'.format(len(tmp.index)))
print('{} is data rows count'.format(len(data.index)))

new_data = pd.DataFrame([])
for j in range(len(tmp.index)):
    ischange = False
    isexist = False
    for i in range(len(data.index)):
        if (data.iloc[i].loc['KeyCode'] == tmp.iloc[j].loc['KeyCode']) and (data.iloc[i].loc['Date'] < tmp.iloc[j].loc['Date']):
            ischange = True
            data.iloc[i] = tmp.iloc[j]
            break
        elif data.iloc[i].loc['KeyCode'] == tmp.iloc[j].loc['KeyCode']:
            isexist = True
            break
    if ischange:
        print('{} is change'.format(j))
    elif isexist:
        print('{} is exist'.format(j))
    else:
        print('{} is append'.format(j))
        # append returns a new DataFrame; the result must be assigned back
        new_data = new_data.append(tmp.iloc[j], ignore_index=True)

data = data.append(new_data, ignore_index=True)
print('{} is tmp rows count'.format(len(tmp.index)))
print('{} is data rows count'.format(len(data.index)))
</code></pre>
<p>But it is not working...</p>
<hr/>
<p><strong>Answer:</strong> If you just want the new rows together with the updated versions of the existing ones:</p>
<pre><code>result = pd.concat([data, tmp], ignore_index=True, sort=False)
result = result.sort_values(['KeyCode', 'Date'], ascending=[True, True])  # newest row per key sorts last
result = result.drop_duplicates('KeyCode', keep='last')                   # keep the newest row, drop the old
</code></pre>
<p>Reference: <a href="https://pandas.pydata.org/docs/dev/user_guide/merging.html">pandas user guide: Merge, join, concatenate and compare</a></p>
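The concat/sort/deduplicate approach can be demonstrated end-to-end; the small frames below are invented stand-ins for the two Excel files in the question:

```python
import pandas as pd

# Hypothetical stand-ins for the question's entire_file.xlsx and file2.xlsx.
data = pd.DataFrame({
    "KeyCode": ["aaa", "bbb", "ccc"],
    "Date": pd.to_datetime(["2020-01-01", "2020-02-01", "2020-03-01"]),
    "Something": ["adaf", "awd", "feq"],
})
tmp = pd.DataFrame({
    "KeyCode": ["bbb", "ccc", "new"],
    "Date": pd.to_datetime(["2020-06-01 20:00", "2020-06-01 20:00", "2020-06-01 20:00"]),
    "Something": ["aafewfaewfaw", "dfqefqe", "newrow"],
})

# Stack both frames, order so the newest row per key comes last,
# then keep only that newest row for each KeyCode.
result = pd.concat([data, tmp], ignore_index=True, sort=False)
result = result.sort_values(["KeyCode", "Date"], ascending=[True, True])
result = result.drop_duplicates("KeyCode", keep="last")
```

The result has one row per KeyCode: unchanged rows survive, updated keys carry the newer data, and genuinely new keys are appended, replacing the quadratic nested loop with two vectorized passes.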
Merge, join, concatenate and compare#
pandas provides various facilities for easily combining together Series or
DataFrame with various kinds of set logic for the indexes
and relational algebra functionality in the case of join / merge-type
operations.
In addition, pandas also provides utilities to compare two Series or DataFrame
and summarize their differences.
Concatenating objects#
The concat() function (in the main pandas namespace) does all of
the heavy lifting of performing concatenation operations along an axis while
performing optional set logic (union or intersection) of the indexes (if any) on
the other axes. Note that I say “if any” because there is only a single possible
axis of concatenation for Series.
Before diving into all of the details of concat and what it can do, here is
a simple example:
In [1]: df1 = pd.DataFrame(
...: {
...: "A": ["A0", "A1", "A2", "A3"],
...: "B": ["B0", "B1", "B2", "B3"],
...: "C": ["C0", "C1", "C2", "C3"],
...: "D": ["D0", "D1", "D2", "D3"],
...: },
...: index=[0, 1, 2, 3],
...: )
...:
In [2]: df2 = pd.DataFrame(
...: {
...: "A": ["A4", "A5", "A6", "A7"],
...: "B": ["B4", "B5", "B6", "B7"],
...: "C": ["C4", "C5", "C6", "C7"],
...: "D": ["D4", "D5", "D6", "D7"],
...: },
...: index=[4, 5, 6, 7],
...: )
...:
In [3]: df3 = pd.DataFrame(
...: {
...: "A": ["A8", "A9", "A10", "A11"],
...: "B": ["B8", "B9", "B10", "B11"],
...: "C": ["C8", "C9", "C10", "C11"],
...: "D": ["D8", "D9", "D10", "D11"],
...: },
...: index=[8, 9, 10, 11],
...: )
...:
In [4]: frames = [df1, df2, df3]
In [5]: result = pd.concat(frames)
Like its sibling function on ndarrays, numpy.concatenate, pandas.concat
takes a list or dict of homogeneously-typed objects and concatenates them with
some configurable handling of “what to do with the other axes”:
pd.concat(
objs,
axis=0,
join="outer",
ignore_index=False,
keys=None,
levels=None,
names=None,
verify_integrity=False,
copy=True,
)
objs : a sequence or mapping of Series or DataFrame objects. If a
dict is passed, the sorted keys will be used as the keys argument, unless
it is passed, in which case the values will be selected (see below). Any None
objects will be dropped silently unless they are all None in which case a
ValueError will be raised.
axis : {0, 1, …}, default 0. The axis to concatenate along.
join : {‘inner’, ‘outer’}, default ‘outer’. How to handle indexes on
other axis(es). Outer for union and inner for intersection.
ignore_index : boolean, default False. If True, do not use the index
values on the concatenation axis. The resulting axis will be labeled 0, …,
n - 1. This is useful if you are concatenating objects where the
concatenation axis does not have meaningful indexing information. Note
the index values on the other axes are still respected in the join.
keys : sequence, default None. Construct hierarchical index using the
passed keys as the outermost level. If multiple levels passed, should
contain tuples.
levels : list of sequences, default None. Specific levels (unique values)
to use for constructing a MultiIndex. Otherwise they will be inferred from the
keys.
names : list, default None. Names for the levels in the resulting
hierarchical index.
verify_integrity : boolean, default False. Check whether the new
concatenated axis contains duplicates. This can be very expensive relative
to the actual data concatenation.
copy : boolean, default True. If False, do not copy data unnecessarily.
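A short sketch exercising a few of these arguments together (the frame contents are invented for illustration):

```python
import pandas as pd

a = pd.DataFrame({"x": [1, 2]}, index=[0, 1])
b = pd.DataFrame({"x": [3, 4], "y": [5, 6]}, index=[1, 2])

# join='inner' keeps only the columns common to both frames;
# ignore_index discards the (overlapping) row labels.
inner = pd.concat([a, b], join="inner", ignore_index=True)

# keys builds a hierarchical index identifying each input piece.
keyed = pd.concat([a, b], keys=["first", "second"])
```

Here `inner` has the single shared column `x` with a fresh 0..3 index, while `keyed` has a two-level index whose outer level is `"first"`/`"second"`.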
Without a little bit of context many of these arguments don’t make much sense.
Let’s revisit the above example. Suppose we wanted to associate specific keys
with each of the pieces of the chopped up DataFrame. We can do this using the
keys argument:
In [6]: result = pd.concat(frames, keys=["x", "y", "z"])
As you can see (if you’ve read the rest of the documentation), the resulting
object’s index has a hierarchical index. This
means that we can now select out each chunk by key:
In [7]: result.loc["y"]
Out[7]:
A B C D
4 A4 B4 C4 D4
5 A5 B5 C5 D5
6 A6 B6 C6 D6
7 A7 B7 C7 D7
It’s not a stretch to see how this can be very useful. More detail on this
functionality below.
Note
It is worth noting that concat() makes a full copy of the data, and that constantly
reusing this function can create a significant performance hit. If you need
to use the operation over several datasets, use a list comprehension.
frames = [ process_your_file(f) for f in files ]
result = pd.concat(frames)
Note
When concatenating DataFrames with named axes, pandas will attempt to preserve
these index/column names whenever possible. In the case where all inputs share a
common name, this name will be assigned to the result. When the input names do
not all agree, the result will be unnamed. The same is true for MultiIndex,
but the logic is applied separately on a level-by-level basis.
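The name-preservation rule in the note above can be checked directly (the Series here are invented for the demo):

```python
import pandas as pd

s1 = pd.Series([1, 2], index=pd.Index(["a", "b"], name="letters"))
s2 = pd.Series([3, 4], index=pd.Index(["c", "d"], name="letters"))
s3 = pd.Series([5, 6], index=pd.Index(["e", "f"], name="other"))

# All inputs share the index name, so the result keeps it.
same = pd.concat([s1, s2])

# The names disagree, so the result's index is unnamed.
mixed = pd.concat([s1, s3])
```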
Set logic on the other axes#
When gluing together multiple DataFrames, you have a choice of how to handle
the other axes (other than the one being concatenated). This can be done in
the following two ways:
Take the union of them all, join='outer'. This is the default
option as it results in zero information loss.
Take the intersection, join='inner'.
Here is an example of each of these methods. First, the default join='outer'
behavior:
In [8]: df4 = pd.DataFrame(
...: {
...: "B": ["B2", "B3", "B6", "B7"],
...: "D": ["D2", "D3", "D6", "D7"],
...: "F": ["F2", "F3", "F6", "F7"],
...: },
...: index=[2, 3, 6, 7],
...: )
...:
In [9]: result = pd.concat([df1, df4], axis=1)
Here is the same thing with join='inner':
In [10]: result = pd.concat([df1, df4], axis=1, join="inner")
Lastly, suppose we just wanted to reuse the exact index from the original
DataFrame:
In [11]: result = pd.concat([df1, df4], axis=1).reindex(df1.index)
Similarly, we could index before the concatenation:
In [12]: pd.concat([df1, df4.reindex(df1.index)], axis=1)
Out[12]:
A B C D B D F
0 A0 B0 C0 D0 NaN NaN NaN
1 A1 B1 C1 D1 NaN NaN NaN
2 A2 B2 C2 D2 B2 D2 F2
3 A3 B3 C3 D3 B3 D3 F3
Ignoring indexes on the concatenation axis#
For DataFrame objects which don’t have a meaningful index, you may wish
to append them and ignore the fact that they may have overlapping indexes. To
do this, use the ignore_index argument:
In [13]: result = pd.concat([df1, df4], ignore_index=True, sort=False)
Concatenating with mixed ndims#
You can concatenate a mix of Series and DataFrame objects. The
Series will be transformed to DataFrame with the column name as
the name of the Series.
In [14]: s1 = pd.Series(["X0", "X1", "X2", "X3"], name="X")
In [15]: result = pd.concat([df1, s1], axis=1)
Note
Since we’re concatenating a Series to a DataFrame, we could have
achieved the same result with DataFrame.assign(). To concatenate an
arbitrary number of pandas objects (DataFrame or Series), use
concat.
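The equivalence mentioned in the note can be verified with a tiny sketch (frame and Series invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({"A": ["A0", "A1"]})
s = pd.Series(["X0", "X1"], name="X")

# Concatenating a named Series as a new column...
via_concat = pd.concat([df, s], axis=1)

# ...gives the same result as assigning it by name.
via_assign = df.assign(X=s)
```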
If unnamed Series are passed they will be numbered consecutively.
In [16]: s2 = pd.Series(["_0", "_1", "_2", "_3"])
In [17]: result = pd.concat([df1, s2, s2, s2], axis=1)
Passing ignore_index=True will drop all name references.
In [18]: result = pd.concat([df1, s1], axis=1, ignore_index=True)
More concatenating with group keys#
A fairly common use of the keys argument is to override the column names
when creating a new DataFrame based on existing Series.
Notice how the default behaviour consists of letting the resulting DataFrame
inherit the parent Series' name, when it exists.
In [19]: s3 = pd.Series([0, 1, 2, 3], name="foo")
In [20]: s4 = pd.Series([0, 1, 2, 3])
In [21]: s5 = pd.Series([0, 1, 4, 5])
In [22]: pd.concat([s3, s4, s5], axis=1)
Out[22]:
foo 0 1
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Through the keys argument we can override the existing column names.
In [23]: pd.concat([s3, s4, s5], axis=1, keys=["red", "blue", "yellow"])
Out[23]:
red blue yellow
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Let’s consider a variation of the very first example presented:
In [24]: result = pd.concat(frames, keys=["x", "y", "z"])
You can also pass a dict to concat in which case the dict keys will be used
for the keys argument (unless other keys are specified):
In [25]: pieces = {"x": df1, "y": df2, "z": df3}
In [26]: result = pd.concat(pieces)
In [27]: result = pd.concat(pieces, keys=["z", "y"])
The MultiIndex created has levels that are constructed from the passed keys and
the index of the DataFrame pieces:
In [28]: result.index.levels
Out[28]: FrozenList([['z', 'y'], [4, 5, 6, 7, 8, 9, 10, 11]])
If you wish to specify other levels (as will occasionally be the case), you can
do so using the levels argument:
In [29]: result = pd.concat(
....: pieces, keys=["x", "y", "z"], levels=[["z", "y", "x", "w"]], names=["group_key"]
....: )
....:
In [30]: result.index.levels
Out[30]: FrozenList([['z', 'y', 'x', 'w'], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]])
This is fairly esoteric, but it is actually necessary for implementing things
like GroupBy where the order of a categorical variable is meaningful.
Appending rows to a DataFrame#
If you have a series that you want to append as a single row to a DataFrame, you can convert the row into a
DataFrame and use concat
In [31]: s2 = pd.Series(["X0", "X1", "X2", "X3"], index=["A", "B", "C", "D"])
In [32]: result = pd.concat([df1, s2.to_frame().T], ignore_index=True)
You should use ignore_index with this method to instruct DataFrame to
discard its index. If you wish to preserve the index, you should construct an
appropriately-indexed DataFrame and append or concatenate those objects.

Database-style DataFrame or named Series joining/merging#
pandas has full-featured, high performance in-memory join operations
idiomatically very similar to relational databases like SQL. These methods
perform significantly better (in some cases well over an order of magnitude
better) than other open source implementations (like base::merge.data.frame
in R). The reason for this is careful algorithmic design and the internal layout
of the data in DataFrame.
See the cookbook for some advanced strategies.
Users who are familiar with SQL but new to pandas might be interested in a
comparison with SQL.
pandas provides a single function, merge(), as the entry point for
all standard database join operations between DataFrame or named Series objects:
pd.merge(
left,
right,
how="inner",
on=None,
left_on=None,
right_on=None,
left_index=False,
right_index=False,
sort=True,
suffixes=("_x", "_y"),
copy=True,
indicator=False,
validate=None,
)
left: A DataFrame or named Series object.
right: Another DataFrame or named Series object.
on: Column or index level names to join on. Must be found in both the left
and right DataFrame and/or Series objects. If not passed and left_index and
right_index are False, the intersection of the columns in the
DataFrames and/or Series will be inferred to be the join keys.
left_on: Columns or index levels from the left DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
right_on: Columns or index levels from the right DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
left_index: If True, use the index (row labels) from the left
DataFrame or Series as its join key(s). In the case of a DataFrame or Series with a MultiIndex
(hierarchical), the number of levels must match the number of join keys
from the right DataFrame or Series.
right_index: Same usage as left_index for the right DataFrame or Series
how: One of 'left', 'right', 'outer', 'inner', 'cross'. Defaults
to inner. See below for more detailed description of each method.
sort: Sort the result DataFrame by the join keys in lexicographical
order. Defaults to True, setting to False will improve performance
substantially in many cases.
suffixes: A tuple of string suffixes to apply to overlapping
columns. Defaults to ('_x', '_y').
copy: Always copy data (default True) from the passed DataFrame or named Series
objects, even when reindexing is not necessary. Cannot be avoided in many
cases but may improve performance / memory usage. The cases where copying
can be avoided are somewhat pathological but this option is provided
nonetheless.
indicator: Add a column to the output DataFrame called _merge
with information on the source of each row. _merge is Categorical-type
and takes on a value of left_only for observations whose merge key
only appears in 'left' DataFrame or Series, right_only for observations whose
merge key only appears in 'right' DataFrame or Series, and both if the
observation’s merge key is found in both.
validate : string, default None.
If specified, checks if merge is of specified type.
“one_to_one” or “1:1”: checks if merge keys are unique in both
left and right datasets.
“one_to_many” or “1:m”: checks if merge keys are unique in left
dataset.
“many_to_one” or “m:1”: checks if merge keys are unique in right
dataset.
“many_to_many” or “m:m”: allowed, but does not result in checks.
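A minimal sketch of the validate checks in action (toy frames invented for illustration); the duplicate key on the right passes a "one_to_many" check but fails "one_to_one" with a MergeError:

```python
import pandas as pd

left = pd.DataFrame({"k": [1, 2], "v": ["a", "b"]})
right = pd.DataFrame({"k": [2, 2], "w": ["x", "y"]})

# Keys are unique on the left, so a 1:m check passes.
ok = pd.merge(left, right, on="k", validate="one_to_many")

# A 1:1 check raises MergeError because of the duplicate key on the right.
try:
    pd.merge(left, right, on="k", validate="one_to_one")
    raised = False
except pd.errors.MergeError:
    raised = True
```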
Note
Support for specifying index levels as the on, left_on, and
right_on parameters was added in version 0.23.0.
Support for merging named Series objects was added in version 0.24.0.
The return type will be the same as left. If left is a DataFrame or named Series
and right is a subclass of DataFrame, the return type will still be DataFrame.
merge is a function in the pandas namespace, and it is also available as a
DataFrame instance method merge(), with the calling
DataFrame being implicitly considered the left object in the join.
The related join() method uses merge internally for the
index-on-index (by default) and column(s)-on-index join. If you are joining on
index only, you may wish to use DataFrame.join to save yourself some typing.
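The join/merge equivalence described above can be confirmed on small invented frames:

```python
import pandas as pd

left = pd.DataFrame({"A": [1, 2]}, index=["K0", "K1"])
right = pd.DataFrame({"B": [3, 4]}, index=["K0", "K2"])

# DataFrame.join defaults to a left join on the indexes...
joined = left.join(right)

# ...which is the same as merge with both index flags set.
merged = pd.merge(left, right, left_index=True, right_index=True, how="left")
```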
Brief primer on merge methods (relational algebra)#
Experienced users of relational databases like SQL will be familiar with the
terminology used to describe join operations between two SQL-table like
structures (DataFrame objects). There are several cases to consider which
are very important to understand:
one-to-one joins: for example when joining two DataFrame objects on
their indexes (which must contain unique values).
many-to-one joins: for example when joining an index (unique) to one or
more columns in a different DataFrame.
many-to-many joins: joining columns on columns.
Note
When joining columns on columns (potentially a many-to-many join), any
indexes on the passed DataFrame objects will be discarded.
It is worth spending some time understanding the result of the many-to-many
join case. In SQL / standard relational algebra, if a key combination appears
more than once in both tables, the resulting table will have the Cartesian
product of the associated data. Here is a very basic example with one unique
key combination:
In [33]: left = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [34]: right = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [35]: result = pd.merge(left, right, on="key")
Here is a more complicated example with multiple join keys. Only the keys
appearing in left and right are present (the intersection), since
how='inner' by default.
In [36]: left = pd.DataFrame(
....: {
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [37]: right = pd.DataFrame(
....: {
....: "key1": ["K0", "K1", "K1", "K2"],
....: "key2": ["K0", "K0", "K0", "K0"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [38]: result = pd.merge(left, right, on=["key1", "key2"])
The how argument to merge specifies how to determine which keys are to
be included in the resulting table. If a key combination does not appear in
either the left or right tables, the values in the joined table will be
NA. Here is a summary of the how options and their SQL equivalent names:
Merge method | SQL Join Name    | Description
left         | LEFT OUTER JOIN  | Use keys from left frame only
right        | RIGHT OUTER JOIN | Use keys from right frame only
outer        | FULL OUTER JOIN  | Use union of keys from both frames
inner        | INNER JOIN       | Use intersection of keys from both frames
cross        | CROSS JOIN       | Create the cartesian product of rows of both frames
In [39]: result = pd.merge(left, right, how="left", on=["key1", "key2"])
In [40]: result = pd.merge(left, right, how="right", on=["key1", "key2"])
In [41]: result = pd.merge(left, right, how="outer", on=["key1", "key2"])
In [42]: result = pd.merge(left, right, how="inner", on=["key1", "key2"])
In [43]: result = pd.merge(left, right, how="cross")
You can merge a multi-indexed Series and a DataFrame, if the names of
the MultiIndex correspond to the columns from the DataFrame. Transform
the Series to a DataFrame using Series.reset_index() before merging,
as shown in the following example.
In [44]: df = pd.DataFrame({"Let": ["A", "B", "C"], "Num": [1, 2, 3]})
In [45]: df
Out[45]:
Let Num
0 A 1
1 B 2
2 C 3
In [46]: ser = pd.Series(
....: ["a", "b", "c", "d", "e", "f"],
....: index=pd.MultiIndex.from_arrays(
....: [["A", "B", "C"] * 2, [1, 2, 3, 4, 5, 6]], names=["Let", "Num"]
....: ),
....: )
....:
In [47]: ser
Out[47]:
Let Num
A 1 a
B 2 b
C 3 c
A 4 d
B 5 e
C 6 f
dtype: object
In [48]: pd.merge(df, ser.reset_index(), on=["Let", "Num"])
Out[48]:
Let Num 0
0 A 1 a
1 B 2 b
2 C 3 c
Here is another example with duplicate join keys in DataFrames:
In [49]: left = pd.DataFrame({"A": [1, 2], "B": [2, 2]})
In [50]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [51]: result = pd.merge(left, right, on="B", how="outer")
Warning
Joining / merging on duplicate keys can cause a returned frame that is the multiplication of the row dimensions, which may result in memory overflow. It is the user's responsibility to manage duplicate values in keys before joining large DataFrames.
Checking for duplicate keys#
Users can use the validate argument to automatically check whether there
are unexpected duplicates in their merge keys. Key uniqueness is checked before
merge operations and so should protect against memory overflows. Checking key
uniqueness is also a good way to ensure user data structures are as expected.
In the following example, there are duplicate values of B in the right
DataFrame. As this is not a one-to-one merge – as specified in the
validate argument – an exception will be raised.
In [52]: left = pd.DataFrame({"A": [1, 2], "B": [1, 2]})
In [53]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [53]: result = pd.merge(left, right, on="B", how="outer", validate="one_to_one")
...
MergeError: Merge keys are not unique in right dataset; not a one-to-one merge
If the user is aware of the duplicates in the right DataFrame but wants to
ensure there are no duplicates in the left DataFrame, one can use the
validate='one_to_many' argument instead, which will not raise an exception.
In [54]: pd.merge(left, right, on="B", how="outer", validate="one_to_many")
Out[54]:
A_x B A_y
0 1 1 NaN
1 2 2 4.0
2 2 2 5.0
3 2 2 6.0
The merge indicator#
merge() accepts the argument indicator. If True, a
Categorical-type column called _merge will be added to the output object
that takes on values:
Observation Origin              | _merge value
Merge key only in 'left' frame  | left_only
Merge key only in 'right' frame | right_only
Merge key in both frames        | both
In [55]: df1 = pd.DataFrame({"col1": [0, 1], "col_left": ["a", "b"]})
In [56]: df2 = pd.DataFrame({"col1": [1, 2, 2], "col_right": [2, 2, 2]})
In [57]: pd.merge(df1, df2, on="col1", how="outer", indicator=True)
Out[57]:
col1 col_left col_right _merge
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
The indicator argument will also accept string arguments, in which case the indicator function will use the value of the passed string as the name for the indicator column.
In [58]: pd.merge(df1, df2, on="col1", how="outer", indicator="indicator_column")
Out[58]:
col1 col_left col_right indicator_column
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
Merge dtypes#
Merging will preserve the dtype of the join keys.
In [59]: left = pd.DataFrame({"key": [1], "v1": [10]})
In [60]: left
Out[60]:
key v1
0 1 10
In [61]: right = pd.DataFrame({"key": [1, 2], "v1": [20, 30]})
In [62]: right
Out[62]:
key v1
0 1 20
1 2 30
We are able to preserve the join keys:
In [63]: pd.merge(left, right, how="outer")
Out[63]:
key v1
0 1 10
1 1 20
2 2 30
In [64]: pd.merge(left, right, how="outer").dtypes
Out[64]:
key int64
v1 int64
dtype: object
Of course if you have missing values that are introduced, then the
resulting dtype will be upcast.
In [65]: pd.merge(left, right, how="outer", on="key")
Out[65]:
key v1_x v1_y
0 1 10.0 20
1 2 NaN 30
In [66]: pd.merge(left, right, how="outer", on="key").dtypes
Out[66]:
key int64
v1_x float64
v1_y int64
dtype: object
Merging will preserve category dtypes of the merged frames. See also the section on categoricals.
The left frame.
In [67]: from pandas.api.types import CategoricalDtype
In [68]: X = pd.Series(np.random.choice(["foo", "bar"], size=(10,)))
In [69]: X = X.astype(CategoricalDtype(categories=["foo", "bar"]))
In [70]: left = pd.DataFrame(
....: {"X": X, "Y": np.random.choice(["one", "two", "three"], size=(10,))}
....: )
....:
In [71]: left
Out[71]:
X Y
0 bar one
1 foo one
2 foo three
3 bar three
4 foo one
5 bar one
6 bar three
7 bar three
8 bar three
9 foo three
In [72]: left.dtypes
Out[72]:
X category
Y object
dtype: object
The right frame.
In [73]: right = pd.DataFrame(
....: {
....: "X": pd.Series(["foo", "bar"], dtype=CategoricalDtype(["foo", "bar"])),
....: "Z": [1, 2],
....: }
....: )
....:
In [74]: right
Out[74]:
X Z
0 foo 1
1 bar 2
In [75]: right.dtypes
Out[75]:
X category
Z int64
dtype: object
The merged result:
In [76]: result = pd.merge(left, right, how="outer")
In [77]: result
Out[77]:
X Y Z
0 bar one 2
1 bar three 2
2 bar one 2
3 bar three 2
4 bar three 2
5 bar three 2
6 foo one 1
7 foo three 1
8 foo one 1
9 foo three 1
In [78]: result.dtypes
Out[78]:
X category
Y object
Z int64
dtype: object
Note
The category dtypes must be exactly the same, meaning the same categories and the ordered attribute.
Otherwise the result will coerce to the categories’ dtype.
Note
Merging on category dtypes that are the same can be quite performant compared to object dtype merging.
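The dtype-preservation behavior for matching categories can be sketched in a few lines (data invented for the demo):

```python
import pandas as pd
from pandas.api.types import CategoricalDtype

cat = CategoricalDtype(categories=["foo", "bar"])
left = pd.DataFrame({"X": pd.Series(["foo", "bar"], dtype=cat), "Y": [1, 2]})
right = pd.DataFrame({"X": pd.Series(["foo", "bar"], dtype=cat), "Z": [3, 4]})

# Both key columns share the exact same CategoricalDtype,
# so the merged key stays categorical rather than coercing to object.
merged = pd.merge(left, right, on="X")
```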
Joining on index#
DataFrame.join() is a convenient method for combining the columns of two
potentially differently-indexed DataFrames into a single result
DataFrame. Here is a very basic example:
In [79]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=["K0", "K1", "K2"]
....: )
....:
In [80]: right = pd.DataFrame(
....: {"C": ["C0", "C2", "C3"], "D": ["D0", "D2", "D3"]}, index=["K0", "K2", "K3"]
....: )
....:
In [81]: result = left.join(right)
In [82]: result = left.join(right, how="outer")
The same as above, but with how='inner'.
In [83]: result = left.join(right, how="inner")
The data alignment here is on the indexes (row labels). This same behavior can
be achieved using merge plus additional arguments instructing it to use the
indexes:
In [84]: result = pd.merge(left, right, left_index=True, right_index=True, how="outer")
In [85]: result = pd.merge(left, right, left_index=True, right_index=True, how="inner")
Joining key columns on an index#
join() takes an optional on argument which may be a column
or multiple column names, which specifies that the passed DataFrame is to be
aligned on that column in the DataFrame. These two function calls are
completely equivalent:
left.join(right, on=key_or_keys)
pd.merge(
left, right, left_on=key_or_keys, right_index=True, how="left", sort=False
)
Obviously you can choose whichever form you find more convenient. For
many-to-one joins (where one of the DataFrame’s is already indexed by the
join key), using join may be more convenient. Here is a simple example:
In [86]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [87]: right = pd.DataFrame({"C": ["C0", "C1"], "D": ["D0", "D1"]}, index=["K0", "K1"])
In [88]: result = left.join(right, on="key")
In [89]: result = pd.merge(
....: left, right, left_on="key", right_index=True, how="left", sort=False
....: )
....:
To join on multiple keys, the passed DataFrame must have a MultiIndex:
In [90]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [91]: index = pd.MultiIndex.from_tuples(
....: [("K0", "K0"), ("K1", "K0"), ("K2", "K0"), ("K2", "K1")]
....: )
....:
In [92]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=index
....: )
....:
Now this can be joined by passing the two key column names:
In [93]: result = left.join(right, on=["key1", "key2"])
The default for DataFrame.join is to perform a left join (essentially a
“VLOOKUP” operation, for Excel users), which uses only the keys found in the
calling DataFrame. Other join types, for example inner join, can be just as
easily performed:
In [94]: result = left.join(right, on=["key1", "key2"], how="inner")
As you can see, this drops any rows where there was no match.
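The drop-versus-keep difference between inner and the default left join is easy to see on invented data:

```python
import pandas as pd

left = pd.DataFrame({"A": [1, 2, 3], "key": ["K0", "K1", "K9"]})
right = pd.DataFrame({"C": [10, 11]}, index=["K0", "K1"])

# Inner join keeps only rows whose key exists in right's index: K9 is dropped.
inner = left.join(right, on="key", how="inner")

# The default left join keeps K9, filling C with NaN.
left_joined = left.join(right, on="key")
```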
Joining a single Index to a MultiIndex#
You can join a singly-indexed DataFrame with a level of a MultiIndexed DataFrame.
The level will match on the name of the index of the singly-indexed frame against
a level name of the MultiIndexed frame.
In [95]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]},
....: index=pd.Index(["K0", "K1", "K2"], name="key"),
....: )
....:
In [96]: index = pd.MultiIndex.from_tuples(
....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")],
....: names=["key", "Y"],
....: )
....:
In [97]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]},
....: index=index,
....: )
....:
In [98]: result = left.join(right, how="inner")
This is equivalent to, but less verbose and more memory efficient / faster than, the following.
In [99]: result = pd.merge(
....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
....: ).set_index(["key","Y"])
....:
Joining with two MultiIndexes#
This is supported in a limited way, provided that the index for the right
argument is completely used in the join, and is a subset of the indices in
the left argument, as in this example:
In [100]: leftindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy"), [1, 2]], names=["abc", "xy", "num"]
.....: )
.....:
In [101]: left = pd.DataFrame({"v1": range(12)}, index=leftindex)
In [102]: left
Out[102]:
v1
abc xy num
a x 1 0
2 1
y 1 2
2 3
b x 1 4
2 5
y 1 6
2 7
c x 1 8
2 9
y 1 10
2 11
In [103]: rightindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy")], names=["abc", "xy"]
.....: )
.....:
In [104]: right = pd.DataFrame({"v2": [100 * i for i in range(1, 7)]}, index=rightindex)
In [105]: right
Out[105]:
v2
abc xy
a x 100
y 200
b x 300
y 400
c x 500
y 600
In [106]: left.join(right, on=["abc", "xy"], how="inner")
Out[106]:
v1 v2
abc xy num
a x 1 0 100
2 1 100
y 1 2 200
2 3 200
b x 1 4 300
2 5 300
y 1 6 400
2 7 400
c x 1 8 500
2 9 500
y 1 10 600
2 11 600
If that condition is not satisfied, a join with two multi-indexes can be
done using the following code.
In [107]: leftindex = pd.MultiIndex.from_tuples(
.....: [("K0", "X0"), ("K0", "X1"), ("K1", "X2")], names=["key", "X"]
.....: )
.....:
In [108]: left = pd.DataFrame(
.....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=leftindex
.....: )
.....:
In [109]: rightindex = pd.MultiIndex.from_tuples(
.....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")], names=["key", "Y"]
.....: )
.....:
In [110]: right = pd.DataFrame(
.....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=rightindex
.....: )
.....:
In [111]: result = pd.merge(
.....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
.....: ).set_index(["key", "X", "Y"])
.....:
Merging on a combination of columns and index levels#
Strings passed as the on, left_on, and right_on parameters
may refer to either column names or index level names. This enables merging
DataFrame instances on a combination of index levels and columns without
resetting indexes.
In [112]: left_index = pd.Index(["K0", "K0", "K1", "K2"], name="key1")
In [113]: left = pd.DataFrame(
.....: {
.....: "A": ["A0", "A1", "A2", "A3"],
.....: "B": ["B0", "B1", "B2", "B3"],
.....: "key2": ["K0", "K1", "K0", "K1"],
.....: },
.....: index=left_index,
.....: )
.....:
In [114]: right_index = pd.Index(["K0", "K1", "K2", "K2"], name="key1")
In [115]: right = pd.DataFrame(
.....: {
.....: "C": ["C0", "C1", "C2", "C3"],
.....: "D": ["D0", "D1", "D2", "D3"],
.....: "key2": ["K0", "K0", "K0", "K1"],
.....: },
.....: index=right_index,
.....: )
.....:
In [116]: result = left.merge(right, on=["key1", "key2"])
Note
When DataFrames are merged on a string that matches an index level in both
frames, the index level is preserved as an index level in the resulting
DataFrame.
Note
When DataFrames are merged using only some of the levels of a MultiIndex,
the extra levels will be dropped from the resulting merge. In order to
preserve those levels, use reset_index on those level names to move
those levels to columns prior to doing the merge.
Note
If a string matches both a column name and an index level name, then a
warning is issued and the column takes precedence. This will result in an
ambiguity error in a future version.
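The second note above is easiest to see on a runnable sketch (the frames here are invented for illustration): merging on only one level of a two-level MultiIndex drops the other level, while moving the levels to columns with reset_index() first preserves it:

```python
import pandas as pd

leftindex = pd.MultiIndex.from_tuples([("K0", "X0"), ("K1", "X1")], names=["key", "X"])
left = pd.DataFrame({"A": [1, 2]}, index=leftindex)
right = pd.DataFrame({"key": ["K0", "K1"], "B": [10, 20]})

# Merging on the "key" level alone drops the extra "X" level from the result.
dropped = left.merge(right, on="key")

# Moving the index levels to columns first keeps "X" in the result.
kept = left.reset_index().merge(right, on="key")
```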
Overlapping value columns#
The merge suffixes argument takes a tuple or list of strings to append to
overlapping column names in the input DataFrames to disambiguate the result
columns:
In [117]: left = pd.DataFrame({"k": ["K0", "K1", "K2"], "v": [1, 2, 3]})
In [118]: right = pd.DataFrame({"k": ["K0", "K0", "K3"], "v": [4, 5, 6]})
In [119]: result = pd.merge(left, right, on="k")
In [120]: result = pd.merge(left, right, on="k", suffixes=("_l", "_r"))
DataFrame.join() has lsuffix and rsuffix arguments which behave
similarly.
In [121]: left = left.set_index("k")
In [122]: right = right.set_index("k")
In [123]: result = left.join(right, lsuffix="_l", rsuffix="_r")
Joining multiple DataFrames#
A list or tuple of DataFrames can also be passed to join()
to join them together on their indexes.
In [124]: right2 = pd.DataFrame({"v": [7, 8, 9]}, index=["K1", "K1", "K2"])
In [125]: result = left.join([right, right2])
Merging together values within Series or DataFrame columns#
Another fairly common situation is to have two like-indexed (or similarly
indexed) Series or DataFrame objects and wanting to “patch” values in
one object from values for matching indices in the other. Here is an example:
In [126]: df1 = pd.DataFrame(
.....: [[np.nan, 3.0, 5.0], [-4.6, np.nan, np.nan], [np.nan, 7.0, np.nan]]
.....: )
.....:
In [127]: df2 = pd.DataFrame([[-42.6, np.nan, -8.2], [-5.0, 1.6, 4]], index=[1, 2])
For this, use the combine_first() method:
In [128]: result = df1.combine_first(df2)
Note that this method only takes values from the right DataFrame if they are
missing in the left DataFrame. A related method, update(),
alters non-NA values in place:
In [129]: df1.update(df2)
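The contrast between the two methods is easiest to see on a tiny example (values invented for illustration): combine_first() only fills holes in the left frame, while update() overwrites with every non-NA value from the right:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame([[np.nan, 3.0], [-4.6, np.nan]])
df2 = pd.DataFrame([[-42.6, 7.0], [-5.0, 1.6]])

# combine_first keeps df1's existing values (3.0 and -4.6) and
# only fills df1's NaNs from df2.
patched = df1.combine_first(df2)

# update works in place: every non-NA value of df2 overwrites df1.
df1.update(df2)
```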
Timeseries friendly merging#
Merging ordered data#
A merge_ordered() function allows combining time series and other
ordered data. In particular it has an optional fill_method keyword to
fill/interpolate missing data:
In [130]: left = pd.DataFrame(
.....: {"k": ["K0", "K1", "K1", "K2"], "lv": [1, 2, 3, 4], "s": ["a", "b", "c", "d"]}
.....: )
.....:
In [131]: right = pd.DataFrame({"k": ["K1", "K2", "K4"], "rv": [1, 2, 3]})
In [132]: pd.merge_ordered(left, right, fill_method="ffill", left_by="s")
Out[132]:
k lv s rv
0 K0 1.0 a NaN
1 K1 1.0 a 1.0
2 K2 1.0 a 2.0
3 K4 1.0 a 3.0
4 K1 2.0 b 1.0
5 K2 2.0 b 2.0
6 K4 2.0 b 3.0
7 K1 3.0 c 1.0
8 K2 3.0 c 2.0
9 K4 3.0 c 3.0
10 K1 NaN d 1.0
11 K2 4.0 d 2.0
12 K4 4.0 d 3.0
Merging asof#
A merge_asof() is similar to an ordered left-join except that we match on
nearest key rather than equal keys. For each row in the left DataFrame,
we select the last row in the right DataFrame whose on key is less
than the left’s key. Both DataFrames must be sorted by the key.
Optionally an asof merge can perform a group-wise merge. This matches the
by key equally, in addition to the nearest match on the on key.
For example, we might have trades and quotes, and we want to asof
merge them.
In [133]: trades = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.038",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: ]
.....: ),
.....: "ticker": ["MSFT", "MSFT", "GOOG", "GOOG", "AAPL"],
.....: "price": [51.95, 51.95, 720.77, 720.92, 98.00],
.....: "quantity": [75, 155, 100, 100, 100],
.....: },
.....: columns=["time", "ticker", "price", "quantity"],
.....: )
.....:
In [134]: quotes = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.030",
.....: "20160525 13:30:00.041",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.049",
.....: "20160525 13:30:00.072",
.....: "20160525 13:30:00.075",
.....: ]
.....: ),
.....: "ticker": ["GOOG", "MSFT", "MSFT", "MSFT", "GOOG", "AAPL", "GOOG", "MSFT"],
.....: "bid": [720.50, 51.95, 51.97, 51.99, 720.50, 97.99, 720.50, 52.01],
.....: "ask": [720.93, 51.96, 51.98, 52.00, 720.93, 98.01, 720.88, 52.03],
.....: },
.....: columns=["time", "ticker", "bid", "ask"],
.....: )
.....:
In [135]: trades
Out[135]:
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
In [136]: quotes
Out[136]:
time ticker bid ask
0 2016-05-25 13:30:00.023 GOOG 720.50 720.93
1 2016-05-25 13:30:00.023 MSFT 51.95 51.96
2 2016-05-25 13:30:00.030 MSFT 51.97 51.98
3 2016-05-25 13:30:00.041 MSFT 51.99 52.00
4 2016-05-25 13:30:00.048 GOOG 720.50 720.93
5 2016-05-25 13:30:00.049 AAPL 97.99 98.01
6 2016-05-25 13:30:00.072 GOOG 720.50 720.88
7 2016-05-25 13:30:00.075 MSFT 52.01 52.03
By default we are taking the asof of the quotes.
In [137]: pd.merge_asof(trades, quotes, on="time", by="ticker")
Out[137]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 2ms between the quote time and the trade time.
In [138]: pd.merge_asof(trades, quotes, on="time", by="ticker", tolerance=pd.Timedelta("2ms"))
Out[138]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 NaN NaN
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 10ms between the quote time and the trade time and we
exclude exact matches on time. Note that though we exclude the exact matches
(of the quotes), prior quotes do propagate to that point in time.
In [139]: pd.merge_asof(
.....: trades,
.....: quotes,
.....: on="time",
.....: by="ticker",
.....: tolerance=pd.Timedelta("10ms"),
.....: allow_exact_matches=False,
.....: )
.....:
Out[139]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 NaN NaN
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 NaN NaN
3 2016-05-25 13:30:00.048 GOOG 720.92 100 NaN NaN
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
Comparing objects#
The DataFrame.compare() and Series.compare() methods allow you to
compare two DataFrames or Series, respectively, and summarize their differences.
This feature was added in V1.1.0.
For example, you might want to compare two DataFrame and stack their differences
side by side.
In [140]: df = pd.DataFrame(
.....: {
.....: "col1": ["a", "a", "b", "b", "a"],
.....: "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
.....: "col3": [1.0, 2.0, 3.0, 4.0, 5.0],
.....: },
.....: columns=["col1", "col2", "col3"],
.....: )
.....:
In [141]: df
Out[141]:
col1 col2 col3
0 a 1.0 1.0
1 a 2.0 2.0
2 b 3.0 3.0
3 b NaN 4.0
4 a 5.0 5.0
In [142]: df2 = df.copy()
In [143]: df2.loc[0, "col1"] = "c"
In [144]: df2.loc[2, "col3"] = 4.0
In [145]: df2
Out[145]:
col1 col2 col3
0 c 1.0 1.0
1 a 2.0 2.0
2 b 3.0 4.0
3 b NaN 4.0
4 a 5.0 5.0
In [146]: df.compare(df2)
Out[146]:
col1 col3
self other self other
0 a c NaN NaN
2 NaN NaN 3.0 4.0
By default, if two corresponding values are equal, they will be shown as NaN.
Furthermore, if all values in an entire row / column are equal, the row / column
will be omitted from the result. The remaining differences will be aligned on columns.
If you wish, you may choose to stack the differences on rows.
In [147]: df.compare(df2, align_axis=0)
Out[147]:
col1 col3
0 self a NaN
other c NaN
2 self NaN 3.0
other NaN 4.0
If you wish to keep all original rows and columns, set keep_shape argument
to True.
In [148]: df.compare(df2, keep_shape=True)
Out[148]:
col1 col2 col3
self other self other self other
0 a c NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN 3.0 4.0
3 NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN
You may also keep all the original values even if they are equal.
In [149]: df.compare(df2, keep_shape=True, keep_equal=True)
Out[149]:
col1 col2 col3
self other self other self other
0 a c 1.0 1.0 1.0 1.0
1 a a 2.0 2.0 2.0 2.0
2 b b 3.0 3.0 3.0 4.0
3 b b NaN NaN 4.0 4.0
4 a a 5.0 5.0 5.0 5.0
Pandas: Merge two Dataframes (same columns) with condition... How can I improve this code?
(Sorry, my English is not very good...)
I'm studying with public data.
I'm trying to merge two Excel files with a condition.
I tried nested-loop code, but it's too slow...
How can I improve my code?
Please help me TvT
Data structure example:
old data(entire_file.xlsx)
KeyCode Date Something
0 aaa 2020-01-01 00:00:00 adaf
1 bbb 2020-02-01 00:00:00 awd
2 ccc 2020-03-01 00:00:00 feq
...
6000 aewi 2020-03-03 00:00:00 awefeaw
new data(file2.xlsx)
KeyCode Date Something
1 bbb 2020-06-01 20:00:00 aafewfaewfaw
2 ccc 2020-06-01 20:00:00 dfqefqe
3 new 2020-06-01 20:00:00 newrow
hoped result (file3.xlsx)
KeyCode Date Something
0 aaa 2020-01-01 00:00:00 adaf
1 bbb 2020-06-01 20:00:00 aafewfaewfaw
2 ccc 2020-06-01 20:00:00 dfqefqe
...
6000 aewi 2020-03-03 00:00:00 awefeaw
6001 new 2020-06-01 20:00:00 newrow
Code:
import numpy as np
import pandas as pd
%matplotlib notebook
import matplotlib.pyplot as plt
data = pd.read_excel('fulldata_01_01_01_P_병원.xlsx', index_col='번호')
tmp = pd.read_excel('(20200601~20200607)_01_01_01_P_병원.xlsx', index_col='번호')
print('{} is tmp rows count'.format(len(tmp.index)))
print('{} is data rows count'.format(len(data.index)))
new_data = pd.DataFrame([])

for j in range(len(tmp.index)):
    ischange = False
    isexist = False
    for i in range(len(data.index)):
        if (data.iloc[i].loc['KeyCode'] == tmp.iloc[j].loc['KeyCode']) and (data.iloc[i].loc['Date'] < tmp.iloc[j].loc['Date']):
            ischange = True
            data.iloc[i] = tmp.iloc[j]
            break
        elif data.iloc[i].loc['KeyCode'] == tmp.iloc[j].loc['KeyCode']:
            isexist = True
            break
    if ischange:
        print('{} is change'.format(j))
    elif isexist:
        print('{} is exist'.format(j))
    else:
        print('{} is append'.format(j))
        # DataFrame.append returns a new frame; the result must be reassigned
        new_data = new_data.append(tmp.iloc[j], ignore_index=True)

data = data.append(new_data, ignore_index=True)
print('{} is tmp rows count'.format(len(tmp.index)))
print('{} is data rows count'.format(len(data.index)))
But... it is not working...
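For reference, the nested loops above can usually be replaced by a single vectorized combine. This is only a sketch under the column names from the example tables (KeyCode, Date, Something), not the real Excel files:

```python
import pandas as pd

# Toy versions of the "old data" and "new data" tables from the question.
data = pd.DataFrame({
    "KeyCode": ["aaa", "bbb", "ccc"],
    "Date": pd.to_datetime(["2020-01-01", "2020-02-01", "2020-03-01"]),
    "Something": ["adaf", "awd", "feq"],
})
tmp = pd.DataFrame({
    "KeyCode": ["bbb", "ccc", "new"],
    "Date": pd.to_datetime(["2020-06-01 20:00", "2020-06-01 20:00", "2020-06-01 20:00"]),
    "Something": ["aafewfaewfaw", "dfqefqe", "newrow"],
})

# Stack both frames, then keep only the newest row per KeyCode.
# With a stable sort, ties go to tmp because its rows come last in the concat.
combined = pd.concat([data, tmp], ignore_index=True)
newest = combined.sort_values("Date", kind="mergesort").drop_duplicates("KeyCode", keep="last")
result = newest.sort_index().reset_index(drop=True)
print(result)
```

The final sort_index restores the original row order, so updated rows stay in place and genuinely new KeyCodes end up at the bottom, matching the hoped result above.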
65,874,915 | pandas python get table with the first date of event in every year, each country, alternative groupby

Who can help? I'm trying to group this table (original table: https://i.stack.imgur.com/bG1qG.jpg) by (country, year, date of the earthquake) so that I get the first earthquake in every year, for each country. I was able to group through groupby (table with groupby: https://i.stack.imgur.com/e7ecY.jpg), but that view does not suit me; I need the same result in this view:
China 2002 06-28
China 2005 07-25
China 2009 05-10
China 2010 03-10
China 2011 05-10
... ... ... ...
the Kuril Islands 2017 04-07
the Kuril Islands 2018 01-06
the Volcano Islands 2010 10-24
the Volcano Islands 2013 08-24
the Volcano Islands 2015 04-02
06-28 = month-day
How can I do it?
Thanks

Accepted answer (id 65,875,225, 2021-01-24):

Once you get your groupby, use df = df.reset_index().
This will bring the columns you used in the groupby back as regular columns and get you the result you want.

(The section below is from https://pandas.pydata.org/docs/user_guide/visualization.html)
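To make the accepted answer concrete, here is a minimal sketch (column names invented for illustration): take the earliest date per (country, year), then use reset_index() to turn the group keys back into columns:

```python
import pandas as pd

quakes = pd.DataFrame({
    "country": ["China", "China", "China", "the Kuril Islands"],
    "date": pd.to_datetime(["2002-06-28", "2002-08-15", "2005-07-25", "2017-04-07"]),
})
quakes["year"] = quakes["date"].dt.year

# Earliest quake per (country, year); reset_index flattens the group keys
# from the index back into ordinary columns.
first = quakes.groupby(["country", "year"])["date"].min().reset_index()
first["month-day"] = first["date"].dt.strftime("%m-%d")
print(first[["country", "year", "month-day"]])
```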
Chart visualization#
Note
The examples below assume that you’re using Jupyter.
This section demonstrates visualization through charting. For information on
visualization of tabular data please see the section on Table Visualization.
We use the standard convention for referencing the matplotlib API:
In [1]: import matplotlib.pyplot as plt
In [2]: plt.close("all")
We provide the basics in pandas to easily create decent looking plots.
See the ecosystem section for visualization
libraries that go beyond the basics documented here.
Note
All calls to np.random are seeded with 123456.
Basic plotting: plot#
We will demonstrate the basics, see the cookbook for
some advanced strategies.
The plot method on Series and DataFrame is just a simple wrapper around
plt.plot():
In [3]: ts = pd.Series(np.random.randn(1000), index=pd.date_range("1/1/2000", periods=1000))
In [4]: ts = ts.cumsum()
In [5]: ts.plot();
If the index consists of dates, it calls gcf().autofmt_xdate()
to try to format the x-axis nicely as per above.
On DataFrame, plot() is a convenience to plot all of the columns with labels:
In [6]: df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=list("ABCD"))
In [7]: df = df.cumsum()
In [8]: plt.figure();
In [9]: df.plot();
You can plot one column versus another using the x and y keywords in
plot():
In [10]: df3 = pd.DataFrame(np.random.randn(1000, 2), columns=["B", "C"]).cumsum()
In [11]: df3["A"] = pd.Series(list(range(len(df))))
In [12]: df3.plot(x="A", y="B");
Note
For more formatting and styling options, see
formatting below.
Other plots#
Plotting methods allow for a handful of plot styles other than the
default line plot. These methods can be provided as the kind
keyword argument to plot(), and include:
‘bar’ or ‘barh’ for bar plots
‘hist’ for histogram
‘box’ for boxplot
‘kde’ or ‘density’ for density plots
‘area’ for area plots
‘scatter’ for scatter plots
‘hexbin’ for hexagonal bin plots
‘pie’ for pie plots
For example, a bar plot can be created the following way:
In [13]: plt.figure();
In [14]: df.iloc[5].plot(kind="bar");
You can also create these other plots using the methods DataFrame.plot.<kind> instead of providing the kind keyword argument. This makes it easier to discover plot methods and the specific arguments they use:
In [15]: df = pd.DataFrame()
In [16]: df.plot.<TAB> # noqa: E225, E999
df.plot.area df.plot.barh df.plot.density df.plot.hist df.plot.line df.plot.scatter
df.plot.bar df.plot.box df.plot.hexbin df.plot.kde df.plot.pie
In addition to these kind s, there are the DataFrame.hist(),
and DataFrame.boxplot() methods, which use a separate interface.
Finally, there are several plotting functions in pandas.plotting
that take a Series or DataFrame as an argument. These
include:
Scatter Matrix
Andrews Curves
Parallel Coordinates
Lag Plot
Autocorrelation Plot
Bootstrap Plot
RadViz
Plots may also be adorned with errorbars
or tables.
Bar plots#
For labeled, non-time series data, you may wish to produce a bar plot:
In [17]: plt.figure();
In [18]: df.iloc[5].plot.bar();
In [19]: plt.axhline(0, color="k");
Calling a DataFrame’s plot.bar() method produces a multiple
bar plot:
In [20]: df2 = pd.DataFrame(np.random.rand(10, 4), columns=["a", "b", "c", "d"])
In [21]: df2.plot.bar();
To produce a stacked bar plot, pass stacked=True:
In [22]: df2.plot.bar(stacked=True);
To get horizontal bar plots, use the barh method:
In [23]: df2.plot.barh(stacked=True);
Histograms#
Histograms can be drawn by using the DataFrame.plot.hist() and Series.plot.hist() methods.
In [24]: df4 = pd.DataFrame(
....: {
....: "a": np.random.randn(1000) + 1,
....: "b": np.random.randn(1000),
....: "c": np.random.randn(1000) - 1,
....: },
....: columns=["a", "b", "c"],
....: )
....:
In [25]: plt.figure();
In [26]: df4.plot.hist(alpha=0.5);
A histogram can be stacked using stacked=True. Bin size can be changed
using the bins keyword.
In [27]: plt.figure();
In [28]: df4.plot.hist(stacked=True, bins=20);
You can pass other keywords supported by matplotlib hist. For example,
horizontal and cumulative histograms can be drawn by
orientation='horizontal' and cumulative=True.
In [29]: plt.figure();
In [30]: df4["a"].plot.hist(orientation="horizontal", cumulative=True);
See the hist method and the
matplotlib hist documentation for more.
The existing interface DataFrame.hist to plot histogram still can be used.
In [31]: plt.figure();
In [32]: df["A"].diff().hist();
DataFrame.hist() plots the histograms of the columns on multiple
subplots:
In [33]: plt.figure();
In [34]: df.diff().hist(color="k", alpha=0.5, bins=50);
The by keyword can be specified to plot grouped histograms:
In [35]: data = pd.Series(np.random.randn(1000))
In [36]: data.hist(by=np.random.randint(0, 4, 1000), figsize=(6, 4));
In addition, the by keyword can also be specified in DataFrame.plot.hist().
Changed in version 1.4.0.
In [37]: data = pd.DataFrame(
....: {
....: "a": np.random.choice(["x", "y", "z"], 1000),
....: "b": np.random.choice(["e", "f", "g"], 1000),
....: "c": np.random.randn(1000),
....: "d": np.random.randn(1000) - 1,
....: },
....: )
....:
In [38]: data.plot.hist(by=["a", "b"], figsize=(10, 5));
Box plots#
Boxplot can be drawn calling Series.plot.box() and DataFrame.plot.box(),
or DataFrame.boxplot() to visualize the distribution of values within each column.
For instance, here is a boxplot representing five trials of 10 observations of
a uniform random variable on [0,1).
In [39]: df = pd.DataFrame(np.random.rand(10, 5), columns=["A", "B", "C", "D", "E"])
In [40]: df.plot.box();
Boxplot can be colorized by passing color keyword. You can pass a dict
whose keys are boxes, whiskers, medians and caps.
If some keys are missing in the dict, default colors are used
for the corresponding artists. Also, boxplot has sym keyword to specify fliers style.
When you pass other type of arguments via color keyword, it will be directly
passed to matplotlib for all the boxes, whiskers, medians and caps
colorization.
The colors are applied to every boxes to be drawn. If you want
more complicated colorization, you can get each drawn artists by passing
return_type.
In [41]: color = {
....: "boxes": "DarkGreen",
....: "whiskers": "DarkOrange",
....: "medians": "DarkBlue",
....: "caps": "Gray",
....: }
....:
In [42]: df.plot.box(color=color, sym="r+");
Also, you can pass other keywords supported by matplotlib boxplot.
For example, horizontal and custom-positioned boxplot can be drawn by
vert=False and positions keywords.
In [43]: df.plot.box(vert=False, positions=[1, 4, 5, 6, 8]);
See the boxplot method and the
matplotlib boxplot documentation for more.
The existing interface DataFrame.boxplot to plot boxplot still can be used.
In [44]: df = pd.DataFrame(np.random.rand(10, 5))
In [45]: plt.figure();
In [46]: bp = df.boxplot()
You can create a stratified boxplot using the by keyword argument to create
groupings. For instance,
In [47]: df = pd.DataFrame(np.random.rand(10, 2), columns=["Col1", "Col2"])
In [48]: df["X"] = pd.Series(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
In [49]: plt.figure();
In [50]: bp = df.boxplot(by="X")
You can also pass a subset of columns to plot, as well as group by multiple
columns:
In [51]: df = pd.DataFrame(np.random.rand(10, 3), columns=["Col1", "Col2", "Col3"])
In [52]: df["X"] = pd.Series(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
In [53]: df["Y"] = pd.Series(["A", "B", "A", "B", "A", "B", "A", "B", "A", "B"])
In [54]: plt.figure();
In [55]: bp = df.boxplot(column=["Col1", "Col2"], by=["X", "Y"])
You could also create groupings with DataFrame.plot.box(), for instance:
Changed in version 1.4.0.
In [56]: df = pd.DataFrame(np.random.rand(10, 3), columns=["Col1", "Col2", "Col3"])
In [57]: df["X"] = pd.Series(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
In [58]: plt.figure();
In [59]: bp = df.plot.box(column=["Col1", "Col2"], by="X")
In boxplot, the return type can be controlled by the return_type keyword. The valid choices are {"axes", "dict", "both", None}.
Faceting, created by DataFrame.boxplot with the by
keyword, will affect the output type as well:
return_type   Faceted   Output type
None          No        axes
None          Yes       2-D ndarray of axes
'axes'        No        axes
'axes'        Yes       Series of axes
'dict'        No        dict of artists
'dict'        Yes       Series of dicts of artists
'both'        No        namedtuple
'both'        Yes       Series of namedtuples
Groupby.boxplot always returns a Series of return_type.
In [60]: np.random.seed(1234)
In [61]: df_box = pd.DataFrame(np.random.randn(50, 2))
In [62]: df_box["g"] = np.random.choice(["A", "B"], size=50)
In [63]: df_box.loc[df_box["g"] == "B", 1] += 3
In [64]: bp = df_box.boxplot(by="g")
The subplots above are split by the numeric columns first, then the value of
the g column. Below the subplots are first split by the value of g,
then by the numeric columns.
In [65]: bp = df_box.groupby("g").boxplot()
Area plot#
You can create area plots with Series.plot.area() and DataFrame.plot.area().
Area plots are stacked by default. To produce a stacked area plot, each column must contain either all positive or all negative values.
When input data contains NaN, it will be automatically filled by 0. If you want to drop or fill by different values, use dataframe.dropna() or dataframe.fillna() before calling plot.
In [66]: df = pd.DataFrame(np.random.rand(10, 4), columns=["a", "b", "c", "d"])
In [67]: df.plot.area();
To produce an unstacked plot, pass stacked=False. Alpha value is set to 0.5 unless otherwise specified:
In [68]: df.plot.area(stacked=False);
Scatter plot#
Scatter plot can be drawn by using the DataFrame.plot.scatter() method.
Scatter plot requires numeric columns for the x and y axes.
These can be specified by the x and y keywords.
In [69]: df = pd.DataFrame(np.random.rand(50, 4), columns=["a", "b", "c", "d"])
In [70]: df["species"] = pd.Categorical(
....: ["setosa"] * 20 + ["versicolor"] * 20 + ["virginica"] * 10
....: )
....:
In [71]: df.plot.scatter(x="a", y="b");
To plot multiple column groups in a single axes, repeat plot method specifying target ax.
It is recommended to specify color and label keywords to distinguish each groups.
In [72]: ax = df.plot.scatter(x="a", y="b", color="DarkBlue", label="Group 1")
In [73]: df.plot.scatter(x="c", y="d", color="DarkGreen", label="Group 2", ax=ax);
The keyword c may be given as the name of a column to provide colors for
each point:
In [74]: df.plot.scatter(x="a", y="b", c="c", s=50);
If a categorical column is passed to c, then a discrete colorbar will be produced:
New in version 1.3.0.
In [75]: df.plot.scatter(x="a", y="b", c="species", cmap="viridis", s=50);
You can pass other keywords supported by matplotlib
scatter. The example below shows a
bubble chart using a column of the DataFrame as the bubble size.
In [76]: df.plot.scatter(x="a", y="b", s=df["c"] * 200);
See the scatter method and the
matplotlib scatter documentation for more.
Hexagonal bin plot#
You can create hexagonal bin plots with DataFrame.plot.hexbin().
Hexbin plots can be a useful alternative to scatter plots if your data are
too dense to plot each point individually.
In [77]: df = pd.DataFrame(np.random.randn(1000, 2), columns=["a", "b"])
In [78]: df["b"] = df["b"] + np.arange(1000)
In [79]: df.plot.hexbin(x="a", y="b", gridsize=25);
A useful keyword argument is gridsize; it controls the number of hexagons
in the x-direction, and defaults to 100. A larger gridsize means more, smaller
bins.
By default, a histogram of the counts around each (x, y) point is computed.
You can specify alternative aggregations by passing values to the C and
reduce_C_function arguments. C specifies the value at each (x, y) point
and reduce_C_function is a function of one argument that reduces all the
values in a bin to a single number (e.g. mean, max, sum, std). In this
example the positions are given by columns a and b, while the value is
given by column z. The bins are aggregated with NumPy’s max function.
In [80]: df = pd.DataFrame(np.random.randn(1000, 2), columns=["a", "b"])
In [81]: df["b"] = df["b"] + np.arange(1000)
In [82]: df["z"] = np.random.uniform(0, 3, 1000)
In [83]: df.plot.hexbin(x="a", y="b", C="z", reduce_C_function=np.max, gridsize=25);
See the hexbin method and the
matplotlib hexbin documentation for more.
Pie plot#
You can create a pie plot with DataFrame.plot.pie() or Series.plot.pie().
If your data includes any NaN, they will be automatically filled with 0.
A ValueError will be raised if there are any negative values in your data.
In [84]: series = pd.Series(3 * np.random.rand(4), index=["a", "b", "c", "d"], name="series")
In [85]: series.plot.pie(figsize=(6, 6));
For pie plots it’s best to use square figures, i.e. a figure aspect ratio 1.
You can create the figure with equal width and height, or force the aspect ratio
to be equal after plotting by calling ax.set_aspect('equal') on the returned
axes object.
Note that pie plot with DataFrame requires that you either specify a
target column by the y argument or subplots=True. When y is
specified, pie plot of selected column will be drawn. If subplots=True is
specified, pie plots for each column are drawn as subplots. A legend will be
drawn in each pie plots by default; specify legend=False to hide it.
In [86]: df = pd.DataFrame(
....: 3 * np.random.rand(4, 2), index=["a", "b", "c", "d"], columns=["x", "y"]
....: )
....:
In [87]: df.plot.pie(subplots=True, figsize=(8, 4));
You can use the labels and colors keywords to specify the labels and colors of each wedge.
Warning
Most pandas plots use the label and color arguments (note the lack of “s” on those).
To be consistent with matplotlib.pyplot.pie() you must use labels and colors.
If you want to hide wedge labels, specify labels=None.
If fontsize is specified, the value will be applied to wedge labels.
Also, other keywords supported by matplotlib.pyplot.pie() can be used.
In [88]: series.plot.pie(
....: labels=["AA", "BB", "CC", "DD"],
....: colors=["r", "g", "b", "c"],
....: autopct="%.2f",
....: fontsize=20,
....: figsize=(6, 6),
....: );
....:
If you pass values whose sum total is less than 1.0 they will be rescaled so that they sum to 1.
In [89]: series = pd.Series([0.1] * 4, index=["a", "b", "c", "d"], name="series2")
In [90]: series.plot.pie(figsize=(6, 6));
See the matplotlib pie documentation for more.
Plotting with missing data#
pandas tries to be pragmatic about plotting DataFrames or Series
that contain missing data. Missing values are dropped, left out, or filled
depending on the plot type.
Plot Type        NaN Handling
Line             Leave gaps at NaNs
Line (stacked)   Fill 0’s
Bar              Fill 0’s
Scatter          Drop NaNs
Histogram        Drop NaNs (column-wise)
Box              Drop NaNs (column-wise)
Area             Fill 0’s
KDE              Drop NaNs (column-wise)
Hexbin           Drop NaNs
Pie              Fill 0’s
If any of these defaults are not what you want, or if you want to be
explicit about how missing values are handled, consider using
fillna() or dropna()
before plotting.
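For instance, the implicit defaults in the table above can be made explicit before plotting (a small sketch):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])

filled = s.fillna(0)   # what stacked line/area/bar plots would do implicitly
dropped = s.dropna()   # what scatter/kde plots would do implicitly
print(filled.tolist(), dropped.tolist())
```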
Plotting tools#
These functions can be imported from pandas.plotting
and take a Series or DataFrame as an argument.
Scatter matrix plot#
You can create a scatter plot matrix using the
scatter_matrix method in pandas.plotting:
In [91]: from pandas.plotting import scatter_matrix
In [92]: df = pd.DataFrame(np.random.randn(1000, 4), columns=["a", "b", "c", "d"])
In [93]: scatter_matrix(df, alpha=0.2, figsize=(6, 6), diagonal="kde");
Density plot#
You can create density plots using the Series.plot.kde() and DataFrame.plot.kde() methods.
In [94]: ser = pd.Series(np.random.randn(1000))
In [95]: ser.plot.kde();
Andrews curves#
Andrews curves allow one to plot multivariate data as a large number
of curves that are created using the attributes of samples as coefficients
for Fourier series, see the Wikipedia entry
for more information. By coloring these curves differently for each class
it is possible to visualize data clustering. Curves belonging to samples
of the same class will usually be closer together and form larger structures.
Note: The “Iris” dataset is available here.
In [96]: from pandas.plotting import andrews_curves
In [97]: data = pd.read_csv("data/iris.data")
In [98]: plt.figure();
In [99]: andrews_curves(data, "Name");
Parallel coordinates#
Parallel coordinates is a plotting technique for plotting multivariate data,
see the Wikipedia entry
for an introduction.
Parallel coordinates allows one to see clusters in data and to estimate other statistics visually.
Using parallel coordinates, points are represented as connected line segments.
Each vertical line represents one attribute. One set of connected line segments
represents one data point. Points that tend to cluster will appear closer together.
In [100]: from pandas.plotting import parallel_coordinates
In [101]: data = pd.read_csv("data/iris.data")
In [102]: plt.figure();
In [103]: parallel_coordinates(data, "Name");
Lag plot#
Lag plots are used to check if a data set or time series is random. Random
data should not exhibit any structure in the lag plot. Non-random structure
implies that the underlying data are not random. The lag argument may
be passed, and when lag=1 the plot is essentially data[:-1] vs.
data[1:].
In [104]: from pandas.plotting import lag_plot
In [105]: plt.figure();
In [106]: spacing = np.linspace(-99 * np.pi, 99 * np.pi, num=1000)
In [107]: data = pd.Series(0.1 * np.random.rand(1000) + 0.9 * np.sin(spacing))
In [108]: lag_plot(data);
Autocorrelation plot#
Autocorrelation plots are often used for checking randomness in time series.
This is done by computing autocorrelations for data values at varying time lags.
If time series is random, such autocorrelations should be near zero for any and
all time-lag separations. If time series is non-random then one or more of the
autocorrelations will be significantly non-zero. The horizontal lines displayed
in the plot correspond to 95% and 99% confidence bands. The dashed line is 99%
confidence band. See the
Wikipedia entry for more about
autocorrelation plots.
In [109]: from pandas.plotting import autocorrelation_plot
In [110]: plt.figure();
In [111]: spacing = np.linspace(-9 * np.pi, 9 * np.pi, num=1000)
In [112]: data = pd.Series(0.7 * np.random.rand(1000) + 0.3 * np.sin(spacing))
In [113]: autocorrelation_plot(data);
Bootstrap plot#
Bootstrap plots are used to visually assess the uncertainty of a statistic, such
as mean, median, midrange, etc. A random subset of a specified size is selected
from a data set, the statistic in question is computed for this subset and the
process is repeated a specified number of times. Resulting plots and histograms
are what constitutes the bootstrap plot.
In [114]: from pandas.plotting import bootstrap_plot
In [115]: data = pd.Series(np.random.rand(1000))
In [116]: bootstrap_plot(data, size=50, samples=500, color="grey");
RadViz#
RadViz is a way of visualizing multi-variate data. It is based on a simple
spring tension minimization algorithm. Basically you set up a bunch of points in
a plane. In our case they are equally spaced on a unit circle. Each point
represents a single attribute. You then pretend that each sample in the data set
is attached to each of these points by a spring, the stiffness of which is
proportional to the numerical value of that attribute (they are normalized to
unit interval). The point in the plane, where our sample settles to (where the
forces acting on our sample are at an equilibrium) is where a dot representing
our sample will be drawn. Depending on which class that sample belongs it will
be colored differently.
See the R package Radviz
for more information.
Note: The “Iris” dataset is available here.
In [117]: from pandas.plotting import radviz
In [118]: data = pd.read_csv("data/iris.data")
In [119]: plt.figure();
In [120]: radviz(data, "Name");
Plot formatting#
Setting the plot style#
From version 1.5 and up, matplotlib offers a range of pre-configured plotting styles. Setting the
style can be used to easily give plots the general look that you want.
Setting the style is as easy as calling matplotlib.style.use(my_plot_style) before
creating your plot. For example you could write matplotlib.style.use('ggplot') for ggplot-style
plots.
You can see the various available style names at matplotlib.style.available and it’s very
easy to try them out.
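As a minimal sketch of the above (assuming matplotlib is installed; the non-interactive Agg backend is selected so no display is needed), a style is applied simply by calling matplotlib.style.use before plotting:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no display required
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Inspect a few of the registered style names
print(sorted(matplotlib.style.available)[:5])

# Apply a style before creating the plot
matplotlib.style.use("ggplot")
ts = pd.Series(np.random.randn(100)).cumsum()
ax = ts.plot()  # this plot now uses the ggplot look
plt.close("all")
```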
General plot style arguments#
Most plotting methods have a set of keyword arguments that control the
layout and formatting of the returned plot:
In [121]: plt.figure();
In [122]: ts.plot(style="k--", label="Series");
For each kind of plot (e.g. line, bar, scatter) any additional keyword
arguments are passed along to the corresponding matplotlib function
(ax.plot(),
ax.bar(),
ax.scatter()). These can be used
to control additional styling, beyond what pandas provides.
Controlling the legend#
You may set the legend argument to False to hide the legend, which is
shown by default.
In [123]: df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=list("ABCD"))
In [124]: df = df.cumsum()
In [125]: df.plot(legend=False);
Controlling the labels#
New in version 1.1.0.
You may set the xlabel and ylabel arguments to give the plot custom labels
for x and y axis. By default, pandas will pick up index name as xlabel, while leaving
it empty for ylabel.
In [126]: df.plot();
In [127]: df.plot(xlabel="new x", ylabel="new y");
Scales#
You may pass logy to get a log-scale Y axis.
In [128]: ts = pd.Series(np.random.randn(1000), index=pd.date_range("1/1/2000", periods=1000))
In [129]: ts = np.exp(ts.cumsum())
In [130]: ts.plot(logy=True);
See also the logx and loglog keyword arguments.
Plotting on a secondary y-axis#
To plot data on a secondary y-axis, use the secondary_y keyword:
In [131]: df["A"].plot();
In [132]: df["B"].plot(secondary_y=True, style="g");
To plot some columns in a DataFrame, give the column names to the secondary_y
keyword:
In [133]: plt.figure();
In [134]: ax = df.plot(secondary_y=["A", "B"])
In [135]: ax.set_ylabel("CD scale");
In [136]: ax.right_ax.set_ylabel("AB scale");
Note that the columns plotted on the secondary y-axis are automatically marked
with “(right)” in the legend. To turn off the automatic marking, use the
mark_right=False keyword:
In [137]: plt.figure();
In [138]: df.plot(secondary_y=["A", "B"], mark_right=False);
Custom formatters for timeseries plots#
Changed in version 1.0.0.
pandas provides custom formatters for timeseries plots. These change the
formatting of the axis labels for dates and times. By default,
the custom formatters are applied only to plots created by pandas with
DataFrame.plot() or Series.plot(). To have them apply to all
plots, including those made by matplotlib, set the option
pd.options.plotting.matplotlib.register_converters = True or use
pandas.plotting.register_matplotlib_converters().
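A small sketch of registering the converters globally (assuming matplotlib is installed; the Agg backend is used so it runs headless). After registration, a plain matplotlib call handles a pandas DatetimeIndex directly:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
import pandas as pd
from pandas.plotting import register_matplotlib_converters

# Make pandas' date/time formatters apply to all matplotlib plots,
# not only those created through DataFrame.plot() / Series.plot()
register_matplotlib_converters()

idx = pd.date_range("2020-01-01", periods=10, freq="D")
s = pd.Series(range(10), index=idx)

fig, ax = plt.subplots()
ax.plot(s.index, s.to_numpy())  # plain matplotlib call; the dates are handled
plt.close(fig)
```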
Suppressing tick resolution adjustment#
pandas includes automatic tick resolution adjustment for regular frequency
time-series data. For limited cases where pandas cannot infer the frequency
information (e.g., in an externally created twinx), you can choose to
suppress this behavior for alignment purposes.
Here is the default behavior, notice how the x-axis tick labeling is performed:
In [139]: plt.figure();
In [140]: df["A"].plot();
Using the x_compat parameter, you can suppress this behavior:
In [141]: plt.figure();
In [142]: df["A"].plot(x_compat=True);
If you have more than one plot that needs to be suppressed, the use method
in pandas.plotting.plot_params can be used in a with statement:
In [143]: plt.figure();
In [144]: with pd.plotting.plot_params.use("x_compat", True):
.....: df["A"].plot(color="r")
.....: df["B"].plot(color="g")
.....: df["C"].plot(color="b")
.....:
Automatic date tick adjustment#
TimedeltaIndex now uses the native matplotlib
tick locator methods; it is useful to call the automatic
date tick adjustment from matplotlib for figures whose ticklabels overlap.
See the autofmt_xdate method and the
matplotlib documentation for more.
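For example (a sketch, assuming matplotlib is installed and using the headless Agg backend), autofmt_xdate can be called on the figure to rotate and right-align overlapping date tick labels:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

idx = pd.date_range("2020-01-01", periods=30, freq="D")
ts = pd.Series(np.random.randn(30), index=idx)

fig, ax = plt.subplots()
ax.plot(ts.index, ts.to_numpy())
fig.autofmt_xdate()  # rotate/align the x tick labels to avoid overlap
plt.close(fig)
```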
Subplots#
Each Series in a DataFrame can be plotted on a different axis
with the subplots keyword:
In [145]: df.plot(subplots=True, figsize=(6, 6));
Using layout and targeting multiple axes#
The layout of subplots can be specified by the layout keyword. It can accept
(rows, columns). The layout keyword can be used in
hist and boxplot also. If the input is invalid, a ValueError will be raised.
The number of axes which can be contained by rows x columns specified by layout must be
larger than the number of required subplots. If layout can contain more axes than required,
blank axes are not drawn. Similar to a NumPy array’s reshape method, you
can use -1 for one dimension to automatically calculate the number of rows
or columns needed, given the other.
In [146]: df.plot(subplots=True, layout=(2, 3), figsize=(6, 6), sharex=False);
The above example is identical to using:
In [147]: df.plot(subplots=True, layout=(2, -1), figsize=(6, 6), sharex=False);
The required number of columns (3) is inferred from the number of series to plot
and the given number of rows (2).
You can pass multiple axes created beforehand as list-like via ax keyword.
This allows more complicated layouts.
The passed axes must be the same number as the subplots being drawn.
When multiple axes are passed via the ax keyword, layout, sharex and sharey keywords
don’t affect the output. You should explicitly pass sharex=False and sharey=False,
otherwise you will see a warning.
In [148]: fig, axes = plt.subplots(4, 4, figsize=(9, 9))
In [149]: plt.subplots_adjust(wspace=0.5, hspace=0.5)
In [150]: target1 = [axes[0][0], axes[1][1], axes[2][2], axes[3][3]]
In [151]: target2 = [axes[3][0], axes[2][1], axes[1][2], axes[0][3]]
In [152]: df.plot(subplots=True, ax=target1, legend=False, sharex=False, sharey=False);
In [153]: (-df).plot(subplots=True, ax=target2, legend=False, sharex=False, sharey=False);
Another option is passing an ax argument to Series.plot() to plot on a particular axis:
In [154]: fig, axes = plt.subplots(nrows=2, ncols=2)
In [155]: plt.subplots_adjust(wspace=0.2, hspace=0.5)
In [156]: df["A"].plot(ax=axes[0, 0]);
In [157]: axes[0, 0].set_title("A");
In [158]: df["B"].plot(ax=axes[0, 1]);
In [159]: axes[0, 1].set_title("B");
In [160]: df["C"].plot(ax=axes[1, 0]);
In [161]: axes[1, 0].set_title("C");
In [162]: df["D"].plot(ax=axes[1, 1]);
In [163]: axes[1, 1].set_title("D");
Plotting with error bars#
Plotting with error bars is supported in DataFrame.plot() and Series.plot().
Horizontal and vertical error bars can be supplied to the xerr and yerr keyword arguments to plot(). The error values can be specified using a variety of formats:
As a DataFrame or dict of errors with column names matching the columns attribute of the plotting DataFrame or matching the name attribute of the Series.
As a str indicating which of the columns of plotting DataFrame contain the error values.
As raw values (list, tuple, or np.ndarray). Must be the same length as the plotting DataFrame/Series.
Here is an example of one way to easily plot group means with standard deviations from the raw data.
# Generate the data
In [164]: ix3 = pd.MultiIndex.from_arrays(
.....: [
.....: ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"],
.....: ["foo", "foo", "foo", "bar", "bar", "foo", "foo", "bar", "bar", "bar"],
.....: ],
.....: names=["letter", "word"],
.....: )
.....:
In [165]: df3 = pd.DataFrame(
.....: {
.....: "data1": [9, 3, 2, 4, 3, 2, 4, 6, 3, 2],
.....: "data2": [9, 6, 5, 7, 5, 4, 5, 6, 5, 1],
.....: },
.....: index=ix3,
.....: )
.....:
# Group by index labels and take the means and standard deviations
# for each group
In [166]: gp3 = df3.groupby(level=("letter", "word"))
In [167]: means = gp3.mean()
In [168]: errors = gp3.std()
In [169]: means
Out[169]:
data1 data2
letter word
a bar 3.500000 6.000000
foo 4.666667 6.666667
b bar 3.666667 4.000000
foo 3.000000 4.500000
In [170]: errors
Out[170]:
data1 data2
letter word
a bar 0.707107 1.414214
foo 3.785939 2.081666
b bar 2.081666 2.645751
foo 1.414214 0.707107
# Plot
In [171]: fig, ax = plt.subplots()
In [172]: means.plot.bar(yerr=errors, ax=ax, capsize=4, rot=0);
Asymmetrical error bars are also supported, however raw error values must be provided in this case. For a N length Series, a 2xN array should be provided indicating lower and upper (or left and right) errors. For a MxN DataFrame, asymmetrical errors should be in a Mx2xN array.
Here is an example of one way to plot the min/max range using asymmetrical error bars.
In [173]: mins = gp3.min()
In [174]: maxs = gp3.max()
# errors should be positive, and defined in the order of lower, upper
In [175]: errors = [[means[c] - mins[c], maxs[c] - means[c]] for c in df3.columns]
# Plot
In [176]: fig, ax = plt.subplots()
In [177]: means.plot.bar(yerr=errors, ax=ax, capsize=4, rot=0);
Plotting tables#
Plotting with matplotlib table is now supported in DataFrame.plot() and Series.plot() with a table keyword. The table keyword can accept bool, DataFrame or Series. The simple way to draw a table is to specify table=True. Data will be transposed to meet matplotlib’s default layout.
In [178]: fig, ax = plt.subplots(1, 1, figsize=(7, 6.5))
In [179]: df = pd.DataFrame(np.random.rand(5, 3), columns=["a", "b", "c"])
In [180]: ax.xaxis.tick_top() # Display x-axis ticks on top.
In [181]: df.plot(table=True, ax=ax);
Also, you can pass a different DataFrame or Series to the
table keyword. The data will be drawn as displayed in print method
(not transposed automatically). If required, it should be transposed manually
as seen in the example below.
In [182]: fig, ax = plt.subplots(1, 1, figsize=(7, 6.75))
In [183]: ax.xaxis.tick_top() # Display x-axis ticks on top.
In [184]: df.plot(table=np.round(df.T, 2), ax=ax);
There also exists a helper function pandas.plotting.table, which creates a
table from DataFrame or Series, and adds it to an
matplotlib.Axes instance. This function can accept keywords which the
matplotlib table has.
In [185]: from pandas.plotting import table
In [186]: fig, ax = plt.subplots(1, 1)
In [187]: table(ax, np.round(df.describe(), 2), loc="upper right", colWidths=[0.2, 0.2, 0.2]);
In [188]: df.plot(ax=ax, ylim=(0, 2), legend=None);
Note: You can get table instances on the axes using axes.tables property for further decorations. See the matplotlib table documentation for more.
Colormaps#
A potential issue when plotting a large number of columns is that it can be
difficult to distinguish some series due to repetition in the default colors. To
remedy this, DataFrame plotting supports the use of the colormap argument,
which accepts either a Matplotlib colormap
or a string that is a name of a colormap registered with Matplotlib. A
visualization of the default matplotlib colormaps is available here.
As matplotlib does not directly support colormaps for line-based plots, the
colors are selected based on an even spacing determined by the number of columns
in the DataFrame. There is no consideration made for background color, so some
colormaps will produce lines that are not easily visible.
To use the cubehelix colormap, we can pass colormap='cubehelix'.
In [189]: df = pd.DataFrame(np.random.randn(1000, 10), index=ts.index)
In [190]: df = df.cumsum()
In [191]: plt.figure();
In [192]: df.plot(colormap="cubehelix");
Alternatively, we can pass the colormap itself:
In [193]: from matplotlib import cm
In [194]: plt.figure();
In [195]: df.plot(colormap=cm.cubehelix);
Colormaps can also be used in other plot types, like bar charts:
In [196]: dd = pd.DataFrame(np.random.randn(10, 10)).applymap(abs)
In [197]: dd = dd.cumsum()
In [198]: plt.figure();
In [199]: dd.plot.bar(colormap="Greens");
Parallel coordinates charts:
In [200]: plt.figure();
In [201]: parallel_coordinates(data, "Name", colormap="gist_rainbow");
Andrews curves charts:
In [202]: plt.figure();
In [203]: andrews_curves(data, "Name", colormap="winter");
Plotting directly with Matplotlib#
In some situations it may still be preferable or necessary to prepare plots
directly with matplotlib, for instance when a certain type of plot or
customization is not (yet) supported by pandas. Series and DataFrame
objects behave like arrays and can therefore be passed directly to
matplotlib functions without explicit casts.
pandas also automatically registers formatters and locators that recognize date
indices, thereby extending date and time support to practically all plot types
available in matplotlib. Although this formatting does not provide the same
level of refinement you would get when plotting via pandas, it can be faster
when plotting a large number of points.
In [204]: price = pd.Series(
.....: np.random.randn(150).cumsum(),
.....: index=pd.date_range("2000-1-1", periods=150, freq="B"),
.....: )
.....:
In [205]: ma = price.rolling(20).mean()
In [206]: mstd = price.rolling(20).std()
In [207]: plt.figure();
In [208]: plt.plot(price.index, price, "k");
In [209]: plt.plot(ma.index, ma, "b");
In [210]: plt.fill_between(mstd.index, ma - 2 * mstd, ma + 2 * mstd, color="b", alpha=0.2);
Plotting backends#
Starting in version 0.25, pandas can be extended with third-party plotting backends. The
main idea is letting users select a plotting backend different than the provided
one based on Matplotlib.
This can be done by passing ‘backend.module’ as the argument backend in plot
function. For example:
>>> Series([1, 2, 3]).plot(backend="backend.module")
Alternatively, you can also set this option globally, so you don’t need to specify
the keyword in each plot call. For example:
>>> pd.set_option("plotting.backend", "backend.module")
>>> pd.Series([1, 2, 3]).plot()
Or:
>>> pd.options.plotting.backend = "backend.module"
>>> pd.Series([1, 2, 3]).plot()
This would be more or less equivalent to:
>>> import backend.module
>>> backend.module.plot(pd.Series([1, 2, 3]))
The backend module can then use other visualization tools (Bokeh, Altair, hvplot,…)
to generate the plots. Some libraries implementing a backend for pandas are listed
on the ecosystem Visualization page.
Developers guide can be found at
https://pandas.pydata.org/docs/dev/development/extending.html#plotting-backends
| 508 | 660 | pandas python get table with the first date of event in every year, each country, alternative groupby
Can anyone help? I'm trying to group this table (original table) into (country, year, date of the earthquake) so that I get the first earthquake in every year, for each country. I was able to group using groupby (table with groupby), but that view does not suit me; I need the same result in this view:
China 2002 06-28
China 2005 07-25
China 2009 05-10
China 2010 03-10
China 2011 05-10
... ... ... ...
the Kuril Islands 2017 04-07
the Kuril Islands 2018 01-06
the Volcano Islands 2010 10-24
the Volcano Islands 2013 08-24
the Volcano Islands 2015 04-02
06-28 = month-day
How can I do it?
Thanks |
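No accepted answer is included for this record, but one hedged sketch (using hypothetical sample data that mirrors the description above, not the asker's actual table) is to take the minimum date per (country, year) group and then format it as month-day:

```python
import pandas as pd

# Hypothetical sample resembling the earthquake table described above
df = pd.DataFrame({
    "country": ["China", "China", "China",
                "the Kuril Islands", "the Kuril Islands"],
    "date": pd.to_datetime(["2002-06-28", "2002-07-30", "2005-07-25",
                            "2017-04-07", "2017-05-01"]),
})

df["year"] = df["date"].dt.year

# Earliest earthquake per (country, year)
first = df.groupby(["country", "year"], as_index=False)["date"].min()
first["month_day"] = first["date"].dt.strftime("%m-%d")
print(first[["country", "year", "month_day"]])
```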
63,177,142 | Change column values into rating and sum | <p>Change the column values and sum the row according to conditions.</p>
<pre><code>d = {'col1': [20, 40], 'col2': [30, 40], 'col3': [200, 300]}
df = pd.DataFrame(data=d)
col1 col2 col3
0 20 30 200
1 40 40 300
Col4 should give back the sum of the row after the values have been transferred to a rating.
Col1 Value between 0-20 ->2 Points, 20-40 -> 3 Points
Col2 Value between 40-50 ->2 Points, 70-80 -> 3 Points
Col3 Value between 0-100 ->2 Points, 100-300 -> 2 Points
col 4 (Points)
0 2
1 6
</code></pre> | 63,181,949 | 2020-07-30T16:09:53.107000 | 1 | null | 1 | 26 | pandas | <p>Use pd. cut as follows. Values didnt add up though. Happy to asist further if clarified.</p>
<p>Use pd.cut to bin the values and save the results in new columns suffixed with "Points", then select only the columns whose names contain "Points" and sum them.</p>
<pre><code>df['col1Points'],df['col2Points'],df['col3Points']=\
pd.cut(df.col1, [0,20,40],labels=[2,3])\
,pd.cut(df.col2, [40,70,80],labels=[2,3])\
,pd.cut(df.col3, [-0,100,300],labels=[2,3])
df['col4']=df.filter(like='Points').sum(axis=1)
col1 col2 col3 col1Points col2Points col3Points col4
0 20 30 200 2 NaN 3 5.0
1 40 40 300 3 NaN 3 6.0
</code></pre> | 2020-07-30T21:57:02.090000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.rank.html | pandas.DataFrame.rank#
pandas.DataFrame.rank#
DataFrame.rank(axis=0, method='average', numeric_only=_NoDefault.no_default, na_option='keep', ascending=True, pct=False)[source]#
Compute numerical data ranks (1 through n) along axis.
By default, equal values are assigned a rank that is the average of the
ranks of those values.
Parameters
axis : {0 or ‘index’, 1 or ‘columns’}, default 0. Index to direct ranking.
Use pd.cut as follows. Values didn't add up though; happy to assist further if clarified.
Use pd.cut to bin the values and save the results in new columns suffixed with "Points", then select only the columns whose names contain "Points" and sum them.
df['col1Points'],df['col2Points'],df['col3Points']=\
pd.cut(df.col1, [0,20,40],labels=[2,3])\
,pd.cut(df.col2, [40,70,80],labels=[2,3])\
,pd.cut(df.col3, [-0,100,300],labels=[2,3])
df['col4']=df.filter(like='Points').sum(axis=1)
col1 col2 col3 col1Points col2Points col3Points col4
0 20 30 200 2 NaN 3 5.0
1 40 40 300 3 NaN 3 6.0
For Series this parameter is unused and defaults to 0.
method : {‘average’, ‘min’, ‘max’, ‘first’, ‘dense’}, default ‘average’. How to rank the group of records that have the same value (i.e. ties):
average: average rank of the group
min: lowest rank in the group
max: highest rank in the group
first: ranks assigned in order they appear in the array
dense: like ‘min’, but rank always increases by 1 between groups.
numeric_only : bool, optional. For DataFrame objects, rank only numeric columns if set to True.
na_option : {‘keep’, ‘top’, ‘bottom’}, default ‘keep’. How to rank NaN values:
keep: assign NaN rank to NaN values
top: assign lowest rank to NaN values
bottom: assign highest rank to NaN values
ascending : bool, default True. Whether or not the elements should be ranked in ascending order.
pct : bool, default False. Whether or not to display the returned rankings in percentile form.
Returns
same type as caller. Return a Series or DataFrame with data ranks as values.
See also
core.groupby.GroupBy.rank : Rank of values within each group.
Examples
>>> df = pd.DataFrame(data={'Animal': ['cat', 'penguin', 'dog',
... 'spider', 'snake'],
... 'Number_legs': [4, 2, 4, 8, np.nan]})
>>> df
Animal Number_legs
0 cat 4.0
1 penguin 2.0
2 dog 4.0
3 spider 8.0
4 snake NaN
Ties are assigned the mean of the ranks (by default) for the group.
>>> s = pd.Series(range(5), index=list("abcde"))
>>> s["d"] = s["b"]
>>> s.rank()
a 1.0
b 2.5
c 4.0
d 2.5
e 5.0
dtype: float64
The following example shows how the method behaves with the above
parameters:
default_rank: this is the default behaviour obtained without using
any parameter.
max_rank: setting method = 'max' the records that have the
same values are ranked using the highest rank (e.g.: since ‘cat’
and ‘dog’ are both in the 2nd and 3rd position, rank 3 is assigned.)
NA_bottom: choosing na_option = 'bottom', if there are records
with NaN values they are placed at the bottom of the ranking.
pct_rank: when setting pct = True, the ranking is expressed as
percentile rank.
>>> df['default_rank'] = df['Number_legs'].rank()
>>> df['max_rank'] = df['Number_legs'].rank(method='max')
>>> df['NA_bottom'] = df['Number_legs'].rank(na_option='bottom')
>>> df['pct_rank'] = df['Number_legs'].rank(pct=True)
>>> df
Animal Number_legs default_rank max_rank NA_bottom pct_rank
0 cat 4.0 2.5 3.0 2.5 0.625
1 penguin 2.0 1.0 1.0 1.0 0.250
2 dog 4.0 2.5 3.0 2.5 0.625
3 spider 8.0 4.0 4.0 4.0 1.000
4 snake NaN NaN NaN 5.0 NaN
| 414 | 1,026 | Change column values into rating and sum
Change the column values and sum the row according to conditions.
d = {'col1': [20, 40], 'col2': [30, 40], 'col3': [200, 300]}
df = pd.DataFrame(data=d)
col1 col2 col3
0 20 30 200
1 40 40 300
Col4 should give back the sum of the row after the values have been transferred to a rating.
Col1 Value between 0-20 ->2 Points, 20-40 -> 3 Points
Col2 Value between 40-50 ->2 Points, 70-80 -> 3 Points
Col3 Value between 0-100 ->2 Points, 100-300 -> 2 Points
col 4 (Points)
0 2
1 6
|
60,569,207 | Using join on a dictionary of dataframes by datetime | <p>I have a dictionary of dataframes which have two columns 'Time' (datetimeformat) and another column which is different for each dataframe. The Time/Value entries are variable.</p>
<p>I want to join all of the dataframes to a master time dataframe which has 1 minute increments for the entire time range using the 'Time' value as a key.</p>
<p><code>df_man_data</code> is the master time dataframe. It looks like:</p>
<pre><code> Time
0 2019-01-01 13:44:00
1 2019-01-01 13:45:00
2 2019-01-01 13:46:00
531498 2020-01-05 16:02:00
531499 2020-01-05 16:03:00
531500 2020-01-05 16:04:00
531501 2020-01-05 16:05:00
</code></pre>
<p>one of the dictionary dataframes looks like this:</p>
<pre><code> Time V-106A_TAP_7
0 2019-01-05 09:39:00 22.0
1 2019-01-07 09:42:00 30.0
2 2019-02-06 08:58:00 8.0
3 2019-02-06 21:25:00 16.0
262 2020-02-11 09:00:00 32.0
263 2020-02-12 20:08:00 34.0
264 2020-02-13 09:34:00 2.0
</code></pre>
<p>I've tried this:</p>
<pre><code>df_man_data = df_time
for tag in tags:
df_man_data.join(df_dic[tag].set_index('Time'), on='Time', how='left')
</code></pre>
<p>but my <code>df_man_data</code> comes out with no extra columns</p> | 60,569,419 | 2020-03-06T17:48:45.797000 | 1 | null | -1 | 26 | python|pandas | <p>Change your for loop to </p>
<pre><code>for tag in tags:
df_man_data = df_man_data.join(df_dic[tag].set_index('Time'), on = 'Time',how = 'left')
</code></pre>
<p>.join() returns a new dataframe and assigning that new, joined dataframe to df_man_data each loop should capture all of your new columns of data iteratively.</p> | 2020-03-06T18:05:30.197000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_dict.html | pandas.DataFrame.to_dict#
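A self-contained sketch of this iterative-join pattern, with hypothetical sample data in place of the asker's Excel-derived frames (the tag name and timestamps below are illustrative only):

```python
import pandas as pd

# Hypothetical master time frame at 1-minute increments
df_man_data = pd.DataFrame(
    {"Time": pd.date_range("2019-01-01 13:44", periods=4, freq="min")}
)

# Hypothetical dictionary of value frames keyed by tag
df_dic = {
    "V-106A_TAP_7": pd.DataFrame(
        {"Time": [pd.Timestamp("2019-01-01 13:45")],
         "V-106A_TAP_7": [22.0]}
    ),
}
tags = list(df_dic)

# Reassign on each iteration so every joined column is kept
for tag in tags:
    df_man_data = df_man_data.join(
        df_dic[tag].set_index("Time"), on="Time", how="left"
    )

print(df_man_data)
```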
pandas.DataFrame.to_dict#
DataFrame.to_dict(orient='dict', into=<class 'dict'>)[source]#
Convert the DataFrame to a dictionary.
The type of the key-value pairs can be customized with the parameters
(see below).
Parameters
orient : str {‘dict’, ‘list’, ‘series’, ‘split’, ‘tight’, ‘records’, ‘index’}. Determines the type of the values of the dictionary.
‘dict’ (default) : dict like {column -> {index -> value}}
‘list’ : dict like {column -> [values]}
‘series’ : dict like {column -> Series(values)}
‘split’ : dict like
{‘index’ -> [index], ‘columns’ -> [columns], ‘data’ -> [values]}
‘tight’ : dict like
{‘index’ -> [index], ‘columns’ -> [columns], ‘data’ -> [values],
‘index_names’ -> [index.names], ‘column_names’ -> [column.names]}
‘records’ : list like
[{column -> value}, … , {column -> value}]
‘index’ : dict like {index -> {column -> value}}
Change your for loop to
for tag in tags:
df_man_data = df_man_data.join(df_dic[tag].set_index('Time'), on = 'Time',how = 'left')
.join() returns a new dataframe and assigning that new, joined dataframe to df_man_data each loop should capture all of your new columns of data iteratively.
Abbreviations are allowed. s indicates series and sp
indicates split.
New in version 1.4.0: ‘tight’ as an allowed value for the orient argument
into : class, default dict. The collections.abc.Mapping subclass used for all Mappings
in the return value. Can be the actual class or an empty
instance of the mapping type you want. If you want a
collections.defaultdict, you must pass it initialized.
Returns
dict, list or collections.abc.Mapping. Return a collections.abc.Mapping object representing the DataFrame.
The resulting transformation depends on the orient parameter.
See also
DataFrame.from_dict : Create a DataFrame from a dictionary.
DataFrame.to_json : Convert a DataFrame to JSON format.
Examples
>>> df = pd.DataFrame({'col1': [1, 2],
... 'col2': [0.5, 0.75]},
... index=['row1', 'row2'])
>>> df
col1 col2
row1 1 0.50
row2 2 0.75
>>> df.to_dict()
{'col1': {'row1': 1, 'row2': 2}, 'col2': {'row1': 0.5, 'row2': 0.75}}
You can specify the return orientation.
>>> df.to_dict('series')
{'col1': row1 1
row2 2
Name: col1, dtype: int64,
'col2': row1 0.50
row2 0.75
Name: col2, dtype: float64}
>>> df.to_dict('split')
{'index': ['row1', 'row2'], 'columns': ['col1', 'col2'],
'data': [[1, 0.5], [2, 0.75]]}
>>> df.to_dict('records')
[{'col1': 1, 'col2': 0.5}, {'col1': 2, 'col2': 0.75}]
>>> df.to_dict('index')
{'row1': {'col1': 1, 'col2': 0.5}, 'row2': {'col1': 2, 'col2': 0.75}}
>>> df.to_dict('tight')
{'index': ['row1', 'row2'], 'columns': ['col1', 'col2'],
'data': [[1, 0.5], [2, 0.75]], 'index_names': [None], 'column_names': [None]}
You can also specify the mapping type.
>>> from collections import OrderedDict, defaultdict
>>> df.to_dict(into=OrderedDict)
OrderedDict([('col1', OrderedDict([('row1', 1), ('row2', 2)])),
('col2', OrderedDict([('row1', 0.5), ('row2', 0.75)]))])
If you want a defaultdict, you need to initialize it:
>>> dd = defaultdict(list)
>>> df.to_dict('records', into=dd)
[defaultdict(<class 'list'>, {'col1': 1, 'col2': 0.5}),
defaultdict(<class 'list'>, {'col1': 2, 'col2': 0.75})]
| 878 | 1,170 | Using join on a dictionary of dataframes by datetime
I have a dictionary of dataframes which have two columns 'Time' (datetimeformat) and another column which is different for each dataframe. The Time/Value entries are variable.
I want to join all of the dataframes to a master time dataframe which has 1 minute increments for the entire time range using the 'Time' value as a key.
df_man_data is the master time dataframe. It looks like:
Time
0 2019-01-01 13:44:00
1 2019-01-01 13:45:00
2 2019-01-01 13:46:00
531498 2020-01-05 16:02:00
531499 2020-01-05 16:03:00
531500 2020-01-05 16:04:00
531501 2020-01-05 16:05:00
one of the dictionary dataframes looks like this:
Time V-106A_TAP_7
0 2019-01-05 09:39:00 22.0
1 2019-01-07 09:42:00 30.0
2 2019-02-06 08:58:00 8.0
3 2019-02-06 21:25:00 16.0
262 2020-02-11 09:00:00 32.0
263 2020-02-12 20:08:00 34.0
264 2020-02-13 09:34:00 2.0
I've tried this:
df_man_data = df_time
for tag in tags:
df_man_data.join(df_dic[tag].set_index('Time'), on='Time', how='left')
but my df_man_data comes out with no extra columns |
61,141,992 | Create subindices based on two categorical variables | <p>I have a dataframe containing two categorical variables. I would like to add a third column with ascending indices for each of the categories, where one category is nested within the other.</p>
<p>Example:</p>
<pre><code>import pandas as pd
foo = ['a','a','a','a','b','b','b','b']
bar = [0,0,1,1,0,0,1,1]
df = pd.DataFrame({'foo':foo,'bar':bar})
</code></pre>
<p>which gives you:</p>
<pre><code> foo bar
0 a 0
1 a 0
2 a 1
3 a 1
4 b 0
5 b 0
6 b 1
7 b 1
</code></pre>
<p>Add a third column to <code>df</code> so that you get:</p>
<pre><code> foo bar foobar
0 a 0 0
1 a 0 1
2 a 1 0
3 a 1 1
4 b 0 2
5 b 0 3
6 b 1 2
7 b 1 3
</code></pre>
<p>I guess this can be somehow done with <code>groupby()</code>?</p> | 61,142,516 | 2020-04-10T14:05:52.473000 | 1 | null | 1 | 29 | python|pandas | <p>IIUC:</p>
<pre><code>s = df.groupby(['foo','bar']).cumcount()
df['foobar'] = df['foo'].factorize()[0] * (s.max() + 1) + s
</code></pre>
<p>Output:</p>
<pre><code> foo bar foobar
0 a 0 0
1 a 0 1
2 a 1 0
3 a 1 1
4 b 0 2
5 b 0 3
6 b 1 2
7 b 1 3
</code></pre> | 2020-04-10T14:34:48.710000 | 0 | https://pandas.pydata.org/docs/user_guide/advanced.html | MultiIndex / advanced indexing#
IIUC:
s = df.groupby(['foo','bar']).cumcount()
df['foobar'] = df['foo'].factorize()[0] * (s.max() + 1) + s
Output:
foo bar foobar
0 a 0 0
1 a 0 1
2 a 1 0
3 a 1 1
4 b 0 2
5 b 0 3
6 b 1 2
7 b 1 3
This section covers indexing with a MultiIndex
and other advanced indexing features.
See the Indexing and Selecting Data for general indexing documentation.
Warning
Whether a copy or a reference is returned for a setting operation may
depend on the context. This is sometimes called chained assignment and
should be avoided. See Returning a View versus Copy.
See the cookbook for some advanced strategies.
Hierarchical indexing (MultiIndex)#
Hierarchical / Multi-level indexing is very exciting as it opens the door to some
quite sophisticated data analysis and manipulation, especially for working with
higher dimensional data. In essence, it enables you to store and manipulate
data with an arbitrary number of dimensions in lower dimensional data
structures like Series (1d) and DataFrame (2d).
In this section, we will show what exactly we mean by “hierarchical” indexing
and how it integrates with all of the pandas indexing functionality
described above and in prior sections. Later, when discussing group by and pivoting and reshaping data, we’ll show
non-trivial applications to illustrate how it aids in structuring data for
analysis.
See the cookbook for some advanced strategies.
Creating a MultiIndex (hierarchical index) object#
The MultiIndex object is the hierarchical analogue of the standard
Index object which typically stores the axis labels in pandas objects. You
can think of MultiIndex as an array of tuples where each tuple is unique. A
MultiIndex can be created from a list of arrays (using
MultiIndex.from_arrays()), an array of tuples (using
MultiIndex.from_tuples()), a crossed set of iterables (using
MultiIndex.from_product()), or a DataFrame (using
MultiIndex.from_frame()). The Index constructor will attempt to return
a MultiIndex when it is passed a list of tuples. The following examples
demonstrate different ways to initialize MultiIndexes.
In [1]: arrays = [
...: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
...: ["one", "two", "one", "two", "one", "two", "one", "two"],
...: ]
...:
In [2]: tuples = list(zip(*arrays))
In [3]: tuples
Out[3]:
[('bar', 'one'),
('bar', 'two'),
('baz', 'one'),
('baz', 'two'),
('foo', 'one'),
('foo', 'two'),
('qux', 'one'),
('qux', 'two')]
In [4]: index = pd.MultiIndex.from_tuples(tuples, names=["first", "second"])
In [5]: index
Out[5]:
MultiIndex([('bar', 'one'),
('bar', 'two'),
('baz', 'one'),
('baz', 'two'),
('foo', 'one'),
('foo', 'two'),
('qux', 'one'),
('qux', 'two')],
names=['first', 'second'])
In [6]: s = pd.Series(np.random.randn(8), index=index)
In [7]: s
Out[7]:
first second
bar one 0.469112
two -0.282863
baz one -1.509059
two -1.135632
foo one 1.212112
two -0.173215
qux one 0.119209
two -1.044236
dtype: float64
When you want every pairing of the elements in two iterables, it can be easier
to use the MultiIndex.from_product() method:
In [8]: iterables = [["bar", "baz", "foo", "qux"], ["one", "two"]]
In [9]: pd.MultiIndex.from_product(iterables, names=["first", "second"])
Out[9]:
MultiIndex([('bar', 'one'),
('bar', 'two'),
('baz', 'one'),
('baz', 'two'),
('foo', 'one'),
('foo', 'two'),
('qux', 'one'),
('qux', 'two')],
names=['first', 'second'])
You can also construct a MultiIndex from a DataFrame directly, using
the method MultiIndex.from_frame(). This is a complementary method to
MultiIndex.to_frame().
In [10]: df = pd.DataFrame(
....: [["bar", "one"], ["bar", "two"], ["foo", "one"], ["foo", "two"]],
....: columns=["first", "second"],
....: )
....:
In [11]: pd.MultiIndex.from_frame(df)
Out[11]:
MultiIndex([('bar', 'one'),
('bar', 'two'),
('foo', 'one'),
('foo', 'two')],
names=['first', 'second'])
As a convenience, you can pass a list of arrays directly into Series or
DataFrame to construct a MultiIndex automatically:
In [12]: arrays = [
....: np.array(["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"]),
....: np.array(["one", "two", "one", "two", "one", "two", "one", "two"]),
....: ]
....:
In [13]: s = pd.Series(np.random.randn(8), index=arrays)
In [14]: s
Out[14]:
bar one -0.861849
two -2.104569
baz one -0.494929
two 1.071804
foo one 0.721555
two -0.706771
qux one -1.039575
two 0.271860
dtype: float64
In [15]: df = pd.DataFrame(np.random.randn(8, 4), index=arrays)
In [16]: df
Out[16]:
0 1 2 3
bar one -0.424972 0.567020 0.276232 -1.087401
two -0.673690 0.113648 -1.478427 0.524988
baz one 0.404705 0.577046 -1.715002 -1.039268
two -0.370647 -1.157892 -1.344312 0.844885
foo one 1.075770 -0.109050 1.643563 -1.469388
two 0.357021 -0.674600 -1.776904 -0.968914
qux one -1.294524 0.413738 0.276662 -0.472035
two -0.013960 -0.362543 -0.006154 -0.923061
All of the MultiIndex constructors accept a names argument which stores
string names for the levels themselves. If no names are provided, None will
be assigned:
In [17]: df.index.names
Out[17]: FrozenList([None, None])
This index can back any axis of a pandas object, and the number of levels
of the index is up to you:
In [18]: df = pd.DataFrame(np.random.randn(3, 8), index=["A", "B", "C"], columns=index)
In [19]: df
Out[19]:
first bar baz ... foo qux
second one two one ... two one two
A 0.895717 0.805244 -1.206412 ... 1.340309 -1.170299 -0.226169
B 0.410835 0.813850 0.132003 ... -1.187678 1.130127 -1.436737
C -1.413681 1.607920 1.024180 ... -2.211372 0.974466 -2.006747
[3 rows x 8 columns]
In [20]: pd.DataFrame(np.random.randn(6, 6), index=index[:6], columns=index[:6])
Out[20]:
first bar baz foo
second one two one two one two
first second
bar one -0.410001 -0.078638 0.545952 -1.219217 -1.226825 0.769804
two -1.281247 -0.727707 -0.121306 -0.097883 0.695775 0.341734
baz one 0.959726 -1.110336 -0.619976 0.149748 -0.732339 0.687738
two 0.176444 0.403310 -0.154951 0.301624 -2.179861 -1.369849
foo one -0.954208 1.462696 -1.743161 -0.826591 -0.345352 1.314232
two 0.690579 0.995761 2.396780 0.014871 3.357427 -0.317441
We’ve “sparsified” the higher levels of the indexes to make the console output a
bit easier on the eyes. Note that how the index is displayed can be controlled using the
display.multi_sparse option in pandas.set_option():
In [21]: with pd.option_context("display.multi_sparse", False):
....: df
....:
It’s worth keeping in mind that there’s nothing preventing you from using
tuples as atomic labels on an axis:
In [22]: pd.Series(np.random.randn(8), index=tuples)
Out[22]:
(bar, one) -1.236269
(bar, two) 0.896171
(baz, one) -0.487602
(baz, two) -0.082240
(foo, one) -2.182937
(foo, two) 0.380396
(qux, one) 0.084844
(qux, two) 0.432390
dtype: float64
The reason that the MultiIndex matters is that it can allow you to do
grouping, selection, and reshaping operations as we will describe below and in
subsequent areas of the documentation. As you will see in later sections, you
can find yourself working with hierarchically-indexed data without creating a
MultiIndex explicitly yourself. However, when loading data from a file, you
may wish to generate your own MultiIndex when preparing the data set.
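For instance, a common pattern after reading flat data is to promote key columns to a MultiIndex with set_index (a minimal sketch; the column names here are illustrative):

```python
import pandas as pd

# Hypothetical flat records, as they might arrive from read_csv or read_excel.
flat = pd.DataFrame(
    {
        "first": ["bar", "bar", "baz", "baz"],
        "second": ["one", "two", "one", "two"],
        "value": [1.0, 2.0, 3.0, 4.0],
    }
)

# set_index with a list of columns builds the MultiIndex in one step.
indexed = flat.set_index(["first", "second"])
print(indexed.index.nlevels)                 # 2
print(indexed.loc[("bar", "two"), "value"])  # 2.0
```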
Reconstructing the level labels#
The method get_level_values() will return a vector of the labels for each
location at a particular level:
In [23]: index.get_level_values(0)
Out[23]: Index(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'], dtype='object', name='first')
In [24]: index.get_level_values("second")
Out[24]: Index(['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'], dtype='object', name='second')
Basic indexing on axis with MultiIndex#
One of the important features of hierarchical indexing is that you can select
data by a “partial” label identifying a subgroup in the data. Partial
selection “drops” levels of the hierarchical index in the result in a
completely analogous way to selecting a column in a regular DataFrame:
In [25]: df["bar"]
Out[25]:
second one two
A 0.895717 0.805244
B 0.410835 0.813850
C -1.413681 1.607920
In [26]: df["bar", "one"]
Out[26]:
A 0.895717
B 0.410835
C -1.413681
Name: (bar, one), dtype: float64
In [27]: df["bar"]["one"]
Out[27]:
A 0.895717
B 0.410835
C -1.413681
Name: one, dtype: float64
In [28]: s["qux"]
Out[28]:
one -1.039575
two 0.271860
dtype: float64
See Cross-section with hierarchical index for how to select
on a deeper level.
Defined levels#
The MultiIndex keeps all the defined levels of an index, even
if they are not actually used. When slicing an index, you may notice this.
For example:
In [29]: df.columns.levels # original MultiIndex
Out[29]: FrozenList([['bar', 'baz', 'foo', 'qux'], ['one', 'two']])
In [30]: df[["foo","qux"]].columns.levels # sliced
Out[30]: FrozenList([['bar', 'baz', 'foo', 'qux'], ['one', 'two']])
This is done to avoid a recomputation of the levels in order to make slicing
highly performant. If you want to see only the used levels, you can use the
get_level_values() method.
In [31]: df[["foo", "qux"]].columns.to_numpy()
Out[31]:
array([('foo', 'one'), ('foo', 'two'), ('qux', 'one'), ('qux', 'two')],
dtype=object)
# for a specific level
In [32]: df[["foo", "qux"]].columns.get_level_values(0)
Out[32]: Index(['foo', 'foo', 'qux', 'qux'], dtype='object', name='first')
To reconstruct the MultiIndex with only the used levels, the
remove_unused_levels() method may be used.
In [33]: new_mi = df[["foo", "qux"]].columns.remove_unused_levels()
In [34]: new_mi.levels
Out[34]: FrozenList([['foo', 'qux'], ['one', 'two']])
Data alignment and using reindex#
Operations between differently-indexed objects having MultiIndex on the
axes will work as you expect; data alignment will work the same as an Index of
tuples:
In [35]: s + s[:-2]
Out[35]:
bar one -1.723698
two -4.209138
baz one -0.989859
two 2.143608
foo one 1.443110
two -1.413542
qux one NaN
two NaN
dtype: float64
In [36]: s + s[::2]
Out[36]:
bar one -1.723698
two NaN
baz one -0.989859
two NaN
foo one 1.443110
two NaN
qux one -2.079150
two NaN
dtype: float64
The reindex() method of Series/DataFrames can be
called with another MultiIndex, or even a list or array of tuples:
In [37]: s.reindex(index[:3])
Out[37]:
first second
bar one -0.861849
two -2.104569
baz one -0.494929
dtype: float64
In [38]: s.reindex([("foo", "two"), ("bar", "one"), ("qux", "one"), ("baz", "one")])
Out[38]:
foo two -0.706771
bar one -0.861849
qux one -1.039575
baz one -0.494929
dtype: float64
Advanced indexing with hierarchical index#
Syntactically integrating MultiIndex in advanced indexing with .loc is a
bit challenging, but we’ve made every effort to do so. In general, MultiIndex
keys take the form of tuples. For example, the following works as you would expect:
In [39]: df = df.T
In [40]: df
Out[40]:
A B C
first second
bar one 0.895717 0.410835 -1.413681
two 0.805244 0.813850 1.607920
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
qux one -1.170299 1.130127 0.974466
two -0.226169 -1.436737 -2.006747
In [41]: df.loc[("bar", "two")]
Out[41]:
A 0.805244
B 0.813850
C 1.607920
Name: (bar, two), dtype: float64
Note that df.loc['bar', 'two'] would also work in this example, but this shorthand
notation can lead to ambiguity in general.
If you also want to index a specific column with .loc, you must use a tuple
like this:
In [42]: df.loc[("bar", "two"), "A"]
Out[42]: 0.8052440253863785
You don’t have to specify all levels of the MultiIndex: by passing only the
first elements of the tuple, you can use “partial” indexing to
get all elements with bar in the first level as follows:
In [43]: df.loc["bar"]
Out[43]:
A B C
second
one 0.895717 0.410835 -1.413681
two 0.805244 0.813850 1.607920
This is a shortcut for the slightly more verbose notation df.loc[('bar',),] (equivalent
to df.loc['bar',] in this example).
“Partial” slicing also works quite nicely.
In [44]: df.loc["baz":"foo"]
Out[44]:
A B C
first second
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
You can slice with a ‘range’ of values, by providing a slice of tuples.
In [45]: df.loc[("baz", "two"):("qux", "one")]
Out[45]:
A B C
first second
baz two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
qux one -1.170299 1.130127 0.974466
In [46]: df.loc[("baz", "two"):"foo"]
Out[46]:
A B C
first second
baz two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
Passing a list of labels or tuples works similar to reindexing:
In [47]: df.loc[[("bar", "two"), ("qux", "one")]]
Out[47]:
A B C
first second
bar two 0.805244 0.813850 1.607920
qux one -1.170299 1.130127 0.974466
Note
It is important to note that tuples and lists are not treated identically
in pandas when it comes to indexing. Whereas a tuple is interpreted as one
multi-level key, a list is used to specify several keys. Or in other words,
tuples go horizontally (traversing levels), lists go vertically (scanning levels).
Importantly, a list of tuples indexes several complete MultiIndex keys,
whereas a tuple of lists refer to several values within a level:
In [48]: s = pd.Series(
....: [1, 2, 3, 4, 5, 6],
....: index=pd.MultiIndex.from_product([["A", "B"], ["c", "d", "e"]]),
....: )
....:
In [49]: s.loc[[("A", "c"), ("B", "d")]] # list of tuples
Out[49]:
A c 1
B d 5
dtype: int64
In [50]: s.loc[(["A", "B"], ["c", "d"])] # tuple of lists
Out[50]:
A c 1
d 2
B c 4
d 5
dtype: int64
Using slicers#
You can slice a MultiIndex by providing multiple indexers.
You can provide any of the selectors as if you are indexing by label, see Selection by Label,
including slices, lists of labels, labels, and boolean indexers.
You can use slice(None) to select all the contents of that level. You do not need to specify all the
deeper levels; they will be implied as slice(None).
As usual, both sides of the slicers are included as this is label indexing.
Warning
You should specify all axes in the .loc specifier, meaning the indexer for the index and
for the columns. There are some ambiguous cases where the passed indexer could be mis-interpreted
as indexing both axes, rather than into say the MultiIndex for the rows.
You should do this:
df.loc[(slice("A1", "A3"), ...), :] # noqa: E999
You should not do this:
df.loc[(slice("A1", "A3"), ...)] # noqa: E999
In [51]: def mklbl(prefix, n):
....: return ["%s%s" % (prefix, i) for i in range(n)]
....:
In [52]: miindex = pd.MultiIndex.from_product(
....: [mklbl("A", 4), mklbl("B", 2), mklbl("C", 4), mklbl("D", 2)]
....: )
....:
In [53]: micolumns = pd.MultiIndex.from_tuples(
....: [("a", "foo"), ("a", "bar"), ("b", "foo"), ("b", "bah")], names=["lvl0", "lvl1"]
....: )
....:
In [54]: dfmi = (
....: pd.DataFrame(
....: np.arange(len(miindex) * len(micolumns)).reshape(
....: (len(miindex), len(micolumns))
....: ),
....: index=miindex,
....: columns=micolumns,
....: )
....: .sort_index()
....: .sort_index(axis=1)
....: )
....:
In [55]: dfmi
Out[55]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 9 8 11 10
D1 13 12 15 14
C2 D0 17 16 19 18
... ... ... ... ...
A3 B1 C1 D1 237 236 239 238
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 249 248 251 250
D1 253 252 255 254
[64 rows x 4 columns]
Basic MultiIndex slicing using slices, lists, and labels.
In [56]: dfmi.loc[(slice("A1", "A3"), slice(None), ["C1", "C3"]), :]
Out[56]:
lvl0 a b
lvl1 bar foo bah foo
A1 B0 C1 D0 73 72 75 74
D1 77 76 79 78
C3 D0 89 88 91 90
D1 93 92 95 94
B1 C1 D0 105 104 107 106
... ... ... ... ...
A3 B0 C3 D1 221 220 223 222
B1 C1 D0 233 232 235 234
D1 237 236 239 238
C3 D0 249 248 251 250
D1 253 252 255 254
[24 rows x 4 columns]
You can use pandas.IndexSlice to facilitate a more natural syntax
using :, rather than using slice(None).
In [57]: idx = pd.IndexSlice
In [58]: dfmi.loc[idx[:, :, ["C1", "C3"]], idx[:, "foo"]]
Out[58]:
lvl0 a b
lvl1 foo foo
A0 B0 C1 D0 8 10
D1 12 14
C3 D0 24 26
D1 28 30
B1 C1 D0 40 42
... ... ...
A3 B0 C3 D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254
[32 rows x 2 columns]
It is possible to perform quite complicated selections using this method on multiple
axes at the same time.
In [59]: dfmi.loc["A1", (slice(None), "foo")]
Out[59]:
lvl0 a b
lvl1 foo foo
B0 C0 D0 64 66
D1 68 70
C1 D0 72 74
D1 76 78
C2 D0 80 82
... ... ...
B1 C1 D1 108 110
C2 D0 112 114
D1 116 118
C3 D0 120 122
D1 124 126
[16 rows x 2 columns]
In [60]: dfmi.loc[idx[:, :, ["C1", "C3"]], idx[:, "foo"]]
Out[60]:
lvl0 a b
lvl1 foo foo
A0 B0 C1 D0 8 10
D1 12 14
C3 D0 24 26
D1 28 30
B1 C1 D0 40 42
... ... ...
A3 B0 C3 D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254
[32 rows x 2 columns]
Using a boolean indexer you can provide selection related to the values.
In [61]: mask = dfmi[("a", "foo")] > 200
In [62]: dfmi.loc[idx[mask, :, ["C1", "C3"]], idx[:, "foo"]]
Out[62]:
lvl0 a b
lvl1 foo foo
A3 B0 C1 D1 204 206
C3 D0 216 218
D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254
You can also specify the axis argument to .loc to interpret the passed
slicers on a single axis.
In [63]: dfmi.loc(axis=0)[:, :, ["C1", "C3"]]
Out[63]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C1 D0 9 8 11 10
D1 13 12 15 14
C3 D0 25 24 27 26
D1 29 28 31 30
B1 C1 D0 41 40 43 42
... ... ... ... ...
A3 B0 C3 D1 221 220 223 222
B1 C1 D0 233 232 235 234
D1 237 236 239 238
C3 D0 249 248 251 250
D1 253 252 255 254
[32 rows x 4 columns]
Furthermore, you can set the values using the following methods.
In [64]: df2 = dfmi.copy()
In [65]: df2.loc(axis=0)[:, :, ["C1", "C3"]] = -10
In [66]: df2
Out[66]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
C2 D0 17 16 19 18
... ... ... ... ...
A3 B1 C1 D1 -10 -10 -10 -10
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
[64 rows x 4 columns]
You can use a right-hand-side of an alignable object as well.
In [67]: df2 = dfmi.copy()
In [68]: df2.loc[idx[:, :, ["C1", "C3"]], :] = df2 * 1000
In [69]: df2
Out[69]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 9000 8000 11000 10000
D1 13000 12000 15000 14000
C2 D0 17 16 19 18
... ... ... ... ...
A3 B1 C1 D1 237000 236000 239000 238000
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 249000 248000 251000 250000
D1 253000 252000 255000 254000
[64 rows x 4 columns]
Cross-section#
The xs() method of DataFrame additionally takes a level argument to make
selecting data at a particular level of a MultiIndex easier.
In [70]: df
Out[70]:
A B C
first second
bar one 0.895717 0.410835 -1.413681
two 0.805244 0.813850 1.607920
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
qux one -1.170299 1.130127 0.974466
two -0.226169 -1.436737 -2.006747
In [71]: df.xs("one", level="second")
Out[71]:
A B C
first
bar 0.895717 0.410835 -1.413681
baz -1.206412 0.132003 1.024180
foo 1.431256 -0.076467 0.875906
qux -1.170299 1.130127 0.974466
# using the slicers
In [72]: df.loc[(slice(None), "one"), :]
Out[72]:
A B C
first second
bar one 0.895717 0.410835 -1.413681
baz one -1.206412 0.132003 1.024180
foo one 1.431256 -0.076467 0.875906
qux one -1.170299 1.130127 0.974466
You can also select on the columns with xs, by
providing the axis argument.
In [73]: df = df.T
In [74]: df.xs("one", level="second", axis=1)
Out[74]:
first bar baz foo qux
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466
# using the slicers
In [75]: df.loc[:, (slice(None), "one")]
Out[75]:
first bar baz foo qux
second one one one one
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466
xs also allows selection with multiple keys.
In [76]: df.xs(("one", "bar"), level=("second", "first"), axis=1)
Out[76]:
first bar
second one
A 0.895717
B 0.410835
C -1.413681
# using the slicers
In [77]: df.loc[:, ("bar", "one")]
Out[77]:
A 0.895717
B 0.410835
C -1.413681
Name: (bar, one), dtype: float64
You can pass drop_level=False to xs to retain
the level that was selected.
In [78]: df.xs("one", level="second", axis=1, drop_level=False)
Out[78]:
first bar baz foo qux
second one one one one
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466
Compare the above with the result using drop_level=True (the default value).
In [79]: df.xs("one", level="second", axis=1, drop_level=True)
Out[79]:
first bar baz foo qux
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466
Advanced reindexing and alignment#
Using the parameter level in the reindex() and
align() methods of pandas objects is useful to broadcast
values across a level. For instance:
In [80]: midx = pd.MultiIndex(
....: levels=[["zero", "one"], ["x", "y"]], codes=[[1, 1, 0, 0], [1, 0, 1, 0]]
....: )
....:
In [81]: df = pd.DataFrame(np.random.randn(4, 2), index=midx)
In [82]: df
Out[82]:
0 1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
In [83]: df2 = df.groupby(level=0).mean()
In [84]: df2
Out[84]:
0 1
one 1.060074 -0.109716
zero 1.271532 0.713416
In [85]: df2.reindex(df.index, level=0)
Out[85]:
0 1
one y 1.060074 -0.109716
x 1.060074 -0.109716
zero y 1.271532 0.713416
x 1.271532 0.713416
# aligning
In [86]: df_aligned, df2_aligned = df.align(df2, level=0)
In [87]: df_aligned
Out[87]:
0 1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
In [88]: df2_aligned
Out[88]:
0 1
one y 1.060074 -0.109716
x 1.060074 -0.109716
zero y 1.271532 0.713416
x 1.271532 0.713416
Swapping levels with swaplevel#
The swaplevel() method can switch the order of two levels:
In [89]: df[:5]
Out[89]:
0 1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
In [90]: df[:5].swaplevel(0, 1, axis=0)
Out[90]:
0 1
y one 1.519970 -0.493662
x one 0.600178 0.274230
y zero 0.132885 -0.023688
x zero 2.410179 1.450520
Reordering levels with reorder_levels#
The reorder_levels() method generalizes the swaplevel
method, allowing you to permute the hierarchical index levels in one step:
In [91]: df[:5].reorder_levels([1, 0], axis=0)
Out[91]:
0 1
y one 1.519970 -0.493662
x one 0.600178 0.274230
y zero 0.132885 -0.023688
x zero 2.410179 1.450520
Renaming names of an Index or MultiIndex#
The rename() method is used to rename the labels of a
MultiIndex, and is typically used to rename the columns of a DataFrame.
The columns argument of rename allows a dictionary to be specified
that includes only the columns you wish to rename.
In [92]: df.rename(columns={0: "col0", 1: "col1"})
Out[92]:
col0 col1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
This method can also be used to rename specific labels of the main index
of the DataFrame.
In [93]: df.rename(index={"one": "two", "y": "z"})
Out[93]:
0 1
two z 1.519970 -0.493662
x 0.600178 0.274230
zero z 0.132885 -0.023688
x 2.410179 1.450520
The rename_axis() method is used to rename the name of a
Index or MultiIndex. In particular, the names of the levels of a
MultiIndex can be specified, which is useful if reset_index() is later
used to move the values from the MultiIndex to a column.
In [94]: df.rename_axis(index=["abc", "def"])
Out[94]:
0 1
abc def
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
Note that the columns of a DataFrame are an index, so that using
rename_axis with the columns argument will change the name of that
index.
In [95]: df.rename_axis(columns="Cols").columns
Out[95]: RangeIndex(start=0, stop=2, step=1, name='Cols')
Both rename and rename_axis support specifying a dictionary,
Series or a mapping function to map labels/names to new values.
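As a small sketch of the mapping-function form (the labels here are arbitrary):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(4).reshape(2, 2), columns=[0, 1])

# A plain function (or lambda) is applied to each label in turn.
renamed = df.rename(columns=lambda c: f"col{c}")
print(list(renamed.columns))  # ['col0', 'col1']

# rename_axis accepts new names for the index and columns axes.
named = df.rename_axis(index="rows", columns="cols")
print(named.index.name, named.columns.name)  # rows cols
```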
When working with an Index object directly, rather than via a DataFrame,
Index.set_names() can be used to change the names.
In [96]: mi = pd.MultiIndex.from_product([[1, 2], ["a", "b"]], names=["x", "y"])
In [97]: mi.names
Out[97]: FrozenList(['x', 'y'])
In [98]: mi2 = mi.rename("new name", level=0)
In [99]: mi2
Out[99]:
MultiIndex([(1, 'a'),
(1, 'b'),
(2, 'a'),
(2, 'b')],
names=['new name', 'y'])
You cannot set the names of a MultiIndex by assigning to the name of one of its levels.
In [100]: mi.levels[0].name = "name via level"
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[100], line 1
----> 1 mi.levels[0].name = "name via level"
File ~/work/pandas/pandas/pandas/core/indexes/base.py:1745, in Index.name(self, value)
1741 @name.setter
1742 def name(self, value: Hashable) -> None:
1743 if self._no_setting_name:
1744 # Used in MultiIndex.levels to avoid silently ignoring name updates.
-> 1745 raise RuntimeError(
1746 "Cannot set name on a level of a MultiIndex. Use "
1747 "'MultiIndex.set_names' instead."
1748 )
1749 maybe_extract_name(value, None, type(self))
1750 self._name = value
RuntimeError: Cannot set name on a level of a MultiIndex. Use 'MultiIndex.set_names' instead.
Use Index.set_names() instead.
Sorting a MultiIndex#
For MultiIndex-ed objects to be indexed and sliced effectively,
they need to be sorted. As with any index, you can use sort_index().
In [101]: import random
In [102]: random.shuffle(tuples)
In [103]: s = pd.Series(np.random.randn(8), index=pd.MultiIndex.from_tuples(tuples))
In [104]: s
Out[104]:
baz two 0.206053
foo two -0.251905
bar one -2.213588
qux two 1.063327
baz one 1.266143
qux one 0.299368
foo one -0.863838
bar two 0.408204
dtype: float64
In [105]: s.sort_index()
Out[105]:
bar one -2.213588
two 0.408204
baz one 1.266143
two 0.206053
foo one -0.863838
two -0.251905
qux one 0.299368
two 1.063327
dtype: float64
In [106]: s.sort_index(level=0)
Out[106]:
bar one -2.213588
two 0.408204
baz one 1.266143
two 0.206053
foo one -0.863838
two -0.251905
qux one 0.299368
two 1.063327
dtype: float64
In [107]: s.sort_index(level=1)
Out[107]:
bar one -2.213588
baz one 1.266143
foo one -0.863838
qux one 0.299368
bar two 0.408204
baz two 0.206053
foo two -0.251905
qux two 1.063327
dtype: float64
You may also pass a level name to sort_index if the MultiIndex levels
are named.
In [108]: s.index.set_names(["L1", "L2"], inplace=True)
In [109]: s.sort_index(level="L1")
Out[109]:
L1 L2
bar one -2.213588
two 0.408204
baz one 1.266143
two 0.206053
foo one -0.863838
two -0.251905
qux one 0.299368
two 1.063327
dtype: float64
In [110]: s.sort_index(level="L2")
Out[110]:
L1 L2
bar one -2.213588
baz one 1.266143
foo one -0.863838
qux one 0.299368
bar two 0.408204
baz two 0.206053
foo two -0.251905
qux two 1.063327
dtype: float64
On higher dimensional objects, you can sort any of the other axes by level if
they have a MultiIndex:
In [111]: df.T.sort_index(level=1, axis=1)
Out[111]:
one zero one zero
x x y y
0 0.600178 2.410179 1.519970 0.132885
1 0.274230 1.450520 -0.493662 -0.023688
Indexing will work even if the data are not sorted, but will be rather
inefficient (and show a PerformanceWarning). It will also
return a copy of the data rather than a view:
In [112]: dfm = pd.DataFrame(
.....: {"jim": [0, 0, 1, 1], "joe": ["x", "x", "z", "y"], "jolie": np.random.rand(4)}
.....: )
.....:
In [113]: dfm = dfm.set_index(["jim", "joe"])
In [114]: dfm
Out[114]:
jolie
jim joe
0 x 0.490671
x 0.120248
1 z 0.537020
y 0.110968
In [4]: dfm.loc[(1, 'z')]
PerformanceWarning: indexing past lexsort depth may impact performance.
Out[4]:
jolie
jim joe
1 z 0.64094
Furthermore, if you try to index something that is not fully lexsorted, this can raise:
In [5]: dfm.loc[(0, 'y'):(1, 'z')]
UnsortedIndexError: 'Key length (2) was greater than MultiIndex lexsort depth (1)'
The is_monotonic_increasing attribute on a MultiIndex shows whether the
index is sorted:
In [115]: dfm.index.is_monotonic_increasing
Out[115]: False
In [116]: dfm = dfm.sort_index()
In [117]: dfm
Out[117]:
jolie
jim joe
0 x 0.490671
x 0.120248
1 y 0.110968
z 0.537020
In [118]: dfm.index.is_monotonic_increasing
Out[118]: True
And now selection works as expected.
In [119]: dfm.loc[(0, "y"):(1, "z")]
Out[119]:
jolie
jim joe
1 y 0.110968
z 0.537020
Take methods#
Similar to NumPy ndarrays, pandas Index, Series, and DataFrame also provides
the take() method that retrieves elements along a given axis at the given
indices. The given indices must be either a list or an ndarray of integer
index positions. take will also accept negative integers as relative positions to the end of the object.
In [120]: index = pd.Index(np.random.randint(0, 1000, 10))
In [121]: index
Out[121]: Int64Index([214, 502, 712, 567, 786, 175, 993, 133, 758, 329], dtype='int64')
In [122]: positions = [0, 9, 3]
In [123]: index[positions]
Out[123]: Int64Index([214, 329, 567], dtype='int64')
In [124]: index.take(positions)
Out[124]: Int64Index([214, 329, 567], dtype='int64')
In [125]: ser = pd.Series(np.random.randn(10))
In [126]: ser.iloc[positions]
Out[126]:
0 -0.179666
9 1.824375
3 0.392149
dtype: float64
In [127]: ser.take(positions)
Out[127]:
0 -0.179666
9 1.824375
3 0.392149
dtype: float64
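The note above about negative integers can be sketched briefly; positions count back from the end, as with NumPy arrays:

```python
import pandas as pd

ser = pd.Series([10, 20, 30, 40])
print(ser.take([-1, 0]).tolist())   # [40, 10]

idx = pd.Index(["a", "b", "c"])
print(idx.take([-1, -2]).tolist())  # ['c', 'b']
```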
For DataFrames, the given indices should be a 1d list or ndarray that specifies
row or column positions.
In [128]: frm = pd.DataFrame(np.random.randn(5, 3))
In [129]: frm.take([1, 4, 3])
Out[129]:
0 1 2
1 -1.237881 0.106854 -1.276829
4 0.629675 -1.425966 1.857704
3 0.979542 -1.633678 0.615855
In [130]: frm.take([0, 2], axis=1)
Out[130]:
0 2
0 0.595974 0.601544
1 -1.237881 -1.276829
2 -0.767101 1.499591
3 0.979542 0.615855
4 0.629675 1.857704
It is important to note that the take method on pandas objects is not
intended to work on boolean indices and may return unexpected results.
In [131]: arr = np.random.randn(10)
In [132]: arr.take([False, False, True, True])
Out[132]: array([-1.1935, -1.1935, 0.6775, 0.6775])
In [133]: arr[[0, 1]]
Out[133]: array([-1.1935, 0.6775])
In [134]: ser = pd.Series(np.random.randn(10))
In [135]: ser.take([False, False, True, True])
Out[135]:
0 0.233141
0 0.233141
1 -0.223540
1 -0.223540
dtype: float64
In [136]: ser.iloc[[0, 1]]
Out[136]:
0 0.233141
1 -0.223540
dtype: float64
Finally, as a small note on performance, because the take method handles
a narrower range of inputs, it can offer performance that is a good deal
faster than fancy indexing.
In [137]: arr = np.random.randn(10000, 5)
In [138]: indexer = np.arange(10000)
In [139]: random.shuffle(indexer)
In [140]: %timeit arr[indexer]
.....: %timeit arr.take(indexer, axis=0)
.....:
141 us +- 1.18 us per loop (mean +- std. dev. of 7 runs, 10,000 loops each)
43.6 us +- 1.01 us per loop (mean +- std. dev. of 7 runs, 10,000 loops each)
In [141]: ser = pd.Series(arr[:, 0])
In [142]: %timeit ser.iloc[indexer]
.....: %timeit ser.take(indexer)
.....:
71.3 us +- 2.24 us per loop (mean +- std. dev. of 7 runs, 10,000 loops each)
63.1 us +- 4.29 us per loop (mean +- std. dev. of 7 runs, 10,000 loops each)
Index types#
We have discussed MultiIndex in the previous sections pretty extensively.
Documentation about DatetimeIndex and PeriodIndex are shown here,
and documentation about TimedeltaIndex is found here.
In the following sub-sections we will highlight some other index types.
CategoricalIndex#
CategoricalIndex is a type of index that is useful for supporting
indexing with duplicates. This is a container around a Categorical
and allows efficient indexing and storage of an index with a large number of duplicated elements.
In [143]: from pandas.api.types import CategoricalDtype
In [144]: df = pd.DataFrame({"A": np.arange(6), "B": list("aabbca")})
In [145]: df["B"] = df["B"].astype(CategoricalDtype(list("cab")))
In [146]: df
Out[146]:
A B
0 0 a
1 1 a
2 2 b
3 3 b
4 4 c
5 5 a
In [147]: df.dtypes
Out[147]:
A int64
B category
dtype: object
In [148]: df["B"].cat.categories
Out[148]: Index(['c', 'a', 'b'], dtype='object')
Setting the index will create a CategoricalIndex.
In [149]: df2 = df.set_index("B")
In [150]: df2.index
Out[150]: CategoricalIndex(['a', 'a', 'b', 'b', 'c', 'a'], categories=['c', 'a', 'b'], ordered=False, dtype='category', name='B')
Indexing with __getitem__/.iloc/.loc works similarly to an Index with duplicates.
The indexers must be in the category or the operation will raise a KeyError.
In [151]: df2.loc["a"]
Out[151]:
A
B
a 0
a 1
a 5
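A minimal sketch of that KeyError, using a label that is not among the categories (the frame here is illustrative):

```python
import pandas as pd
from pandas.api.types import CategoricalDtype

dfc = pd.DataFrame({"A": range(3), "B": list("aab")})
dfc["B"] = dfc["B"].astype(CategoricalDtype(list("ab")))
dfc = dfc.set_index("B")

# "z" is not one of the categories, so label lookup fails.
try:
    dfc.loc["z"]
except KeyError as err:
    print("KeyError:", err)
```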
The CategoricalIndex is preserved after indexing:
In [152]: df2.loc["a"].index
Out[152]: CategoricalIndex(['a', 'a', 'a'], categories=['c', 'a', 'b'], ordered=False, dtype='category', name='B')
Sorting the index will sort by the order of the categories (recall that we
created the index with CategoricalDtype(list('cab')), so the sorted
order is cab).
In [153]: df2.sort_index()
Out[153]:
A
B
c 4
a 0
a 1
a 5
b 2
b 3
Groupby operations on the index will preserve the index nature as well.
In [154]: df2.groupby(level=0).sum()
Out[154]:
A
B
c 4
a 6
b 5
In [155]: df2.groupby(level=0).sum().index
Out[155]: CategoricalIndex(['c', 'a', 'b'], categories=['c', 'a', 'b'], ordered=False, dtype='category', name='B')
Reindexing operations will return a resulting index based on the type of the passed
indexer. Passing a list will return a plain-old Index; indexing with
a Categorical will return a CategoricalIndex, indexed according to the categories
of the passed Categorical dtype. This allows one to arbitrarily index these even with
values not in the categories, similarly to how you can reindex any pandas index.
In [156]: df3 = pd.DataFrame(
.....: {"A": np.arange(3), "B": pd.Series(list("abc")).astype("category")}
.....: )
.....:
In [157]: df3 = df3.set_index("B")
In [158]: df3
Out[158]:
A
B
a 0
b 1
c 2
In [159]: df3.reindex(["a", "e"])
Out[159]:
A
B
a 0.0
e NaN
In [160]: df3.reindex(["a", "e"]).index
Out[160]: Index(['a', 'e'], dtype='object', name='B')
In [161]: df3.reindex(pd.Categorical(["a", "e"], categories=list("abe")))
Out[161]:
A
B
a 0.0
e NaN
In [162]: df3.reindex(pd.Categorical(["a", "e"], categories=list("abe"))).index
Out[162]: CategoricalIndex(['a', 'e'], categories=['a', 'b', 'e'], ordered=False, dtype='category', name='B')
Warning
Reshaping and comparison operations on a CategoricalIndex must have the same categories
or a TypeError will be raised.
In [163]: df4 = pd.DataFrame({"A": np.arange(2), "B": list("ba")})
In [164]: df4["B"] = df4["B"].astype(CategoricalDtype(list("ab")))
In [165]: df4 = df4.set_index("B")
In [166]: df4.index
Out[166]: CategoricalIndex(['b', 'a'], categories=['a', 'b'], ordered=False, dtype='category', name='B')
In [167]: df5 = pd.DataFrame({"A": np.arange(2), "B": list("bc")})
In [168]: df5["B"] = df5["B"].astype(CategoricalDtype(list("bc")))
In [169]: df5 = df5.set_index("B")
In [170]: df5.index
Out[170]: CategoricalIndex(['b', 'c'], categories=['b', 'c'], ordered=False, dtype='category', name='B')
In [1]: pd.concat([df4, df5])
TypeError: categories must match existing categories when appending
Int64Index and RangeIndex#
Deprecated since version 1.4.0: In pandas 2.0, Index will become the default index type for numeric types
instead of Int64Index, Float64Index and UInt64Index and those index types
are therefore deprecated and will be removed in a future version.
RangeIndex will not be removed, as it represents an optimized version of an integer index.
Int64Index is a fundamental basic index in pandas. This is an immutable array
implementing an ordered, sliceable set.
RangeIndex is a sub-class of Int64Index that provides the default index for all NDFrame objects.
RangeIndex is an optimized version of Int64Index that can represent a monotonic ordered set. These are analogous to Python range types.
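A quick way to see the default RangeIndex (a sketch; any freshly constructed object without an explicit index works):

```python
import pandas as pd

# New objects get a RangeIndex by default; it stores only start/stop/step
# rather than materializing every integer label.
s = pd.Series([10, 20, 30])
print(type(s.index).__name__)                     # RangeIndex
print(s.index.start, s.index.stop, s.index.step)  # 0 3 1
```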
Float64Index#
Deprecated since version 1.4.0: Index will become the default index type for numeric types in the future
instead of Int64Index, Float64Index and UInt64Index and those index types
are therefore deprecated and will be removed in a future version of pandas.
RangeIndex will not be removed as it represents an optimized version of an integer index.
By default a Float64Index will be automatically created when passing floating, or mixed-integer-floating values in index creation.
This enables a pure label-based slicing paradigm that makes [],ix,loc for scalar indexing and slicing work exactly the
same.
In [171]: indexf = pd.Index([1.5, 2, 3, 4.5, 5])
In [172]: indexf
Out[172]: Float64Index([1.5, 2.0, 3.0, 4.5, 5.0], dtype='float64')
In [173]: sf = pd.Series(range(5), index=indexf)
In [174]: sf
Out[174]:
1.5 0
2.0 1
3.0 2
4.5 3
5.0 4
dtype: int64
Scalar selection for [],.loc will always be label based. An integer will match an equal float index (e.g. 3 is equivalent to 3.0).
In [175]: sf[3]
Out[175]: 2
In [176]: sf[3.0]
Out[176]: 2
In [177]: sf.loc[3]
Out[177]: 2
In [178]: sf.loc[3.0]
Out[178]: 2
The only positional indexing is via iloc.
In [179]: sf.iloc[3]
Out[179]: 3
A scalar index that is not found will raise a KeyError.
Slicing is primarily on the values of the index when using [],ix,loc, and
always positional when using iloc. The exception is when the slice is
boolean, in which case it will always be positional.
In [180]: sf[2:4]
Out[180]:
2.0 1
3.0 2
dtype: int64
In [181]: sf.loc[2:4]
Out[181]:
2.0 1
3.0 2
dtype: int64
In [182]: sf.iloc[2:4]
Out[182]:
3.0 2
4.5 3
dtype: int64
In float indexes, slicing using floats is allowed.
In [183]: sf[2.1:4.6]
Out[183]:
3.0 2
4.5 3
dtype: int64
In [184]: sf.loc[2.1:4.6]
Out[184]:
3.0 2
4.5 3
dtype: int64
In non-float indexes, slicing using floats will raise a TypeError.
In [1]: pd.Series(range(5))[3.5]
TypeError: the label [3.5] is not a proper indexer for this index type (Int64Index)
In [1]: pd.Series(range(5))[3.5:4.5]
TypeError: the slice start [3.5] is not a proper indexer for this index type (Int64Index)
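A runnable recap of the float-index behaviour described above (a sketch, not from the original page): label-based lookups match equal float labels, fractional slice bounds are allowed, and only .iloc is positional.

```python
import pandas as pd

sf = pd.Series(range(5), index=[1.5, 2.0, 3.0, 4.5, 5.0])

# scalar .loc is label-based: the integer 3 matches the float label 3.0
assert sf.loc[3] == 2
# .iloc is purely positional
assert sf.iloc[3] == 3
# label slices include both endpoints that fall inside [2.0, 4.0]
assert list(sf.loc[2:4]) == [1, 2]
# fractional bounds are fine on a float index
assert list(sf.loc[2.1:4.6]) == [2, 3]
```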
Here is a typical use-case for using this type of indexing. Imagine that you have a somewhat
irregular timedelta-like indexing scheme, but the data is recorded as floats. This could, for
example, be millisecond offsets.
In [185]: dfir = pd.concat(
.....: [
.....: pd.DataFrame(
.....: np.random.randn(5, 2), index=np.arange(5) * 250.0, columns=list("AB")
.....: ),
.....: pd.DataFrame(
.....: np.random.randn(6, 2),
.....: index=np.arange(4, 10) * 250.1,
.....: columns=list("AB"),
.....: ),
.....: ]
.....: )
.....:
In [186]: dfir
Out[186]:
A B
0.0 -0.435772 -1.188928
250.0 -0.808286 -0.284634
500.0 -1.815703 1.347213
750.0 -0.243487 0.514704
1000.0 1.162969 -0.287725
1000.4 -0.179734 0.993962
1250.5 -0.212673 0.909872
1500.6 -0.733333 -0.349893
1750.7 0.456434 -0.306735
2000.8 0.553396 0.166221
2250.9 -0.101684 -0.734907
Selection operations then will always work on a value basis, for all selection operators.
In [187]: dfir[0:1000.4]
Out[187]:
A B
0.0 -0.435772 -1.188928
250.0 -0.808286 -0.284634
500.0 -1.815703 1.347213
750.0 -0.243487 0.514704
1000.0 1.162969 -0.287725
1000.4 -0.179734 0.993962
In [188]: dfir.loc[0:1001, "A"]
Out[188]:
0.0 -0.435772
250.0 -0.808286
500.0 -1.815703
750.0 -0.243487
1000.0 1.162969
1000.4 -0.179734
Name: A, dtype: float64
In [189]: dfir.loc[1000.4]
Out[189]:
A -0.179734
B 0.993962
Name: 1000.4, dtype: float64
You could retrieve the first 1 second (1000 ms) of data as such:
In [190]: dfir[0:1000]
Out[190]:
A B
0.0 -0.435772 -1.188928
250.0 -0.808286 -0.284634
500.0 -1.815703 1.347213
750.0 -0.243487 0.514704
1000.0 1.162969 -0.287725
If you need integer based selection, you should use iloc:
In [191]: dfir.iloc[0:5]
Out[191]:
A B
0.0 -0.435772 -1.188928
250.0 -0.808286 -0.284634
500.0 -1.815703 1.347213
750.0 -0.243487 0.514704
1000.0 1.162969 -0.287725
IntervalIndex#
IntervalIndex together with its own dtype, IntervalDtype
as well as the Interval scalar type, allow first-class support in pandas
for interval notation.
The IntervalIndex allows some unique indexing and is also used as a
return type for the categories in cut() and qcut().
Indexing with an IntervalIndex#
An IntervalIndex can be used in Series and in DataFrame as the index.
In [192]: df = pd.DataFrame(
.....: {"A": [1, 2, 3, 4]}, index=pd.IntervalIndex.from_breaks([0, 1, 2, 3, 4])
.....: )
.....:
In [193]: df
Out[193]:
A
(0, 1] 1
(1, 2] 2
(2, 3] 3
(3, 4] 4
Label based indexing via .loc along the edges of an interval works as you would expect,
selecting that particular interval.
In [194]: df.loc[2]
Out[194]:
A 2
Name: (1, 2], dtype: int64
In [195]: df.loc[[2, 3]]
Out[195]:
A
(1, 2] 2
(2, 3] 3
If you select a label contained within an interval, this will also select the interval.
In [196]: df.loc[2.5]
Out[196]:
A 3
Name: (2, 3], dtype: int64
In [197]: df.loc[[2.5, 3.5]]
Out[197]:
A
(2, 3] 3
(3, 4] 4
Selecting using an Interval will only return exact matches (starting from pandas 0.25.0).
In [198]: df.loc[pd.Interval(1, 2)]
Out[198]:
A 2
Name: (1, 2], dtype: int64
Trying to select an Interval that is not exactly contained in the IntervalIndex will raise a KeyError.
In [7]: df.loc[pd.Interval(0.5, 2.5)]
---------------------------------------------------------------------------
KeyError: Interval(0.5, 2.5, closed='right')
Selecting all Intervals that overlap a given Interval can be performed using the
overlaps() method to create a boolean indexer.
In [199]: idxr = df.index.overlaps(pd.Interval(0.5, 2.5))
In [200]: idxr
Out[200]: array([ True, True, True, False])
In [201]: df[idxr]
Out[201]:
A
(0, 1] 1
(1, 2] 2
(2, 3] 3
Binning data with cut and qcut#
cut() and qcut() both return a Categorical object, and the bins they
create are stored as an IntervalIndex in its .categories attribute.
In [202]: c = pd.cut(range(4), bins=2)
In [203]: c
Out[203]:
[(-0.003, 1.5], (-0.003, 1.5], (1.5, 3.0], (1.5, 3.0]]
Categories (2, interval[float64, right]): [(-0.003, 1.5] < (1.5, 3.0]]
In [204]: c.categories
Out[204]: IntervalIndex([(-0.003, 1.5], (1.5, 3.0]], dtype='interval[float64, right]')
cut() also accepts an IntervalIndex for its bins argument, which enables
a useful pandas idiom. First, We call cut() with some data and bins set to a
fixed number, to generate the bins. Then, we pass the values of .categories as the
bins argument in subsequent calls to cut(), supplying new data which will be
binned into the same bins.
In [205]: pd.cut([0, 3, 5, 1], bins=c.categories)
Out[205]:
[(-0.003, 1.5], (1.5, 3.0], NaN, (-0.003, 1.5]]
Categories (2, interval[float64, right]): [(-0.003, 1.5] < (1.5, 3.0]]
Any value which falls outside all bins will be assigned a NaN value.
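The fit-then-reuse idiom described above can be condensed into a self-contained sketch (variable names are illustrative, not from the page): derive the bin edges from one dataset, then apply the same edges to new data.

```python
import pandas as pd

train = pd.Series([0, 1, 2, 3])
binned = pd.cut(train, bins=2)      # bins derived from the training data
edges = binned.cat.categories       # IntervalIndex holding the fitted edges

# reuse the fitted edges on new data; out-of-range values become NaN
new = pd.cut(pd.Series([0, 3, 5, 1]), bins=edges)
assert new.isna().tolist() == [False, False, True, False]
```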
Generating ranges of intervals#
If we need intervals on a regular frequency, we can use the interval_range() function
to create an IntervalIndex using various combinations of start, end, and periods.
The default frequency for interval_range is a 1 for numeric intervals, and calendar day for
datetime-like intervals:
In [206]: pd.interval_range(start=0, end=5)
Out[206]: IntervalIndex([(0, 1], (1, 2], (2, 3], (3, 4], (4, 5]], dtype='interval[int64, right]')
In [207]: pd.interval_range(start=pd.Timestamp("2017-01-01"), periods=4)
Out[207]: IntervalIndex([(2017-01-01, 2017-01-02], (2017-01-02, 2017-01-03], (2017-01-03, 2017-01-04], (2017-01-04, 2017-01-05]], dtype='interval[datetime64[ns], right]')
In [208]: pd.interval_range(end=pd.Timedelta("3 days"), periods=3)
Out[208]: IntervalIndex([(0 days 00:00:00, 1 days 00:00:00], (1 days 00:00:00, 2 days 00:00:00], (2 days 00:00:00, 3 days 00:00:00]], dtype='interval[timedelta64[ns], right]')
The freq parameter can be used to specify non-default frequencies, and can utilize a variety
of frequency aliases with datetime-like intervals:
In [209]: pd.interval_range(start=0, periods=5, freq=1.5)
Out[209]: IntervalIndex([(0.0, 1.5], (1.5, 3.0], (3.0, 4.5], (4.5, 6.0], (6.0, 7.5]], dtype='interval[float64, right]')
In [210]: pd.interval_range(start=pd.Timestamp("2017-01-01"), periods=4, freq="W")
Out[210]: IntervalIndex([(2017-01-01, 2017-01-08], (2017-01-08, 2017-01-15], (2017-01-15, 2017-01-22], (2017-01-22, 2017-01-29]], dtype='interval[datetime64[ns], right]')
In [211]: pd.interval_range(start=pd.Timedelta("0 days"), periods=3, freq="9H")
Out[211]: IntervalIndex([(0 days 00:00:00, 0 days 09:00:00], (0 days 09:00:00, 0 days 18:00:00], (0 days 18:00:00, 1 days 03:00:00]], dtype='interval[timedelta64[ns], right]')
Additionally, the closed parameter can be used to specify which side(s) the intervals
are closed on. Intervals are closed on the right side by default.
In [212]: pd.interval_range(start=0, end=4, closed="both")
Out[212]: IntervalIndex([[0, 1], [1, 2], [2, 3], [3, 4]], dtype='interval[int64, both]')
In [213]: pd.interval_range(start=0, end=4, closed="neither")
Out[213]: IntervalIndex([(0, 1), (1, 2), (2, 3), (3, 4)], dtype='interval[int64, neither]')
Specifying start, end, and periods will generate a range of evenly spaced
intervals from start to end inclusively, with periods number of elements
in the resulting IntervalIndex:
In [214]: pd.interval_range(start=0, end=6, periods=4)
Out[214]: IntervalIndex([(0.0, 1.5], (1.5, 3.0], (3.0, 4.5], (4.5, 6.0]], dtype='interval[float64, right]')
In [215]: pd.interval_range(pd.Timestamp("2018-01-01"), pd.Timestamp("2018-02-28"), periods=3)
Out[215]: IntervalIndex([(2018-01-01, 2018-01-20 08:00:00], (2018-01-20 08:00:00, 2018-02-08 16:00:00], (2018-02-08 16:00:00, 2018-02-28]], dtype='interval[datetime64[ns], right]')
Miscellaneous indexing FAQ#
Integer indexing#
Label-based indexing with integer axis labels is a thorny topic. It has been
discussed heavily on mailing lists and among various members of the scientific
Python community. In pandas, our general viewpoint is that labels matter more
than integer locations. Therefore, with an integer axis index only
label-based indexing is possible with the standard tools like .loc. The
following code will generate exceptions:
In [216]: s = pd.Series(range(5))
In [217]: s[-1]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File ~/work/pandas/pandas/pandas/core/indexes/range.py:391, in RangeIndex.get_loc(self, key, method, tolerance)
390 try:
--> 391 return self._range.index(new_key)
392 except ValueError as err:
ValueError: -1 is not in range
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
Cell In[217], line 1
----> 1 s[-1]
File ~/work/pandas/pandas/pandas/core/series.py:981, in Series.__getitem__(self, key)
978 return self._values[key]
980 elif key_is_scalar:
--> 981 return self._get_value(key)
983 if is_hashable(key):
984 # Otherwise index.get_value will raise InvalidIndexError
985 try:
986 # For labels that don't resolve as scalars like tuples and frozensets
File ~/work/pandas/pandas/pandas/core/series.py:1089, in Series._get_value(self, label, takeable)
1086 return self._values[label]
1088 # Similar to Index.get_value, but we do not fall back to positional
-> 1089 loc = self.index.get_loc(label)
1090 return self.index._get_values_for_loc(self, loc, label)
File ~/work/pandas/pandas/pandas/core/indexes/range.py:393, in RangeIndex.get_loc(self, key, method, tolerance)
391 return self._range.index(new_key)
392 except ValueError as err:
--> 393 raise KeyError(key) from err
394 self._check_indexing_error(key)
395 raise KeyError(key)
KeyError: -1
In [218]: df = pd.DataFrame(np.random.randn(5, 4))
In [219]: df
Out[219]:
0 1 2 3
0 -0.130121 -0.476046 0.759104 0.213379
1 -0.082641 0.448008 0.656420 -1.051443
2 0.594956 -0.151360 -0.069303 1.221431
3 -0.182832 0.791235 0.042745 2.069775
4 1.446552 0.019814 -1.389212 -0.702312
In [220]: df.loc[-2:]
Out[220]:
0 1 2 3
0 -0.130121 -0.476046 0.759104 0.213379
1 -0.082641 0.448008 0.656420 -1.051443
2 0.594956 -0.151360 -0.069303 1.221431
3 -0.182832 0.791235 0.042745 2.069775
4 1.446552 0.019814 -1.389212 -0.702312
This deliberate decision was made to prevent ambiguities and subtle bugs (many
users reported finding bugs when the API change was made to stop “falling back”
on position-based indexing).
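As a short sketch of the practical consequence (not part of the original page): with an integer axis, -1 is treated as a missing label, so negative positional access must go through .iloc.

```python
import pandas as pd

s = pd.Series(range(5))

# label-based lookup: -1 is not a label on this RangeIndex
try:
    s.loc[-1]
    raise AssertionError("expected KeyError")
except KeyError:
    pass

# positional lookup, including negative positions, uses .iloc
assert s.iloc[-1] == 4
```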
Non-monotonic indexes require exact matches#
If the index of a Series or DataFrame is monotonically increasing or decreasing, then the bounds
of a label-based slice can be outside the range of the index, much like slice indexing a
normal Python list. Monotonicity of an index can be tested with the is_monotonic_increasing() and
is_monotonic_decreasing() attributes.
In [221]: df = pd.DataFrame(index=[2, 3, 3, 4, 5], columns=["data"], data=list(range(5)))
In [222]: df.index.is_monotonic_increasing
Out[222]: True
# no rows 0 or 1, but still returns rows 2, 3 (both of them), and 4:
In [223]: df.loc[0:4, :]
Out[223]:
data
2 0
3 1
3 2
4 3
# slice bounds are outside the index, so an empty DataFrame is returned
In [224]: df.loc[13:15, :]
Out[224]:
Empty DataFrame
Columns: [data]
Index: []
On the other hand, if the index is not monotonic, then both slice bounds must be
unique members of the index.
In [225]: df = pd.DataFrame(index=[2, 3, 1, 4, 3, 5], columns=["data"], data=list(range(6)))
In [226]: df.index.is_monotonic_increasing
Out[226]: False
# OK because 2 and 4 are in the index
In [227]: df.loc[2:4, :]
Out[227]:
data
2 0
3 1
1 2
4 3
# 0 is not in the index
In [9]: df.loc[0:4, :]
KeyError: 0
# 3 is not a unique label
In [11]: df.loc[2:3, :]
KeyError: 'Cannot get right slice bound for non-unique label: 3'
Index.is_monotonic_increasing and Index.is_monotonic_decreasing only check that
an index is weakly monotonic. To check for strict monotonicity, you can combine one of those with
the is_unique() attribute.
In [228]: weakly_monotonic = pd.Index(["a", "b", "c", "c"])
In [229]: weakly_monotonic
Out[229]: Index(['a', 'b', 'c', 'c'], dtype='object')
In [230]: weakly_monotonic.is_monotonic_increasing
Out[230]: True
In [231]: weakly_monotonic.is_monotonic_increasing & weakly_monotonic.is_unique
Out[231]: False
Endpoints are inclusive#
Compared with standard Python sequence slicing in which the slice endpoint is
not inclusive, label-based slicing in pandas is inclusive. The primary
reason for this is that it is often not possible to easily determine the
“successor” or next element after a particular label in an index. For example,
consider the following Series:
In [232]: s = pd.Series(np.random.randn(6), index=list("abcdef"))
In [233]: s
Out[233]:
a 0.301379
b 1.240445
c -0.846068
d -0.043312
e -1.658747
f -0.819549
dtype: float64
Suppose we wished to slice from c to e, using integers this would be
accomplished as such:
In [234]: s[2:5]
Out[234]:
c -0.846068
d -0.043312
e -1.658747
dtype: float64
However, if you only had c and e, determining the next element in the
index can be somewhat complicated. For example, the following does not work:
s.loc['c':'e' + 1]
A very common use case is to limit a time series to start and end at two
specific dates. To enable this, we made the design choice to make label-based
slicing include both endpoints:
In [235]: s.loc["c":"e"]
Out[235]:
c -0.846068
d -0.043312
e -1.658747
dtype: float64
This is most definitely a “practicality beats purity” sort of thing, but it is
something to watch out for if you expect label-based slicing to behave exactly
in the way that standard Python integer slicing works.
Indexing potentially changes underlying Series dtype#
The different indexing operation can potentially change the dtype of a Series.
In [236]: series1 = pd.Series([1, 2, 3])
In [237]: series1.dtype
Out[237]: dtype('int64')
In [238]: res = series1.reindex([0, 4])
In [239]: res.dtype
Out[239]: dtype('float64')
In [240]: res
Out[240]:
0 1.0
4 NaN
dtype: float64
In [241]: series2 = pd.Series([True])
In [242]: series2.dtype
Out[242]: dtype('bool')
In [243]: res = series2.reindex_like(series1)
In [244]: res.dtype
Out[244]: dtype('O')
In [245]: res
Out[245]:
0 True
1 NaN
2 NaN
dtype: object
This is because the (re)indexing operations above silently inserts NaNs and the dtype
changes accordingly. This can cause some issues when using numpy ufuncs
such as numpy.logical_and.
See the GH2388 for a more
detailed discussion.
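One way to avoid the silent upcast described above is to use a nullable extension dtype, which is not covered in the excerpt; this is a sketch using the Int64 nullable integer dtype, where missing labels become pd.NA instead of forcing float64.

```python
import pandas as pd

s = pd.Series([1, 2, 3], dtype="Int64")  # nullable integer extension dtype
res = s.reindex([0, 4])

# missing labels become <NA> and the integer dtype is preserved
assert str(res.dtype) == "Int64"
assert res.isna().tolist() == [False, True]
assert res.iloc[0] == 1
```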
| 33 | 321 | Create subindices based on two categorical variables
I have a dataframe containing two categorical variables. I would like to add a third column with ascending indices for each of the categories, where one category is nested within the other.
Example:
import pandas as pd
foo = ['a','a','a','a','b','b','b','b']
bar = [0,0,1,1,0,0,1,1]
df = pd.DataFrame({'foo':foo,'bar':bar})
which gives you:
foo bar
0 a 0
1 a 0
2 a 1
3 a 1
4 b 0
5 b 0
6 b 1
7 b 1
Add a third column to df so that you get:
foo bar foobar
0 a 0 0
1 a 0 1
2 a 1 0
3 a 1 1
4 b 0 2
5 b 0 3
6 b 1 2
7 b 1 3
I guess this can be somehow done with groupby()? |
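The accepted answer body is not included in this preview row. As a sketch (assuming, as the desired output suggests, that the numbering restarts for each bar level and runs in order of appearance), groupby().cumcount() reproduces the foobar column:

```python
import pandas as pd

foo = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']
bar = [0, 0, 1, 1, 0, 0, 1, 1]
df = pd.DataFrame({'foo': foo, 'bar': bar})

# within each bar level, number rows in order of appearance; rows that
# share the same foo then occupy a contiguous block of indices
df['foobar'] = df.groupby('bar').cumcount()
assert df['foobar'].tolist() == [0, 1, 0, 1, 2, 3, 2, 3]
```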
65,954,888 | Get the list of unique elements from multiple list and count of unique elements-column as list in data frame | <p>I have a dataset which looks something like this :</p>
<pre><code>df = pd.DataFrame()
df['home']=[['us','uk','argentina'],
['denmark','china'],
'',
'',
['australia','protugal','chile','russia'],
['turkey']]
df["away"] = [['us','mexico'],
'',
'',
['uk','finland','greece'],
'',
['turkey']]
</code></pre>
<p>I want to create a column that gives the list of unique elements from the column -home and away and another column that gives the count of unique elements.</p>
<p>Desired output:</p>
<p><a href="https://i.stack.imgur.com/pP9Od.png" rel="nofollow noreferrer">desired output</a></p> | 65,955,314 | 2021-01-29T12:54:57.520000 | 1 | null | 0 | 32 | python|pandas | <p>I took the liberty of renaming the third column to unique country, as <code>row.unique</code> is already taken.</p>
<pre><code>df["unique_country"]=df.apply(lambda row: list(set((row.home if row.home else []) + (row.away if row.away else []))) , axis=1)
df["count_unique"]=df.apply(lambda row: len(row.unique_country),axis=1)
</code></pre> | 2021-01-29T13:22:59.483000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.explode.html | pandas.DataFrame.explode#
pandas.DataFrame.explode#
DataFrame.explode(column, ignore_index=False)[source]#
Transform each element of a list-like to a row, replicating index values.
I took the liberty of renaming the third column to unique country, as row.unique is already taken.
df["unique_country"] = df.apply(lambda row: list(set((row.home if row.home else []) + (row.away if row.away else []))), axis=1)
df["count_unique"] = df.apply(lambda row: len(row.unique_country), axis=1)
New in version 0.25.0.
Parameters
columnIndexLabelColumn(s) to explode.
For multiple columns, specify a non-empty list with each element
be str or tuple, and all specified columns their list-like data
on same row of the frame must have matching length.
New in version 1.3.0: Multi-column explode
ignore_indexbool, default FalseIf True, the resulting index will be labeled 0, 1, …, n - 1.
New in version 1.1.0.
Returns
DataFrameExploded lists to rows of the subset columns;
index will be duplicated for these rows.
Raises
ValueError
If columns of the frame are not unique.
If specified columns to explode is empty list.
If specified columns to explode have not matching count of
elements rowwise in the frame.
See also
DataFrame.unstackPivot a level of the (necessarily hierarchical) index labels.
DataFrame.meltUnpivot a DataFrame from wide format to long format.
Series.explodeExplode a DataFrame from list-like columns to long format.
Notes
This routine will explode list-likes including lists, tuples, sets,
Series, and np.ndarray. The result dtype of the subset rows will
be object. Scalars will be returned unchanged, and empty list-likes will
result in a np.nan for that row. In addition, the ordering of rows in the
output will be non-deterministic when exploding sets.
Reference the user guide for more examples.
Examples
>>> df = pd.DataFrame({'A': [[0, 1, 2], 'foo', [], [3, 4]],
... 'B': 1,
... 'C': [['a', 'b', 'c'], np.nan, [], ['d', 'e']]})
>>> df
A B C
0 [0, 1, 2] 1 [a, b, c]
1 foo 1 NaN
2 [] 1 []
3 [3, 4] 1 [d, e]
Single-column explode.
>>> df.explode('A')
A B C
0 0 1 [a, b, c]
0 1 1 [a, b, c]
0 2 1 [a, b, c]
1 foo 1 NaN
2 NaN 1 []
3 3 1 [d, e]
3 4 1 [d, e]
Multi-column explode.
>>> df.explode(list('AC'))
A B C
0 0 1 a
0 1 1 b
0 2 1 c
1 foo 1 NaN
2 NaN 1 NaN
3 3 1 d
3 4 1 e
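A small runnable recap of the behaviour documented above (a sketch, not from the original page): empty list-likes explode to a single NaN row, and ignore_index renumbers the result 0..n-1.

```python
import pandas as pd

df = pd.DataFrame({"A": [[0, 1], [], [2]], "B": ["x", "y", "z"]})
out = df.explode("A", ignore_index=True)

# the empty list became one NaN row; the index was renumbered
assert out["B"].tolist() == ["x", "x", "y", "z"]
assert out["A"].isna().tolist() == [False, False, True, False]
assert out.index.tolist() == [0, 1, 2, 3]
```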
| 185 | 483 | Get the list of unique elements from multiple list and count of unique elements-column as list in data frame
I have a dataset which looks something like this :
df = pd.DataFrame()
df['home']=[['us','uk','argentina'],
['denmark','china'],
'',
'',
['australia','protugal','chile','russia'],
['turkey']]
df["away"] = [['us','mexico'],
'',
'',
['uk','finland','greece'],
'',
['turkey']]
I want to create a column that gives the list of unique elements from the column -home and away and another column that gives the count of unique elements.
Desired output:
desired output |
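One way to produce the two requested columns, shown here as a sketch that assumes the empty strings in the sample data stand for "no entries" (the data values, including the spelling 'protugal', are kept exactly as given):

```python
import pandas as pd

df = pd.DataFrame({
    "home": [["us", "uk", "argentina"], ["denmark", "china"], "", "",
             ["australia", "protugal", "chile", "russia"], ["turkey"]],
    "away": [["us", "mexico"], "", "", ["uk", "finland", "greece"], "", ["turkey"]],
})

# normalise the empty-string placeholders to empty lists, then take the
# row-wise set union of the two columns
as_list = lambda v: v if isinstance(v, list) else []
df["unique_country"] = [sorted(set(as_list(h)) | set(as_list(a)))
                        for h, a in zip(df["home"], df["away"])]
df["count_unique"] = df["unique_country"].str.len()
assert df["count_unique"].tolist() == [4, 2, 0, 3, 4, 1]
```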
64,829,590 | "Reading 2 csv files with pandas, using a value in one file to look up other values in the second fi(...TRUNCATED) | "<p>I have 2 txt files being read by Pandas.</p>\n<p>The first file contains:</p>\n<pre><code>code (...TRUNCATED) | 64,846,053 | 2020-11-14T00:05:55.210000 | 1 | null | 1 | 34 | python|pandas | "<p>What you want is a merge:</p>\n<pre><code>dataset = pd.read_csv('firstFile.csv', sep='\\s+')\ndf(...TRUNCATED) | 2020-11-15T15:11:25.030000 | 0 | https://pandas.pydata.org/docs/user_guide/io.html | "IO tools (text, CSV, HDF5, …)#\n\nIO tools (text, CSV, HDF5, …)#\nThe pandas I/O API is a set o(...TRUNCATED) | 495 | 861 | "Reading 2 csv files with pandas, using a value in one file to look up other values in the second fi(...TRUNCATED) |
61,189,774 | How to transform numbers in data-frame column to comma separated | "<p>i am working with python , so in my dataframe i have a column named <code>Company Profit</code> (...TRUNCATED) | 61,190,028 | 2020-04-13T14:14:17.007000 | 2 | null | 1 | 290 | python|pandas | "<p>Something like this will work:</p>\n\n<pre><code>In [601]: def thousand_separator(val): \n .(...TRUNCATED) | 2020-04-13T14:27:58.500000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html | "pandas.read_csv#\n\npandas.read_csv#\n\n\npandas.read_csv(filepath_or_buffer, *, sep=_NoDefault.no_(...TRUNCATED) | 1,025 | 1,485 | "How to transform numbers in data-frame column to comma separated\ni am working with python , so in (...TRUNCATED) |
64,659,356 | Pandas, most efficient way to apply a two functions on entire row | "<p>I have the following DataFrame:</p>\n<pre><code> Date Label (...TRUNCATED) | 64,659,496 | 2020-11-03T08:41:30.783000 | 1 | null | 0 | 37 | python|pandas | "<p>You need to pass in the rows to the apply-function. Try this:</p>\n<pre><code>def scorer(row):\n(...TRUNCATED) | 2020-11-03T08:50:49.767000 | 0 | https://pandas.pydata.org/docs/user_guide/groupby.html | "Group by: split-apply-combine#\n\nGroup by: split-apply-combine#\nBy “group by” we are referrin(...TRUNCATED) | 757 | 1,073 | "Pandas, most efficient way to apply a two functions on entire row\nI have the following DataFrame:\(...TRUNCATED) |
68,924,396 | pandas series row-wise comparison (preserve cardinality/indices of larger series) | "<p>I have two pandas series, both string dtypes.</p>\n<ol>\n<li><p>reports['corpus'] has 1287 rows<(...TRUNCATED) | 68,924,611 | 2021-08-25T14:04:15.793000 | 1 | null | 1 | 38 | python|pandas | "<p>Convert <code>uniq_labels</code> column from the <code>labels</code> dataframe to a list, and sp(...TRUNCATED) | 2021-08-25T14:18:21.953000 | 0 | https://pandas.pydata.org/docs/user_guide/scale.html | "Scaling to large datasets#\n\nScaling to large datasets#\npandas provides data structures for in-me(...TRUNCATED) | 226 | 681 | "pandas series row-wise comparison (preserve cardinality/indices of larger series)\nI have two panda(...TRUNCATED) |