title | summary | context | path |
---|---|---|---|
pandas.tseries.offsets.CustomBusinessDay.isAnchored | pandas.tseries.offsets.CustomBusinessDay.isAnchored | CustomBusinessDay.isAnchored()#
| reference/api/pandas.tseries.offsets.CustomBusinessDay.isAnchored.html |
pandas.tseries.offsets.BYearEnd.onOffset | pandas.tseries.offsets.BYearEnd.onOffset | BYearEnd.onOffset()#
| reference/api/pandas.tseries.offsets.BYearEnd.onOffset.html |
pandas.tseries.offsets.BYearBegin.copy | `pandas.tseries.offsets.BYearBegin.copy`
Return a copy of the frequency.
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
``` | BYearBegin.copy()#
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
| reference/api/pandas.tseries.offsets.BYearBegin.copy.html |
pandas.tseries.offsets.LastWeekOfMonth.n | pandas.tseries.offsets.LastWeekOfMonth.n | LastWeekOfMonth.n#
| reference/api/pandas.tseries.offsets.LastWeekOfMonth.n.html |
pandas.DataFrame.bfill | `pandas.DataFrame.bfill`
Synonym for DataFrame.fillna() with method='bfill'.
Object with missing values filled or None if inplace=True. | DataFrame.bfill(*, axis=None, inplace=False, limit=None, downcast=None)[source]#
Synonym for DataFrame.fillna() with method='bfill'.
Returns
Series/DataFrame or NoneObject with missing values filled or None if inplace=True.
| reference/api/pandas.DataFrame.bfill.html |
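The DataFrame.bfill entry above has no worked example; a minimal sketch of the documented equivalence with fillna(method='bfill'), on made-up data with the expected result in comments:
```
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [np.nan, 2.0, np.nan, 4.0]})

# bfill() propagates the next valid observation backward to fill gaps
print(df.bfill())
# Expected: column "a" becomes [2.0, 2.0, 4.0, 4.0]

# The documented synonym: the same result via fillna with method='bfill'
print(df.fillna(method="bfill"))
```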
pandas.tseries.offsets.FY5253.rule_code | pandas.tseries.offsets.FY5253.rule_code | FY5253.rule_code#
| reference/api/pandas.tseries.offsets.FY5253.rule_code.html |
pandas.api.extensions.ExtensionArray.shift | `pandas.api.extensions.ExtensionArray.shift`
Shift values by desired number. | ExtensionArray.shift(periods=1, fill_value=None)[source]#
Shift values by desired number.
Newly introduced missing values are filled with
self.dtype.na_value.
Parameters
periodsint, default 1The number of periods to shift. Negative values are allowed
for shifting backwards.
fill_valueobject, optionalThe scalar value to use for newly introduced missing values.
The default is self.dtype.na_value.
Returns
ExtensionArrayShifted.
Notes
If self is empty or periods is 0, a copy of self is
returned.
If periods > len(self), then an array of size
len(self) is returned, with all values filled with
self.dtype.na_value.
| reference/api/pandas.api.extensions.ExtensionArray.shift.html |
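As a concrete illustration of the ExtensionArray.shift behaviour described above, a sketch using an IntegerArray (one ExtensionArray subclass):
```
import pandas as pd

arr = pd.array([1, 2, 3], dtype="Int64")  # an IntegerArray (an ExtensionArray)

arr.shift(1)    # [<NA>, 1, 2] -- the new hole is filled with the dtype's na_value
arr.shift(-1)   # [2, 3, <NA>] -- negative periods shift backwards
arr.shift(5)    # periods > len(self): every value becomes <NA>
```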
pandas.tseries.offsets.BYearBegin.isAnchored | pandas.tseries.offsets.BYearBegin.isAnchored | BYearBegin.isAnchored()#
| reference/api/pandas.tseries.offsets.BYearBegin.isAnchored.html |
pandas.DataFrame.plot.hist | `pandas.DataFrame.plot.hist`
Draw one histogram of the DataFrame’s columns.
A histogram is a representation of the distribution of data.
This function groups the values of all given Series in the DataFrame
into bins and draws all bins in one matplotlib.axes.Axes.
This is useful when the DataFrame’s Series are in a similar scale.
```
>>> df = pd.DataFrame(
... np.random.randint(1, 7, 6000),
... columns = ['one'])
>>> df['two'] = df['one'] + np.random.randint(1, 7, 6000)
>>> ax = df.plot.hist(bins=12, alpha=0.5)
``` | DataFrame.plot.hist(by=None, bins=10, **kwargs)[source]#
Draw one histogram of the DataFrame’s columns.
A histogram is a representation of the distribution of data.
This function groups the values of all given Series in the DataFrame
into bins and draws all bins in one matplotlib.axes.Axes.
This is useful when the DataFrame’s Series are in a similar scale.
Parameters
bystr or sequence, optionalColumn in the DataFrame to group by.
Changed in version 1.4.0: Previously, by was silently ignored and made no groupings.
binsint, default 10Number of histogram bins to be used.
**kwargsAdditional keyword arguments are documented in
DataFrame.plot().
Returns
matplotlib.AxesSubplotReturn a histogram plot.
See also
DataFrame.histDraw histograms per DataFrame’s Series.
Series.histDraw a histogram with Series’ data.
Examples
When we roll a die 6000 times, we expect to get each value around 1000
times. But when we roll two dice and sum the result, the distribution
is going to be quite different. A histogram illustrates those
distributions.
>>> df = pd.DataFrame(
... np.random.randint(1, 7, 6000),
... columns = ['one'])
>>> df['two'] = df['one'] + np.random.randint(1, 7, 6000)
>>> ax = df.plot.hist(bins=12, alpha=0.5)
A grouped histogram can be generated by providing the parameter by (which
can be a column name, or a list of column names):
>>> age_list = [8, 10, 12, 14, 72, 74, 76, 78, 20, 25, 30, 35, 60, 85]
>>> df = pd.DataFrame({"gender": list("MMMMMMMMFFFFFF"), "age": age_list})
>>> ax = df.plot.hist(column=["age"], by="gender", figsize=(10, 8))
| reference/api/pandas.DataFrame.plot.hist.html |
pandas.tseries.offsets.BusinessMonthEnd.is_on_offset | `pandas.tseries.offsets.BusinessMonthEnd.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
``` | BusinessMonthEnd.is_on_offset()#
Return boolean whether a timestamp intersects with this frequency.
Parameters
dtdatetime.datetimeTimestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
| reference/api/pandas.tseries.offsets.BusinessMonthEnd.is_on_offset.html |
pandas.tseries.offsets.QuarterEnd | `pandas.tseries.offsets.QuarterEnd`
DateOffset increments between Quarter end dates.
startingMonth = 1 corresponds to dates like 1/31/2007, 4/30/2007, …
startingMonth = 2 corresponds to dates like 2/28/2007, 5/31/2007, …
startingMonth = 3 corresponds to dates like 3/31/2007, 6/30/2007, …
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> ts + pd.offsets.QuarterEnd()
Timestamp('2022-03-31 00:00:00')
``` | class pandas.tseries.offsets.QuarterEnd#
DateOffset increments between Quarter end dates.
startingMonth = 1 corresponds to dates like 1/31/2007, 4/30/2007, …
startingMonth = 2 corresponds to dates like 2/28/2007, 5/31/2007, …
startingMonth = 3 corresponds to dates like 3/31/2007, 6/30/2007, …
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> ts + pd.offsets.QuarterEnd()
Timestamp('2022-03-31 00:00:00')
Attributes
base
Returns a copy of the calling offset object with n=1 and all other attributes equal.
freqstr
Return a string representing the frequency.
kwds
Return a dict of extra parameters for the offset.
name
Return a string representing the base frequency.
n
nanos
normalize
rule_code
startingMonth
Methods
__call__(*args, **kwargs)
Call self as a function.
apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
copy
Return a copy of the frequency.
is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
is_month_end
Return boolean whether a timestamp occurs on the month end.
is_month_start
Return boolean whether a timestamp occurs on the month start.
is_on_offset
Return boolean whether a timestamp intersects with this frequency.
is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
is_year_end
Return boolean whether a timestamp occurs on the year end.
is_year_start
Return boolean whether a timestamp occurs on the year start.
rollback
Roll provided date backward to next offset only if not on offset.
rollforward
Roll provided date forward to next offset only if not on offset.
apply
isAnchored
onOffset
| reference/api/pandas.tseries.offsets.QuarterEnd.html |
pandas.DataFrame.swapaxes | `pandas.DataFrame.swapaxes`
Interchange axes and swap values axes appropriately. | DataFrame.swapaxes(axis1, axis2, copy=True)[source]#
Interchange axes and swap values axes appropriately.
Returns
ysame as input
| reference/api/pandas.DataFrame.swapaxes.html |
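The swapaxes entry is terse; a minimal sketch of what it does for a two-dimensional frame (for a DataFrame, swapping axes 0 and 1 is equivalent to transposing):
```
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]}, index=["x", "y"])

# Swap the row and column axes; rows become columns and vice versa
swapped = df.swapaxes(0, 1)
print(swapped)
# Equivalent to df.T for a DataFrame
```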
pandas.DataFrame.to_feather | `pandas.DataFrame.to_feather`
Write a DataFrame to the binary Feather format. | DataFrame.to_feather(path, **kwargs)[source]#
Write a DataFrame to the binary Feather format.
Parameters
pathstr, path object, file-like objectString, path object (implementing os.PathLike[str]), or file-like
object implementing a binary write() function. If a string or a path,
it will be used as Root Directory path when writing a partitioned dataset.
**kwargsAdditional keywords passed to pyarrow.feather.write_feather().
Starting with pyarrow 0.17, this includes the compression,
compression_level, chunksize and version keywords.
New in version 1.1.0.
Notes
This function writes the dataframe as a feather file. Requires a default
index. For saving the DataFrame with your custom index use a method that
supports custom indices e.g. to_parquet.
| reference/api/pandas.DataFrame.to_feather.html |
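A minimal usage sketch for to_feather (assumes pyarrow is installed; the file name is illustrative only):
```
import pandas as pd

# to_feather requires a default RangeIndex, so build the frame without a custom index
df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})

df.to_feather("example.feather")            # write the binary Feather file
round_trip = pd.read_feather("example.feather")
print(round_trip.equals(df))                # expected True: data survives the round trip
```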
pandas.tseries.offsets.BusinessDay.kwds | `pandas.tseries.offsets.BusinessDay.kwds`
Return a dict of extra parameters for the offset.
```
>>> pd.DateOffset(5).kwds
{}
``` | BusinessDay.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
| reference/api/pandas.tseries.offsets.BusinessDay.kwds.html |
pandas.tseries.offsets.SemiMonthBegin | `pandas.tseries.offsets.SemiMonthBegin`
Two DateOffset’s per month repeating on the first day of the month & day_of_month.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> ts + pd.offsets.SemiMonthBegin()
Timestamp('2022-01-15 00:00:00')
``` | class pandas.tseries.offsets.SemiMonthBegin#
Two DateOffset’s per month repeating on the first day of the month & day_of_month.
Parameters
nint
normalizebool, default False
day_of_monthint, {2, 3,…,27}, default 15
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> ts + pd.offsets.SemiMonthBegin()
Timestamp('2022-01-15 00:00:00')
Attributes
base
Returns a copy of the calling offset object with n=1 and all other attributes equal.
freqstr
Return a string representing the frequency.
kwds
Return a dict of extra parameters for the offset.
name
Return a string representing the base frequency.
day_of_month
n
nanos
normalize
rule_code
Methods
__call__(*args, **kwargs)
Call self as a function.
apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
copy
Return a copy of the frequency.
is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
is_month_end
Return boolean whether a timestamp occurs on the month end.
is_month_start
Return boolean whether a timestamp occurs on the month start.
is_on_offset
Return boolean whether a timestamp intersects with this frequency.
is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
is_year_end
Return boolean whether a timestamp occurs on the year end.
is_year_start
Return boolean whether a timestamp occurs on the year start.
rollback
Roll provided date backward to next offset only if not on offset.
rollforward
Roll provided date forward to next offset only if not on offset.
apply
isAnchored
onOffset
| reference/api/pandas.tseries.offsets.SemiMonthBegin.html |
pandas.tseries.offsets.BQuarterBegin.__call__ | `pandas.tseries.offsets.BQuarterBegin.__call__`
Call self as a function. | BQuarterBegin.__call__(*args, **kwargs)#
Call self as a function.
| reference/api/pandas.tseries.offsets.BQuarterBegin.__call__.html |
General functions | General functions | Data manipulations#
melt(frame[, id_vars, value_vars, var_name, ...])
Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
pivot(data, *[, index, columns, values])
Return reshaped DataFrame organized by given index / column values.
pivot_table(data[, values, index, columns, ...])
Create a spreadsheet-style pivot table as a DataFrame.
crosstab(index, columns[, values, rownames, ...])
Compute a simple cross tabulation of two (or more) factors.
cut(x, bins[, right, labels, retbins, ...])
Bin values into discrete intervals.
qcut(x, q[, labels, retbins, precision, ...])
Quantile-based discretization function.
merge(left, right[, how, on, left_on, ...])
Merge DataFrame or named Series objects with a database-style join.
merge_ordered(left, right[, on, left_on, ...])
Perform a merge for ordered data with optional filling/interpolation.
merge_asof(left, right[, on, left_on, ...])
Perform a merge by key distance.
concat(objs, *[, axis, join, ignore_index, ...])
Concatenate pandas objects along a particular axis.
get_dummies(data[, prefix, prefix_sep, ...])
Convert categorical variable into dummy/indicator variables.
from_dummies(data[, sep, default_category])
Create a categorical DataFrame from a DataFrame of dummy variables.
factorize(values[, sort, na_sentinel, ...])
Encode the object as an enumerated type or categorical variable.
unique(values)
Return unique values based on a hash table.
wide_to_long(df, stubnames, i, j[, sep, suffix])
Unpivot a DataFrame from wide to long format.
Top-level missing data#
isna(obj)
Detect missing values for an array-like object.
isnull(obj)
Detect missing values for an array-like object.
notna(obj)
Detect non-missing values for an array-like object.
notnull(obj)
Detect non-missing values for an array-like object.
Top-level dealing with numeric data#
to_numeric(arg[, errors, downcast])
Convert argument to a numeric type.
Top-level dealing with datetimelike data#
to_datetime(arg[, errors, dayfirst, ...])
Convert argument to datetime.
to_timedelta(arg[, unit, errors])
Convert argument to timedelta.
date_range([start, end, periods, freq, tz, ...])
Return a fixed frequency DatetimeIndex.
bdate_range([start, end, periods, freq, tz, ...])
Return a fixed frequency DatetimeIndex with business day as the default.
period_range([start, end, periods, freq, name])
Return a fixed frequency PeriodIndex.
timedelta_range([start, end, periods, freq, ...])
Return a fixed frequency TimedeltaIndex with day as the default.
infer_freq(index[, warn])
Infer the most likely frequency given the input index.
Top-level dealing with Interval data#
interval_range([start, end, periods, freq, ...])
Return a fixed frequency IntervalIndex.
Top-level evaluation#
eval(expr[, parser, engine, truediv, ...])
Evaluate a Python expression as a string using various backends.
Hashing#
util.hash_array(vals[, encoding, hash_key, ...])
Given a 1d array, return an array of deterministic integers.
util.hash_pandas_object(obj[, index, ...])
Return a data hash of the Index/Series/DataFrame.
Importing from other DataFrame libraries#
api.interchange.from_dataframe(df[, allow_copy])
Build a pd.DataFrame from any DataFrame supporting the interchange protocol.
| reference/general_functions.html |
pandas.tseries.offsets.WeekOfMonth.is_anchored | `pandas.tseries.offsets.WeekOfMonth.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
Examples
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
``` | WeekOfMonth.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
| reference/api/pandas.tseries.offsets.WeekOfMonth.is_anchored.html |
pandas.tseries.offsets.BusinessHour.normalize | pandas.tseries.offsets.BusinessHour.normalize | BusinessHour.normalize#
| reference/api/pandas.tseries.offsets.BusinessHour.normalize.html |
pandas.errors.OptionError | `pandas.errors.OptionError`
Exception raised for pandas.options. | exception pandas.errors.OptionError[source]#
Exception raised for pandas.options.
Backwards compatible with KeyError checks.
| reference/api/pandas.errors.OptionError.html |
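A short sketch of when OptionError is raised, and of the backwards-compatible KeyError check mentioned above (the option key is deliberately made up):
```
import pandas as pd

try:
    pd.get_option("display.no_such_option")   # hypothetical, unknown option key
except pd.errors.OptionError as err:
    print("OptionError:", err)

# Because OptionError is backwards compatible with KeyError checks,
# the same failure can also be caught as a KeyError:
try:
    pd.get_option("display.no_such_option")
except KeyError:
    print("caught as KeyError")
```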
Comparison with SQL | Comparison with SQL
Since many potential pandas users have some familiarity with
SQL, this page is meant to provide some examples of how
various SQL operations would be performed using pandas.
If you’re new to pandas, you might want to first read through 10 Minutes to pandas
to familiarize yourself with the library.
As is customary, we import pandas and NumPy as follows:
Most of the examples will utilize the tips dataset found within pandas tests. We’ll read
the data into a DataFrame called tips and assume we have a database table of the same name and
structure.
Most pandas operations return copies of the Series/DataFrame. To make the changes “stick”,
you’ll need to either assign to a new variable: | Since many potential pandas users have some familiarity with
SQL, this page is meant to provide some examples of how
various SQL operations would be performed using pandas.
If you’re new to pandas, you might want to first read through 10 Minutes to pandas
to familiarize yourself with the library.
As is customary, we import pandas and NumPy as follows:
In [1]: import pandas as pd
In [2]: import numpy as np
Most of the examples will utilize the tips dataset found within pandas tests. We’ll read
the data into a DataFrame called tips and assume we have a database table of the same name and
structure.
In [3]: url = (
...: "https://raw.githubusercontent.com/pandas-dev"
...: "/pandas/main/pandas/tests/io/data/csv/tips.csv"
...: )
...:
In [4]: tips = pd.read_csv(url)
In [5]: tips
Out[5]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
.. ... ... ... ... ... ... ...
239 29.03 5.92 Male No Sat Dinner 3
240 27.18 2.00 Female Yes Sat Dinner 2
241 22.67 2.00 Male Yes Sat Dinner 2
242 17.82 1.75 Male No Sat Dinner 2
243 18.78 3.00 Female No Thur Dinner 2
[244 rows x 7 columns]
Copies vs. in place operations#
Most pandas operations return copies of the Series/DataFrame. To make the changes “stick”,
you’ll need to either assign to a new variable:
sorted_df = df.sort_values("col1")
or overwrite the original one:
df = df.sort_values("col1")
Note
You will see an inplace=True keyword argument available for some methods:
df.sort_values("col1", inplace=True)
Its use is discouraged. More information.
SELECT#
In SQL, selection is done using a comma-separated list of columns you’d like to select (or a *
to select all columns):
SELECT total_bill, tip, smoker, time
FROM tips;
With pandas, column selection is done by passing a list of column names to your DataFrame:
In [6]: tips[["total_bill", "tip", "smoker", "time"]]
Out[6]:
total_bill tip smoker time
0 16.99 1.01 No Dinner
1 10.34 1.66 No Dinner
2 21.01 3.50 No Dinner
3 23.68 3.31 No Dinner
4 24.59 3.61 No Dinner
.. ... ... ... ...
239 29.03 5.92 No Dinner
240 27.18 2.00 Yes Dinner
241 22.67 2.00 Yes Dinner
242 17.82 1.75 No Dinner
243 18.78 3.00 No Dinner
[244 rows x 4 columns]
Calling the DataFrame without the list of column names would display all columns (akin to SQL’s
*).
In SQL, you can add a calculated column:
SELECT *, tip/total_bill as tip_rate
FROM tips;
With pandas, you can use the DataFrame.assign() method of a DataFrame to append a new column:
In [7]: tips.assign(tip_rate=tips["tip"] / tips["total_bill"])
Out[7]:
total_bill tip sex smoker day time size tip_rate
0 16.99 1.01 Female No Sun Dinner 2 0.059447
1 10.34 1.66 Male No Sun Dinner 3 0.160542
2 21.01 3.50 Male No Sun Dinner 3 0.166587
3 23.68 3.31 Male No Sun Dinner 2 0.139780
4 24.59 3.61 Female No Sun Dinner 4 0.146808
.. ... ... ... ... ... ... ... ...
239 29.03 5.92 Male No Sat Dinner 3 0.203927
240 27.18 2.00 Female Yes Sat Dinner 2 0.073584
241 22.67 2.00 Male Yes Sat Dinner 2 0.088222
242 17.82 1.75 Male No Sat Dinner 2 0.098204
243 18.78 3.00 Female No Thur Dinner 2 0.159744
[244 rows x 8 columns]
WHERE#
Filtering in SQL is done via a WHERE clause.
SELECT *
FROM tips
WHERE time = 'Dinner';
DataFrames can be filtered in multiple ways; the most intuitive of which is using
boolean indexing.
In [8]: tips[tips["total_bill"] > 10]
Out[8]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
.. ... ... ... ... ... ... ...
239 29.03 5.92 Male No Sat Dinner 3
240 27.18 2.00 Female Yes Sat Dinner 2
241 22.67 2.00 Male Yes Sat Dinner 2
242 17.82 1.75 Male No Sat Dinner 2
243 18.78 3.00 Female No Thur Dinner 2
[227 rows x 7 columns]
The above statement is simply passing a Series of True/False objects to the DataFrame,
returning all rows with True.
In [9]: is_dinner = tips["time"] == "Dinner"
In [10]: is_dinner
Out[10]:
0 True
1 True
2 True
3 True
4 True
...
239 True
240 True
241 True
242 True
243 True
Name: time, Length: 244, dtype: bool
In [11]: is_dinner.value_counts()
Out[11]:
True 176
False 68
Name: time, dtype: int64
In [12]: tips[is_dinner]
Out[12]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
.. ... ... ... ... ... ... ...
239 29.03 5.92 Male No Sat Dinner 3
240 27.18 2.00 Female Yes Sat Dinner 2
241 22.67 2.00 Male Yes Sat Dinner 2
242 17.82 1.75 Male No Sat Dinner 2
243 18.78 3.00 Female No Thur Dinner 2
[176 rows x 7 columns]
Just like SQL’s OR and AND, multiple conditions can be passed to a DataFrame using |
(OR) and & (AND).
Tips of more than $5 at Dinner meals:
SELECT *
FROM tips
WHERE time = 'Dinner' AND tip > 5.00;
In [13]: tips[(tips["time"] == "Dinner") & (tips["tip"] > 5.00)]
Out[13]:
total_bill tip sex smoker day time size
23 39.42 7.58 Male No Sat Dinner 4
44 30.40 5.60 Male No Sun Dinner 4
47 32.40 6.00 Male No Sun Dinner 4
52 34.81 5.20 Female No Sun Dinner 4
59 48.27 6.73 Male No Sat Dinner 4
116 29.93 5.07 Male No Sun Dinner 4
155 29.85 5.14 Female No Sun Dinner 5
170 50.81 10.00 Male Yes Sat Dinner 3
172 7.25 5.15 Male Yes Sun Dinner 2
181 23.33 5.65 Male Yes Sun Dinner 2
183 23.17 6.50 Male Yes Sun Dinner 4
211 25.89 5.16 Male Yes Sat Dinner 4
212 48.33 9.00 Male No Sat Dinner 4
214 28.17 6.50 Female Yes Sat Dinner 3
239 29.03 5.92 Male No Sat Dinner 3
Tips by parties of at least 5 diners OR bill total was more than $45:
SELECT *
FROM tips
WHERE size >= 5 OR total_bill > 45;
In [14]: tips[(tips["size"] >= 5) | (tips["total_bill"] > 45)]
Out[14]:
total_bill tip sex smoker day time size
59 48.27 6.73 Male No Sat Dinner 4
125 29.80 4.20 Female No Thur Lunch 6
141 34.30 6.70 Male No Thur Lunch 6
142 41.19 5.00 Male No Thur Lunch 5
143 27.05 5.00 Female No Thur Lunch 6
155 29.85 5.14 Female No Sun Dinner 5
156 48.17 5.00 Male No Sun Dinner 6
170 50.81 10.00 Male Yes Sat Dinner 3
182 45.35 3.50 Male Yes Sun Dinner 3
185 20.69 5.00 Male No Sun Dinner 5
187 30.46 2.00 Male Yes Sun Dinner 5
212 48.33 9.00 Male No Sat Dinner 4
216 28.15 3.00 Male Yes Sat Dinner 5
NULL checking is done using the notna() and isna()
methods.
In [15]: frame = pd.DataFrame(
....: {"col1": ["A", "B", np.NaN, "C", "D"], "col2": ["F", np.NaN, "G", "H", "I"]}
....: )
....:
In [16]: frame
Out[16]:
col1 col2
0 A F
1 B NaN
2 NaN G
3 C H
4 D I
Assume we have a table of the same structure as our DataFrame above. We can see only the records
where col2 IS NULL with the following query:
SELECT *
FROM frame
WHERE col2 IS NULL;
In [17]: frame[frame["col2"].isna()]
Out[17]:
col1 col2
1 B NaN
Getting items where col1 IS NOT NULL can be done with notna().
SELECT *
FROM frame
WHERE col1 IS NOT NULL;
In [18]: frame[frame["col1"].notna()]
Out[18]:
col1 col2
0 A F
1 B NaN
3 C H
4 D I
GROUP BY#
In pandas, SQL’s GROUP BY operations are performed using the similarly named
groupby() method. groupby() typically refers to a
process where we’d like to split a dataset into groups, apply some function (typically aggregation)
, and then combine the groups together.
A common SQL operation would be getting the count of records in each group throughout a dataset.
For instance, a query getting us the number of tips left by sex:
SELECT sex, count(*)
FROM tips
GROUP BY sex;
/*
Female 87
Male 157
*/
The pandas equivalent would be:
In [19]: tips.groupby("sex").size()
Out[19]:
sex
Female 87
Male 157
dtype: int64
Notice that in the pandas code we used size() and not
count(). This is because
count() applies the function to each column, returning
the number of NOT NULL records within each.
In [20]: tips.groupby("sex").count()
Out[20]:
total_bill tip smoker day time size
sex
Female 87 87 87 87 87 87
Male 157 157 157 157 157 157
Alternatively, we could have applied the count() method
to an individual column:
In [21]: tips.groupby("sex")["total_bill"].count()
Out[21]:
sex
Female 87
Male 157
Name: total_bill, dtype: int64
Multiple functions can also be applied at once. For instance, say we’d like to see how tip amount
differs by day of the week - agg() allows you to pass a dictionary
to your grouped DataFrame, indicating which functions to apply to specific columns.
SELECT day, AVG(tip), COUNT(*)
FROM tips
GROUP BY day;
/*
Fri 2.734737 19
Sat 2.993103 87
Sun 3.255132 76
Thu 2.771452 62
*/
In [22]: tips.groupby("day").agg({"tip": np.mean, "day": np.size})
Out[22]:
tip day
day
Fri 2.734737 19
Sat 2.993103 87
Sun 3.255132 76
Thur 2.771452 62
Grouping by more than one column is done by passing a list of columns to the
groupby() method.
SELECT smoker, day, COUNT(*), AVG(tip)
FROM tips
GROUP BY smoker, day;
/*
smoker day
No Fri 4 2.812500
Sat 45 3.102889
Sun 57 3.167895
Thu 45 2.673778
Yes Fri 15 2.714000
Sat 42 2.875476
Sun 19 3.516842
Thu 17 3.030000
*/
In [23]: tips.groupby(["smoker", "day"]).agg({"tip": [np.size, np.mean]})
Out[23]:
tip
size mean
smoker day
No Fri 4 2.812500
Sat 45 3.102889
Sun 57 3.167895
Thur 45 2.673778
Yes Fri 15 2.714000
Sat 42 2.875476
Sun 19 3.516842
Thur 17 3.030000
JOIN#
JOINs can be performed with join() or merge(). By
default, join() will join the DataFrames on their indices. Each method has
parameters allowing you to specify the type of join to perform (LEFT, RIGHT, INNER,
FULL) or the columns to join on (column names or indices).
Warning
If both key columns contain rows where the key is a null value, those
rows will be matched against each other. This is different from usual SQL
join behaviour and can lead to unexpected results.
In [24]: df1 = pd.DataFrame({"key": ["A", "B", "C", "D"], "value": np.random.randn(4)})
In [25]: df2 = pd.DataFrame({"key": ["B", "D", "D", "E"], "value": np.random.randn(4)})
Assume we have two database tables of the same name and structure as our DataFrames.
Now let’s go over the various types of JOINs.
INNER JOIN#
SELECT *
FROM df1
INNER JOIN df2
ON df1.key = df2.key;
# merge performs an INNER JOIN by default
In [26]: pd.merge(df1, df2, on="key")
Out[26]:
key value_x value_y
0 B -0.282863 1.212112
1 D -1.135632 -0.173215
2 D -1.135632 0.119209
merge() also offers parameters for cases when you’d like to join one DataFrame’s
column with another DataFrame’s index.
In [27]: indexed_df2 = df2.set_index("key")
In [28]: pd.merge(df1, indexed_df2, left_on="key", right_index=True)
Out[28]:
key value_x value_y
1 B -0.282863 1.212112
3 D -1.135632 -0.173215
3 D -1.135632 0.119209
LEFT OUTER JOIN#
Show all records from df1.
SELECT *
FROM df1
LEFT OUTER JOIN df2
ON df1.key = df2.key;
In [29]: pd.merge(df1, df2, on="key", how="left")
Out[29]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
RIGHT JOIN#
Show all records from df2.
SELECT *
FROM df1
RIGHT OUTER JOIN df2
ON df1.key = df2.key;
In [30]: pd.merge(df1, df2, on="key", how="right")
Out[30]:
key value_x value_y
0 B -0.282863 1.212112
1 D -1.135632 -0.173215
2 D -1.135632 0.119209
3 E NaN -1.044236
FULL JOIN#
pandas also allows for FULL JOINs, which display both sides of the dataset, whether or not the
joined columns find a match. As of writing, FULL JOINs are not supported in all RDBMS (MySQL).
Show all records from both tables.
SELECT *
FROM df1
FULL OUTER JOIN df2
ON df1.key = df2.key;
In [31]: pd.merge(df1, df2, on="key", how="outer")
Out[31]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
5 E NaN -1.044236
UNION#
UNION ALL can be performed using concat().
In [32]: df1 = pd.DataFrame(
....: {"city": ["Chicago", "San Francisco", "New York City"], "rank": range(1, 4)}
....: )
....:
In [33]: df2 = pd.DataFrame(
....: {"city": ["Chicago", "Boston", "Los Angeles"], "rank": [1, 4, 5]}
....: )
....:
SELECT city, rank
FROM df1
UNION ALL
SELECT city, rank
FROM df2;
/*
city rank
Chicago 1
San Francisco 2
New York City 3
Chicago 1
Boston 4
Los Angeles 5
*/
In [34]: pd.concat([df1, df2])
Out[34]:
city rank
0 Chicago 1
1 San Francisco 2
2 New York City 3
0 Chicago 1
1 Boston 4
2 Los Angeles 5
SQL’s UNION is similar to UNION ALL, however UNION will remove duplicate rows.
SELECT city, rank
FROM df1
UNION
SELECT city, rank
FROM df2;
-- notice that there is only one Chicago record this time
/*
city rank
Chicago 1
San Francisco 2
New York City 3
Boston 4
Los Angeles 5
*/
In pandas, you can use concat() in conjunction with
drop_duplicates().
In [35]: pd.concat([df1, df2]).drop_duplicates()
Out[35]:
city rank
0 Chicago 1
1 San Francisco 2
2 New York City 3
1 Boston 4
2 Los Angeles 5
LIMIT#
SELECT * FROM tips
LIMIT 10;
In [36]: tips.head(10)
Out[36]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
5 25.29 4.71 Male No Sun Dinner 4
6 8.77 2.00 Male No Sun Dinner 2
7 26.88 3.12 Male No Sun Dinner 4
8 15.04 1.96 Male No Sun Dinner 2
9 14.78 3.23 Male No Sun Dinner 2
pandas equivalents for some SQL analytic and aggregate functions#
Top n rows with offset#
-- MySQL
SELECT * FROM tips
ORDER BY tip DESC
LIMIT 10 OFFSET 5;
In [37]: tips.nlargest(10 + 5, columns="tip").tail(10)
Out[37]:
total_bill tip sex smoker day time size
183 23.17 6.50 Male Yes Sun Dinner 4
214 28.17 6.50 Female Yes Sat Dinner 3
47 32.40 6.00 Male No Sun Dinner 4
239 29.03 5.92 Male No Sat Dinner 3
88 24.71 5.85 Male No Thur Lunch 2
181 23.33 5.65 Male Yes Sun Dinner 2
44 30.40 5.60 Male No Sun Dinner 4
52 34.81 5.20 Female No Sun Dinner 4
85 34.83 5.17 Female No Thur Lunch 4
211 25.89 5.16 Male Yes Sat Dinner 4
Top n rows per group#
-- Oracle's ROW_NUMBER() analytic function
SELECT * FROM (
SELECT
t.*,
ROW_NUMBER() OVER(PARTITION BY day ORDER BY total_bill DESC) AS rn
FROM tips t
)
WHERE rn < 3
ORDER BY day, rn;
In [38]: (
....: tips.assign(
....: rn=tips.sort_values(["total_bill"], ascending=False)
....: .groupby(["day"])
....: .cumcount()
....: + 1
....: )
....: .query("rn < 3")
....: .sort_values(["day", "rn"])
....: )
....:
Out[38]:
total_bill tip sex smoker day time size rn
95 40.17 4.73 Male Yes Fri Dinner 4 1
90 28.97 3.00 Male Yes Fri Dinner 2 2
170 50.81 10.00 Male Yes Sat Dinner 3 1
212 48.33 9.00 Male No Sat Dinner 4 2
156 48.17 5.00 Male No Sun Dinner 6 1
182 45.35 3.50 Male Yes Sun Dinner 3 2
197 43.11 5.00 Female Yes Thur Lunch 4 1
142 41.19 5.00 Male No Thur Lunch 5 2
The same result can be obtained using the rank(method='first') function:
In [39]: (
....: tips.assign(
....: rnk=tips.groupby(["day"])["total_bill"].rank(
....: method="first", ascending=False
....: )
....: )
....: .query("rnk < 3")
....: .sort_values(["day", "rnk"])
....: )
....:
Out[39]:
total_bill tip sex smoker day time size rnk
95 40.17 4.73 Male Yes Fri Dinner 4 1.0
90 28.97 3.00 Male Yes Fri Dinner 2 2.0
170 50.81 10.00 Male Yes Sat Dinner 3 1.0
212 48.33 9.00 Male No Sat Dinner 4 2.0
156 48.17 5.00 Male No Sun Dinner 6 1.0
182 45.35 3.50 Male Yes Sun Dinner 3 2.0
197 43.11 5.00 Female Yes Thur Lunch 4 1.0
142 41.19 5.00 Male No Thur Lunch 5 2.0
-- Oracle's RANK() analytic function
SELECT * FROM (
SELECT
t.*,
RANK() OVER(PARTITION BY sex ORDER BY tip) AS rnk
FROM tips t
WHERE tip < 2
)
WHERE rnk < 3
ORDER BY sex, rnk;
Let’s find tips with rank < 3 per gender group for tips < 2.
Notice that when using the rank(method='min') function,
rnk_min remains the same for equal tip values
(as with Oracle’s RANK() function)
In [40]: (
....: tips[tips["tip"] < 2]
....: .assign(rnk_min=tips.groupby(["sex"])["tip"].rank(method="min"))
....: .query("rnk_min < 3")
....: .sort_values(["sex", "rnk_min"])
....: )
....:
Out[40]:
total_bill tip sex smoker day time size rnk_min
67 3.07 1.00 Female Yes Sat Dinner 1 1.0
92 5.75 1.00 Female Yes Fri Dinner 2 1.0
111 7.25 1.00 Female No Sat Dinner 1 1.0
236 12.60 1.00 Male Yes Sat Dinner 2 1.0
237 32.83 1.17 Male Yes Sat Dinner 2 2.0
UPDATE#
UPDATE tips
SET tip = tip*2
WHERE tip < 2;
In [41]: tips.loc[tips["tip"] < 2, "tip"] *= 2
DELETE#
DELETE FROM tips
WHERE tip > 9;
In pandas we select the rows that should remain instead of deleting them:
In [42]: tips = tips.loc[tips["tip"] <= 9]
| getting_started/comparison/comparison_with_sql.html |
pandas.Series.dt.strftime | `pandas.Series.dt.strftime`
Convert to Index using specified date_format.
Return an Index of formatted strings specified by date_format, which
supports the same string format as the python standard library. Details
of the string format can be found in python string format
doc.
```
>>> rng = pd.date_range(pd.Timestamp("2018-03-10 09:00"),
... periods=3, freq='s')
>>> rng.strftime('%B %d, %Y, %r')
Index(['March 10, 2018, 09:00:00 AM', 'March 10, 2018, 09:00:01 AM',
'March 10, 2018, 09:00:02 AM'],
dtype='object')
``` | Series.dt.strftime(*args, **kwargs)[source]#
Convert to Index using specified date_format.
Return an Index of formatted strings specified by date_format, which
supports the same string format as the python standard library. Details
of the string format can be found in python string format
doc.
Formats supported by the C strftime API but not by the python string format
doc (such as “%R”, “%r”) are not officially supported and should be
preferably replaced with their supported equivalents (such as “%H:%M”,
“%I:%M:%S %p”).
Note that PeriodIndex support additional directives, detailed in
Period.strftime.
Parameters
date_formatstrDate format string (e.g. “%Y-%m-%d”).
Returns
ndarray[object]NumPy ndarray of formatted strings.
See also
to_datetimeConvert the given argument to datetime.
DatetimeIndex.normalizeReturn DatetimeIndex with times to midnight.
DatetimeIndex.roundRound the DatetimeIndex to the specified freq.
DatetimeIndex.floorFloor the DatetimeIndex to the specified freq.
Timestamp.strftimeFormat a single Timestamp.
Period.strftimeFormat a single Period.
Examples
>>> rng = pd.date_range(pd.Timestamp("2018-03-10 09:00"),
... periods=3, freq='s')
>>> rng.strftime('%B %d, %Y, %r')
Index(['March 10, 2018, 09:00:00 AM', 'March 10, 2018, 09:00:01 AM',
'March 10, 2018, 09:00:02 AM'],
dtype='object')
| reference/api/pandas.Series.dt.strftime.html |
pandas.Index.groupby | `pandas.Index.groupby`
Group the index labels by a given array of values. | final Index.groupby(values)[source]#
Group the index labels by a given array of values.
Parameters
valuesarrayValues used to determine the groups.
Returns
dict{group name -> group labels}
| reference/api/pandas.Index.groupby.html |
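The Index.groupby entry has no example; a minimal sketch of the dict it returns (made-up labels and group names):
```
import pandas as pd

idx = pd.Index(["a", "b", "c", "d"])

# Group the index labels by a parallel array of group names
groups = idx.groupby(["x", "y", "x", "y"])
print(groups)
# Roughly: {'x': Index(['a', 'c'], dtype='object'),
#           'y': Index(['b', 'd'], dtype='object')}
```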
pandas.Series.ndim | `pandas.Series.ndim`
Number of dimensions of the underlying data, by definition 1. | property Series.ndim[source]#
Number of dimensions of the underlying data, by definition 1.
| reference/api/pandas.Series.ndim.html |
pandas.Series.sparse.fill_value | `pandas.Series.sparse.fill_value`
Elements in data that are fill_value are not stored. | Series.sparse.fill_value[source]#
Elements in data that are fill_value are not stored.
For memory savings, this should be the most common value in the array.
| reference/api/pandas.Series.sparse.fill_value.html |
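A small sketch of fill_value on a sparse Series (zeros are the fill value here, so they are not stored):
```
import pandas as pd

s = pd.Series([0, 0, 1, 2], dtype="Sparse[int]")

print(s.sparse.fill_value)   # 0 -- elements equal to this are not stored
print(s.sparse.density)      # 0.5 -- only the two non-zero points are stored
```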
pandas.tseries.offsets.CustomBusinessDay.rollback | `pandas.tseries.offsets.CustomBusinessDay.rollback`
Roll provided date backward to next offset only if not on offset. | CustomBusinessDay.rollback()#
Roll provided date backward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
| reference/api/pandas.tseries.offsets.CustomBusinessDay.rollback.html |
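A minimal sketch of rollback with the default Monday-Friday CustomBusinessDay (2022-08-06 is a Saturday, so it rolls back to the preceding business day):
```
import pandas as pd

freq = pd.offsets.CustomBusinessDay()        # Monday-Friday by default

ts = pd.Timestamp(2022, 8, 6)                # a Saturday, not on offset
print(freq.rollback(ts))                     # 2022-08-05, the preceding Friday

on_offset = pd.Timestamp(2022, 8, 5)         # already a business day
print(freq.rollback(on_offset))              # unchanged timestamp
```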
pandas.PeriodIndex | `pandas.PeriodIndex`
Immutable ndarray holding ordinal values indicating regular periods in time.
```
>>> idx = pd.PeriodIndex(year=[2000, 2002], quarter=[1, 3])
>>> idx
PeriodIndex(['2000Q1', '2002Q3'], dtype='period[Q-DEC]')
``` | class pandas.PeriodIndex(data=None, ordinal=None, freq=None, dtype=None, copy=False, name=None, **fields)[source]#
Immutable ndarray holding ordinal values indicating regular periods in time.
Index keys are boxed to Period objects which carries the metadata (eg,
frequency information).
Parameters
dataarray-like (1d int np.ndarray or PeriodArray), optionalOptional period-like data to construct index with.
copyboolMake a copy of input ndarray.
freqstr or period object, optionalOne of pandas period strings or corresponding objects.
yearint, array, or Series, default None
monthint, array, or Series, default None
quarterint, array, or Series, default None
dayint, array, or Series, default None
hourint, array, or Series, default None
minuteint, array, or Series, default None
secondint, array, or Series, default None
dtypestr or PeriodDtype, default None
See also
IndexThe base pandas Index type.
PeriodRepresents a period of time.
DatetimeIndexIndex with datetime64 data.
TimedeltaIndexIndex of timedelta64 data.
period_rangeCreate a fixed-frequency PeriodIndex.
Examples
>>> idx = pd.PeriodIndex(year=[2000, 2002], quarter=[1, 3])
>>> idx
PeriodIndex(['2000Q1', '2002Q3'], dtype='period[Q-DEC]')
Attributes
day
The days of the period.
dayofweek
The day of the week with Monday=0, Sunday=6.
day_of_week
The day of the week with Monday=0, Sunday=6.
dayofyear
The ordinal day of the year.
day_of_year
The ordinal day of the year.
days_in_month
The number of days in the month.
daysinmonth
The number of days in the month.
end_time
Get the Timestamp for the end of the period.
freq
Return the frequency object if it is set, otherwise None.
freqstr
Return the frequency object as a string if it is set, otherwise None.
hour
The hour of the period.
is_leap_year
Logical indicating if the date belongs to a leap year.
minute
The minute of the period.
month
The month as January=1, December=12.
quarter
The quarter of the date.
second
The second of the period.
start_time
Get the Timestamp for the start of the period.
week
The week ordinal of the year.
weekday
The day of the week with Monday=0, Sunday=6.
weekofyear
The week ordinal of the year.
year
The year of the period.
qyear
Methods
asfreq([freq, how])
Convert the PeriodArray to the specified frequency freq.
strftime(*args, **kwargs)
Convert to Index using specified date_format.
to_timestamp([freq, how])
Cast to DatetimeArray/Index.
| reference/api/pandas.PeriodIndex.html |
pandas.Series.swapaxes | `pandas.Series.swapaxes`
Interchange axes and swap values axes appropriately. | Series.swapaxes(axis1, axis2, copy=True)[source]#
Interchange axes and swap values axes appropriately.
Returns
ysame as input
| reference/api/pandas.Series.swapaxes.html |
pandas.core.groupby.GroupBy.agg | pandas.core.groupby.GroupBy.agg | GroupBy.agg(func, *args, **kwargs)[source]#
| reference/api/pandas.core.groupby.GroupBy.agg.html |
pandas.tseries.offsets.BQuarterBegin.rollback | `pandas.tseries.offsets.BQuarterBegin.rollback`
Roll provided date backward to next offset only if not on offset.
Rolled timestamp if not on offset, otherwise unchanged timestamp. | BQuarterBegin.rollback()#
Roll provided date backward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
| reference/api/pandas.tseries.offsets.BQuarterBegin.rollback.html |
pandas.tseries.offsets.MonthEnd.is_on_offset | `pandas.tseries.offsets.MonthEnd.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
``` | MonthEnd.is_on_offset()#
Return boolean whether a timestamp intersects with this frequency.
Parameters
dtdatetime.datetimeTimestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
| reference/api/pandas.tseries.offsets.MonthEnd.is_on_offset.html |
pandas.tseries.offsets.Week.is_anchored | `pandas.tseries.offsets.Week.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
``` | Week.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
| reference/api/pandas.tseries.offsets.Week.is_anchored.html |
pandas.DataFrame.explode | `pandas.DataFrame.explode`
Transform each element of a list-like to a row, replicating index values.
```
>>> df = pd.DataFrame({'A': [[0, 1, 2], 'foo', [], [3, 4]],
... 'B': 1,
... 'C': [['a', 'b', 'c'], np.nan, [], ['d', 'e']]})
>>> df
A B C
0 [0, 1, 2] 1 [a, b, c]
1 foo 1 NaN
2 [] 1 []
3 [3, 4] 1 [d, e]
``` | DataFrame.explode(column, ignore_index=False)[source]#
Transform each element of a list-like to a row, replicating index values.
New in version 0.25.0.
Parameters
columnIndexLabelColumn(s) to explode.
For multiple columns, specify a non-empty list in which each element
is a str or tuple, and the list-like data of all specified columns
on the same row of the frame must have matching lengths.
New in version 1.3.0: Multi-column explode
ignore_indexbool, default FalseIf True, the resulting index will be labeled 0, 1, …, n - 1.
New in version 1.1.0.
Returns
DataFrameExploded lists to rows of the subset columns;
index will be duplicated for these rows.
Raises
ValueError
If columns of the frame are not unique.
If specified columns to explode is empty list.
If the specified columns to explode do not have a matching count of
elements rowwise in the frame.
See also
DataFrame.unstackPivot a level of the (necessarily hierarchical) index labels.
DataFrame.meltUnpivot a DataFrame from wide format to long format.
Series.explodeExplode a DataFrame from list-like columns to long format.
Notes
This routine will explode list-likes including lists, tuples, sets,
Series, and np.ndarray. The result dtype of the subset rows will
be object. Scalars will be returned unchanged, and empty list-likes will
result in a np.nan for that row. In addition, the ordering of rows in the
output will be non-deterministic when exploding sets.
Reference the user guide for more examples.
Examples
>>> df = pd.DataFrame({'A': [[0, 1, 2], 'foo', [], [3, 4]],
... 'B': 1,
... 'C': [['a', 'b', 'c'], np.nan, [], ['d', 'e']]})
>>> df
A B C
0 [0, 1, 2] 1 [a, b, c]
1 foo 1 NaN
2 [] 1 []
3 [3, 4] 1 [d, e]
Single-column explode.
>>> df.explode('A')
A B C
0 0 1 [a, b, c]
0 1 1 [a, b, c]
0 2 1 [a, b, c]
1 foo 1 NaN
2 NaN 1 []
3 3 1 [d, e]
3 4 1 [d, e]
Multi-column explode.
>>> df.explode(list('AC'))
A B C
0 0 1 a
0 1 1 b
0 2 1 c
1 foo 1 NaN
2 NaN 1 NaN
3 3 1 d
3 4 1 e
| reference/api/pandas.DataFrame.explode.html |
pandas.tseries.offsets.Minute.is_month_start | `pandas.tseries.offsets.Minute.is_month_start`
Return boolean whether a timestamp occurs on the month start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
``` | Minute.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
| reference/api/pandas.tseries.offsets.Minute.is_month_start.html |
pandas.core.groupby.DataFrameGroupBy.backfill | `pandas.core.groupby.DataFrameGroupBy.backfill`
Backward fill the values.
Deprecated since version 1.4: Use bfill instead. | DataFrameGroupBy.backfill(limit=None)[source]#
Backward fill the values.
Deprecated since version 1.4: Use bfill instead.
Parameters
limitint, optionalLimit of how many values to fill.
Returns
Series or DataFrameObject with missing values filled.
| reference/api/pandas.core.groupby.DataFrameGroupBy.backfill.html |
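Since backfill is deprecated in favour of bfill, a minimal sketch of the recommended spelling on made-up data (values are filled backward within each group):
```
import numpy as np
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b", "b"],
                   "val": [np.nan, 1.0, np.nan, 2.0]})

# bfill() is the non-deprecated equivalent of backfill()
print(df.groupby("key").bfill())
# Each group's missing value is filled from the next value within that group,
# so column "val" becomes [1.0, 1.0, 2.0, 2.0]
```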
pandas.Timestamp.round | `pandas.Timestamp.round`
Round the Timestamp to the specified resolution.
```
>>> ts = pd.Timestamp('2020-03-14T15:32:52.192548651')
``` | Timestamp.round(freq, ambiguous='raise', nonexistent='raise')#
Round the Timestamp to the specified resolution.
Parameters
freqstrFrequency string indicating the rounding resolution.
ambiguousbool or {‘raise’, ‘NaT’}, default ‘raise’The behavior is as follows:
bool contains flags to determine if time is dst or not (note
that this flag is only applicable for ambiguous fall dst dates).
‘NaT’ will return NaT for an ambiguous time.
‘raise’ will raise an AmbiguousTimeError for an ambiguous time.
nonexistent{‘raise’, ‘shift_forward’, ‘shift_backward’, ‘NaT’, timedelta}, default ‘raise’A nonexistent time does not exist in a particular timezone
where clocks moved forward due to DST.
‘shift_forward’ will shift the nonexistent time forward to the
closest existing time.
‘shift_backward’ will shift the nonexistent time backward to the
closest existing time.
‘NaT’ will return NaT where there are nonexistent times.
timedelta objects will shift nonexistent times by the timedelta.
‘raise’ will raise a NonExistentTimeError if there are
nonexistent times.
Returns
a new Timestamp rounded to the given resolution of freq
Raises
ValueError if the freq cannot be converted
Notes
If the Timestamp has a timezone, rounding will take place relative to the
local (“wall”) time and re-localized to the same timezone. When rounding
near daylight savings time, use nonexistent and ambiguous to
control the re-localization behavior.
Examples
Create a timestamp object:
>>> ts = pd.Timestamp('2020-03-14T15:32:52.192548651')
A timestamp can be rounded using multiple frequency units:
>>> ts.round(freq='H') # hour
Timestamp('2020-03-14 16:00:00')
>>> ts.round(freq='T') # minute
Timestamp('2020-03-14 15:33:00')
>>> ts.round(freq='S') # seconds
Timestamp('2020-03-14 15:32:52')
>>> ts.round(freq='L') # milliseconds
Timestamp('2020-03-14 15:32:52.193000')
freq can also be a multiple of a single unit, like ‘5T’ (i.e. 5 minutes):
>>> ts.round(freq='5T')
Timestamp('2020-03-14 15:35:00')
or a combination of multiple units, like ‘1H30T’ (i.e. 1 hour and 30 minutes):
>>> ts.round(freq='1H30T')
Timestamp('2020-03-14 15:00:00')
Analogous for pd.NaT:
>>> pd.NaT.round()
NaT
When rounding near a daylight savings time transition, use ambiguous or
nonexistent to control how the timestamp should be re-localized.
>>> ts_tz = pd.Timestamp("2021-10-31 01:30:00").tz_localize("Europe/Amsterdam")
>>> ts_tz.round("H", ambiguous=False)
Timestamp('2021-10-31 02:00:00+0100', tz='Europe/Amsterdam')
>>> ts_tz.round("H", ambiguous=True)
Timestamp('2021-10-31 02:00:00+0200', tz='Europe/Amsterdam')
| reference/api/pandas.Timestamp.round.html |
pandas.tseries.offsets.BYearEnd.is_month_start | `pandas.tseries.offsets.BYearEnd.is_month_start`
Return boolean whether a timestamp occurs on the month start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
``` | BYearEnd.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
| reference/api/pandas.tseries.offsets.BYearEnd.is_month_start.html |
pandas.arrays.IntervalArray.is_empty | `pandas.arrays.IntervalArray.is_empty`
Indicates if an interval is empty, meaning it contains no points.
```
>>> pd.Interval(0, 1, closed='right').is_empty
False
``` | IntervalArray.is_empty#
Indicates if an interval is empty, meaning it contains no points.
New in version 0.25.0.
Returns
bool or ndarrayA boolean indicating if a scalar Interval is empty, or a
boolean ndarray positionally indicating if an Interval in
an IntervalArray or IntervalIndex is
empty.
Examples
An Interval that contains points is not empty:
>>> pd.Interval(0, 1, closed='right').is_empty
False
An Interval that does not contain any points is empty:
>>> pd.Interval(0, 0, closed='right').is_empty
True
>>> pd.Interval(0, 0, closed='left').is_empty
True
>>> pd.Interval(0, 0, closed='neither').is_empty
True
An Interval that contains a single point is not empty:
>>> pd.Interval(0, 0, closed='both').is_empty
False
An IntervalArray or IntervalIndex returns a
boolean ndarray positionally indicating if an Interval is
empty:
>>> ivs = [pd.Interval(0, 0, closed='neither'),
... pd.Interval(1, 2, closed='neither')]
>>> pd.arrays.IntervalArray(ivs).is_empty
array([ True, False])
Missing values are not considered empty:
>>> ivs = [pd.Interval(0, 0, closed='neither'), np.nan]
>>> pd.IntervalIndex(ivs).is_empty
array([ True, False])
| reference/api/pandas.arrays.IntervalArray.is_empty.html |
pandas.tseries.offsets.MonthBegin.base | `pandas.tseries.offsets.MonthBegin.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal. | MonthBegin.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
| reference/api/pandas.tseries.offsets.MonthBegin.base.html |
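A tiny sketch of what base returns for an offset with n != 1:
```
import pandas as pd

offset = pd.offsets.MonthBegin(3)

print(offset.base)      # <MonthBegin>: a copy of the offset with n=1
print(offset.base.n)    # 1
print(offset.n)         # 3 -- the original offset is unchanged
```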
pandas.io.formats.style.Styler.where | `pandas.io.formats.style.Styler.where`
Apply CSS-styles based on a conditional function elementwise.
```
>>> df = pd.DataFrame([[1, 2], [3, 4]])
>>> def cond(v, limit=4):
... return v > 1 and v != limit
>>> df.style.where(cond, value='color:green;', other='color:red;')
...
``` | Styler.where(cond, value, other=None, subset=None, **kwargs)[source]#
Apply CSS-styles based on a conditional function elementwise.
Deprecated since version 1.3.0.
Updates the HTML representation with a style which is
selected in accordance with the return value of a function.
Parameters
condcallablecond should take a scalar, and optional keyword arguments, and return
a boolean.
valuestrApplied when cond returns true.
otherstrApplied when cond returns false.
subsetlabel, array-like, IndexSlice, optionalA valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input
or single key, to DataFrame.loc[:, <subset>] where the columns are
prioritised, to limit data to before applying the function.
**kwargsdictPass along to cond.
Returns
selfStyler
See also
Styler.applymapApply a CSS-styling function elementwise.
Styler.applyApply a CSS-styling function column-wise, row-wise, or table-wise.
Notes
This method is deprecated.
This method is a convenience wrapper for Styler.applymap(), which we
recommend using instead.
The example:
>>> df = pd.DataFrame([[1, 2], [3, 4]])
>>> def cond(v, limit=4):
... return v > 1 and v != limit
>>> df.style.where(cond, value='color:green;', other='color:red;')
...
should be refactored to:
>>> def style_func(v, value, other, limit=4):
... cond = v > 1 and v != limit
... return value if cond else other
>>> df.style.applymap(style_func, value='color:green;', other='color:red;')
...
| reference/api/pandas.io.formats.style.Styler.where.html |
pandas.Series.str.rsplit | `pandas.Series.str.rsplit`
Split strings around given separator/delimiter.
```
>>> s = pd.Series(
... [
... "this is a regular sentence",
... "https://docs.python.org/3/tutorial/index.html",
... np.nan
... ]
... )
>>> s
0 this is a regular sentence
1 https://docs.python.org/3/tutorial/index.html
2 NaN
dtype: object
``` | Series.str.rsplit(pat=None, *, n=- 1, expand=False)[source]#
Split strings around given separator/delimiter.
Splits the string in the Series/Index from the end,
at the specified delimiter string.
Parameters
patstr, optionalString to split on.
If not specified, split on whitespace.
nint, default -1 (all)Limit number of splits in output.
None, 0 and -1 will be interpreted as return all splits.
expandbool, default FalseExpand the split strings into separate columns.
If True, return DataFrame/MultiIndex expanding dimensionality.
If False, return Series/Index, containing lists of strings.
Returns
Series, Index, DataFrame or MultiIndexType matches caller unless expand=True (see Notes).
See also
Series.str.splitSplit strings around given separator/delimiter.
Series.str.rsplitSplits string around given separator/delimiter, starting from the right.
Series.str.joinJoin lists contained as elements in the Series/Index with passed delimiter.
str.splitStandard library version for split.
str.rsplitStandard library version for rsplit.
Notes
The handling of the n keyword depends on the number of found splits:
If found splits > n, make first n splits only
If found splits <= n, make all splits
If for a certain row the number of found splits < n,
append None for padding up to n if expand=True
If using expand=True, Series and Index callers return DataFrame and
MultiIndex objects, respectively.
Examples
>>> s = pd.Series(
... [
... "this is a regular sentence",
... "https://docs.python.org/3/tutorial/index.html",
... np.nan
... ]
... )
>>> s
0 this is a regular sentence
1 https://docs.python.org/3/tutorial/index.html
2 NaN
dtype: object
In the default setting, the string is split by whitespace.
>>> s.str.split()
0 [this, is, a, regular, sentence]
1 [https://docs.python.org/3/tutorial/index.html]
2 NaN
dtype: object
Without the n parameter, the outputs of rsplit and split
are identical.
>>> s.str.rsplit()
0 [this, is, a, regular, sentence]
1 [https://docs.python.org/3/tutorial/index.html]
2 NaN
dtype: object
The n parameter can be used to limit the number of splits on the
delimiter. The outputs of split and rsplit are different.
>>> s.str.split(n=2)
0 [this, is, a regular sentence]
1 [https://docs.python.org/3/tutorial/index.html]
2 NaN
dtype: object
>>> s.str.rsplit(n=2)
0 [this is a, regular, sentence]
1 [https://docs.python.org/3/tutorial/index.html]
2 NaN
dtype: object
The pat parameter can be used to split by other characters.
>>> s.str.split(pat="/")
0 [this is a regular sentence]
1 [https:, , docs.python.org, 3, tutorial, index...
2 NaN
dtype: object
When using expand=True, the split elements will expand out into
separate columns. If NaN is present, it is propagated throughout
the columns during the split.
>>> s.str.split(expand=True)
0 1 2 3 4
0 this is a regular sentence
1 https://docs.python.org/3/tutorial/index.html None None None None
2 NaN NaN NaN NaN NaN
For slightly more complex use cases like splitting the html document name
from a url, a combination of parameter settings can be used.
>>> s.str.rsplit("/", n=1, expand=True)
0 1
0 this is a regular sentence None
1 https://docs.python.org/3/tutorial index.html
2 NaN NaN
| reference/api/pandas.Series.str.rsplit.html |
pandas.Interval.closed_left | `pandas.Interval.closed_left`
Check if the interval is closed on the left side.
For the meaning of closed and open see Interval. | Interval.closed_left#
Check if the interval is closed on the left side.
For the meaning of closed and open see Interval.
Returns
boolTrue if the Interval is closed on the left-side.
| reference/api/pandas.Interval.closed_left.html |
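A short sketch of closed_left for the different closed options:
```
import pandas as pd

print(pd.Interval(0, 5, closed="left").closed_left)     # True
print(pd.Interval(0, 5, closed="both").closed_left)     # True
print(pd.Interval(0, 5, closed="right").closed_left)    # False
print(pd.Interval(0, 5, closed="neither").closed_left)  # False
```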
pandas.PeriodIndex.dayofyear | `pandas.PeriodIndex.dayofyear`
The ordinal day of the year. | property PeriodIndex.dayofyear[source]#
The ordinal day of the year.
| reference/api/pandas.PeriodIndex.dayofyear.html |
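A small sketch of dayofyear on a daily PeriodIndex (2022 is not a leap year, so December 31 is day 365):
```
import pandas as pd

idx = pd.PeriodIndex(["2022-03-01", "2022-12-31"], freq="D")
print(idx.dayofyear)   # roughly [60, 365]
```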
pandas.DataFrame.sem | `pandas.DataFrame.sem`
Return unbiased standard error of the mean over requested axis. | DataFrame.sem(axis=None, skipna=True, level=None, ddof=1, numeric_only=None, **kwargs)[source]#
Return unbiased standard error of the mean over requested axis.
Normalized by N-1 by default. This can be changed using the ddof argument
Parameters
axis{index (0), columns (1)}For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values. If an entire row/column is NA, the result
will be NA.
levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a Series.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
ddofint, default 1Delta Degrees of Freedom. The divisor used in calculations is N - ddof,
where N represents the number of elements.
numeric_onlybool, default NoneInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data. Not implemented for Series.
Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be
False in a future version of pandas.
Returns
Series or DataFrame (if level specified)
| reference/api/pandas.DataFrame.sem.html |
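The sem entry explains the N - ddof normalisation but shows no example; a minimal sketch comparing it to the same quantity computed by hand:
```
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, 2.0, 3.0, 4.0]})

print(df.sem())                                  # standard error of the mean per column
print(df["a"].std(ddof=1) / np.sqrt(len(df)))    # same value computed by hand
```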
Resampling | Resampling | Resampler objects are returned by resample calls: pandas.DataFrame.resample(), pandas.Series.resample().
Indexing, iteration#
Resampler.__iter__()
Groupby iterator.
Resampler.groups
Dict {group name -> group labels}.
Resampler.indices
Dict {group name -> group indices}.
Resampler.get_group(name[, obj])
Construct DataFrame from group with provided name.
Function application#
Resampler.apply([func])
Aggregate using one or more operations over the specified axis.
Resampler.aggregate([func])
Aggregate using one or more operations over the specified axis.
Resampler.transform(arg, *args, **kwargs)
Call function producing a like-indexed Series on each group.
Resampler.pipe(func, *args, **kwargs)
Apply a func with arguments to this Resampler object and return its result.
Upsampling#
Resampler.ffill([limit])
Forward fill the values.
Resampler.backfill([limit])
(DEPRECATED) Backward fill the values.
Resampler.bfill([limit])
Backward fill the new missing values in the resampled data.
Resampler.pad([limit])
(DEPRECATED) Forward fill the values.
Resampler.nearest([limit])
Resample by using the nearest value.
Resampler.fillna(method[, limit])
Fill missing values introduced by upsampling.
Resampler.asfreq([fill_value])
Return the values at the new freq, essentially a reindex.
Resampler.interpolate([method, axis, limit, ...])
Interpolate values according to different methods.
Computations / descriptive stats#
Resampler.count()
Compute count of group, excluding missing values.
Resampler.nunique(*args, **kwargs)
Return number of unique elements in the group.
Resampler.first([numeric_only, min_count])
Compute the first non-null entry of each column.
Resampler.last([numeric_only, min_count])
Compute the last non-null entry of each column.
Resampler.max([numeric_only, min_count])
Compute max of group values.
Resampler.mean([numeric_only])
Compute mean of groups, excluding missing values.
Resampler.median([numeric_only])
Compute median of groups, excluding missing values.
Resampler.min([numeric_only, min_count])
Compute min of group values.
Resampler.ohlc(*args, **kwargs)
Compute open, high, low and close values of a group, excluding missing values.
Resampler.prod([numeric_only, min_count])
Compute prod of group values.
Resampler.size()
Compute group sizes.
Resampler.sem([ddof, numeric_only])
Compute standard error of the mean of groups, excluding missing values.
Resampler.std([ddof, numeric_only])
Compute standard deviation of groups, excluding missing values.
Resampler.sum([numeric_only, min_count])
Compute sum of group values.
Resampler.var([ddof, numeric_only])
Compute variance of groups, excluding missing values.
Resampler.quantile([q])
Return value at the given quantile.
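A minimal usage sketch (not part of the original reference listing), assuming a small synthetic daily Series:
>>> idx = pd.date_range("2022-01-01", periods=6, freq="D")
>>> s = pd.Series(range(6), index=idx)
>>> r = s.resample("2D")      # resample() returns a Resampler object
>>> r.sum()                   # aggregate each 2-day bin
2022-01-01    1
2022-01-03    5
2022-01-05    9
Freq: 2D, dtype: int64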
| reference/resampling.html |
pandas.tseries.offsets.Tick.is_on_offset | `pandas.tseries.offsets.Tick.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
``` | Tick.is_on_offset()#
Return boolean whether a timestamp intersects with this frequency.
Parameters
dtdatetime.datetimeTimestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
| reference/api/pandas.tseries.offsets.Tick.is_on_offset.html |
pandas.tseries.offsets.BYearBegin.apply | pandas.tseries.offsets.BYearBegin.apply | BYearBegin.apply()#
| reference/api/pandas.tseries.offsets.BYearBegin.apply.html |
pandas.DatetimeIndex.std | `pandas.DatetimeIndex.std`
Return sample standard deviation over requested axis. | DatetimeIndex.std(*args, **kwargs)[source]#
Return sample standard deviation over requested axis.
Normalized by N-1 by default. This can be changed using the ddof argument.
Parameters
axisint optional, default NoneAxis for the function to be applied on.
For Series this parameter is unused and defaults to None.
ddofint, default 1Degrees of Freedom. The divisor used in calculations is N - ddof,
where N represents the number of elements.
skipnabool, default TrueExclude NA/null values. If an entire row/column is NA, the result will be
NA.
Returns
Timedelta
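A minimal illustration with an assumed three-day index; the result is returned as a Timedelta:
>>> idx = pd.DatetimeIndex(["2021-01-01", "2021-01-02", "2021-01-03"])
>>> idx.std()
Timedelta('1 days 00:00:00')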
| reference/api/pandas.DatetimeIndex.std.html |
pandas.errors.EmptyDataError | `pandas.errors.EmptyDataError`
Exception raised in pd.read_csv when empty data or header is encountered. | exception pandas.errors.EmptyDataError[source]#
Exception raised in pd.read_csv when empty data or header is encountered.
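A minimal sketch of when this exception is raised, using an in-memory empty CSV buffer:
>>> import io
>>> try:
...     pd.read_csv(io.StringIO(""))
... except pd.errors.EmptyDataError as err:
...     print(err)
No columns to parse from file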
| reference/api/pandas.errors.EmptyDataError.html |
pandas.Series.drop | `pandas.Series.drop`
Return Series with specified index labels removed.
```
>>> s = pd.Series(data=np.arange(3), index=['A', 'B', 'C'])
>>> s
A 0
B 1
C 2
dtype: int64
``` | Series.drop(labels=None, *, axis=0, index=None, columns=None, level=None, inplace=False, errors='raise')[source]#
Return Series with specified index labels removed.
Remove elements of a Series based on specifying the index labels.
When using a multi-index, labels on different levels can be removed
by specifying the level.
Parameters
labelssingle label or list-likeIndex labels to drop.
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
indexsingle label or list-likeRedundant for application on Series, but ‘index’ can be used instead
of ‘labels’.
columnssingle label or list-likeNo change is made to the Series; use ‘index’ or ‘labels’ instead.
levelint or level name, optionalFor MultiIndex, level for which the labels will be removed.
inplacebool, default FalseIf True, do operation inplace and return None.
errors{‘ignore’, ‘raise’}, default ‘raise’If ‘ignore’, suppress error and only existing labels are dropped.
Returns
Series or NoneSeries with specified index labels removed or None if inplace=True.
Raises
KeyErrorIf none of the labels are found in the index.
See also
Series.reindexReturn only specified index labels of Series.
Series.dropnaReturn series without null values.
Series.drop_duplicatesReturn Series with duplicate values removed.
DataFrame.dropDrop specified labels from rows or columns.
Examples
>>> s = pd.Series(data=np.arange(3), index=['A', 'B', 'C'])
>>> s
A 0
B 1
C 2
dtype: int64
Drop labels B and C
>>> s.drop(labels=['B', 'C'])
A 0
dtype: int64
Drop 2nd level label in MultiIndex Series
>>> midx = pd.MultiIndex(levels=[['lama', 'cow', 'falcon'],
... ['speed', 'weight', 'length']],
... codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2],
... [0, 1, 2, 0, 1, 2, 0, 1, 2]])
>>> s = pd.Series([45, 200, 1.2, 30, 250, 1.5, 320, 1, 0.3],
... index=midx)
>>> s
lama speed 45.0
weight 200.0
length 1.2
cow speed 30.0
weight 250.0
length 1.5
falcon speed 320.0
weight 1.0
length 0.3
dtype: float64
>>> s.drop(labels='weight', level=1)
lama speed 45.0
length 1.2
cow speed 30.0
length 1.5
falcon speed 320.0
length 0.3
dtype: float64
| reference/api/pandas.Series.drop.html |
pandas.Series.drop_duplicates | `pandas.Series.drop_duplicates`
Return Series with duplicate values removed.
Method to handle dropping duplicates:
```
>>> s = pd.Series(['lama', 'cow', 'lama', 'beetle', 'lama', 'hippo'],
... name='animal')
>>> s
0 lama
1 cow
2 lama
3 beetle
4 lama
5 hippo
Name: animal, dtype: object
``` | Series.drop_duplicates(*, keep='first', inplace=False)[source]#
Return Series with duplicate values removed.
Parameters
keep{‘first’, ‘last’, False}, default ‘first’Method to handle dropping duplicates:
‘first’ : Drop duplicates except for the first occurrence.
‘last’ : Drop duplicates except for the last occurrence.
False : Drop all duplicates.
inplacebool, default FalseIf True, performs operation inplace and returns None.
Returns
Series or NoneSeries with duplicates dropped or None if inplace=True.
See also
Index.drop_duplicatesEquivalent method on Index.
DataFrame.drop_duplicatesEquivalent method on DataFrame.
Series.duplicatedRelated method on Series, indicating duplicate Series values.
Series.uniqueReturn unique values as an array.
Examples
Generate a Series with duplicated entries.
>>> s = pd.Series(['lama', 'cow', 'lama', 'beetle', 'lama', 'hippo'],
... name='animal')
>>> s
0 lama
1 cow
2 lama
3 beetle
4 lama
5 hippo
Name: animal, dtype: object
With the ‘keep’ parameter, the selection behaviour of duplicated values
can be changed. The value ‘first’ keeps the first occurrence for each
set of duplicated entries. The default value of keep is ‘first’.
>>> s.drop_duplicates()
0 lama
1 cow
3 beetle
5 hippo
Name: animal, dtype: object
The value ‘last’ for parameter ‘keep’ keeps the last occurrence for
each set of duplicated entries.
>>> s.drop_duplicates(keep='last')
1 cow
3 beetle
4 lama
5 hippo
Name: animal, dtype: object
The value False for parameter ‘keep’ discards all sets of
duplicated entries. Setting the value of ‘inplace’ to True performs
the operation inplace and returns None.
>>> s.drop_duplicates(keep=False, inplace=True)
>>> s
1 cow
3 beetle
5 hippo
Name: animal, dtype: object
| reference/api/pandas.Series.drop_duplicates.html |
pandas.tseries.offsets.CustomBusinessHour.calendar | pandas.tseries.offsets.CustomBusinessHour.calendar | CustomBusinessHour.calendar#
| reference/api/pandas.tseries.offsets.CustomBusinessHour.calendar.html |
pandas.Series.str.contains | `pandas.Series.str.contains`
Test if pattern or regex is contained within a string of a Series or Index.
```
>>> s1 = pd.Series(['Mouse', 'dog', 'house and parrot', '23', np.NaN])
>>> s1.str.contains('og', regex=False)
0 False
1 True
2 False
3 False
4 NaN
dtype: object
``` | Series.str.contains(pat, case=True, flags=0, na=None, regex=True)[source]#
Test if pattern or regex is contained within a string of a Series or Index.
Return boolean Series or Index based on whether a given pattern or regex is
contained within a string of a Series or Index.
Parameters
patstrCharacter sequence or regular expression.
casebool, default TrueIf True, case sensitive.
flagsint, default 0 (no flags)Flags to pass through to the re module, e.g. re.IGNORECASE.
nascalar, optionalFill value for missing values. The default depends on dtype of the
array. For object-dtype, numpy.nan is used. For StringDtype,
pandas.NA is used.
regexbool, default TrueIf True, assumes the pat is a regular expression.
If False, treats the pat as a literal string.
Returns
Series or Index of boolean valuesA Series or Index of boolean values indicating whether the
given pattern is contained within the string of each element
of the Series or Index.
See also
matchAnalogous, but stricter, relying on re.match instead of re.search.
Series.str.startswithTest if the start of each string element matches a pattern.
Series.str.endswithSame as startswith, but tests the end of string.
Examples
Returning a Series of booleans using only a literal pattern.
>>> s1 = pd.Series(['Mouse', 'dog', 'house and parrot', '23', np.NaN])
>>> s1.str.contains('og', regex=False)
0 False
1 True
2 False
3 False
4 NaN
dtype: object
Returning an Index of booleans using only a literal pattern.
>>> ind = pd.Index(['Mouse', 'dog', 'house and parrot', '23.0', np.NaN])
>>> ind.str.contains('23', regex=False)
Index([False, False, False, True, nan], dtype='object')
Specifying case sensitivity using case.
>>> s1.str.contains('oG', case=True, regex=True)
0 False
1 False
2 False
3 False
4 NaN
dtype: object
Specifying na to be False instead of NaN replaces NaN values
with False. If Series or Index does not contain NaN values
the resultant dtype will be bool, otherwise, an object dtype.
>>> s1.str.contains('og', na=False, regex=True)
0 False
1 True
2 False
3 False
4 False
dtype: bool
Returning ‘house’ or ‘dog’ when either expression occurs in a string.
>>> s1.str.contains('house|dog', regex=True)
0 False
1 True
2 True
3 False
4 NaN
dtype: object
Ignoring case sensitivity using flags with regex.
>>> import re
>>> s1.str.contains('PARROT', flags=re.IGNORECASE, regex=True)
0 False
1 False
2 True
3 False
4 NaN
dtype: object
Returning any digit using regular expression.
>>> s1.str.contains('\\d', regex=True)
0 False
1 False
2 False
3 True
4 NaN
dtype: object
Ensure pat is not interpreted as a literal pattern when regex is set to True.
Note in the following example one might expect only s2[1] and s2[3] to
return True. However, ‘.0’ as a regex matches any character
followed by a 0.
>>> s2 = pd.Series(['40', '40.0', '41', '41.0', '35'])
>>> s2.str.contains('.0', regex=True)
0 True
1 True
2 False
3 True
4 False
dtype: bool
| reference/api/pandas.Series.str.contains.html |
pandas.tseries.offsets.Week.onOffset | pandas.tseries.offsets.Week.onOffset | Week.onOffset()#
| reference/api/pandas.tseries.offsets.Week.onOffset.html |
pandas.tseries.offsets.QuarterEnd.base | `pandas.tseries.offsets.QuarterEnd.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal. | QuarterEnd.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
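A minimal sketch: base keeps every attribute except the multiple n, which is reset to 1. The startingMonth=2 offset below is assumed purely for illustration:
>>> offset = pd.offsets.QuarterEnd(n=3, startingMonth=2)
>>> offset.base.n
1
>>> offset.base.startingMonth
2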
| reference/api/pandas.tseries.offsets.QuarterEnd.base.html |
pandas.Timestamp.ceil | `pandas.Timestamp.ceil`
Return a new Timestamp ceiled to this resolution.
Frequency string indicating the ceiling resolution.
```
>>> ts = pd.Timestamp('2020-03-14T15:32:52.192548651')
``` | Timestamp.ceil(freq, ambiguous='raise', nonexistent='raise')#
Return a new Timestamp ceiled to this resolution.
Parameters
freqstrFrequency string indicating the ceiling resolution.
ambiguousbool or {‘raise’, ‘NaT’}, default ‘raise’The behavior is as follows:
bool contains flags to determine if time is dst or not (note
that this flag is only applicable for ambiguous fall dst dates).
‘NaT’ will return NaT for an ambiguous time.
‘raise’ will raise an AmbiguousTimeError for an ambiguous time.
nonexistent{‘raise’, ‘shift_forward’, ‘shift_backward, ‘NaT’, timedelta}, default ‘raise’A nonexistent time does not exist in a particular timezone
where clocks moved forward due to DST.
‘shift_forward’ will shift the nonexistent time forward to the
closest existing time.
‘shift_backward’ will shift the nonexistent time backward to the
closest existing time.
‘NaT’ will return NaT where there are nonexistent times.
timedelta objects will shift nonexistent times by the timedelta.
‘raise’ will raise an NonExistentTimeError if there are
nonexistent times.
Raises
ValueError if the freq cannot be converted.
Notes
If the Timestamp has a timezone, ceiling will take place relative to the
local (“wall”) time and re-localized to the same timezone. When ceiling
near daylight savings time, use nonexistent and ambiguous to
control the re-localization behavior.
Examples
Create a timestamp object:
>>> ts = pd.Timestamp('2020-03-14T15:32:52.192548651')
A timestamp can be ceiled using multiple frequency units:
>>> ts.ceil(freq='H') # hour
Timestamp('2020-03-14 16:00:00')
>>> ts.ceil(freq='T') # minute
Timestamp('2020-03-14 15:33:00')
>>> ts.ceil(freq='S') # seconds
Timestamp('2020-03-14 15:32:53')
>>> ts.ceil(freq='U') # microseconds
Timestamp('2020-03-14 15:32:52.192549')
freq can also be a multiple of a single unit, like ‘5T’ (i.e. 5 minutes):
>>> ts.ceil(freq='5T')
Timestamp('2020-03-14 15:35:00')
or a combination of multiple units, like ‘1H30T’ (i.e. 1 hour and 30 minutes):
>>> ts.ceil(freq='1H30T')
Timestamp('2020-03-14 16:30:00')
Analogous for pd.NaT:
>>> pd.NaT.ceil()
NaT
When rounding near a daylight savings time transition, use ambiguous or
nonexistent to control how the timestamp should be re-localized.
>>> ts_tz = pd.Timestamp("2021-10-31 01:30:00").tz_localize("Europe/Amsterdam")
>>> ts_tz.ceil("H", ambiguous=False)
Timestamp('2021-10-31 02:00:00+0100', tz='Europe/Amsterdam')
>>> ts_tz.ceil("H", ambiguous=True)
Timestamp('2021-10-31 02:00:00+0200', tz='Europe/Amsterdam')
| reference/api/pandas.Timestamp.ceil.html |
pandas.api.types.is_extension_type | `pandas.api.types.is_extension_type`
Check whether an array-like is of a pandas extension class instance.
```
>>> is_extension_type([1, 2, 3])
False
>>> is_extension_type(np.array([1, 2, 3]))
False
>>>
>>> cat = pd.Categorical([1, 2, 3])
>>>
>>> is_extension_type(cat)
True
>>> is_extension_type(pd.Series(cat))
True
>>> is_extension_type(pd.arrays.SparseArray([1, 2, 3]))
True
>>> from scipy.sparse import bsr_matrix
>>> is_extension_type(bsr_matrix([1, 2, 3]))
False
>>> is_extension_type(pd.DatetimeIndex([1, 2, 3]))
False
>>> is_extension_type(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
True
>>>
>>> dtype = DatetimeTZDtype("ns", tz="US/Eastern")
>>> s = pd.Series([], dtype=dtype)
>>> is_extension_type(s)
True
``` | pandas.api.types.is_extension_type(arr)[source]#
Check whether an array-like is of a pandas extension class instance.
Deprecated since version 1.0.0: Use is_extension_array_dtype instead.
Extension classes include categoricals, pandas sparse objects (i.e.
classes represented within the pandas library and not ones external
to it like scipy sparse matrices), and datetime-like arrays.
Parameters
arrarray-like, scalarThe array-like to check.
Returns
booleanWhether or not the array-like is of a pandas extension class instance.
Examples
>>> is_extension_type([1, 2, 3])
False
>>> is_extension_type(np.array([1, 2, 3]))
False
>>>
>>> cat = pd.Categorical([1, 2, 3])
>>>
>>> is_extension_type(cat)
True
>>> is_extension_type(pd.Series(cat))
True
>>> is_extension_type(pd.arrays.SparseArray([1, 2, 3]))
True
>>> from scipy.sparse import bsr_matrix
>>> is_extension_type(bsr_matrix([1, 2, 3]))
False
>>> is_extension_type(pd.DatetimeIndex([1, 2, 3]))
False
>>> is_extension_type(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
True
>>>
>>> dtype = DatetimeTZDtype("ns", tz="US/Eastern")
>>> s = pd.Series([], dtype=dtype)
>>> is_extension_type(s)
True
| reference/api/pandas.api.types.is_extension_type.html |
pandas.Series.str.removeprefix | `pandas.Series.str.removeprefix`
Remove a prefix from an object series.
If the prefix is not present, the original string will be returned.
```
>>> s = pd.Series(["str_foo", "str_bar", "no_prefix"])
>>> s
0 str_foo
1 str_bar
2 no_prefix
dtype: object
>>> s.str.removeprefix("str_")
0 foo
1 bar
2 no_prefix
dtype: object
``` | Series.str.removeprefix(prefix)[source]#
Remove a prefix from an object series.
If the prefix is not present, the original string will be returned.
Parameters
prefixstrRemove the prefix of the string.
Returns
Series/Index: objectThe Series or Index with given prefix removed.
See also
Series.str.removesuffixRemove a suffix from an object series.
Examples
>>> s = pd.Series(["str_foo", "str_bar", "no_prefix"])
>>> s
0 str_foo
1 str_bar
2 no_prefix
dtype: object
>>> s.str.removeprefix("str_")
0 foo
1 bar
2 no_prefix
dtype: object
>>> s = pd.Series(["foo_str", "bar_str", "no_suffix"])
>>> s
0 foo_str
1 bar_str
2 no_suffix
dtype: object
>>> s.str.removesuffix("_str")
0 foo
1 bar
2 no_suffix
dtype: object
| reference/api/pandas.Series.str.removeprefix.html |
pandas.tseries.offsets.FY5253.normalize | pandas.tseries.offsets.FY5253.normalize | FY5253.normalize#
| reference/api/pandas.tseries.offsets.FY5253.normalize.html |
pandas.tseries.offsets.QuarterBegin.is_year_end | `pandas.tseries.offsets.QuarterBegin.is_year_end`
Return boolean whether a timestamp occurs on the year end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
``` | QuarterBegin.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
| reference/api/pandas.tseries.offsets.QuarterBegin.is_year_end.html |
pandas.tseries.offsets.Second.is_year_start | `pandas.tseries.offsets.Second.is_year_start`
Return boolean whether a timestamp occurs on the year start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
``` | Second.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
| reference/api/pandas.tseries.offsets.Second.is_year_start.html |
pandas.Categorical | `pandas.Categorical`
Represent a categorical variable in classic R / S-plus fashion.
Categoricals can take on only a limited, and usually fixed, number
of possible values (categories). In contrast to statistical categorical
variables, a Categorical might have an order, but numerical operations
(additions, divisions, …) are not possible.
```
>>> pd.Categorical([1, 2, 3, 1, 2, 3])
[1, 2, 3, 1, 2, 3]
Categories (3, int64): [1, 2, 3]
``` | class pandas.Categorical(values, categories=None, ordered=None, dtype=None, fastpath=False, copy=True)[source]#
Represent a categorical variable in classic R / S-plus fashion.
Categoricals can take on only a limited, and usually fixed, number
of possible values (categories). In contrast to statistical categorical
variables, a Categorical might have an order, but numerical operations
(additions, divisions, …) are not possible.
All values of the Categorical are either in categories or np.nan.
Assigning values outside of categories will raise a ValueError. Order
is defined by the order of the categories, not lexical order of the
values.
Parameters
valueslist-likeThe values of the categorical. If categories are given, values not in
categories will be replaced with NaN.
categoriesIndex-like (unique), optionalThe unique categories for this categorical. If not given, the
categories are assumed to be the unique values of values (sorted, if
possible, otherwise in the order in which they appear).
orderedbool, default FalseWhether or not this categorical is treated as an ordered categorical.
If True, the resulting categorical will be ordered.
An ordered categorical respects, when sorted, the order of its
categories attribute (which in turn is the categories argument, if
provided).
dtypeCategoricalDtypeAn instance of CategoricalDtype to use for this categorical.
Raises
ValueErrorIf the categories do not validate.
TypeErrorIf an explicit ordered=True is given but no categories and the
values are not sortable.
See also
CategoricalDtypeType for categorical data.
CategoricalIndexAn Index with an underlying Categorical.
Notes
See the user guide
for more.
Examples
>>> pd.Categorical([1, 2, 3, 1, 2, 3])
[1, 2, 3, 1, 2, 3]
Categories (3, int64): [1, 2, 3]
>>> pd.Categorical(['a', 'b', 'c', 'a', 'b', 'c'])
['a', 'b', 'c', 'a', 'b', 'c']
Categories (3, object): ['a', 'b', 'c']
Missing values are not included as a category.
>>> c = pd.Categorical([1, 2, 3, 1, 2, 3, np.nan])
>>> c
[1, 2, 3, 1, 2, 3, NaN]
Categories (3, int64): [1, 2, 3]
However, their presence is indicated in the codes attribute
by code -1.
>>> c.codes
array([ 0, 1, 2, 0, 1, 2, -1], dtype=int8)
Ordered Categoricals can be sorted according to the custom order
of the categories and can have a min and max value.
>>> c = pd.Categorical(['a', 'b', 'c', 'a', 'b', 'c'], ordered=True,
... categories=['c', 'b', 'a'])
>>> c
['a', 'b', 'c', 'a', 'b', 'c']
Categories (3, object): ['c' < 'b' < 'a']
>>> c.min()
'c'
Attributes
categories
The categories of this categorical.
codes
The category codes of this categorical.
ordered
Whether the categories have an ordered relationship.
dtype
The CategoricalDtype for this instance.
Methods
from_codes(codes[, categories, ordered, dtype])
Make a Categorical type from codes and categories or dtype.
__array__([dtype])
The numpy array interface.
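A minimal sketch of from_codes, listed above; the code -1 marks a missing value:
>>> pd.Categorical.from_codes([0, 1, 0, -1], categories=['a', 'b'])
['a', 'b', 'a', NaN]
Categories (2, object): ['a', 'b']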
| reference/api/pandas.Categorical.html |
pandas.tseries.offsets.YearBegin.name | `pandas.tseries.offsets.YearBegin.name`
Return a string representing the base frequency.
Examples
```
>>> pd.offsets.Hour().name
'H'
``` | YearBegin.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
| reference/api/pandas.tseries.offsets.YearBegin.name.html |
pandas.DataFrame.shape | `pandas.DataFrame.shape`
Return a tuple representing the dimensionality of the DataFrame.
```
>>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> df.shape
(2, 2)
``` | property DataFrame.shape[source]#
Return a tuple representing the dimensionality of the DataFrame.
See also
ndarray.shapeTuple of array dimensions.
Examples
>>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> df.shape
(2, 2)
>>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4],
... 'col3': [5, 6]})
>>> df.shape
(2, 3)
| reference/api/pandas.DataFrame.shape.html |
pandas.DataFrame.to_excel | `pandas.DataFrame.to_excel`
Write object to an Excel sheet.
To write a single object to an Excel .xlsx file it is only necessary to
specify a target file name. To write to multiple sheets it is necessary to
create an ExcelWriter object with a target file name, and specify a sheet
in the file to write to.
```
>>> df1 = pd.DataFrame([['a', 'b'], ['c', 'd']],
... index=['row 1', 'row 2'],
... columns=['col 1', 'col 2'])
>>> df1.to_excel("output.xlsx")
``` | DataFrame.to_excel(excel_writer, sheet_name='Sheet1', na_rep='', float_format=None, columns=None, header=True, index=True, index_label=None, startrow=0, startcol=0, engine=None, merge_cells=True, encoding=_NoDefault.no_default, inf_rep='inf', verbose=_NoDefault.no_default, freeze_panes=None, storage_options=None)[source]#
Write object to an Excel sheet.
To write a single object to an Excel .xlsx file it is only necessary to
specify a target file name. To write to multiple sheets it is necessary to
create an ExcelWriter object with a target file name, and specify a sheet
in the file to write to.
Multiple sheets may be written to by specifying unique sheet_name.
With all data written to the file it is necessary to save the changes.
Note that creating an ExcelWriter object with a file name that already
exists will result in the contents of the existing file being erased.
Parameters
excel_writerpath-like, file-like, or ExcelWriter objectFile path or existing ExcelWriter.
sheet_namestr, default ‘Sheet1’Name of sheet which will contain DataFrame.
na_repstr, default ‘’Missing data representation.
float_formatstr, optionalFormat string for floating point numbers. For example
float_format="%.2f" will format 0.1234 to 0.12.
columnssequence or list of str, optionalColumns to write.
headerbool or list of str, default TrueWrite out the column names. If a list of string is given it is
assumed to be aliases for the column names.
indexbool, default TrueWrite row names (index).
index_labelstr or sequence, optionalColumn label for index column(s) if desired. If not specified, and
header and index are True, then the index names are used. A
sequence should be given if the DataFrame uses MultiIndex.
startrowint, default 0Upper left cell row to dump data frame.
startcolint, default 0Upper left cell column to dump data frame.
enginestr, optionalWrite engine to use, ‘openpyxl’ or ‘xlsxwriter’. You can also set this
via the options io.excel.xlsx.writer, io.excel.xls.writer, and
io.excel.xlsm.writer.
Deprecated since version 1.2.0: As the xlwt package is no longer
maintained, the xlwt engine will be removed in a future version
of pandas.
merge_cellsbool, default TrueWrite MultiIndex and Hierarchical Rows as merged cells.
encodingstr, optionalEncoding of the resulting excel file. Only necessary for xlwt,
other writers support unicode natively.
Deprecated since version 1.5.0: This keyword was not used.
inf_repstr, default ‘inf’Representation for infinity (there is no native representation for
infinity in Excel).
verbosebool, default TrueDisplay more information in the error logs.
Deprecated since version 1.5.0: This keyword was not used.
freeze_panestuple of int (length 2), optionalSpecifies the one-based bottommost row and rightmost column that
is to be frozen.
storage_optionsdict, optionalExtra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to urllib.request.Request as header options. For other
URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are
forwarded to fsspec.open. Please see fsspec and urllib for more
details, and for more examples on storage options refer here.
New in version 1.2.0.
See also
to_csvWrite DataFrame to a comma-separated values (csv) file.
ExcelWriterClass for writing DataFrame objects into excel sheets.
read_excelRead an Excel file into a pandas DataFrame.
read_csvRead a comma-separated values (csv) file into DataFrame.
io.formats.style.Styler.to_excelAdd styles to Excel sheet.
Notes
For compatibility with to_csv(),
to_excel serializes lists and dicts to strings before writing.
Once a workbook has been saved it is not possible to write further
data without rewriting the whole workbook.
Examples
Create, write to and save a workbook:
>>> df1 = pd.DataFrame([['a', 'b'], ['c', 'd']],
... index=['row 1', 'row 2'],
... columns=['col 1', 'col 2'])
>>> df1.to_excel("output.xlsx")
To specify the sheet name:
>>> df1.to_excel("output.xlsx",
... sheet_name='Sheet_name_1')
If you wish to write to more than one sheet in the workbook, it is
necessary to specify an ExcelWriter object:
>>> df2 = df1.copy()
>>> with pd.ExcelWriter('output.xlsx') as writer:
... df1.to_excel(writer, sheet_name='Sheet_name_1')
... df2.to_excel(writer, sheet_name='Sheet_name_2')
ExcelWriter can also be used to append to an existing Excel file:
>>> with pd.ExcelWriter('output.xlsx',
... mode='a') as writer:
... df.to_excel(writer, sheet_name='Sheet_name_3')
To set the library that is used to write the Excel file,
you can pass the engine keyword (the default engine is
automatically chosen depending on the file extension):
>>> df1.to_excel('output1.xlsx', engine='xlsxwriter')
| reference/api/pandas.DataFrame.to_excel.html |
How do I select a subset of a DataFrame? | How do I select a subset of a DataFrame?
This tutorial uses the Titanic data set, stored as CSV. The data
consists of the following data columns:
PassengerId: Id of every passenger.
Survived: Indication whether passenger survived. 0 for no and 1 for yes.
Pclass: One out of the 3 ticket classes: Class 1, Class 2 and Class 3.
Name: Name of passenger.
Sex: Gender of passenger.
Age: Age of passenger in years.
SibSp: Number of siblings or spouses aboard.
Parch: Number of parents or children aboard.
Ticket: Ticket number of passenger.
Fare: Indicating the fare.
Cabin: Cabin number of passenger.
Embarked: Port of embarkation.
| Data used for this tutorial:
Titanic data
This tutorial uses the Titanic data set, stored as CSV. The data
consists of the following data columns:
PassengerId: Id of every passenger.
Survived: Indication whether passenger survived. 0 for no and 1 for yes.
Pclass: One out of the 3 ticket classes: Class 1, Class 2 and Class 3.
Name: Name of passenger.
Sex: Gender of passenger.
Age: Age of passenger in years.
SibSp: Number of siblings or spouses aboard.
Parch: Number of parents or children aboard.
Ticket: Ticket number of passenger.
Fare: Indicating the fare.
Cabin: Cabin number of passenger.
Embarked: Port of embarkation.
In [2]: titanic = pd.read_csv("data/titanic.csv")
In [3]: titanic.head()
Out[3]:
PassengerId Survived Pclass ... Fare Cabin Embarked
0 1 0 3 ... 7.2500 NaN S
1 2 1 1 ... 71.2833 C85 C
2 3 1 3 ... 7.9250 NaN S
3 4 1 1 ... 53.1000 C123 S
4 5 0 3 ... 8.0500 NaN S
[5 rows x 12 columns]
How do I select a subset of a DataFrame?#
How do I select specific columns from a DataFrame?#
I’m interested in the age of the Titanic passengers.
In [4]: ages = titanic["Age"]
In [5]: ages.head()
Out[5]:
0 22.0
1 38.0
2 26.0
3 35.0
4 35.0
Name: Age, dtype: float64
To select a single column, use square brackets [] with the name of
the column of interest.
Each column in a DataFrame is a Series. As a single column is
selected, the returned object is a pandas Series. We can verify this
by checking the type of the output:
In [6]: type(titanic["Age"])
Out[6]: pandas.core.series.Series
And have a look at the shape of the output:
In [7]: titanic["Age"].shape
Out[7]: (891,)
DataFrame.shape is an attribute (remember tutorial on reading and writing, do not use parentheses for attributes) of a
pandas Series and DataFrame containing the number of rows and
columns: (nrows, ncolumns). A pandas Series is 1-dimensional and only
the number of rows is returned.
I’m interested in the age and sex of the Titanic passengers.
In [8]: age_sex = titanic[["Age", "Sex"]]
In [9]: age_sex.head()
Out[9]:
Age Sex
0 22.0 male
1 38.0 female
2 26.0 female
3 35.0 female
4 35.0 male
To select multiple columns, use a list of column names within the
selection brackets [].
Note
The inner square brackets define a
Python list with column names, whereas
the outer brackets are used to select the data from a pandas
DataFrame as seen in the previous example.
The returned data type is a pandas DataFrame:
In [10]: type(titanic[["Age", "Sex"]])
Out[10]: pandas.core.frame.DataFrame
In [11]: titanic[["Age", "Sex"]].shape
Out[11]: (891, 2)
The selection returned a DataFrame with 891 rows and 2 columns. Remember, a
DataFrame is 2-dimensional with both a row and column dimension.
For basic information on indexing, see the user guide section on indexing and selecting data.
How do I filter specific rows from a DataFrame?#
I’m interested in the passengers older than 35 years.
In [12]: above_35 = titanic[titanic["Age"] > 35]
In [13]: above_35.head()
Out[13]:
PassengerId Survived Pclass ... Fare Cabin Embarked
1 2 1 1 ... 71.2833 C85 C
6 7 0 1 ... 51.8625 E46 S
11 12 1 1 ... 26.5500 C103 S
13 14 0 3 ... 31.2750 NaN S
15 16 1 2 ... 16.0000 NaN S
[5 rows x 12 columns]
To select rows based on a conditional expression, use a condition inside
the selection brackets [].
The condition inside the selection
brackets titanic["Age"] > 35 checks for which rows the Age
column has a value larger than 35:
In [14]: titanic["Age"] > 35
Out[14]:
0 False
1 True
2 False
3 False
4 False
...
886 False
887 False
888 False
889 False
890 False
Name: Age, Length: 891, dtype: bool
The output of the conditional expression (>, but also ==,
!=, <, <=,… would work) is actually a pandas Series of
boolean values (either True or False) with the same number of
rows as the original DataFrame. Such a Series of boolean values
can be used to filter the DataFrame by putting it in between the
selection brackets []. Only rows for which the value is True
will be selected.
We know from before that the original Titanic DataFrame consists of
891 rows. Let’s have a look at the number of rows which satisfy the
condition by checking the shape attribute of the resulting
DataFrame above_35:
In [15]: above_35.shape
Out[15]: (217, 12)
I’m interested in the Titanic passengers from cabin class 2 and 3.
In [16]: class_23 = titanic[titanic["Pclass"].isin([2, 3])]
In [17]: class_23.head()
Out[17]:
PassengerId Survived Pclass ... Fare Cabin Embarked
0 1 0 3 ... 7.2500 NaN S
2 3 1 3 ... 7.9250 NaN S
4 5 0 3 ... 8.0500 NaN S
5 6 0 3 ... 8.4583 NaN Q
7 8 0 3 ... 21.0750 NaN S
[5 rows x 12 columns]
Similar to the conditional expression, the isin() conditional function
returns True for each row whose values are in the provided list. To
filter the rows based on such a function, use the conditional function
inside the selection brackets []. In this case, the condition inside
the selection brackets titanic["Pclass"].isin([2, 3]) checks for
which rows the Pclass column is either 2 or 3.
The above is equivalent to filtering by rows for which the class is
either 2 or 3 and combining the two statements with an | (or)
operator:
In [18]: class_23 = titanic[(titanic["Pclass"] == 2) | (titanic["Pclass"] == 3)]
In [19]: class_23.head()
Out[19]:
PassengerId Survived Pclass ... Fare Cabin Embarked
0 1 0 3 ... 7.2500 NaN S
2 3 1 3 ... 7.9250 NaN S
4 5 0 3 ... 8.0500 NaN S
5 6 0 3 ... 8.4583 NaN Q
7 8 0 3 ... 21.0750 NaN S
[5 rows x 12 columns]
Note
When combining multiple conditional statements, each condition
must be surrounded by parentheses (). Moreover, you can not use
or/and but need to use the or operator | and the and
operator &.
See the dedicated section in the user guide about boolean indexing or about the isin function.
I want to work with passenger data for which the age is known.
In [20]: age_no_na = titanic[titanic["Age"].notna()]
In [21]: age_no_na.head()
Out[21]:
PassengerId Survived Pclass ... Fare Cabin Embarked
0 1 0 3 ... 7.2500 NaN S
1 2 1 1 ... 71.2833 C85 C
2 3 1 3 ... 7.9250 NaN S
3 4 1 1 ... 53.1000 C123 S
4 5 0 3 ... 8.0500 NaN S
[5 rows x 12 columns]
The notna() conditional function returns True for each row whose
values are not null. As such, this can be combined with the
selection brackets [] to filter the data table.
You might wonder what actually changed, as the first 5 lines are still
the same values. One way to verify is to check if the shape has changed:
In [22]: age_no_na.shape
Out[22]: (714, 12)
For more dedicated functions on missing values, see the user guide section about handling missing data.
How do I select specific rows and columns from a DataFrame?#
I’m interested in the names of the passengers older than 35 years.
In [23]: adult_names = titanic.loc[titanic["Age"] > 35, "Name"]
In [24]: adult_names.head()
Out[24]:
1 Cumings, Mrs. John Bradley (Florence Briggs Th...
6 McCarthy, Mr. Timothy J
11 Bonnell, Miss. Elizabeth
13 Andersson, Mr. Anders Johan
15 Hewlett, Mrs. (Mary D Kingcome)
Name: Name, dtype: object
In this case, a subset of both rows and columns is made in one go and
just using selection brackets [] is not sufficient anymore. The
loc/iloc operators are required in front of the selection
brackets []. When using loc/iloc, the part before the comma
is the rows you want, and the part after the comma is the columns you
want to select.
When using the column names, row labels or a condition expression, use
the loc operator in front of the selection brackets []. For both
the part before and after the comma, you can use a single label, a list
of labels, a slice of labels, a conditional expression or a colon. Using
a colon specifies you want to select all rows or columns.
I’m interested in rows 10 till 25 and columns 3 to 5.
In [25]: titanic.iloc[9:25, 2:5]
Out[25]:
Pclass Name Sex
9 2 Nasser, Mrs. Nicholas (Adele Achem) female
10 3 Sandstrom, Miss. Marguerite Rut female
11 1 Bonnell, Miss. Elizabeth female
12 3 Saundercock, Mr. William Henry male
13 3 Andersson, Mr. Anders Johan male
.. ... ... ...
20 2 Fynney, Mr. Joseph J male
21 2 Beesley, Mr. Lawrence male
22 3 McGowan, Miss. Anna "Annie" female
23 1 Sloper, Mr. William Thompson male
24 3 Palsson, Miss. Torborg Danira female
[16 rows x 3 columns]
Again, a subset of both rows and columns is made in one go and just
using selection brackets [] is not sufficient anymore. When
specifically interested in certain rows and/or columns based on their
position in the table, use the iloc operator in front of the
selection brackets [].
When selecting specific rows and/or columns with loc or iloc,
new values can be assigned to the selected data. For example, to assign
the name anonymous to the first 3 elements of the Name column (column position 3):
In [26]: titanic.iloc[0:3, 3] = "anonymous"
In [27]: titanic.head()
Out[27]:
PassengerId Survived Pclass ... Fare Cabin Embarked
0 1 0 3 ... 7.2500 NaN S
1 2 1 1 ... 71.2833 C85 C
2 3 1 3 ... 7.9250 NaN S
3 4 1 1 ... 53.1000 C123 S
4 5 0 3 ... 8.0500 NaN S
[5 rows x 12 columns]
See the user guide section on different choices for indexing to get more insight into the usage of loc and iloc.
REMEMBER
When selecting subsets of data, square brackets [] are used.
Inside these brackets, you can use a single column/row label, a list
of column/row labels, a slice of labels, a conditional expression or
a colon.
Select specific rows and/or columns using loc when using the row
and column names.
Select specific rows and/or columns using iloc when using the
positions in the table.
You can assign new values to a selection based on loc/iloc.
A full overview of indexing is provided in the user guide pages on indexing and selecting data.
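As a compact recap of the selection patterns above, here is a minimal sketch; the small DataFrame below is made up for illustration and is not part of the Titanic data:
In [28]: df = pd.DataFrame({"Age": [22, 38, 26], "Sex": ["male", "female", "female"]})
In [29]: df["Age"]                      # single column -> Series
In [30]: df[["Age", "Sex"]]             # list of columns -> DataFrame
In [31]: df[df["Age"] > 30]             # boolean condition -> filtered rows
In [32]: df.loc[df["Age"] > 30, "Sex"]  # rows by condition, column by label
In [33]: df.iloc[0:2, 0:1]              # rows and columns by position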
| getting_started/intro_tutorials/03_subset_data.html |
Essential basic functionality | Essential basic functionality
Here we discuss a lot of the essential functionality common to the pandas data
structures. To begin, let’s create some example objects like we did in
the 10 minutes to pandas section:
To view a small sample of a Series or DataFrame object, use the
head() and tail() methods. The default number
of elements to display is five, but you may pass a custom number.
pandas objects have a number of attributes enabling you to access the metadata
shape: gives the axis dimensions of the object, consistent with ndarray
Series: index (only axis) | Here we discuss a lot of the essential functionality common to the pandas data
structures. To begin, let’s create some example objects like we did in
the 10 minutes to pandas section:
In [1]: index = pd.date_range("1/1/2000", periods=8)
In [2]: s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])
In [3]: df = pd.DataFrame(np.random.randn(8, 3), index=index, columns=["A", "B", "C"])
Head and tail#
To view a small sample of a Series or DataFrame object, use the
head() and tail() methods. The default number
of elements to display is five, but you may pass a custom number.
In [4]: long_series = pd.Series(np.random.randn(1000))
In [5]: long_series.head()
Out[5]:
0 -1.157892
1 -1.344312
2 0.844885
3 1.075770
4 -0.109050
dtype: float64
In [6]: long_series.tail(3)
Out[6]:
997 -0.289388
998 -1.020544
999 0.589993
dtype: float64
Attributes and underlying data#
pandas objects have a number of attributes enabling you to access the metadata
shape: gives the axis dimensions of the object, consistent with ndarray
Axis labels
Series: index (only axis)
DataFrame: index (rows) and columns
Note, these attributes can be safely assigned to!
In [7]: df[:2]
Out[7]:
A B C
2000-01-01 -0.173215 0.119209 -1.044236
2000-01-02 -0.861849 -2.104569 -0.494929
In [8]: df.columns = [x.lower() for x in df.columns]
In [9]: df
Out[9]:
a b c
2000-01-01 -0.173215 0.119209 -1.044236
2000-01-02 -0.861849 -2.104569 -0.494929
2000-01-03 1.071804 0.721555 -0.706771
2000-01-04 -1.039575 0.271860 -0.424972
2000-01-05 0.567020 0.276232 -1.087401
2000-01-06 -0.673690 0.113648 -1.478427
2000-01-07 0.524988 0.404705 0.577046
2000-01-08 -1.715002 -1.039268 -0.370647
pandas objects (Index, Series, DataFrame) can be
thought of as containers for arrays, which hold the actual data and do the
actual computation. For many types, the underlying array is a
numpy.ndarray. However, pandas and 3rd party libraries may extend
NumPy’s type system to add support for custom arrays
(see dtypes).
To get the actual data inside a Index or Series, use
the .array property
In [10]: s.array
Out[10]:
<PandasArray>
[ 0.4691122999071863, -0.2828633443286633, -1.5090585031735124,
-1.1356323710171934, 1.2121120250208506]
Length: 5, dtype: float64
In [11]: s.index.array
Out[11]:
<PandasArray>
['a', 'b', 'c', 'd', 'e']
Length: 5, dtype: object
array will always be an ExtensionArray.
The exact details of what an ExtensionArray is and why pandas uses them are a bit
beyond the scope of this introduction. See dtypes for more.
If you know you need a NumPy array, use to_numpy()
or numpy.asarray().
In [12]: s.to_numpy()
Out[12]: array([ 0.4691, -0.2829, -1.5091, -1.1356, 1.2121])
In [13]: np.asarray(s)
Out[13]: array([ 0.4691, -0.2829, -1.5091, -1.1356, 1.2121])
When the Series or Index is backed by
an ExtensionArray, to_numpy()
may involve copying data and coercing values. See dtypes for more.
to_numpy() gives some control over the dtype of the
resulting numpy.ndarray. For example, consider datetimes with timezones.
NumPy doesn’t have a dtype to represent timezone-aware datetimes, so there
are two possibly useful representations:
An object-dtype numpy.ndarray with Timestamp objects, each
with the correct tz
A datetime64[ns] -dtype numpy.ndarray, where the values have
been converted to UTC and the timezone discarded
Timezones may be preserved with dtype=object
In [14]: ser = pd.Series(pd.date_range("2000", periods=2, tz="CET"))
In [15]: ser.to_numpy(dtype=object)
Out[15]:
array([Timestamp('2000-01-01 00:00:00+0100', tz='CET'),
Timestamp('2000-01-02 00:00:00+0100', tz='CET')], dtype=object)
Or thrown away with dtype='datetime64[ns]'
In [16]: ser.to_numpy(dtype="datetime64[ns]")
Out[16]:
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
dtype='datetime64[ns]')
Getting the “raw data” inside a DataFrame is possibly a bit more
complex. When your DataFrame only has a single data type for all the
columns, DataFrame.to_numpy() will return the underlying data:
In [17]: df.to_numpy()
Out[17]:
array([[-0.1732, 0.1192, -1.0442],
[-0.8618, -2.1046, -0.4949],
[ 1.0718, 0.7216, -0.7068],
[-1.0396, 0.2719, -0.425 ],
[ 0.567 , 0.2762, -1.0874],
[-0.6737, 0.1136, -1.4784],
[ 0.525 , 0.4047, 0.577 ],
[-1.715 , -1.0393, -0.3706]])
If a DataFrame contains homogeneously-typed data, the ndarray can
actually be modified in-place, and the changes will be reflected in the data
structure. For heterogeneous data (e.g. some of the DataFrame’s columns are not
all the same dtype), this will not be the case. The values attribute itself,
unlike the axis labels, cannot be assigned to.
Note
When working with heterogeneous data, the dtype of the resulting ndarray
will be chosen to accommodate all of the data involved. For example, if
strings are involved, the result will be of object dtype. If there are only
floats and integers, the resulting array will be of float dtype.
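For instance, a minimal sketch of the dtype chosen for mixed columns (the toy frames below are assumed purely for illustration):
>>> pd.DataFrame({"a": [1, 2], "b": ["x", "y"]}).to_numpy().dtype
dtype('O')
>>> pd.DataFrame({"a": [1, 2], "b": [0.5, 1.5]}).to_numpy().dtype
dtype('float64')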
In the past, pandas recommended Series.values or DataFrame.values
for extracting the data from a Series or DataFrame. You’ll still find references
to these in old code bases and online. Going forward, we recommend avoiding
.values and using .array or .to_numpy(). .values has the following
drawbacks:
When your Series contains an extension type, it’s
unclear whether Series.values returns a NumPy array or the extension array.
Series.array will always return an ExtensionArray, and will never
copy data. Series.to_numpy() will always return a NumPy array,
potentially at the cost of copying / coercing values.
When your DataFrame contains a mixture of data types, DataFrame.values may
involve copying data and coercing values to a common dtype, a relatively expensive
operation. DataFrame.to_numpy(), being a method, makes it clearer that the
returned NumPy array may not be a view on the same data in the DataFrame.
Accelerated operations#
pandas has support for accelerating certain types of binary numerical and boolean operations using
the numexpr library and the bottleneck libraries.
These libraries are especially useful when dealing with large data sets, and provide large
speedups. numexpr uses smart chunking, caching, and multiple cores. bottleneck is
a set of specialized cython routines that are especially fast when dealing with arrays that have
nans.
Here is a sample (using 100 column x 100,000 row DataFrames):
Operation    0.11.0 (ms)    Prior Version (ms)    Ratio to Prior
df1 > df2    13.32          125.35                0.1063
df1 * df2    21.71          36.63                 0.5928
df1 + df2    22.04          36.50                 0.6039
You are highly encouraged to install both libraries. See the section
Recommended Dependencies for more installation info.
These are both enabled by default; you can control this by setting the options:
pd.set_option("compute.use_bottleneck", False)
pd.set_option("compute.use_numexpr", False)
Flexible binary operations#
With binary operations between pandas data structures, there are two key points
of interest:
Broadcasting behavior between higher- (e.g. DataFrame) and
lower-dimensional (e.g. Series) objects.
Missing data in computations.
We will demonstrate how to manage these issues independently, though they can
be handled simultaneously.
Matching / broadcasting behavior#
DataFrame has the methods add(), sub(),
mul(), div() and related functions
radd(), rsub(), …
for carrying out binary operations. For broadcasting behavior,
Series input is of primary interest. Using these functions, you can
either match on the index or columns via the axis keyword:
In [18]: df = pd.DataFrame(
....: {
....: "one": pd.Series(np.random.randn(3), index=["a", "b", "c"]),
....: "two": pd.Series(np.random.randn(4), index=["a", "b", "c", "d"]),
....: "three": pd.Series(np.random.randn(3), index=["b", "c", "d"]),
....: }
....: )
....:
In [19]: df
Out[19]:
one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172
In [20]: row = df.iloc[1]
In [21]: column = df["two"]
In [22]: df.sub(row, axis="columns")
Out[22]:
one two three
a 1.051928 -0.139606 NaN
b 0.000000 0.000000 0.000000
c 0.352192 -0.433754 1.277825
d NaN -1.632779 -0.562782
In [23]: df.sub(row, axis=1)
Out[23]:
one two three
a 1.051928 -0.139606 NaN
b 0.000000 0.000000 0.000000
c 0.352192 -0.433754 1.277825
d NaN -1.632779 -0.562782
In [24]: df.sub(column, axis="index")
Out[24]:
one two three
a -0.377535 0.0 NaN
b -1.569069 0.0 -1.962513
c -0.783123 0.0 -0.250933
d NaN 0.0 -0.892516
In [25]: df.sub(column, axis=0)
Out[25]:
one two three
a -0.377535 0.0 NaN
b -1.569069 0.0 -1.962513
c -0.783123 0.0 -0.250933
d NaN 0.0 -0.892516
Furthermore you can align a level of a MultiIndexed DataFrame with a Series.
In [26]: dfmi = df.copy()
In [27]: dfmi.index = pd.MultiIndex.from_tuples(
....: [(1, "a"), (1, "b"), (1, "c"), (2, "a")], names=["first", "second"]
....: )
....:
In [28]: dfmi.sub(column, axis=0, level="second")
Out[28]:
one two three
first second
1 a -0.377535 0.000000 NaN
b -1.569069 0.000000 -1.962513
c -0.783123 0.000000 -0.250933
2 a NaN -1.493173 -2.385688
Series and Index also support the divmod() builtin. This function takes
the floor division and modulo operation at the same time returning a two-tuple
of the same type as the left hand side. For example:
In [29]: s = pd.Series(np.arange(10))
In [30]: s
Out[30]:
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
dtype: int64
In [31]: div, rem = divmod(s, 3)
In [32]: div
Out[32]:
0 0
1 0
2 0
3 1
4 1
5 1
6 2
7 2
8 2
9 3
dtype: int64
In [33]: rem
Out[33]:
0 0
1 1
2 2
3 0
4 1
5 2
6 0
7 1
8 2
9 0
dtype: int64
In [34]: idx = pd.Index(np.arange(10))
In [35]: idx
Out[35]: Int64Index([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype='int64')
In [36]: div, rem = divmod(idx, 3)
In [37]: div
Out[37]: Int64Index([0, 0, 0, 1, 1, 1, 2, 2, 2, 3], dtype='int64')
In [38]: rem
Out[38]: Int64Index([0, 1, 2, 0, 1, 2, 0, 1, 2, 0], dtype='int64')
We can also do elementwise divmod():
In [39]: div, rem = divmod(s, [2, 2, 3, 3, 4, 4, 5, 5, 6, 6])
In [40]: div
Out[40]:
0 0
1 0
2 0
3 1
4 1
5 1
6 1
7 1
8 1
9 1
dtype: int64
In [41]: rem
Out[41]:
0 0
1 1
2 2
3 0
4 0
5 1
6 1
7 2
8 2
9 3
dtype: int64
Missing data / operations with fill values#
In Series and DataFrame, the arithmetic functions have the option of inputting
a fill_value, namely a value to substitute when at most one of the values at
a location is missing. For example, when adding two DataFrame objects, you may
wish to treat NaN as 0 unless both DataFrames are missing that value, in which
case the result will be NaN (you can later replace NaN with some other value
using fillna if you wish).
In [42]: df
Out[42]:
one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172
In [43]: df2
Out[43]:
one two three
a 1.394981 1.772517 1.000000
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172
In [44]: df + df2
Out[44]:
one two three
a 2.789963 3.545034 NaN
b 0.686107 3.824246 -0.100780
c 1.390491 2.956737 2.454870
d NaN 0.558688 -1.226343
In [45]: df.add(df2, fill_value=0)
Out[45]:
one two three
a 2.789963 3.545034 1.000000
b 0.686107 3.824246 -0.100780
c 1.390491 2.956737 2.454870
d NaN 0.558688 -1.226343
Flexible comparisons#
Series and DataFrame have the binary comparison methods eq, ne, lt, gt,
le, and ge whose behavior is analogous to the binary
arithmetic operations described above:
In [46]: df.gt(df2)
Out[46]:
one two three
a False False False
b False False False
c False False False
d False False False
In [47]: df2.ne(df)
Out[47]:
one two three
a False False True
b False False False
c False False False
d True False False
These operations produce a pandas object of the same type as the left-hand-side
input that is of dtype bool. These boolean objects can be used in
indexing operations, see the section on Boolean indexing.
Boolean reductions#
You can apply the reductions: empty, any(),
all(), and bool() to provide a
way to summarize a boolean result.
In [48]: (df > 0).all()
Out[48]:
one False
two True
three False
dtype: bool
In [49]: (df > 0).any()
Out[49]:
one True
two True
three True
dtype: bool
You can reduce to a final boolean value.
In [50]: (df > 0).any().any()
Out[50]: True
You can test if a pandas object is empty, via the empty property.
In [51]: df.empty
Out[51]: False
In [52]: pd.DataFrame(columns=list("ABC")).empty
Out[52]: True
To evaluate single-element pandas objects in a boolean context, use the method
bool():
In [53]: pd.Series([True]).bool()
Out[53]: True
In [54]: pd.Series([False]).bool()
Out[54]: False
In [55]: pd.DataFrame([[True]]).bool()
Out[55]: True
In [56]: pd.DataFrame([[False]]).bool()
Out[56]: False
Warning
You might be tempted to do the following:
>>> if df:
... pass
Or
>>> df and df2
These will both raise errors, as you are trying to compare multiple values.:
ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all().
See gotchas for a more detailed discussion.
Comparing if objects are equivalent#
Often you may find that there is more than one way to compute the same
result. As a simple example, consider df + df and df * 2. To test
that these two computations produce the same result, given the tools
shown above, you might imagine using (df + df == df * 2).all(). But in
fact, this expression is False:
In [57]: df + df == df * 2
Out[57]:
one two three
a True True False
b True True True
c True True True
d False True True
In [58]: (df + df == df * 2).all()
Out[58]:
one False
two True
three False
dtype: bool
Notice that the boolean DataFrame df + df == df * 2 contains some False values!
This is because NaNs do not compare as equals:
In [59]: np.nan == np.nan
Out[59]: False
So, NDFrames (such as Series and DataFrames)
have an equals() method for testing equality, with NaNs in
corresponding locations treated as equal.
In [60]: (df + df).equals(df * 2)
Out[60]: True
Note that the Series or DataFrame index needs to be in the same order for
equality to be True:
In [61]: df1 = pd.DataFrame({"col": ["foo", 0, np.nan]})
In [62]: df2 = pd.DataFrame({"col": [np.nan, 0, "foo"]}, index=[2, 1, 0])
In [63]: df1.equals(df2)
Out[63]: False
In [64]: df1.equals(df2.sort_index())
Out[64]: True
Comparing array-like objects#
You can conveniently perform element-wise comparisons when comparing a pandas
data structure with a scalar value:
In [65]: pd.Series(["foo", "bar", "baz"]) == "foo"
Out[65]:
0 True
1 False
2 False
dtype: bool
In [66]: pd.Index(["foo", "bar", "baz"]) == "foo"
Out[66]: array([ True, False, False])
pandas also handles element-wise comparisons between different array-like
objects of the same length:
In [67]: pd.Series(["foo", "bar", "baz"]) == pd.Index(["foo", "bar", "qux"])
Out[67]:
0 True
1 True
2 False
dtype: bool
In [68]: pd.Series(["foo", "bar", "baz"]) == np.array(["foo", "bar", "qux"])
Out[68]:
0 True
1 True
2 False
dtype: bool
Trying to compare Index or Series objects of different lengths will
raise a ValueError:
In [55]: pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo', 'bar'])
ValueError: Series lengths must match to compare
In [56]: pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo'])
ValueError: Series lengths must match to compare
Note that this is different from the NumPy behavior where a comparison can
be broadcast:
In [69]: np.array([1, 2, 3]) == np.array([2])
Out[69]: array([False, True, False])
or it can return False if broadcasting can not be done:
In [70]: np.array([1, 2, 3]) == np.array([1, 2])
Out[70]: False
Combining overlapping data sets#
A problem occasionally arising is the combination of two similar data sets
where values in one are preferred over the other. An example would be two data
series representing a particular economic indicator where one is considered to
be of “higher quality”. However, the lower quality series might extend further
back in history or have more complete data coverage. As such, we would like to
combine two DataFrame objects where missing values in one DataFrame are
conditionally filled with like-labeled values from the other DataFrame. The
function implementing this operation is combine_first(),
which we illustrate:
In [71]: df1 = pd.DataFrame(
....: {"A": [1.0, np.nan, 3.0, 5.0, np.nan], "B": [np.nan, 2.0, 3.0, np.nan, 6.0]}
....: )
....:
In [72]: df2 = pd.DataFrame(
....: {
....: "A": [5.0, 2.0, 4.0, np.nan, 3.0, 7.0],
....: "B": [np.nan, np.nan, 3.0, 4.0, 6.0, 8.0],
....: }
....: )
....:
In [73]: df1
Out[73]:
A B
0 1.0 NaN
1 NaN 2.0
2 3.0 3.0
3 5.0 NaN
4 NaN 6.0
In [74]: df2
Out[74]:
A B
0 5.0 NaN
1 2.0 NaN
2 4.0 3.0
3 NaN 4.0
4 3.0 6.0
5 7.0 8.0
In [75]: df1.combine_first(df2)
Out[75]:
A B
0 1.0 NaN
1 2.0 2.0
2 3.0 3.0
3 5.0 4.0
4 3.0 6.0
5 7.0 8.0
General DataFrame combine#
The combine_first() method above calls the more general
DataFrame.combine(). This method takes another DataFrame
and a combiner function, aligns the input DataFrame and then passes the combiner
function pairs of Series (i.e., columns whose names are the same).
So, for instance, to reproduce combine_first() as above:
In [76]: def combiner(x, y):
....: return np.where(pd.isna(x), y, x)
....:
In [77]: df1.combine(df2, combiner)
Out[77]:
A B
0 1.0 NaN
1 2.0 2.0
2 3.0 3.0
3 5.0 4.0
4 3.0 6.0
5 7.0 8.0
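Any function that takes two aligned Series can serve as the combiner. As a small sketch (reusing df1 and df2 from above), the following keeps the elementwise maximum; np.fmax returns the non-NaN operand when exactly one of the two values is missing:
# elementwise maximum of the two frames, ignoring NaN where only one side has a value
df1.combine(df2, np.fmax)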
Descriptive statistics#
There exists a large number of methods for computing descriptive statistics and
other related operations on Series, DataFrame. Most of these
are aggregations (hence producing a lower-dimensional result) like
sum(), mean(), and quantile(),
but some of them, like cumsum() and cumprod(),
produce an object of the same size. Generally speaking, these methods take an
axis argument, just like ndarray.{sum, std, …}, but the axis can be
specified by name or integer:
Series: no axis argument needed
DataFrame: “index” (axis=0, default), “columns” (axis=1)
For example:
In [78]: df
Out[78]:
one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172
In [79]: df.mean(0)
Out[79]:
one 0.811094
two 1.360588
three 0.187958
dtype: float64
In [80]: df.mean(1)
Out[80]:
a 1.583749
b 0.734929
c 1.133683
d -0.166914
dtype: float64
All such methods have a skipna option signaling whether to exclude missing
data (True by default):
In [81]: df.sum(0, skipna=False)
Out[81]:
one NaN
two 5.442353
three NaN
dtype: float64
In [82]: df.sum(axis=1, skipna=True)
Out[82]:
a 3.167498
b 2.204786
c 3.401050
d -0.333828
dtype: float64
Combined with the broadcasting / arithmetic behavior, one can describe various
statistical procedures, like standardization (rendering data zero mean and
standard deviation of 1), very concisely:
In [83]: ts_stand = (df - df.mean()) / df.std()
In [84]: ts_stand.std()
Out[84]:
one 1.0
two 1.0
three 1.0
dtype: float64
In [85]: xs_stand = df.sub(df.mean(1), axis=0).div(df.std(1), axis=0)
In [86]: xs_stand.std(1)
Out[86]:
a 1.0
b 1.0
c 1.0
d 1.0
dtype: float64
Note that methods like cumsum() and cumprod()
preserve the location of NaN values. This is somewhat different from
expanding() and rolling() since NaN behavior
is furthermore dictated by a min_periods parameter.
In [87]: df.cumsum()
Out[87]:
one two three
a 1.394981 1.772517 NaN
b 1.738035 3.684640 -0.050390
c 2.433281 5.163008 1.177045
d NaN 5.442353 0.563873
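For contrast, a small sketch of a rolling sum on the same frame: with min_periods=1, windows that contain a missing value (or hold only a single observation) still produce a result rather than NaN:
# rolling sum over a window of 2 rows; min_periods=1 keeps partially observed windows
df.rolling(window=2, min_periods=1).sum()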
Here is a quick reference summary table of common functions. Each also takes an
optional level parameter which applies only if the object has a
hierarchical index.
Function    Description
count       Number of non-NA observations
sum         Sum of values
mean        Mean of values
mad         Mean absolute deviation
median      Arithmetic median of values
min         Minimum
max         Maximum
mode        Mode
abs         Absolute Value
prod        Product of values
std         Bessel-corrected sample standard deviation
var         Unbiased variance
sem         Standard error of the mean
skew        Sample skewness (3rd moment)
kurt        Sample kurtosis (4th moment)
quantile    Sample quantile (value at %)
cumsum      Cumulative sum
cumprod     Cumulative product
cummax      Cumulative maximum
cummin      Cumulative minimum
Note that by chance some NumPy methods, like mean, std, and sum,
will exclude NAs on Series input by default:
In [88]: np.mean(df["one"])
Out[88]: 0.8110935116651192
In [89]: np.mean(df["one"].to_numpy())
Out[89]: nan
Series.nunique() will return the number of unique non-NA values in a
Series:
In [90]: series = pd.Series(np.random.randn(500))
In [91]: series[20:500] = np.nan
In [92]: series[10:20] = 5
In [93]: series.nunique()
Out[93]: 11
Summarizing data: describe#
There is a convenient describe() function which computes a variety of summary
statistics about a Series or the columns of a DataFrame (excluding NAs of
course):
In [94]: series = pd.Series(np.random.randn(1000))
In [95]: series[::2] = np.nan
In [96]: series.describe()
Out[96]:
count 500.000000
mean -0.021292
std 1.015906
min -2.683763
25% -0.699070
50% -0.069718
75% 0.714483
max 3.160915
dtype: float64
In [97]: frame = pd.DataFrame(np.random.randn(1000, 5), columns=["a", "b", "c", "d", "e"])
In [98]: frame.iloc[::2] = np.nan
In [99]: frame.describe()
Out[99]:
a b c d e
count 500.000000 500.000000 500.000000 500.000000 500.000000
mean 0.033387 0.030045 -0.043719 -0.051686 0.005979
std 1.017152 0.978743 1.025270 1.015988 1.006695
min -3.000951 -2.637901 -3.303099 -3.159200 -3.188821
25% -0.647623 -0.576449 -0.712369 -0.691338 -0.691115
50% 0.047578 -0.021499 -0.023888 -0.032652 -0.025363
75% 0.729907 0.775880 0.618896 0.670047 0.649748
max 2.740139 2.752332 3.004229 2.728702 3.240991
You can select specific percentiles to include in the output:
In [100]: series.describe(percentiles=[0.05, 0.25, 0.75, 0.95])
Out[100]:
count 500.000000
mean -0.021292
std 1.015906
min -2.683763
5% -1.645423
25% -0.699070
50% -0.069718
75% 0.714483
95% 1.711409
max 3.160915
dtype: float64
By default, the median is always included.
For a non-numerical Series object, describe() will give a simple
summary of the number of unique values and most frequently occurring values:
In [101]: s = pd.Series(["a", "a", "b", "b", "a", "a", np.nan, "c", "d", "a"])
In [102]: s.describe()
Out[102]:
count 9
unique 4
top a
freq 5
dtype: object
Note that on a mixed-type DataFrame object, describe() will
restrict the summary to include only numerical columns or, if none are, only
categorical columns:
In [103]: frame = pd.DataFrame({"a": ["Yes", "Yes", "No", "No"], "b": range(4)})
In [104]: frame.describe()
Out[104]:
b
count 4.000000
mean 1.500000
std 1.290994
min 0.000000
25% 0.750000
50% 1.500000
75% 2.250000
max 3.000000
This behavior can be controlled by providing a list of types as include/exclude
arguments. The special value all can also be used:
In [105]: frame.describe(include=["object"])
Out[105]:
a
count 4
unique 2
top Yes
freq 2
In [106]: frame.describe(include=["number"])
Out[106]:
b
count 4.000000
mean 1.500000
std 1.290994
min 0.000000
25% 0.750000
50% 1.500000
75% 2.250000
max 3.000000
In [107]: frame.describe(include="all")
Out[107]:
a b
count 4 4.000000
unique 2 NaN
top Yes NaN
freq 2 NaN
mean NaN 1.500000
std NaN 1.290994
min NaN 0.000000
25% NaN 0.750000
50% NaN 1.500000
75% NaN 2.250000
max NaN 3.000000
That feature relies on select_dtypes. Refer to that section for details about accepted inputs.
Index of min/max values#
The idxmin() and idxmax() functions on Series
and DataFrame compute the index labels with the minimum and maximum
corresponding values:
In [108]: s1 = pd.Series(np.random.randn(5))
In [109]: s1
Out[109]:
0 1.118076
1 -0.352051
2 -1.242883
3 -1.277155
4 -0.641184
dtype: float64
In [110]: s1.idxmin(), s1.idxmax()
Out[110]: (3, 0)
In [111]: df1 = pd.DataFrame(np.random.randn(5, 3), columns=["A", "B", "C"])
In [112]: df1
Out[112]:
A B C
0 -0.327863 -0.946180 -0.137570
1 -0.186235 -0.257213 -0.486567
2 -0.507027 -0.871259 -0.111110
3 2.000339 -2.430505 0.089759
4 -0.321434 -0.033695 0.096271
In [113]: df1.idxmin(axis=0)
Out[113]:
A 2
B 3
C 1
dtype: int64
In [114]: df1.idxmax(axis=1)
Out[114]:
0 C
1 A
2 C
3 A
4 C
dtype: object
When there are multiple rows (or columns) matching the minimum or maximum
value, idxmin() and idxmax() return the first
matching index:
In [115]: df3 = pd.DataFrame([2, 1, 1, 3, np.nan], columns=["A"], index=list("edcba"))
In [116]: df3
Out[116]:
A
e 2.0
d 1.0
c 1.0
b 3.0
a NaN
In [117]: df3["A"].idxmin()
Out[117]: 'd'
Note
idxmin and idxmax are called argmin and argmax in NumPy.
Value counts (histogramming) / mode#
The value_counts() Series method and the top-level pd.value_counts() function compute a histogram
of a 1D array of values. The latter can also be used on regular arrays:
In [118]: data = np.random.randint(0, 7, size=50)
In [119]: data
Out[119]:
array([6, 6, 2, 3, 5, 3, 2, 5, 4, 5, 4, 3, 4, 5, 0, 2, 0, 4, 2, 0, 3, 2,
2, 5, 6, 5, 3, 4, 6, 4, 3, 5, 6, 4, 3, 6, 2, 6, 6, 2, 3, 4, 2, 1,
6, 2, 6, 1, 5, 4])
In [120]: s = pd.Series(data)
In [121]: s.value_counts()
Out[121]:
6 10
2 10
4 9
3 8
5 8
0 3
1 2
dtype: int64
In [122]: pd.value_counts(data)
Out[122]:
6 10
2 10
4 9
3 8
5 8
0 3
1 2
dtype: int64
New in version 1.1.0.
The value_counts() method can be used to count combinations across multiple columns.
By default all columns are used but a subset can be selected using the subset argument.
In [123]: data = {"a": [1, 2, 3, 4], "b": ["x", "x", "y", "y"]}
In [124]: frame = pd.DataFrame(data)
In [125]: frame.value_counts()
Out[125]:
a b
1 x 1
2 x 1
3 y 1
4 y 1
dtype: int64
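To restrict the count to some of the columns, pass subset. A small sketch with the frame above:
# count combinations using only column "b"
frame.value_counts(subset=["b"])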
Similarly, you can get the most frequently occurring value(s), i.e. the mode, of the values in a Series or DataFrame:
In [126]: s5 = pd.Series([1, 1, 3, 3, 3, 5, 5, 7, 7, 7])
In [127]: s5.mode()
Out[127]:
0 3
1 7
dtype: int64
In [128]: df5 = pd.DataFrame(
.....: {
.....: "A": np.random.randint(0, 7, size=50),
.....: "B": np.random.randint(-10, 15, size=50),
.....: }
.....: )
.....:
In [129]: df5.mode()
Out[129]:
A B
0 1.0 -9
1 NaN 10
2 NaN 13
Discretization and quantiling#
Continuous values can be discretized using the cut() (bins based on values)
and qcut() (bins based on sample quantiles) functions:
In [130]: arr = np.random.randn(20)
In [131]: factor = pd.cut(arr, 4)
In [132]: factor
Out[132]:
[(-0.251, 0.464], (-0.968, -0.251], (0.464, 1.179], (-0.251, 0.464], (-0.968, -0.251], ..., (-0.251, 0.464], (-0.968, -0.251], (-0.968, -0.251], (-0.968, -0.251], (-0.968, -0.251]]
Length: 20
Categories (4, interval[float64, right]): [(-0.968, -0.251] < (-0.251, 0.464] < (0.464, 1.179] <
(1.179, 1.893]]
In [133]: factor = pd.cut(arr, [-5, -1, 0, 1, 5])
In [134]: factor
Out[134]:
[(0, 1], (-1, 0], (0, 1], (0, 1], (-1, 0], ..., (-1, 0], (-1, 0], (-1, 0], (-1, 0], (-1, 0]]
Length: 20
Categories (4, interval[int64, right]): [(-5, -1] < (-1, 0] < (0, 1] < (1, 5]]
qcut() computes sample quantiles. For example, we could slice up some
normally distributed data into equal-size quartiles like so:
In [135]: arr = np.random.randn(30)
In [136]: factor = pd.qcut(arr, [0, 0.25, 0.5, 0.75, 1])
In [137]: factor
Out[137]:
[(0.569, 1.184], (-2.278, -0.301], (-2.278, -0.301], (0.569, 1.184], (0.569, 1.184], ..., (-0.301, 0.569], (1.184, 2.346], (1.184, 2.346], (-0.301, 0.569], (-2.278, -0.301]]
Length: 30
Categories (4, interval[float64, right]): [(-2.278, -0.301] < (-0.301, 0.569] < (0.569, 1.184] <
(1.184, 2.346]]
In [138]: pd.value_counts(factor)
Out[138]:
(-2.278, -0.301] 8
(1.184, 2.346] 8
(-0.301, 0.569] 7
(0.569, 1.184] 7
dtype: int64
We can also pass infinite values to define the bins:
In [139]: arr = np.random.randn(20)
In [140]: factor = pd.cut(arr, [-np.inf, 0, np.inf])
In [141]: factor
Out[141]:
[(-inf, 0.0], (0.0, inf], (0.0, inf], (-inf, 0.0], (-inf, 0.0], ..., (-inf, 0.0], (-inf, 0.0], (-inf, 0.0], (0.0, inf], (0.0, inf]]
Length: 20
Categories (2, interval[float64, right]): [(-inf, 0.0] < (0.0, inf]]
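cut() also accepts a labels argument if you prefer named categories over interval notation. A small sketch with the same bins as above:
# two bins -> two labels
pd.cut(arr, [-np.inf, 0, np.inf], labels=["negative", "positive"])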
Function application#
To apply your own or another library’s functions to pandas objects,
you should be aware of the three methods below. The appropriate
method to use depends on whether your function expects to operate
on an entire DataFrame or Series, row- or column-wise, or elementwise.
Tablewise Function Application: pipe()
Row or Column-wise Function Application: apply()
Aggregation API: agg() and transform()
Applying Elementwise Functions: applymap()
Tablewise function application#
DataFrames and Series can be passed into functions.
However, if the function needs to be called in a chain, consider using the pipe() method.
First some setup:
In [142]: def extract_city_name(df):
.....: """
.....: Chicago, IL -> Chicago for city_name column
.....: """
.....: df["city_name"] = df["city_and_code"].str.split(",").str.get(0)
.....: return df
.....:
In [143]: def add_country_name(df, country_name=None):
.....: """
.....: Chicago -> Chicago-US for city_name column
.....: """
.....: col = "city_name"
.....: df["city_and_country"] = df[col] + country_name
.....: return df
.....:
In [144]: df_p = pd.DataFrame({"city_and_code": ["Chicago, IL"]})
extract_city_name and add_country_name are functions taking and returning DataFrames.
Now compare the following:
In [145]: add_country_name(extract_city_name(df_p), country_name="US")
Out[145]:
city_and_code city_name city_and_country
0 Chicago, IL Chicago ChicagoUS
Is equivalent to:
In [146]: df_p.pipe(extract_city_name).pipe(add_country_name, country_name="US")
Out[146]:
city_and_code city_name city_and_country
0 Chicago, IL Chicago ChicagoUS
pandas encourages the second style, which is known as method chaining.
pipe makes it easy to use your own or another library’s functions
in method chains, alongside pandas’ methods.
In the example above, the functions extract_city_name and add_country_name each expected a DataFrame as the first positional argument.
What if the function you wish to apply takes its data as, say, the second argument?
In this case, provide pipe with a tuple of (callable, data_keyword).
.pipe will route the DataFrame to the argument specified in the tuple.
For example, we can fit a regression using statsmodels. Their API expects a formula first and a DataFrame as the second argument, data. We pass in the function, keyword pair (sm.ols, 'data') to pipe:
In [147]: import statsmodels.formula.api as sm
In [148]: bb = pd.read_csv("data/baseball.csv", index_col="id")
In [149]: (
.....: bb.query("h > 0")
.....: .assign(ln_h=lambda df: np.log(df.h))
.....: .pipe((sm.ols, "data"), "hr ~ ln_h + year + g + C(lg)")
.....: .fit()
.....: .summary()
.....: )
.....:
Out[149]:
<class 'statsmodels.iolib.summary.Summary'>
"""
OLS Regression Results
==============================================================================
Dep. Variable: hr R-squared: 0.685
Model: OLS Adj. R-squared: 0.665
Method: Least Squares F-statistic: 34.28
Date: Thu, 19 Jan 2023 Prob (F-statistic): 3.48e-15
Time: 05:09:40 Log-Likelihood: -205.92
No. Observations: 68 AIC: 421.8
Df Residuals: 63 BIC: 432.9
Df Model: 4
Covariance Type: nonrobust
===============================================================================
coef std err t P>|t| [0.025 0.975]
-------------------------------------------------------------------------------
Intercept -8484.7720 4664.146 -1.819 0.074 -1.78e+04 835.780
C(lg)[T.NL] -2.2736 1.325 -1.716 0.091 -4.922 0.375
ln_h -1.3542 0.875 -1.547 0.127 -3.103 0.395
year 4.2277 2.324 1.819 0.074 -0.417 8.872
g 0.1841 0.029 6.258 0.000 0.125 0.243
==============================================================================
Omnibus: 10.875 Durbin-Watson: 1.999
Prob(Omnibus): 0.004 Jarque-Bera (JB): 17.298
Skew: 0.537 Prob(JB): 0.000175
Kurtosis: 5.225 Cond. No. 1.49e+07
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 1.49e+07. This might indicate that there are
strong multicollinearity or other numerical problems.
"""
The pipe method is inspired by unix pipes and more recently dplyr and magrittr, which
have introduced the popular (%>%) (read pipe) operator for R.
The implementation of pipe here is quite clean and feels right at home in Python.
We encourage you to view the source code of pipe().
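For a smaller illustration of the (callable, data_keyword) form that does not require statsmodels, consider the sketch below; describe_data is a hypothetical helper that takes its DataFrame through the data keyword:
# describe_data is a hypothetical helper, not part of pandas or statsmodels
def describe_data(prefix, data=None):
    return f"{prefix}: {len(data)} rows, {len(data.columns)} columns"

# pipe routes df_p to the "data" keyword; "cities" is passed positionally
df_p.pipe((describe_data, "data"), "cities")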
Row or column-wise function application#
Arbitrary functions can be applied along the axes of a DataFrame
using the apply() method, which, like the descriptive
statistics methods, takes an optional axis argument:
In [150]: df.apply(np.mean)
Out[150]:
one 0.811094
two 1.360588
three 0.187958
dtype: float64
In [151]: df.apply(np.mean, axis=1)
Out[151]:
a 1.583749
b 0.734929
c 1.133683
d -0.166914
dtype: float64
In [152]: df.apply(lambda x: x.max() - x.min())
Out[152]:
one 1.051928
two 1.632779
three 1.840607
dtype: float64
In [153]: df.apply(np.cumsum)
Out[153]:
one two three
a 1.394981 1.772517 NaN
b 1.738035 3.684640 -0.050390
c 2.433281 5.163008 1.177045
d NaN 5.442353 0.563873
In [154]: df.apply(np.exp)
Out[154]:
one two three
a 4.034899 5.885648 NaN
b 1.409244 6.767440 0.950858
c 2.004201 4.385785 3.412466
d NaN 1.322262 0.541630
The apply() method will also dispatch on a string method name.
In [155]: df.apply("mean")
Out[155]:
one 0.811094
two 1.360588
three 0.187958
dtype: float64
In [156]: df.apply("mean", axis=1)
Out[156]:
a 1.583749
b 0.734929
c 1.133683
d -0.166914
dtype: float64
The return type of the function passed to apply() affects the
type of the final output from DataFrame.apply for the default behaviour:
If the applied function returns a Series, the final output is a DataFrame.
The columns match the index of the Series returned by the applied function.
If the applied function returns any other type, the final output is a Series.
This default behaviour can be overridden using the result_type argument, which
accepts three options: reduce, broadcast, and expand.
These determine how list-like return values are expanded (or not) to a DataFrame.
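As a small sketch of result_type="expand" (reusing df from above), returning a list per row produces one column per list element instead of a single column of lists:
# one column per element of the returned list
df.apply(lambda x: [x.min(), x.max()], axis=1, result_type="expand")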
apply() combined with some cleverness can be used to answer many questions
about a data set. For example, suppose we wanted to extract the date where the
maximum value for each column occurred:
In [157]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: columns=["A", "B", "C"],
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: )
.....:
In [158]: tsdf.apply(lambda x: x.idxmax())
Out[158]:
A 2000-08-06
B 2001-01-18
C 2001-07-18
dtype: datetime64[ns]
You may also pass additional arguments and keyword arguments to the apply()
method. For instance, consider the following function you would like to apply:
def subtract_and_divide(x, sub, divide=1):
return (x - sub) / divide
You may then apply this function as follows:
df.apply(subtract_and_divide, args=(5,), divide=3)
Another useful feature is the ability to pass Series methods to carry out some
Series operation on each column or row:
In [159]: tsdf
Out[159]:
A B C
2000-01-01 -0.158131 -0.232466 0.321604
2000-01-02 -1.810340 -3.105758 0.433834
2000-01-03 -1.209847 -1.156793 -0.136794
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 -0.653602 0.178875 1.008298
2000-01-09 1.007996 0.462824 0.254472
2000-01-10 0.307473 0.600337 1.643950
In [160]: tsdf.apply(pd.Series.interpolate)
Out[160]:
A B C
2000-01-01 -0.158131 -0.232466 0.321604
2000-01-02 -1.810340 -3.105758 0.433834
2000-01-03 -1.209847 -1.156793 -0.136794
2000-01-04 -1.098598 -0.889659 0.092225
2000-01-05 -0.987349 -0.622526 0.321243
2000-01-06 -0.876100 -0.355392 0.550262
2000-01-07 -0.764851 -0.088259 0.779280
2000-01-08 -0.653602 0.178875 1.008298
2000-01-09 1.007996 0.462824 0.254472
2000-01-10 0.307473 0.600337 1.643950
Finally, apply() takes an argument raw which is False by default; this
converts each row or column into a Series before applying the function. When
set to True, the passed function will instead receive an ndarray object, which
can improve performance if you do not need the indexing functionality.
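A small sketch of raw=True, reusing df from above. Because the function now sees a plain ndarray, missing values are no longer skipped automatically:
# each column is passed as an ndarray; columns containing NaN sum to NaN here,
# unlike df.sum(), which skips missing values
df.apply(np.sum, raw=True)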
Aggregation API#
The aggregation API allows one to express possibly multiple aggregation operations in a single concise way.
This API is similar across pandas objects, see groupby API, the
window API, and the resample API.
The entry point for aggregation is DataFrame.aggregate(), or the alias
DataFrame.agg().
We will use a similar starting frame from above:
In [161]: tsdf = pd.DataFrame(
.....: np.random.randn(10, 3),
.....: columns=["A", "B", "C"],
.....: index=pd.date_range("1/1/2000", periods=10),
.....: )
.....:
In [162]: tsdf.iloc[3:7] = np.nan
In [163]: tsdf
Out[163]:
A B C
2000-01-01 1.257606 1.004194 0.167574
2000-01-02 -0.749892 0.288112 -0.757304
2000-01-03 -0.207550 -0.298599 0.116018
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.814347 -0.257623 0.869226
2000-01-09 -0.250663 -1.206601 0.896839
2000-01-10 2.169758 -1.333363 0.283157
Using a single function is equivalent to apply(). You can also
pass named methods as strings. These will return a Series of the aggregated
output:
In [164]: tsdf.agg(np.sum)
Out[164]:
A 3.033606
B -1.803879
C 1.575510
dtype: float64
In [165]: tsdf.agg("sum")
Out[165]:
A 3.033606
B -1.803879
C 1.575510
dtype: float64
# these are equivalent to a ``.sum()`` because we are aggregating
# on a single function
In [166]: tsdf.sum()
Out[166]:
A 3.033606
B -1.803879
C 1.575510
dtype: float64
A single aggregation on a Series will return a scalar value:
In [167]: tsdf["A"].agg("sum")
Out[167]: 3.033606102414146
Aggregating with multiple functions#
You can pass multiple aggregation arguments as a list.
The results of each of the passed functions will be a row in the resulting DataFrame.
These are naturally named from the aggregation function.
In [168]: tsdf.agg(["sum"])
Out[168]:
A B C
sum 3.033606 -1.803879 1.57551
Multiple functions yield multiple rows:
In [169]: tsdf.agg(["sum", "mean"])
Out[169]:
A B C
sum 3.033606 -1.803879 1.575510
mean 0.505601 -0.300647 0.262585
On a Series, multiple functions return a Series, indexed by the function names:
In [170]: tsdf["A"].agg(["sum", "mean"])
Out[170]:
sum 3.033606
mean 0.505601
Name: A, dtype: float64
Passing a lambda function will yield a <lambda> named row:
In [171]: tsdf["A"].agg(["sum", lambda x: x.mean()])
Out[171]:
sum 3.033606
<lambda> 0.505601
Name: A, dtype: float64
Passing a named function will yield that name for the row:
In [172]: def mymean(x):
.....: return x.mean()
.....:
In [173]: tsdf["A"].agg(["sum", mymean])
Out[173]:
sum 3.033606
mymean 0.505601
Name: A, dtype: float64
Aggregating with a dict#
Passing a dictionary of column names to a scalar or a list of scalars to DataFrame.agg
allows you to customize which functions are applied to which columns. Note that the results
are not in any particular order; you can use an OrderedDict instead to guarantee ordering.
In [174]: tsdf.agg({"A": "mean", "B": "sum"})
Out[174]:
A 0.505601
B -1.803879
dtype: float64
Passing a list-like will generate a DataFrame output. You will get a matrix-like output
of all of the aggregators. The output will consist of all unique functions. Those that are
not noted for a particular column will be NaN:
In [175]: tsdf.agg({"A": ["mean", "min"], "B": "sum"})
Out[175]:
A B
mean 0.505601 NaN
min -0.749892 NaN
sum NaN -1.803879
Mixed dtypes#
Deprecated since version 1.4.0: Attempting to determine which columns cannot be aggregated and silently dropping them from the results is deprecated and will be removed in a future version. If any portion of the columns or operations provided fails, the call to .agg will raise.
When presented with mixed dtypes that cannot aggregate, .agg will only take the valid
aggregations. This is similar to how .groupby.agg works.
In [176]: mdf = pd.DataFrame(
.....: {
.....: "A": [1, 2, 3],
.....: "B": [1.0, 2.0, 3.0],
.....: "C": ["foo", "bar", "baz"],
.....: "D": pd.date_range("20130101", periods=3),
.....: }
.....: )
.....:
In [177]: mdf.dtypes
Out[177]:
A int64
B float64
C object
D datetime64[ns]
dtype: object
In [178]: mdf.agg(["min", "sum"])
Out[178]:
A B C D
min 1 1.0 bar 2013-01-01
sum 6 6.0 foobarbaz NaT
Custom describe#
With .agg() it is possible to easily create a custom describe function, similar
to the built-in describe function.
In [179]: from functools import partial
In [180]: q_25 = partial(pd.Series.quantile, q=0.25)
In [181]: q_25.__name__ = "25%"
In [182]: q_75 = partial(pd.Series.quantile, q=0.75)
In [183]: q_75.__name__ = "75%"
In [184]: tsdf.agg(["count", "mean", "std", "min", q_25, "median", q_75, "max"])
Out[184]:
A B C
count 6.000000 6.000000 6.000000
mean 0.505601 -0.300647 0.262585
std 1.103362 0.887508 0.606860
min -0.749892 -1.333363 -0.757304
25% -0.239885 -0.979600 0.128907
median 0.303398 -0.278111 0.225365
75% 1.146791 0.151678 0.722709
max 2.169758 1.004194 0.896839
Transform API#
The transform() method returns an object that is indexed the same (same size)
as the original. This API allows you to provide multiple operations at the same
time rather than one-by-one. Its API is quite similar to the .agg API.
We create a frame similar to the one used in the above sections.
In [185]: tsdf = pd.DataFrame(
.....: np.random.randn(10, 3),
.....: columns=["A", "B", "C"],
.....: index=pd.date_range("1/1/2000", periods=10),
.....: )
.....:
In [186]: tsdf.iloc[3:7] = np.nan
In [187]: tsdf
Out[187]:
A B C
2000-01-01 -0.428759 -0.864890 -0.675341
2000-01-02 -0.168731 1.338144 -1.279321
2000-01-03 -1.621034 0.438107 0.903794
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.254374 -1.240447 -0.201052
2000-01-09 -0.157795 0.791197 -1.144209
2000-01-10 -0.030876 0.371900 0.061932
Transform the entire frame. .transform() accepts a NumPy function, a string
function name, or a user-defined function.
In [188]: tsdf.transform(np.abs)
Out[188]:
A B C
2000-01-01 0.428759 0.864890 0.675341
2000-01-02 0.168731 1.338144 1.279321
2000-01-03 1.621034 0.438107 0.903794
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.254374 1.240447 0.201052
2000-01-09 0.157795 0.791197 1.144209
2000-01-10 0.030876 0.371900 0.061932
In [189]: tsdf.transform("abs")
Out[189]:
A B C
2000-01-01 0.428759 0.864890 0.675341
2000-01-02 0.168731 1.338144 1.279321
2000-01-03 1.621034 0.438107 0.903794
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.254374 1.240447 0.201052
2000-01-09 0.157795 0.791197 1.144209
2000-01-10 0.030876 0.371900 0.061932
In [190]: tsdf.transform(lambda x: x.abs())
Out[190]:
A B C
2000-01-01 0.428759 0.864890 0.675341
2000-01-02 0.168731 1.338144 1.279321
2000-01-03 1.621034 0.438107 0.903794
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.254374 1.240447 0.201052
2000-01-09 0.157795 0.791197 1.144209
2000-01-10 0.030876 0.371900 0.061932
Here transform() received a single function; this is equivalent to a ufunc application.
In [191]: np.abs(tsdf)
Out[191]:
A B C
2000-01-01 0.428759 0.864890 0.675341
2000-01-02 0.168731 1.338144 1.279321
2000-01-03 1.621034 0.438107 0.903794
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.254374 1.240447 0.201052
2000-01-09 0.157795 0.791197 1.144209
2000-01-10 0.030876 0.371900 0.061932
Passing a single function to .transform() with a Series will yield a single Series in return.
In [192]: tsdf["A"].transform(np.abs)
Out[192]:
2000-01-01 0.428759
2000-01-02 0.168731
2000-01-03 1.621034
2000-01-04 NaN
2000-01-05 NaN
2000-01-06 NaN
2000-01-07 NaN
2000-01-08 0.254374
2000-01-09 0.157795
2000-01-10 0.030876
Freq: D, Name: A, dtype: float64
Transform with multiple functions#
Passing multiple functions will yield a column MultiIndexed DataFrame.
The first level will be the original frame column names; the second level
will be the names of the transforming functions.
In [193]: tsdf.transform([np.abs, lambda x: x + 1])
Out[193]:
A B C
absolute <lambda> absolute <lambda> absolute <lambda>
2000-01-01 0.428759 0.571241 0.864890 0.135110 0.675341 0.324659
2000-01-02 0.168731 0.831269 1.338144 2.338144 1.279321 -0.279321
2000-01-03 1.621034 -0.621034 0.438107 1.438107 0.903794 1.903794
2000-01-04 NaN NaN NaN NaN NaN NaN
2000-01-05 NaN NaN NaN NaN NaN NaN
2000-01-06 NaN NaN NaN NaN NaN NaN
2000-01-07 NaN NaN NaN NaN NaN NaN
2000-01-08 0.254374 1.254374 1.240447 -0.240447 0.201052 0.798948
2000-01-09 0.157795 0.842205 0.791197 1.791197 1.144209 -0.144209
2000-01-10 0.030876 0.969124 0.371900 1.371900 0.061932 1.061932
Passing multiple functions to a Series will yield a DataFrame. The
resulting column names will be the transforming functions.
In [194]: tsdf["A"].transform([np.abs, lambda x: x + 1])
Out[194]:
absolute <lambda>
2000-01-01 0.428759 0.571241
2000-01-02 0.168731 0.831269
2000-01-03 1.621034 -0.621034
2000-01-04 NaN NaN
2000-01-05 NaN NaN
2000-01-06 NaN NaN
2000-01-07 NaN NaN
2000-01-08 0.254374 1.254374
2000-01-09 0.157795 0.842205
2000-01-10 0.030876 0.969124
Transforming with a dict#
Passing a dict of functions will allow selective transforming per column.
In [195]: tsdf.transform({"A": np.abs, "B": lambda x: x + 1})
Out[195]:
A B
2000-01-01 0.428759 0.135110
2000-01-02 0.168731 2.338144
2000-01-03 1.621034 1.438107
2000-01-04 NaN NaN
2000-01-05 NaN NaN
2000-01-06 NaN NaN
2000-01-07 NaN NaN
2000-01-08 0.254374 -0.240447
2000-01-09 0.157795 1.791197
2000-01-10 0.030876 1.371900
Passing a dict of lists will generate a MultiIndexed DataFrame with these
selective transforms.
In [196]: tsdf.transform({"A": np.abs, "B": [lambda x: x + 1, "sqrt"]})
Out[196]:
A B
absolute <lambda> sqrt
2000-01-01 0.428759 0.135110 NaN
2000-01-02 0.168731 2.338144 1.156782
2000-01-03 1.621034 1.438107 0.661897
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.254374 -0.240447 NaN
2000-01-09 0.157795 1.791197 0.889493
2000-01-10 0.030876 1.371900 0.609836
Applying elementwise functions#
Since not all functions can be vectorized (accept NumPy arrays and return
another array or value), the methods applymap() on DataFrame
and analogously map() on Series accept any Python function taking
a single value and returning a single value. For example:
In [197]: df4
Out[197]:
one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172
In [198]: def f(x):
.....: return len(str(x))
.....:
In [199]: df4["one"].map(f)
Out[199]:
a 18
b 19
c 18
d 3
Name: one, dtype: int64
In [200]: df4.applymap(f)
Out[200]:
one two three
a 18 17 3
b 19 18 20
c 18 18 16
d 3 19 19
Series.map() has an additional feature; it can be used to easily
“link” or “map” values defined by a secondary series. This is closely related
to merging/joining functionality:
In [201]: s = pd.Series(
.....: ["six", "seven", "six", "seven", "six"], index=["a", "b", "c", "d", "e"]
.....: )
.....:
In [202]: t = pd.Series({"six": 6.0, "seven": 7.0})
In [203]: s
Out[203]:
a six
b seven
c six
d seven
e six
dtype: object
In [204]: s.map(t)
Out[204]:
a 6.0
b 7.0
c 6.0
d 7.0
e 6.0
dtype: float64
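Passing a plain dict works the same way; labels without a matching key become NaN. A small sketch reusing s from above:
# dict lookups behave like mapping through a Series
s.map({"six": 6.0, "seven": 7.0})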
Reindexing and altering labels#
reindex() is the fundamental data alignment method in pandas.
It is used to implement nearly all other features relying on label-alignment
functionality. To reindex means to conform the data to match a given set of
labels along a particular axis. This accomplishes several things:
Reorders the existing data to match a new set of labels
Inserts missing value (NA) markers in label locations where no data for
that label existed
If specified, fill data for missing labels using logic (highly relevant
to working with time series data)
Here is a simple example:
In [205]: s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])
In [206]: s
Out[206]:
a 1.695148
b 1.328614
c 1.234686
d -0.385845
e -1.326508
dtype: float64
In [207]: s.reindex(["e", "b", "f", "d"])
Out[207]:
e -1.326508
b 1.328614
f NaN
d -0.385845
dtype: float64
Here, the f label was not contained in the Series and hence appears as
NaN in the result.
With a DataFrame, you can simultaneously reindex the index and columns:
In [208]: df
Out[208]:
one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172
In [209]: df.reindex(index=["c", "f", "b"], columns=["three", "two", "one"])
Out[209]:
three two one
c 1.227435 1.478369 0.695246
f NaN NaN NaN
b -0.050390 1.912123 0.343054
You may also use reindex with an axis keyword:
In [210]: df.reindex(["c", "f", "b"], axis="index")
Out[210]:
one two three
c 0.695246 1.478369 1.227435
f NaN NaN NaN
b 0.343054 1.912123 -0.050390
Note that the Index objects containing the actual axis labels can be
shared between objects. So if we have a Series and a DataFrame, the
following can be done:
In [211]: rs = s.reindex(df.index)
In [212]: rs
Out[212]:
a 1.695148
b 1.328614
c 1.234686
d -0.385845
dtype: float64
In [213]: rs.index is df.index
Out[213]: True
This means that the reindexed Series’s index is the same Python object as the
DataFrame’s index.
DataFrame.reindex() also supports an “axis-style” calling convention,
where you specify a single labels argument and the axis it applies to.
In [214]: df.reindex(["c", "f", "b"], axis="index")
Out[214]:
one two three
c 0.695246 1.478369 1.227435
f NaN NaN NaN
b 0.343054 1.912123 -0.050390
In [215]: df.reindex(["three", "two", "one"], axis="columns")
Out[215]:
three two one
a NaN 1.772517 1.394981
b -0.050390 1.912123 0.343054
c 1.227435 1.478369 0.695246
d -0.613172 0.279344 NaN
See also
MultiIndex / Advanced Indexing is an even more concise way of
doing reindexing.
Note
When writing performance-sensitive code, there is a good reason to spend
some time becoming a reindexing ninja: many operations are faster on
pre-aligned data. Adding two unaligned DataFrames internally triggers a
reindexing step. For exploratory analysis you will hardly notice the
difference (because reindex has been heavily optimized), but when CPU
cycles matter sprinkling a few explicit reindex calls here and there can
have an impact.
Reindexing to align with another object#
You may wish to take an object and reindex its axes to be labeled the same as
another object. While the syntax for this is straightforward albeit verbose, it
is a common enough operation that the reindex_like() method is
available to make this simpler:
In [216]: df2
Out[216]:
one two
a 1.394981 1.772517
b 0.343054 1.912123
c 0.695246 1.478369
In [217]: df3
Out[217]:
one two
a 0.583888 0.051514
b -0.468040 0.191120
c -0.115848 -0.242634
In [218]: df.reindex_like(df2)
Out[218]:
one two
a 1.394981 1.772517
b 0.343054 1.912123
c 0.695246 1.478369
Aligning objects with each other with align#
The align() method is the fastest way to simultaneously align two objects. It
supports a join argument (related to joining and merging):
join='outer': take the union of the indexes (default)
join='left': use the calling object’s index
join='right': use the passed object’s index
join='inner': intersect the indexes
It returns a tuple with both of the reindexed Series:
In [219]: s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])
In [220]: s1 = s[:4]
In [221]: s2 = s[1:]
In [222]: s1.align(s2)
Out[222]:
(a -0.186646
b -1.692424
c -0.303893
d -1.425662
e NaN
dtype: float64,
a NaN
b -1.692424
c -0.303893
d -1.425662
e 1.114285
dtype: float64)
In [223]: s1.align(s2, join="inner")
Out[223]:
(b -1.692424
c -0.303893
d -1.425662
dtype: float64,
b -1.692424
c -0.303893
d -1.425662
dtype: float64)
In [224]: s1.align(s2, join="left")
Out[224]:
(a -0.186646
b -1.692424
c -0.303893
d -1.425662
dtype: float64,
a NaN
b -1.692424
c -0.303893
d -1.425662
dtype: float64)
For DataFrames, the join method will be applied to both the index and the
columns by default:
In [225]: df.align(df2, join="inner")
Out[225]:
( one two
a 1.394981 1.772517
b 0.343054 1.912123
c 0.695246 1.478369,
one two
a 1.394981 1.772517
b 0.343054 1.912123
c 0.695246 1.478369)
You can also pass an axis option to only align on the specified axis:
In [226]: df.align(df2, join="inner", axis=0)
Out[226]:
( one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435,
one two
a 1.394981 1.772517
b 0.343054 1.912123
c 0.695246 1.478369)
If you pass a Series to DataFrame.align(), you can choose to align both
objects either on the DataFrame’s index or columns using the axis argument:
In [227]: df.align(df2.iloc[0], axis=1)
Out[227]:
( one three two
a 1.394981 NaN 1.772517
b 0.343054 -0.050390 1.912123
c 0.695246 1.227435 1.478369
d NaN -0.613172 0.279344,
one 1.394981
three NaN
two 1.772517
Name: a, dtype: float64)
Filling while reindexing#
reindex() takes an optional parameter method which is a
filling method chosen from the following table:
Method
Action
pad / ffill
Fill values forward
bfill / backfill
Fill values backward
nearest
Fill from the nearest index value
We illustrate these fill methods on a simple Series:
In [228]: rng = pd.date_range("1/3/2000", periods=8)
In [229]: ts = pd.Series(np.random.randn(8), index=rng)
In [230]: ts2 = ts[[0, 3, 6]]
In [231]: ts
Out[231]:
2000-01-03 0.183051
2000-01-04 0.400528
2000-01-05 -0.015083
2000-01-06 2.395489
2000-01-07 1.414806
2000-01-08 0.118428
2000-01-09 0.733639
2000-01-10 -0.936077
Freq: D, dtype: float64
In [232]: ts2
Out[232]:
2000-01-03 0.183051
2000-01-06 2.395489
2000-01-09 0.733639
Freq: 3D, dtype: float64
In [233]: ts2.reindex(ts.index)
Out[233]:
2000-01-03 0.183051
2000-01-04 NaN
2000-01-05 NaN
2000-01-06 2.395489
2000-01-07 NaN
2000-01-08 NaN
2000-01-09 0.733639
2000-01-10 NaN
Freq: D, dtype: float64
In [234]: ts2.reindex(ts.index, method="ffill")
Out[234]:
2000-01-03 0.183051
2000-01-04 0.183051
2000-01-05 0.183051
2000-01-06 2.395489
2000-01-07 2.395489
2000-01-08 2.395489
2000-01-09 0.733639
2000-01-10 0.733639
Freq: D, dtype: float64
In [235]: ts2.reindex(ts.index, method="bfill")
Out[235]:
2000-01-03 0.183051
2000-01-04 2.395489
2000-01-05 2.395489
2000-01-06 2.395489
2000-01-07 0.733639
2000-01-08 0.733639
2000-01-09 0.733639
2000-01-10 NaN
Freq: D, dtype: float64
In [236]: ts2.reindex(ts.index, method="nearest")
Out[236]:
2000-01-03 0.183051
2000-01-04 0.183051
2000-01-05 2.395489
2000-01-06 2.395489
2000-01-07 2.395489
2000-01-08 0.733639
2000-01-09 0.733639
2000-01-10 0.733639
Freq: D, dtype: float64
These methods require that the indexes are ordered increasing or
decreasing.
Note that the same result could have been achieved using
fillna (except for method='nearest') or
interpolate:
In [237]: ts2.reindex(ts.index).fillna(method="ffill")
Out[237]:
2000-01-03 0.183051
2000-01-04 0.183051
2000-01-05 0.183051
2000-01-06 2.395489
2000-01-07 2.395489
2000-01-08 2.395489
2000-01-09 0.733639
2000-01-10 0.733639
Freq: D, dtype: float64
reindex() will raise a ValueError if the index is not monotonically
increasing or decreasing. fillna() and interpolate()
will not perform any checks on the order of the index.
Limits on filling while reindexing#
The limit and tolerance arguments provide additional control over
filling while reindexing. Limit specifies the maximum count of consecutive
matches:
In [238]: ts2.reindex(ts.index, method="ffill", limit=1)
Out[238]:
2000-01-03 0.183051
2000-01-04 0.183051
2000-01-05 NaN
2000-01-06 2.395489
2000-01-07 2.395489
2000-01-08 NaN
2000-01-09 0.733639
2000-01-10 0.733639
Freq: D, dtype: float64
In contrast, tolerance specifies the maximum distance between the index and
indexer values:
In [239]: ts2.reindex(ts.index, method="ffill", tolerance="1 day")
Out[239]:
2000-01-03 0.183051
2000-01-04 0.183051
2000-01-05 NaN
2000-01-06 2.395489
2000-01-07 2.395489
2000-01-08 NaN
2000-01-09 0.733639
2000-01-10 0.733639
Freq: D, dtype: float64
Notice that when used on a DatetimeIndex, TimedeltaIndex or
PeriodIndex, tolerance will be coerced into a Timedelta if possible.
This allows you to specify tolerance with appropriate strings.
Dropping labels from an axis#
A method closely related to reindex is the drop() function.
It removes a set of labels from an axis:
In [240]: df
Out[240]:
one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172
In [241]: df.drop(["a", "d"], axis=0)
Out[241]:
one two three
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
In [242]: df.drop(["one"], axis=1)
Out[242]:
two three
a 1.772517 NaN
b 1.912123 -0.050390
c 1.478369 1.227435
d 0.279344 -0.613172
Note that the following also works, but is a bit less obvious / clean:
In [243]: df.reindex(df.index.difference(["a", "d"]))
Out[243]:
one two three
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
Renaming / mapping labels#
The rename() method allows you to relabel an axis based on some
mapping (a dict or Series) or an arbitrary function.
In [244]: s
Out[244]:
a -0.186646
b -1.692424
c -0.303893
d -1.425662
e 1.114285
dtype: float64
In [245]: s.rename(str.upper)
Out[245]:
A -0.186646
B -1.692424
C -0.303893
D -1.425662
E 1.114285
dtype: float64
If you pass a function, it must return a value when called with any of the
labels (and must produce a set of unique values). A dict or
Series can also be used:
In [246]: df.rename(
.....: columns={"one": "foo", "two": "bar"},
.....: index={"a": "apple", "b": "banana", "d": "durian"},
.....: )
.....:
Out[246]:
foo bar three
apple 1.394981 1.772517 NaN
banana 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
durian NaN 0.279344 -0.613172
If the mapping doesn’t include a column/index label, it isn’t renamed. Note that
extra labels in the mapping don’t throw an error.
DataFrame.rename() also supports an “axis-style” calling convention, where
you specify a single mapper and the axis to apply that mapping to.
In [247]: df.rename({"one": "foo", "two": "bar"}, axis="columns")
Out[247]:
foo bar three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172
In [248]: df.rename({"a": "apple", "b": "banana", "d": "durian"}, axis="index")
Out[248]:
one two three
apple 1.394981 1.772517 NaN
banana 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
durian NaN 0.279344 -0.613172
The rename() method also provides an inplace named
parameter that is by default False and copies the underlying data. Pass
inplace=True to rename the data in place.
Finally, rename() also accepts a scalar or list-like
for altering the Series.name attribute.
In [249]: s.rename("scalar-name")
Out[249]:
a -0.186646
b -1.692424
c -0.303893
d -1.425662
e 1.114285
Name: scalar-name, dtype: float64
The methods DataFrame.rename_axis() and Series.rename_axis()
allow specific names of a MultiIndex to be changed (as opposed to the
labels).
In [250]: df = pd.DataFrame(
.....: {"x": [1, 2, 3, 4, 5, 6], "y": [10, 20, 30, 40, 50, 60]},
.....: index=pd.MultiIndex.from_product(
.....: [["a", "b", "c"], [1, 2]], names=["let", "num"]
.....: ),
.....: )
.....:
In [251]: df
Out[251]:
x y
let num
a 1 1 10
2 2 20
b 1 3 30
2 4 40
c 1 5 50
2 6 60
In [252]: df.rename_axis(index={"let": "abc"})
Out[252]:
x y
abc num
a 1 1 10
2 2 20
b 1 3 30
2 4 40
c 1 5 50
2 6 60
In [253]: df.rename_axis(index=str.upper)
Out[253]:
x y
LET NUM
a 1 1 10
2 2 20
b 1 3 30
2 4 40
c 1 5 50
2 6 60
Iteration#
The behavior of basic iteration over pandas objects depends on the type.
When iterating over a Series, it is regarded as array-like, and basic iteration
produces the values. DataFrames follow the dict-like convention of iterating
over the “keys” of the objects.
In short, basic iteration (for i in object) produces:
Series: values
DataFrame: column labels
Thus, for example, iterating over a DataFrame gives you the column names:
In [254]: df = pd.DataFrame(
.....: {"col1": np.random.randn(3), "col2": np.random.randn(3)}, index=["a", "b", "c"]
.....: )
.....:
In [255]: for col in df:
.....: print(col)
.....:
col1
col2
pandas objects also have the dict-like items() method to
iterate over the (key, value) pairs.
To iterate over the rows of a DataFrame, you can use the following methods:
iterrows(): Iterate over the rows of a DataFrame as (index, Series) pairs.
This converts the rows to Series objects, which can change the dtypes and has some
performance implications.
itertuples(): Iterate over the rows of a DataFrame
as namedtuples of the values. This is a lot faster than
iterrows(), and is in most cases preferable to use
to iterate over the values of a DataFrame.
Warning
Iterating through pandas objects is generally slow. In many cases,
iterating manually over the rows is not needed and can be avoided with
one of the following approaches:
Look for a vectorized solution: many operations can be performed using
built-in methods or NumPy functions, (boolean) indexing, …
When you have a function that cannot work on the full DataFrame/Series
at once, it is better to use apply() instead of iterating
over the values. See the docs on function application.
If you need to do iterative manipulations on the values but performance is
important, consider writing the inner loop with cython or numba.
See the enhancing performance section for some
examples of this approach.
Warning
You should never modify something you are iterating over.
This is not guaranteed to work in all cases. Depending on the
data types, the iterator returns a copy and not a view, and writing
to it will have no effect!
For example, in the following case setting the value has no effect:
In [256]: df = pd.DataFrame({"a": [1, 2, 3], "b": ["a", "b", "c"]})
In [257]: for index, row in df.iterrows():
.....: row["a"] = 10
.....:
In [258]: df
Out[258]:
a b
0 1 a
1 2 b
2 3 c
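To actually modify the data, operate on the DataFrame itself instead of on the row copies. A small sketch with the same frame:
# set a whole column at once
df["a"] = 10

# or set only the rows you want via boolean indexing
df.loc[df["b"] == "b", "a"] = 10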
items#
Consistent with the dict-like interface, items() iterates
through key-value pairs:
Series: (index, scalar value) pairs
DataFrame: (column, Series) pairs
For example:
In [259]: for label, ser in df.items():
.....: print(label)
.....: print(ser)
.....:
a
0 1
1 2
2 3
Name: a, dtype: int64
b
0 a
1 b
2 c
Name: b, dtype: object
iterrows#
iterrows() allows you to iterate through the rows of a
DataFrame as Series objects. It returns an iterator yielding each
index value along with a Series containing the data in each row:
In [260]: for row_index, row in df.iterrows():
.....: print(row_index, row, sep="\n")
.....:
0
a 1
b a
Name: 0, dtype: object
1
a 2
b b
Name: 1, dtype: object
2
a 3
b c
Name: 2, dtype: object
Note
Because iterrows() returns a Series for each row,
it does not preserve dtypes across the rows (dtypes are
preserved across columns for DataFrames). For example,
In [261]: df_orig = pd.DataFrame([[1, 1.5]], columns=["int", "float"])
In [262]: df_orig.dtypes
Out[262]:
int int64
float float64
dtype: object
In [263]: row = next(df_orig.iterrows())[1]
In [264]: row
Out[264]:
int 1.0
float 1.5
Name: 0, dtype: float64
All values in row, returned as a Series, are now upcast
to floats, including the original integer value in column int:
In [265]: row["int"].dtype
Out[265]: dtype('float64')
In [266]: df_orig["int"].dtype
Out[266]: dtype('int64')
To preserve dtypes while iterating over the rows, it is better
to use itertuples() which returns namedtuples of the values
and which is generally much faster than iterrows().
For instance, a contrived way to transpose the DataFrame would be:
In [267]: df2 = pd.DataFrame({"x": [1, 2, 3], "y": [4, 5, 6]})
In [268]: print(df2)
x y
0 1 4
1 2 5
2 3 6
In [269]: print(df2.T)
0 1 2
x 1 2 3
y 4 5 6
In [270]: df2_t = pd.DataFrame({idx: values for idx, values in df2.iterrows()})
In [271]: print(df2_t)
0 1 2
x 1 2 3
y 4 5 6
itertuples#
The itertuples() method will return an iterator
yielding a namedtuple for each row in the DataFrame. The first element
of the tuple will be the row’s corresponding index value, while the
remaining values are the row values.
For instance:
In [272]: for row in df.itertuples():
.....: print(row)
.....:
Pandas(Index=0, a=1, b='a')
Pandas(Index=1, a=2, b='b')
Pandas(Index=2, a=3, b='c')
This method does not convert the row to a Series object; it merely
returns the values inside a namedtuple. Therefore,
itertuples() preserves the data type of the values
and is generally faster than iterrows().
Note
The column names will be renamed to positional names if they are
invalid Python identifiers, repeated, or start with an underscore.
With a large number of columns (>255), regular tuples are returned.
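itertuples() also accepts index and name arguments; a small sketch with the same frame:
# index=False drops the index value; name=None yields plain tuples
for row in df.itertuples(index=False, name=None):
    print(row)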
.dt accessor#
Series has an accessor to succinctly return datetime like properties for the
values of the Series, if it is a datetime/period like Series.
This will return a Series, indexed like the existing Series.
# datetime
In [273]: s = pd.Series(pd.date_range("20130101 09:10:12", periods=4))
In [274]: s
Out[274]:
0 2013-01-01 09:10:12
1 2013-01-02 09:10:12
2 2013-01-03 09:10:12
3 2013-01-04 09:10:12
dtype: datetime64[ns]
In [275]: s.dt.hour
Out[275]:
0 9
1 9
2 9
3 9
dtype: int64
In [276]: s.dt.second
Out[276]:
0 12
1 12
2 12
3 12
dtype: int64
In [277]: s.dt.day
Out[277]:
0 1
1 2
2 3
3 4
dtype: int64
This enables nice expressions like this:
In [278]: s[s.dt.day == 2]
Out[278]:
1 2013-01-02 09:10:12
dtype: datetime64[ns]
You can easily produce tz-aware transformations:
In [279]: stz = s.dt.tz_localize("US/Eastern")
In [280]: stz
Out[280]:
0 2013-01-01 09:10:12-05:00
1 2013-01-02 09:10:12-05:00
2 2013-01-03 09:10:12-05:00
3 2013-01-04 09:10:12-05:00
dtype: datetime64[ns, US/Eastern]
In [281]: stz.dt.tz
Out[281]: <DstTzInfo 'US/Eastern' LMT-1 day, 19:04:00 STD>
You can also chain these types of operations:
In [282]: s.dt.tz_localize("UTC").dt.tz_convert("US/Eastern")
Out[282]:
0 2013-01-01 04:10:12-05:00
1 2013-01-02 04:10:12-05:00
2 2013-01-03 04:10:12-05:00
3 2013-01-04 04:10:12-05:00
dtype: datetime64[ns, US/Eastern]
You can also format datetime values as strings with Series.dt.strftime() which
supports the same format as the standard strftime().
# DatetimeIndex
In [283]: s = pd.Series(pd.date_range("20130101", periods=4))
In [284]: s
Out[284]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
3 2013-01-04
dtype: datetime64[ns]
In [285]: s.dt.strftime("%Y/%m/%d")
Out[285]:
0 2013/01/01
1 2013/01/02
2 2013/01/03
3 2013/01/04
dtype: object
# PeriodIndex
In [286]: s = pd.Series(pd.period_range("20130101", periods=4))
In [287]: s
Out[287]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
3 2013-01-04
dtype: period[D]
In [288]: s.dt.strftime("%Y/%m/%d")
Out[288]:
0 2013/01/01
1 2013/01/02
2 2013/01/03
3 2013/01/04
dtype: object
The .dt accessor works for period and timedelta dtypes.
# period
In [289]: s = pd.Series(pd.period_range("20130101", periods=4, freq="D"))
In [290]: s
Out[290]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
3 2013-01-04
dtype: period[D]
In [291]: s.dt.year
Out[291]:
0 2013
1 2013
2 2013
3 2013
dtype: int64
In [292]: s.dt.day
Out[292]:
0 1
1 2
2 3
3 4
dtype: int64
# timedelta
In [293]: s = pd.Series(pd.timedelta_range("1 day 00:00:05", periods=4, freq="s"))
In [294]: s
Out[294]:
0 1 days 00:00:05
1 1 days 00:00:06
2 1 days 00:00:07
3 1 days 00:00:08
dtype: timedelta64[ns]
In [295]: s.dt.days
Out[295]:
0 1
1 1
2 1
3 1
dtype: int64
In [296]: s.dt.seconds
Out[296]:
0 5
1 6
2 7
3 8
dtype: int64
In [297]: s.dt.components
Out[297]:
days hours minutes seconds milliseconds microseconds nanoseconds
0 1 0 0 5 0 0 0
1 1 0 0 6 0 0 0
2 1 0 0 7 0 0 0
3 1 0 0 8 0 0 0
Note
Series.dt will raise a TypeError if you access it with non-datetime-like values.
Vectorized string methods#
Series is equipped with a set of string processing methods that make it easy to
operate on each element of the array. Perhaps most importantly, these methods
exclude missing/NA values automatically. These are accessed via the Series’s
str attribute and generally have names matching the equivalent (scalar)
built-in string methods. For example:
In [298]: s = pd.Series(
.....: ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
.....: )
.....:
In [299]: s.str.lower()
Out[299]:
0 a
1 b
2 c
3 aaba
4 baca
5 <NA>
6 caba
7 dog
8 cat
dtype: string
Powerful pattern-matching methods are provided as well, but note that
pattern-matching generally uses regular expressions by default (and in some cases
always uses them).
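For example, a small sketch with the Series above; str.contains() treats its pattern as a regular expression by default:
# case-insensitive match for values starting with "a"; missing values stay <NA>
s.str.contains(r"^a", case=False)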
Note
Prior to pandas 1.0, string methods were only available on object -dtype
Series. pandas 1.0 added the StringDtype which is dedicated
to strings. See Text data types for more.
Please see Vectorized String Methods for a complete
description.
Sorting#
pandas supports three kinds of sorting: sorting by index labels,
sorting by column values, and sorting by a combination of both.
By index#
The Series.sort_index() and DataFrame.sort_index() methods are
used to sort a pandas object by its index levels.
In [300]: df = pd.DataFrame(
.....: {
.....: "one": pd.Series(np.random.randn(3), index=["a", "b", "c"]),
.....: "two": pd.Series(np.random.randn(4), index=["a", "b", "c", "d"]),
.....: "three": pd.Series(np.random.randn(3), index=["b", "c", "d"]),
.....: }
.....: )
.....:
In [301]: unsorted_df = df.reindex(
.....: index=["a", "d", "c", "b"], columns=["three", "two", "one"]
.....: )
.....:
In [302]: unsorted_df
Out[302]:
three two one
a NaN -1.152244 0.562973
d -0.252916 -0.109597 NaN
c 1.273388 -0.167123 0.640382
b -0.098217 0.009797 -1.299504
# DataFrame
In [303]: unsorted_df.sort_index()
Out[303]:
three two one
a NaN -1.152244 0.562973
b -0.098217 0.009797 -1.299504
c 1.273388 -0.167123 0.640382
d -0.252916 -0.109597 NaN
In [304]: unsorted_df.sort_index(ascending=False)
Out[304]:
three two one
d -0.252916 -0.109597 NaN
c 1.273388 -0.167123 0.640382
b -0.098217 0.009797 -1.299504
a NaN -1.152244 0.562973
In [305]: unsorted_df.sort_index(axis=1)
Out[305]:
one three two
a 0.562973 NaN -1.152244
d NaN -0.252916 -0.109597
c 0.640382 1.273388 -0.167123
b -1.299504 -0.098217 0.009797
# Series
In [306]: unsorted_df["three"].sort_index()
Out[306]:
a NaN
b -0.098217
c 1.273388
d -0.252916
Name: three, dtype: float64
New in version 1.1.0.
Sorting by index also supports a key parameter that takes a callable
function to apply to the index being sorted. For MultiIndex objects,
the key is applied per-level to the levels specified by level.
In [307]: s1 = pd.DataFrame({"a": ["B", "a", "C"], "b": [1, 2, 3], "c": [2, 3, 4]}).set_index(
.....: list("ab")
.....: )
.....:
In [308]: s1
Out[308]:
c
a b
B 1 2
a 2 3
C 3 4
In [309]: s1.sort_index(level="a")
Out[309]:
c
a b
B 1 2
C 3 4
a 2 3
In [310]: s1.sort_index(level="a", key=lambda idx: idx.str.lower())
Out[310]:
c
a b
a 2 3
B 1 2
C 3 4
For information on key sorting by value, see value sorting.
By values#
The Series.sort_values() method is used to sort a Series by its values. The
DataFrame.sort_values() method is used to sort a DataFrame by its column or row values.
The optional by parameter to DataFrame.sort_values() may be used to specify one or more columns
to use to determine the sorted order.
In [311]: df1 = pd.DataFrame(
.....: {"one": [2, 1, 1, 1], "two": [1, 3, 2, 4], "three": [5, 4, 3, 2]}
.....: )
.....:
In [312]: df1.sort_values(by="two")
Out[312]:
one two three
0 2 1 5
2 1 2 3
1 1 3 4
3 1 4 2
The by parameter can take a list of column names, e.g.:
In [313]: df1[["one", "two", "three"]].sort_values(by=["one", "two"])
Out[313]:
one two three
2 1 2 3
1 1 3 4
3 1 4 2
0 2 1 5
These methods have special treatment of NA values via the na_position
argument:
In [314]: s[2] = np.nan
In [315]: s.sort_values()
Out[315]:
0 A
3 Aaba
1 B
4 Baca
6 CABA
8 cat
7 dog
2 <NA>
5 <NA>
dtype: string
In [316]: s.sort_values(na_position="first")
Out[316]:
2 <NA>
5 <NA>
0 A
3 Aaba
1 B
4 Baca
6 CABA
8 cat
7 dog
dtype: string
New in version 1.1.0.
Sorting also supports a key parameter that takes a callable function
to apply to the values being sorted.
In [317]: s1 = pd.Series(["B", "a", "C"])
In [318]: s1.sort_values()
Out[318]:
0 B
2 C
1 a
dtype: object
In [319]: s1.sort_values(key=lambda x: x.str.lower())
Out[319]:
1 a
0 B
2 C
dtype: object
key will be given the Series of values and should return a Series
or array of the same shape with the transformed values. For DataFrame objects,
the key is applied per column, so the key should still expect a Series and return
a Series, e.g.
In [320]: df = pd.DataFrame({"a": ["B", "a", "C"], "b": [1, 2, 3]})
In [321]: df.sort_values(by="a")
Out[321]:
a b
0 B 1
2 C 3
1 a 2
In [322]: df.sort_values(by="a", key=lambda col: col.str.lower())
Out[322]:
a b
1 a 2
0 B 1
2 C 3
The name or type of each column can be used to apply different functions to
different columns.
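For example, a small sketch that lowercases only the string column while passing the numeric column through unchanged:
# the key is applied per column: lowercase strings, leave numbers as-is
df.sort_values(
    by=["a", "b"],
    key=lambda col: col.str.lower() if col.dtype == "object" else col,
)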
By indexes and values#
Strings passed as the by parameter to DataFrame.sort_values() may
refer to either columns or index level names.
# Build MultiIndex
In [323]: idx = pd.MultiIndex.from_tuples(
.....: [("a", 1), ("a", 2), ("a", 2), ("b", 2), ("b", 1), ("b", 1)]
.....: )
.....:
In [324]: idx.names = ["first", "second"]
# Build DataFrame
In [325]: df_multi = pd.DataFrame({"A": np.arange(6, 0, -1)}, index=idx)
In [326]: df_multi
Out[326]:
A
first second
a 1 6
2 5
2 4
b 2 3
1 2
1 1
Sort by ‘second’ (index) and ‘A’ (column)
In [327]: df_multi.sort_values(by=["second", "A"])
Out[327]:
A
first second
b 1 1
1 2
a 1 6
b 2 3
a 2 4
2 5
Note
If a string matches both a column name and an index level name then a
warning is issued and the column takes precedence. This will result in an
ambiguity error in a future version.
searchsorted#
Series has the searchsorted() method, which works similarly to
numpy.ndarray.searchsorted().
In [328]: ser = pd.Series([1, 2, 3])
In [329]: ser.searchsorted([0, 3])
Out[329]: array([0, 2])
In [330]: ser.searchsorted([0, 4])
Out[330]: array([0, 3])
In [331]: ser.searchsorted([1, 3], side="right")
Out[331]: array([1, 3])
In [332]: ser.searchsorted([1, 3], side="left")
Out[332]: array([0, 2])
In [333]: ser = pd.Series([3, 1, 2])
In [334]: ser.searchsorted([0, 3], sorter=np.argsort(ser))
Out[334]: array([0, 2])
smallest / largest values#
Series has the nsmallest() and nlargest() methods which return the
smallest or largest \(n\) values. For a large Series this can be much
faster than sorting the entire Series and calling head(n) on the result.
In [335]: s = pd.Series(np.random.permutation(10))
In [336]: s
Out[336]:
0 2
1 0
2 3
3 7
4 1
5 5
6 9
7 6
8 8
9 4
dtype: int64
In [337]: s.sort_values()
Out[337]:
1 0
4 1
0 2
2 3
9 4
5 5
7 6
3 7
8 8
6 9
dtype: int64
In [338]: s.nsmallest(3)
Out[338]:
1 0
4 1
0 2
dtype: int64
In [339]: s.nlargest(3)
Out[339]:
6 9
8 8
3 7
dtype: int64
DataFrame also has the nlargest and nsmallest methods.
In [340]: df = pd.DataFrame(
.....: {
.....: "a": [-2, -1, 1, 10, 8, 11, -1],
.....: "b": list("abdceff"),
.....: "c": [1.0, 2.0, 4.0, 3.2, np.nan, 3.0, 4.0],
.....: }
.....: )
.....:
In [341]: df.nlargest(3, "a")
Out[341]:
a b c
5 11 f 3.0
3 10 c 3.2
4 8 e NaN
In [342]: df.nlargest(5, ["a", "c"])
Out[342]:
a b c
5 11 f 3.0
3 10 c 3.2
4 8 e NaN
2 1 d 4.0
6 -1 f 4.0
In [343]: df.nsmallest(3, "a")
Out[343]:
a b c
0 -2 a 1.0
1 -1 b 2.0
6 -1 f 4.0
In [344]: df.nsmallest(5, ["a", "c"])
Out[344]:
a b c
0 -2 a 1.0
1 -1 b 2.0
6 -1 f 4.0
2 1 d 4.0
4 8 e NaN
Sorting by a MultiIndex column#
You must be explicit about sorting when the column is a MultiIndex, and fully specify
all levels to by.
In [345]: df1.columns = pd.MultiIndex.from_tuples(
.....: [("a", "one"), ("a", "two"), ("b", "three")]
.....: )
.....:
In [346]: df1.sort_values(by=("a", "two"))
Out[346]:
a b
one two three
0 2 1 5
2 1 2 3
1 1 3 4
3 1 4 2
Copying#
The copy() method on pandas objects copies the underlying data (though not
the axis indexes, since they are immutable) and returns a new object. Note that
it is seldom necessary to copy objects. For example, there are only a
handful of ways to alter a DataFrame in-place:
Inserting, deleting, or modifying a column.
Assigning to the index or columns attributes.
For homogeneous data, directly modifying the values via the values
attribute or advanced indexing.
To be clear, no pandas method has the side effect of modifying your data;
almost every method returns a new object, leaving the original object
untouched. If the data is modified, it is because you did so explicitly.
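For example (copy() is deep by default):
>>> df = pd.DataFrame({"a": [1, 2, 3]})
>>> df_copy = df.copy()       # copies the underlying data
>>> df_copy.loc[0, "a"] = 99  # modifying the copy ...
>>> df.loc[0, "a"]            # ... leaves the original unchanged
1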
dtypes#
For the most part, pandas uses NumPy arrays and dtypes for Series or individual
columns of a DataFrame. NumPy provides support for float,
int, bool, timedelta64[ns] and datetime64[ns] (note that NumPy
does not support timezone-aware datetimes).
pandas and third-party libraries extend NumPy’s type system in a few places.
This section describes the extensions pandas has made internally.
See Extension types for how to write your own extension that
works with pandas. See Extension data types for a list of third-party
libraries that have implemented an extension.
The following table lists all of pandas extension types. For methods requiring dtype
arguments, strings can be specified as indicated. See the respective
documentation sections for more on each type.
Kind of Data | Data Type | Scalar | Array | String Aliases
tz-aware datetime | DatetimeTZDtype | Timestamp | arrays.DatetimeArray | 'datetime64[ns, <tz>]'
Categorical | CategoricalDtype | (none) | Categorical | 'category'
period (time spans) | PeriodDtype | Period | arrays.PeriodArray | 'Period[<freq>]', 'period[<freq>]'
sparse | SparseDtype | (none) | arrays.SparseArray | 'Sparse', 'Sparse[int]', 'Sparse[float]'
intervals | IntervalDtype | Interval | arrays.IntervalArray | 'interval', 'Interval', 'Interval[<numpy_dtype>]', 'Interval[datetime64[ns, <tz>]]', 'Interval[timedelta64[<freq>]]'
nullable integer | Int64Dtype, … | (none) | arrays.IntegerArray | 'Int8', 'Int16', 'Int32', 'Int64', 'UInt8', 'UInt16', 'UInt32', 'UInt64'
Strings | StringDtype | str | arrays.StringArray | 'string'
Boolean (with NA) | BooleanDtype | bool | arrays.BooleanArray | 'boolean'
pandas has two ways to store strings.
object dtype, which can hold any Python object, including strings.
StringDtype, which is dedicated to strings.
Generally, we recommend using StringDtype. See Text data types for more.
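For example, the same data stored both ways (with StringDtype, missing values display as <NA>):
>>> pd.Series(["a", "b", None])
0       a
1       b
2    None
dtype: object
>>> pd.Series(["a", "b", None], dtype="string")
0       a
1       b
2    <NA>
dtype: string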
Finally, arbitrary objects may be stored using the object dtype, but should
be avoided to the extent possible, for performance and for interoperability with
other libraries and methods (see object conversion).
A convenient dtypes attribute for DataFrame returns a Series
with the data type of each column.
In [347]: dft = pd.DataFrame(
.....: {
.....: "A": np.random.rand(3),
.....: "B": 1,
.....: "C": "foo",
.....: "D": pd.Timestamp("20010102"),
.....: "E": pd.Series([1.0] * 3).astype("float32"),
.....: "F": False,
.....: "G": pd.Series([1] * 3, dtype="int8"),
.....: }
.....: )
.....:
In [348]: dft
Out[348]:
A B C D E F G
0 0.035962 1 foo 2001-01-02 1.0 False 1
1 0.701379 1 foo 2001-01-02 1.0 False 1
2 0.281885 1 foo 2001-01-02 1.0 False 1
In [349]: dft.dtypes
Out[349]:
A float64
B int64
C object
D datetime64[ns]
E float32
F bool
G int8
dtype: object
On a Series object, use the dtype attribute.
In [350]: dft["A"].dtype
Out[350]: dtype('float64')
If a pandas object contains data with multiple dtypes in a single column, the
dtype of the column will be chosen to accommodate all of the data types
(object is the most general).
# these ints are coerced to floats
In [351]: pd.Series([1, 2, 3, 4, 5, 6.0])
Out[351]:
0 1.0
1 2.0
2 3.0
3 4.0
4 5.0
5 6.0
dtype: float64
# string data forces an ``object`` dtype
In [352]: pd.Series([1, 2, 3, 6.0, "foo"])
Out[352]:
0 1
1 2
2 3
3 6.0
4 foo
dtype: object
The number of columns of each type in a DataFrame can be found by calling
DataFrame.dtypes.value_counts().
In [353]: dft.dtypes.value_counts()
Out[353]:
float64 1
int64 1
object 1
datetime64[ns] 1
float32 1
bool 1
int8 1
dtype: int64
Numeric dtypes will propagate and can coexist in DataFrames.
If a dtype is passed (either directly via the dtype keyword, a passed ndarray,
or a passed Series), then it will be preserved in DataFrame operations. Furthermore,
different numeric dtypes will NOT be combined. The following example will give you a taste.
In [354]: df1 = pd.DataFrame(np.random.randn(8, 1), columns=["A"], dtype="float32")
In [355]: df1
Out[355]:
A
0 0.224364
1 1.890546
2 0.182879
3 0.787847
4 -0.188449
5 0.667715
6 -0.011736
7 -0.399073
In [356]: df1.dtypes
Out[356]:
A float32
dtype: object
In [357]: df2 = pd.DataFrame(
.....: {
.....: "A": pd.Series(np.random.randn(8), dtype="float16"),
.....: "B": pd.Series(np.random.randn(8)),
.....: "C": pd.Series(np.array(np.random.randn(8), dtype="uint8")),
.....: }
.....: )
.....:
In [358]: df2
Out[358]:
A B C
0 0.823242 0.256090 0
1 1.607422 1.426469 0
2 -0.333740 -0.416203 255
3 -0.063477 1.139976 0
4 -1.014648 -1.193477 0
5 0.678711 0.096706 0
6 -0.040863 -1.956850 1
7 -0.357422 -0.714337 0
In [359]: df2.dtypes
Out[359]:
A float16
B float64
C uint8
dtype: object
defaults#
By default integer types are int64 and float types are float64,
regardless of platform (32-bit or 64-bit).
The following will all result in int64 dtypes.
In [360]: pd.DataFrame([1, 2], columns=["a"]).dtypes
Out[360]:
a int64
dtype: object
In [361]: pd.DataFrame({"a": [1, 2]}).dtypes
Out[361]:
a int64
dtype: object
In [362]: pd.DataFrame({"a": 1}, index=list(range(2))).dtypes
Out[362]:
a int64
dtype: object
Note that NumPy will choose platform-dependent types when creating arrays.
The following WILL result in int32 on a 32-bit platform.
In [363]: frame = pd.DataFrame(np.array([1, 2]))
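You can check which width was chosen by inspecting the resulting dtypes; the output below assumes a 64-bit Linux or macOS build (32-bit platforms, and 64-bit Windows, report int32):
>>> frame = pd.DataFrame(np.array([1, 2]))
>>> frame.dtypes
0    int64
dtype: object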
upcasting#
Types can potentially be upcast when combined with other types, meaning they are promoted
from the current type (e.g. int to float).
In [364]: df3 = df1.reindex_like(df2).fillna(value=0.0) + df2
In [365]: df3
Out[365]:
A B C
0 1.047606 0.256090 0.0
1 3.497968 1.426469 0.0
2 -0.150862 -0.416203 255.0
3 0.724370 1.139976 0.0
4 -1.203098 -1.193477 0.0
5 1.346426 0.096706 0.0
6 -0.052599 -1.956850 1.0
7 -0.756495 -0.714337 0.0
In [366]: df3.dtypes
Out[366]:
A float32
B float64
C float64
dtype: object
DataFrame.to_numpy() will return the lower-common-denominator of the dtypes, meaning
the dtype that can accommodate ALL of the types in the resulting homogeneous dtyped NumPy array. This can
force some upcasting.
In [367]: df3.to_numpy().dtype
Out[367]: dtype('float64')
astype#
You can use the astype() method to explicitly convert dtypes from one to another. These will by default return a copy,
even if the dtype was unchanged (pass copy=False to change this behavior). In addition, they will raise an
exception if the astype operation is invalid.
Upcasting is always according to the NumPy rules. If two different dtypes are involved in an operation,
then the more general one will be used as the result of the operation.
In [368]: df3
Out[368]:
A B C
0 1.047606 0.256090 0.0
1 3.497968 1.426469 0.0
2 -0.150862 -0.416203 255.0
3 0.724370 1.139976 0.0
4 -1.203098 -1.193477 0.0
5 1.346426 0.096706 0.0
6 -0.052599 -1.956850 1.0
7 -0.756495 -0.714337 0.0
In [369]: df3.dtypes
Out[369]:
A float32
B float64
C float64
dtype: object
# conversion of dtypes
In [370]: df3.astype("float32").dtypes
Out[370]:
A float32
B float32
C float32
dtype: object
Convert a subset of columns to a specified type using astype().
In [371]: dft = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]})
In [372]: dft[["a", "b"]] = dft[["a", "b"]].astype(np.uint8)
In [373]: dft
Out[373]:
a b c
0 1 4 7
1 2 5 8
2 3 6 9
In [374]: dft.dtypes
Out[374]:
a uint8
b uint8
c int64
dtype: object
Convert certain columns to a specific dtype by passing a dict to astype().
In [375]: dft1 = pd.DataFrame({"a": [1, 0, 1], "b": [4, 5, 6], "c": [7, 8, 9]})
In [376]: dft1 = dft1.astype({"a": np.bool_, "c": np.float64})
In [377]: dft1
Out[377]:
a b c
0 True 4 7.0
1 False 5 8.0
2 True 6 9.0
In [378]: dft1.dtypes
Out[378]:
a bool
b int64
c float64
dtype: object
Note
When trying to convert a subset of columns to a specified type using astype() and loc(), upcasting occurs.
loc() tries to fit in what we are assigning to the current dtypes, while [] will overwrite them taking the dtype from the right hand side. Therefore the following piece of code produces the unintended result.
In [379]: dft = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]})
In [380]: dft.loc[:, ["a", "b"]].astype(np.uint8).dtypes
Out[380]:
a uint8
b uint8
dtype: object
In [381]: dft.loc[:, ["a", "b"]] = dft.loc[:, ["a", "b"]].astype(np.uint8)
In [382]: dft.dtypes
Out[382]:
a int64
b int64
c int64
dtype: object
object conversion#
pandas offers various functions to try to force conversion of types from the object dtype to other types.
In cases where the data is already of the correct type, but stored in an object array, the
DataFrame.infer_objects() and Series.infer_objects() methods can be used to soft convert
to the correct type.
In [383]: import datetime
In [384]: df = pd.DataFrame(
.....: [
.....: [1, 2],
.....: ["a", "b"],
.....: [datetime.datetime(2016, 3, 2), datetime.datetime(2016, 3, 2)],
.....: ]
.....: )
.....:
In [385]: df = df.T
In [386]: df
Out[386]:
0 1 2
0 1 a 2016-03-02
1 2 b 2016-03-02
In [387]: df.dtypes
Out[387]:
0 object
1 object
2 datetime64[ns]
dtype: object
Because the data was transposed, the original inference stored all columns as object, which
infer_objects will correct.
In [388]: df.infer_objects().dtypes
Out[388]:
0 int64
1 object
2 datetime64[ns]
dtype: object
The following functions are available for one dimensional object arrays or scalars to perform
hard conversion of objects to a specified type:
to_numeric() (conversion to numeric dtypes)
In [389]: m = ["1.1", 2, 3]
In [390]: pd.to_numeric(m)
Out[390]: array([1.1, 2. , 3. ])
to_datetime() (conversion to datetime objects)
In [391]: import datetime
In [392]: m = ["2016-07-09", datetime.datetime(2016, 3, 2)]
In [393]: pd.to_datetime(m)
Out[393]: DatetimeIndex(['2016-07-09', '2016-03-02'], dtype='datetime64[ns]', freq=None)
to_timedelta() (conversion to timedelta objects)
In [394]: m = ["5us", pd.Timedelta("1day")]
In [395]: pd.to_timedelta(m)
Out[395]: TimedeltaIndex(['0 days 00:00:00.000005', '1 days 00:00:00'], dtype='timedelta64[ns]', freq=None)
To force a conversion, we can pass in an errors argument, which specifies how pandas should deal with elements
that cannot be converted to desired dtype or object. By default, errors='raise', meaning that any errors encountered
will be raised during the conversion process. However, if errors='coerce', these errors will be ignored and pandas
will convert problematic elements to pd.NaT (for datetime and timedelta) or np.nan (for numeric). This might be
useful if you are reading in data which is mostly of the desired dtype (e.g. numeric, datetime), but occasionally has
non-conforming elements intermixed that you want to represent as missing:
In [396]: import datetime
In [397]: m = ["apple", datetime.datetime(2016, 3, 2)]
In [398]: pd.to_datetime(m, errors="coerce")
Out[398]: DatetimeIndex(['NaT', '2016-03-02'], dtype='datetime64[ns]', freq=None)
In [399]: m = ["apple", 2, 3]
In [400]: pd.to_numeric(m, errors="coerce")
Out[400]: array([nan, 2., 3.])
In [401]: m = ["apple", pd.Timedelta("1day")]
In [402]: pd.to_timedelta(m, errors="coerce")
Out[402]: TimedeltaIndex([NaT, '1 days'], dtype='timedelta64[ns]', freq=None)
The errors parameter has a third option of errors='ignore', which will simply return the passed in data if it
encounters any errors with the conversion to a desired data type:
In [403]: import datetime
In [404]: m = ["apple", datetime.datetime(2016, 3, 2)]
In [405]: pd.to_datetime(m, errors="ignore")
Out[405]: Index(['apple', 2016-03-02 00:00:00], dtype='object')
In [406]: m = ["apple", 2, 3]
In [407]: pd.to_numeric(m, errors="ignore")
Out[407]: array(['apple', 2, 3], dtype=object)
In [408]: m = ["apple", pd.Timedelta("1day")]
In [409]: pd.to_timedelta(m, errors="ignore")
Out[409]: array(['apple', Timedelta('1 days 00:00:00')], dtype=object)
In addition to object conversion, to_numeric() provides another argument downcast, which gives the
option of downcasting the newly (or already) numeric data to a smaller dtype, which can conserve memory:
In [410]: m = ["1", 2, 3]
In [411]: pd.to_numeric(m, downcast="integer") # smallest signed int dtype
Out[411]: array([1, 2, 3], dtype=int8)
In [412]: pd.to_numeric(m, downcast="signed") # same as 'integer'
Out[412]: array([1, 2, 3], dtype=int8)
In [413]: pd.to_numeric(m, downcast="unsigned") # smallest unsigned int dtype
Out[413]: array([1, 2, 3], dtype=uint8)
In [414]: pd.to_numeric(m, downcast="float") # smallest float dtype
Out[414]: array([1., 2., 3.], dtype=float32)
As these methods apply only to one-dimensional arrays, lists or scalars, they cannot be used directly on multi-dimensional objects such
as DataFrames. However, with apply(), we can “apply” the function over each column efficiently:
In [415]: import datetime
In [416]: df = pd.DataFrame([["2016-07-09", datetime.datetime(2016, 3, 2)]] * 2, dtype="O")
In [417]: df
Out[417]:
0 1
0 2016-07-09 2016-03-02 00:00:00
1 2016-07-09 2016-03-02 00:00:00
In [418]: df.apply(pd.to_datetime)
Out[418]:
0 1
0 2016-07-09 2016-03-02
1 2016-07-09 2016-03-02
In [419]: df = pd.DataFrame([["1.1", 2, 3]] * 2, dtype="O")
In [420]: df
Out[420]:
0 1 2
0 1.1 2 3
1 1.1 2 3
In [421]: df.apply(pd.to_numeric)
Out[421]:
0 1 2
0 1.1 2 3
1 1.1 2 3
In [422]: df = pd.DataFrame([["5us", pd.Timedelta("1day")]] * 2, dtype="O")
In [423]: df
Out[423]:
0 1
0 5us 1 days 00:00:00
1 5us 1 days 00:00:00
In [424]: df.apply(pd.to_timedelta)
Out[424]:
0 1
0 0 days 00:00:00.000005 1 days
1 0 days 00:00:00.000005 1 days
gotchas#
Performing selection operations on integer type data can easily upcast the data to floating.
The dtype of the input data will be preserved in cases where nans are not introduced.
See also Support for integer NA.
In [425]: dfi = df3.astype("int32")
In [426]: dfi["E"] = 1
In [427]: dfi
Out[427]:
A B C E
0 1 0 0 1
1 3 1 0 1
2 0 0 255 1
3 0 1 0 1
4 -1 -1 0 1
5 1 0 0 1
6 0 -1 1 1
7 0 0 0 1
In [428]: dfi.dtypes
Out[428]:
A int32
B int32
C int32
E int64
dtype: object
In [429]: casted = dfi[dfi > 0]
In [430]: casted
Out[430]:
A B C E
0 1.0 NaN NaN 1
1 3.0 1.0 NaN 1
2 NaN NaN 255.0 1
3 NaN 1.0 NaN 1
4 NaN NaN NaN 1
5 1.0 NaN NaN 1
6 NaN NaN 1.0 1
7 NaN NaN NaN 1
In [431]: casted.dtypes
Out[431]:
A float64
B float64
C float64
E int64
dtype: object
Float dtypes, on the other hand, are unchanged.
In [432]: dfa = df3.copy()
In [433]: dfa["A"] = dfa["A"].astype("float32")
In [434]: dfa.dtypes
Out[434]:
A float32
B float64
C float64
dtype: object
In [435]: casted = dfa[df2 > 0]
In [436]: casted
Out[436]:
A B C
0 1.047606 0.256090 NaN
1 3.497968 1.426469 NaN
2 NaN NaN 255.0
3 NaN 1.139976 NaN
4 NaN NaN NaN
5 1.346426 0.096706 NaN
6 NaN NaN 1.0
7 NaN NaN NaN
In [437]: casted.dtypes
Out[437]:
A float32
B float64
C float64
dtype: object
Selecting columns based on dtype#
The select_dtypes() method implements subsetting of columns
based on their dtype.
First, let’s create a DataFrame with a slew of different
dtypes:
In [438]: df = pd.DataFrame(
.....: {
.....: "string": list("abc"),
.....: "int64": list(range(1, 4)),
.....: "uint8": np.arange(3, 6).astype("u1"),
.....: "float64": np.arange(4.0, 7.0),
.....: "bool1": [True, False, True],
.....: "bool2": [False, True, False],
.....: "dates": pd.date_range("now", periods=3),
.....: "category": pd.Series(list("ABC")).astype("category"),
.....: }
.....: )
.....:
In [439]: df["tdeltas"] = df.dates.diff()
In [440]: df["uint64"] = np.arange(3, 6).astype("u8")
In [441]: df["other_dates"] = pd.date_range("20130101", periods=3)
In [442]: df["tz_aware_dates"] = pd.date_range("20130101", periods=3, tz="US/Eastern")
In [443]: df
Out[443]:
string int64 uint8 ... uint64 other_dates tz_aware_dates
0 a 1 3 ... 3 2013-01-01 2013-01-01 00:00:00-05:00
1 b 2 4 ... 4 2013-01-02 2013-01-02 00:00:00-05:00
2 c 3 5 ... 5 2013-01-03 2013-01-03 00:00:00-05:00
[3 rows x 12 columns]
And the dtypes:
In [444]: df.dtypes
Out[444]:
string object
int64 int64
uint8 uint8
float64 float64
bool1 bool
bool2 bool
dates datetime64[ns]
category category
tdeltas timedelta64[ns]
uint64 uint64
other_dates datetime64[ns]
tz_aware_dates datetime64[ns, US/Eastern]
dtype: object
select_dtypes() has two parameters include and exclude that allow you to
say “give me the columns with these dtypes” (include) and/or “give the
columns without these dtypes” (exclude).
For example, to select bool columns:
In [445]: df.select_dtypes(include=[bool])
Out[445]:
bool1 bool2
0 True False
1 False True
2 True False
You can also pass the name of a dtype in the NumPy dtype hierarchy:
In [446]: df.select_dtypes(include=["bool"])
Out[446]:
bool1 bool2
0 True False
1 False True
2 True False
select_dtypes() also works with generic dtypes.
For example, to select all numeric and boolean columns while excluding unsigned
integers:
In [447]: df.select_dtypes(include=["number", "bool"], exclude=["unsignedinteger"])
Out[447]:
int64 float64 bool1 bool2 tdeltas
0 1 4.0 True False NaT
1 2 5.0 False True 1 days
2 3 6.0 True False 1 days
To select string columns you must use the object dtype:
In [448]: df.select_dtypes(include=["object"])
Out[448]:
string
0 a
1 b
2 c
To see all the child dtypes of a generic dtype like numpy.number you
can define a function that returns a tree of child dtypes:
In [449]: def subdtypes(dtype):
.....: subs = dtype.__subclasses__()
.....: if not subs:
.....: return dtype
.....: return [dtype, [subdtypes(dt) for dt in subs]]
.....:
All NumPy dtypes are subclasses of numpy.generic:
In [450]: subdtypes(np.generic)
Out[450]:
[numpy.generic,
[[numpy.number,
[[numpy.integer,
[[numpy.signedinteger,
[numpy.int8,
numpy.int16,
numpy.int32,
numpy.int64,
numpy.longlong,
numpy.timedelta64]],
[numpy.unsignedinteger,
[numpy.uint8,
numpy.uint16,
numpy.uint32,
numpy.uint64,
numpy.ulonglong]]]],
[numpy.inexact,
[[numpy.floating,
[numpy.float16, numpy.float32, numpy.float64, numpy.float128]],
[numpy.complexfloating,
[numpy.complex64, numpy.complex128, numpy.complex256]]]]]],
[numpy.flexible,
[[numpy.character, [numpy.bytes_, numpy.str_]],
[numpy.void, [numpy.record]]]],
numpy.bool_,
numpy.datetime64,
numpy.object_]]
Note
pandas also defines the types category and datetime64[ns, tz], which are not integrated into the normal
NumPy hierarchy and won’t show up with the above function.
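They can still be selected explicitly with select_dtypes(); for example, reusing the df built above:
>>> df.select_dtypes(include=["category", "datetimetz"]).dtypes
category                            category
tz_aware_dates    datetime64[ns, US/Eastern]
dtype: object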
| user_guide/basics.html |
pandas.api.extensions.ExtensionDtype.names | `pandas.api.extensions.ExtensionDtype.names`
Ordered list of field names, or None if there are no fields. | property ExtensionDtype.names[source]#
Ordered list of field names, or None if there are no fields.
This is for compatibility with NumPy arrays, and may be removed in the
future.
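For the extension dtypes that ship with pandas this is typically None; a minimal sketch:
>>> pd.CategoricalDtype().names is None
True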
| reference/api/pandas.api.extensions.ExtensionDtype.names.html |
pandas.Index.nbytes | `pandas.Index.nbytes`
Return the number of bytes in the underlying data. | property Index.nbytes[source]#
Return the number of bytes in the underlying data.
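For example, three int64 values take 3 * 8 bytes:
>>> idx = pd.Index([1, 2, 3])
>>> idx.nbytes
24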
| reference/api/pandas.Index.nbytes.html |
pandas.core.window.rolling.Rolling.count | `pandas.core.window.rolling.Rolling.count`
Calculate the rolling count of non NaN observations.
Include only float, int, boolean columns.
```
>>> s = pd.Series([2, 3, np.nan, 10])
>>> s.rolling(2).count()
0 1.0
1 2.0
2 1.0
3 1.0
dtype: float64
>>> s.rolling(3).count()
0 1.0
1 2.0
2 2.0
3 2.0
dtype: float64
>>> s.rolling(4).count()
0 1.0
1 2.0
2 2.0
3 3.0
dtype: float64
``` | Rolling.count(numeric_only=False)[source]#
Calculate the rolling count of non NaN observations.
Parameters
numeric_only : bool, default False
Include only float, int, boolean columns.
New in version 1.5.0.
Returns
Series or DataFrame
Return type is the same as the original object with np.float64 dtype.
See also
pandas.Series.rolling : Calling rolling with Series data.
pandas.DataFrame.rolling : Calling rolling with DataFrames.
pandas.Series.count : Aggregating count for Series.
pandas.DataFrame.count : Aggregating count for DataFrame.
Examples
>>> s = pd.Series([2, 3, np.nan, 10])
>>> s.rolling(2).count()
0 1.0
1 2.0
2 1.0
3 1.0
dtype: float64
>>> s.rolling(3).count()
0 1.0
1 2.0
2 2.0
3 2.0
dtype: float64
>>> s.rolling(4).count()
0 1.0
1 2.0
2 2.0
3 3.0
dtype: float64
| reference/api/pandas.core.window.rolling.Rolling.count.html |
pandas.CategoricalIndex.map | `pandas.CategoricalIndex.map`
Map values using an input mapping or function.
```
>>> idx = pd.CategoricalIndex(['a', 'b', 'c'])
>>> idx
CategoricalIndex(['a', 'b', 'c'], categories=['a', 'b', 'c'],
ordered=False, dtype='category')
>>> idx.map(lambda x: x.upper())
CategoricalIndex(['A', 'B', 'C'], categories=['A', 'B', 'C'],
ordered=False, dtype='category')
>>> idx.map({'a': 'first', 'b': 'second', 'c': 'third'})
CategoricalIndex(['first', 'second', 'third'], categories=['first',
'second', 'third'], ordered=False, dtype='category')
``` | CategoricalIndex.map(mapper)[source]#
Map values using an input mapping or function.
Maps the values (their categories, not the codes) of the index to new
categories. If the mapping correspondence is one-to-one the result is a
CategoricalIndex which has the same order property as
the original, otherwise an Index is returned.
If a dict or Series is used any unmapped category is
mapped to NaN. Note that if this happens an Index
will be returned.
Parameters
mapper : function, dict, or Series
Mapping correspondence.
Returns
pandas.CategoricalIndex or pandas.Index
Mapped index.
See also
Index.map : Apply a mapping correspondence on an Index.
Series.map : Apply a mapping correspondence on a Series.
Series.apply : Apply more complex functions on a Series.
Examples
>>> idx = pd.CategoricalIndex(['a', 'b', 'c'])
>>> idx
CategoricalIndex(['a', 'b', 'c'], categories=['a', 'b', 'c'],
ordered=False, dtype='category')
>>> idx.map(lambda x: x.upper())
CategoricalIndex(['A', 'B', 'C'], categories=['A', 'B', 'C'],
ordered=False, dtype='category')
>>> idx.map({'a': 'first', 'b': 'second', 'c': 'third'})
CategoricalIndex(['first', 'second', 'third'], categories=['first',
'second', 'third'], ordered=False, dtype='category')
If the mapping is one-to-one the ordering of the categories is
preserved:
>>> idx = pd.CategoricalIndex(['a', 'b', 'c'], ordered=True)
>>> idx
CategoricalIndex(['a', 'b', 'c'], categories=['a', 'b', 'c'],
ordered=True, dtype='category')
>>> idx.map({'a': 3, 'b': 2, 'c': 1})
CategoricalIndex([3, 2, 1], categories=[3, 2, 1], ordered=True,
dtype='category')
If the mapping is not one-to-one an Index is returned:
>>> idx.map({'a': 'first', 'b': 'second', 'c': 'first'})
Index(['first', 'second', 'first'], dtype='object')
If a dict is used, all unmapped categories are mapped to NaN and
the result is an Index:
>>> idx.map({'a': 'first', 'b': 'second'})
Index(['first', 'second', nan], dtype='object')
| reference/api/pandas.CategoricalIndex.map.html |
pandas.tseries.offsets.CustomBusinessMonthEnd.is_anchored | `pandas.tseries.offsets.CustomBusinessMonthEnd.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
``` | CustomBusinessMonthEnd.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
| reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.is_anchored.html |
pandas.tseries.offsets.FY5253Quarter.rule_code | pandas.tseries.offsets.FY5253Quarter.rule_code | FY5253Quarter.rule_code#
| reference/api/pandas.tseries.offsets.FY5253Quarter.rule_code.html |
pandas.tseries.offsets.QuarterBegin.rollforward | `pandas.tseries.offsets.QuarterBegin.rollforward`
Roll provided date forward to next offset only if not on offset. | QuarterBegin.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
Timestamp : Rolled timestamp if not on offset, otherwise unchanged timestamp.
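For example (a small illustrative sketch), with quarters anchored to January:
>>> ts = pd.Timestamp(2022, 8, 5)
>>> pd.offsets.QuarterBegin(startingMonth=1).rollforward(ts)
Timestamp('2022-10-01 00:00:00')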
| reference/api/pandas.tseries.offsets.QuarterBegin.rollforward.html |
pandas.Series.str.findall | `pandas.Series.str.findall`
Find all occurrences of pattern or regular expression in the Series/Index.
Equivalent to applying re.findall() to all the elements in the
Series/Index.
```
>>> s = pd.Series(['Lion', 'Monkey', 'Rabbit'])
``` | Series.str.findall(pat, flags=0)[source]#
Find all occurrences of pattern or regular expression in the Series/Index.
Equivalent to applying re.findall() to all the elements in the
Series/Index.
Parameters
pat : str
Pattern or regular expression.
flags : int, default 0
Flags from re module, e.g. re.IGNORECASE (default is 0, which means no flags).
Returns
Series/Index of lists of strings
All non-overlapping matches of pattern or regular expression in each string of this Series/Index.
See also
count : Count occurrences of pattern or regular expression in each string of the Series/Index.
extractall : For each string in the Series, extract groups from all matches of regular expression and return a DataFrame with one row for each match and one column for each group.
re.findall : The equivalent re function to all non-overlapping matches of pattern or regular expression in string, as a list of strings.
Examples
>>> s = pd.Series(['Lion', 'Monkey', 'Rabbit'])
The search for the pattern ‘Monkey’ returns one match:
>>> s.str.findall('Monkey')
0 []
1 [Monkey]
2 []
dtype: object
On the other hand, the search for the pattern ‘MONKEY’ doesn’t return any
match:
>>> s.str.findall('MONKEY')
0 []
1 []
2 []
dtype: object
Flags can be added to the pattern or regular expression. For instance,
to find the pattern ‘MONKEY’ ignoring the case:
>>> import re
>>> s.str.findall('MONKEY', flags=re.IGNORECASE)
0 []
1 [Monkey]
2 []
dtype: object
When the pattern matches more than one string in the Series, all matches
are returned:
>>> s.str.findall('on')
0 [on]
1 [on]
2 []
dtype: object
Regular expressions are supported too. For instance, the search for all the
strings ending with the word ‘on’ is shown next:
>>> s.str.findall('on$')
0 [on]
1 []
2 []
dtype: object
If the pattern is found more than once in the same string, then a list of
multiple strings is returned:
>>> s.str.findall('b')
0 []
1 []
2 [b, b]
dtype: object
| reference/api/pandas.Series.str.findall.html |
pandas.tseries.offsets.BYearEnd.is_anchored | `pandas.tseries.offsets.BYearEnd.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
Examples
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
``` | BYearEnd.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
| reference/api/pandas.tseries.offsets.BYearEnd.is_anchored.html |
pandas.Series.kurtosis | `pandas.Series.kurtosis`
Return unbiased kurtosis over requested axis.
Kurtosis obtained using Fisher’s definition of
kurtosis (kurtosis of normal == 0.0). Normalized by N-1. | Series.kurtosis(axis=_NoDefault.no_default, skipna=True, level=None, numeric_only=None, **kwargs)[source]#
Return unbiased kurtosis over requested axis.
Kurtosis obtained using Fisher’s definition of
kurtosis (kurtosis of normal == 0.0). Normalized by N-1.
Parameters
axis : {index (0)}
Axis for the function to be applied on. For Series this parameter is unused and defaults to 0.
skipna : bool, default True
Exclude NA/null values when computing the result.
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a scalar.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
numeric_only : bool, default None
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be False in a future version of pandas.
**kwargs
Additional keyword arguments to be passed to the function.
Returns
scalar or Series (if level specified)
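For example (a small illustrative sketch); a short, flat sample has negative excess kurtosis:
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> round(s.kurtosis(), 3)
-1.2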
| reference/api/pandas.Series.kurtosis.html |
Comparison with R / R libraries | Comparison with R / R libraries
Since pandas aims to provide a lot of the data manipulation and analysis
functionality that people use R for, this page
was started to provide a more detailed look at the R language and its many third
party libraries as they relate to pandas. In comparisons with R and CRAN
libraries, we care about the following things:
Functionality / flexibility: what can/cannot be done with each tool
Performance: how fast are operations. Hard numbers/benchmarks are
preferable
Ease-of-use: Is one tool easier/harder to use (you may have to be
the judge of this, given side-by-side code comparisons)
This page is also here to offer a bit of a translation guide for users of these
R packages. | Since pandas aims to provide a lot of the data manipulation and analysis
functionality that people use R for, this page
was started to provide a more detailed look at the R language and its many third
party libraries as they relate to pandas. In comparisons with R and CRAN
libraries, we care about the following things:
Functionality / flexibility: what can/cannot be done with each tool
Performance: how fast are operations. Hard numbers/benchmarks are
preferable
Ease-of-use: Is one tool easier/harder to use (you may have to be
the judge of this, given side-by-side code comparisons)
This page is also here to offer a bit of a translation guide for users of these
R packages.
For transfer of DataFrame objects from pandas to R, one option is to
use HDF5 files, see External compatibility for an
example.
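A minimal sketch of that route (requires the optional PyTables dependency; the file name and key here are arbitrary):
>>> df = pd.DataFrame({"x": [1, 2, 3], "y": ["a", "b", "c"]})
>>> df.to_hdf("transfer.h5", key="df", mode="w")  # read the file on the R side, e.g. with rhdf5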
Quick reference#
We’ll start off with a quick reference guide pairing some common R
operations using dplyr with
pandas equivalents.
Querying, filtering, sampling#
R | pandas
dim(df) | df.shape
head(df) | df.head()
slice(df, 1:10) | df.iloc[:9]
filter(df, col1 == 1, col2 == 1) | df.query('col1 == 1 & col2 == 1')
df[df$col1 == 1 & df$col2 == 1,] | df[(df.col1 == 1) & (df.col2 == 1)]
select(df, col1, col2) | df[['col1', 'col2']]
select(df, col1:col3) | df.loc[:, 'col1':'col3']
select(df, -(col1:col3)) | df.drop(cols_to_drop, axis=1) but see [1]
distinct(select(df, col1)) | df[['col1']].drop_duplicates()
distinct(select(df, col1, col2)) | df[['col1', 'col2']].drop_duplicates()
sample_n(df, 10) | df.sample(n=10)
sample_frac(df, 0.01) | df.sample(frac=0.01)
[1] R’s shorthand for a subrange of columns (select(df, col1:col3)) can be approached cleanly in pandas, if you have the list of columns, for example df[cols[1:3]] or df.drop(cols[1:3]), but doing this by column name is a bit messy.
Sorting#
R | pandas
arrange(df, col1, col2) | df.sort_values(['col1', 'col2'])
arrange(df, desc(col1)) | df.sort_values('col1', ascending=False)
Transforming#
R | pandas
select(df, col_one = col1) | df.rename(columns={'col1': 'col_one'})['col_one']
rename(df, col_one = col1) | df.rename(columns={'col1': 'col_one'})
mutate(df, c=a-b) | df.assign(c=df['a']-df['b'])
Grouping and summarizing#
R | pandas
summary(df) | df.describe()
gdf <- group_by(df, col1) | gdf = df.groupby('col1')
summarise(gdf, avg=mean(col1, na.rm=TRUE)) | df.groupby('col1').agg({'col1': 'mean'})
summarise(gdf, total=sum(col1)) | df.groupby('col1').sum()
Base R#
Slicing with R’s c#
R makes it easy to access data.frame columns by name
df <- data.frame(a=rnorm(5), b=rnorm(5), c=rnorm(5), d=rnorm(5), e=rnorm(5))
df[, c("a", "c", "e")]
or by integer location
df <- data.frame(matrix(rnorm(1000), ncol=100))
df[, c(1:10, 25:30, 40, 50:100)]
Selecting multiple columns by name in pandas is straightforward
In [1]: df = pd.DataFrame(np.random.randn(10, 3), columns=list("abc"))
In [2]: df[["a", "c"]]
Out[2]:
a c
0 0.469112 -1.509059
1 -1.135632 -0.173215
2 0.119209 -0.861849
3 -2.104569 1.071804
4 0.721555 -1.039575
5 0.271860 0.567020
6 0.276232 -0.673690
7 0.113648 0.524988
8 0.404705 -1.715002
9 -1.039268 -1.157892
In [3]: df.loc[:, ["a", "c"]]
Out[3]:
a c
0 0.469112 -1.509059
1 -1.135632 -0.173215
2 0.119209 -0.861849
3 -2.104569 1.071804
4 0.721555 -1.039575
5 0.271860 0.567020
6 0.276232 -0.673690
7 0.113648 0.524988
8 0.404705 -1.715002
9 -1.039268 -1.157892
Selecting multiple noncontiguous columns by integer location can be achieved
with a combination of the iloc indexer attribute and numpy.r_.
In [4]: named = list("abcdefg")
In [5]: n = 30
In [6]: columns = named + np.arange(len(named), n).tolist()
In [7]: df = pd.DataFrame(np.random.randn(n, n), columns=columns)
In [8]: df.iloc[:, np.r_[:10, 24:30]]
Out[8]:
a b c ... 27 28 29
0 -1.344312 0.844885 1.075770 ... 0.813850 0.132003 -0.827317
1 -0.076467 -1.187678 1.130127 ... 0.149748 -0.732339 0.687738
2 0.176444 0.403310 -0.154951 ... -0.493662 0.600178 0.274230
3 0.132885 -0.023688 2.410179 ... 0.109121 1.126203 -0.977349
4 1.474071 -0.064034 -1.282782 ... -0.858447 0.306996 -0.028665
.. ... ... ... ... ... ... ...
25 1.492125 -0.068190 0.681456 ... 0.428572 0.880609 0.487645
26 0.725238 0.624607 -0.141185 ... 1.008500 1.424017 0.717110
27 1.262419 1.950057 0.301038 ... 1.007824 2.826008 1.458383
28 -1.585746 -0.899734 0.921494 ... 0.577223 -1.088417 0.326687
29 -0.986248 0.169729 -1.158091 ... -2.013086 -1.602549 0.333109
[30 rows x 16 columns]
aggregate#
In R you may want to split data into subsets and compute the mean for each.
Using a data.frame called df and splitting it into groups by1 and
by2:
df <- data.frame(
v1 = c(1,3,5,7,8,3,5,NA,4,5,7,9),
v2 = c(11,33,55,77,88,33,55,NA,44,55,77,99),
by1 = c("red", "blue", 1, 2, NA, "big", 1, 2, "red", 1, NA, 12),
by2 = c("wet", "dry", 99, 95, NA, "damp", 95, 99, "red", 99, NA, NA))
aggregate(x=df[, c("v1", "v2")], by=list(df$by1, df$by2), FUN = mean)
The groupby() method is similar to the base R aggregate
function.
In [9]: df = pd.DataFrame(
...: {
...: "v1": [1, 3, 5, 7, 8, 3, 5, np.nan, 4, 5, 7, 9],
...: "v2": [11, 33, 55, 77, 88, 33, 55, np.nan, 44, 55, 77, 99],
...: "by1": ["red", "blue", 1, 2, np.nan, "big", 1, 2, "red", 1, np.nan, 12],
...: "by2": [
...: "wet",
...: "dry",
...: 99,
...: 95,
...: np.nan,
...: "damp",
...: 95,
...: 99,
...: "red",
...: 99,
...: np.nan,
...: np.nan,
...: ],
...: }
...: )
...:
In [10]: g = df.groupby(["by1", "by2"])
In [11]: g[["v1", "v2"]].mean()
Out[11]:
v1 v2
by1 by2
1 95 5.0 55.0
99 5.0 55.0
2 95 7.0 77.0
99 NaN NaN
big damp 3.0 33.0
blue dry 3.0 33.0
red red 4.0 44.0
wet 1.0 11.0
For more details and examples see the groupby documentation.
match / %in%#
A common way to select data in R is using %in% which is defined using the
function match. The operator %in% is used to return a logical vector
indicating if there is a match or not:
s <- 0:4
s %in% c(2,4)
The isin() method is similar to R %in% operator:
In [12]: s = pd.Series(np.arange(5), dtype=np.float32)
In [13]: s.isin([2, 4])
Out[13]:
0 False
1 False
2 True
3 False
4 True
dtype: bool
The match function returns a vector of the positions of matches
of its first argument in its second:
s <- 0:4
match(s, c(2,4))
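pandas has no direct equivalent of match, but Index.get_indexer() gives the same positional information, zero-based and with -1 (rather than NA) where there is no match:
>>> s = pd.Series([0, 1, 2, 3, 4])
>>> pd.Index([2, 4]).get_indexer(s)
array([-1, -1,  0, -1,  1])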
For more details and examples see the reshaping documentation.
tapply#
tapply is similar to aggregate, but data can be in a ragged array,
since the subclass sizes are possibly irregular. Using a data.frame called
baseball, and retrieving information based on the array team:
baseball <-
data.frame(team = gl(5, 5,
labels = paste("Team", LETTERS[1:5])),
player = sample(letters, 25),
batting.average = runif(25, .200, .400))
tapply(baseball$batting.average, baseball$team,
       max)
In pandas we may use pivot_table() method to handle this:
In [14]: import random
In [15]: import string
In [16]: baseball = pd.DataFrame(
....: {
....: "team": ["team %d" % (x + 1) for x in range(5)] * 5,
....: "player": random.sample(list(string.ascii_lowercase), 25),
....: "batting avg": np.random.uniform(0.200, 0.400, 25),
....: }
....: )
....:
In [17]: baseball.pivot_table(values="batting avg", columns="team", aggfunc=np.max)
Out[17]:
team team 1 team 2 team 3 team 4 team 5
batting avg 0.352134 0.295327 0.397191 0.394457 0.396194
For more details and examples see the reshaping documentation.
subset#
The query() method is similar to the base R subset
function. In R you might want to get the rows of a data.frame where one
column’s values are less than another column’s values:
df <- data.frame(a=rnorm(10), b=rnorm(10))
subset(df, a <= b)
df[df$a <= df$b,] # note the comma
In pandas, there are a few ways to perform subsetting. You can use
query() or pass an expression as if it were an
index/slice as well as standard boolean indexing:
In [18]: df = pd.DataFrame({"a": np.random.randn(10), "b": np.random.randn(10)})
In [19]: df.query("a <= b")
Out[19]:
a b
1 0.174950 0.552887
2 -0.023167 0.148084
3 -0.495291 -0.300218
4 -0.860736 0.197378
5 -1.134146 1.720780
7 -0.290098 0.083515
8 0.238636 0.946550
In [20]: df[df["a"] <= df["b"]]
Out[20]:
a b
1 0.174950 0.552887
2 -0.023167 0.148084
3 -0.495291 -0.300218
4 -0.860736 0.197378
5 -1.134146 1.720780
7 -0.290098 0.083515
8 0.238636 0.946550
In [21]: df.loc[df["a"] <= df["b"]]
Out[21]:
a b
1 0.174950 0.552887
2 -0.023167 0.148084
3 -0.495291 -0.300218
4 -0.860736 0.197378
5 -1.134146 1.720780
7 -0.290098 0.083515
8 0.238636 0.946550
For more details and examples see the query documentation.
with#
An expression using a data.frame called df in R with the columns a and
b would be evaluated using with like so:
df <- data.frame(a=rnorm(10), b=rnorm(10))
with(df, a + b)
df$a + df$b # same as the previous expression
In pandas the equivalent expression, using the
eval() method, would be:
In [22]: df = pd.DataFrame({"a": np.random.randn(10), "b": np.random.randn(10)})
In [23]: df.eval("a + b")
Out[23]:
0 -0.091430
1 -2.483890
2 -0.252728
3 -0.626444
4 -0.261740
5 2.149503
6 -0.332214
7 0.799331
8 -2.377245
9 2.104677
dtype: float64
In [24]: df["a"] + df["b"] # same as the previous expression
Out[24]:
0 -0.091430
1 -2.483890
2 -0.252728
3 -0.626444
4 -0.261740
5 2.149503
6 -0.332214
7 0.799331
8 -2.377245
9 2.104677
dtype: float64
In certain cases eval() will be much faster than
evaluation in pure Python. For more details and examples see the eval
documentation.
plyr#
plyr is an R library for the split-apply-combine strategy for data
analysis. The functions revolve around three data structures in R, a
for arrays, l for lists, and d for data.frame. The
table below shows how these data structures could be mapped in Python.
R | Python
array | list
lists | dictionary or list of objects
data.frame | dataframe
ddply#
An expression using a data.frame called df in R where you want to
summarize x by month:
require(plyr)
df <- data.frame(
x = runif(120, 1, 168),
y = runif(120, 7, 334),
z = runif(120, 1.7, 20.7),
month = rep(c(5,6,7,8),30),
week = sample(1:4, 120, TRUE)
)
ddply(df, .(month, week), summarize,
mean = round(mean(x), 2),
sd = round(sd(x), 2))
In pandas the equivalent expression, using the
groupby() method, would be:
In [25]: df = pd.DataFrame(
....: {
....: "x": np.random.uniform(1.0, 168.0, 120),
....: "y": np.random.uniform(7.0, 334.0, 120),
....: "z": np.random.uniform(1.7, 20.7, 120),
....: "month": [5, 6, 7, 8] * 30,
....: "week": np.random.randint(1, 4, 120),
....: }
....: )
....:
In [26]: grouped = df.groupby(["month", "week"])
In [27]: grouped["x"].agg([np.mean, np.std])
Out[27]:
mean std
month week
5 1 63.653367 40.601965
2 78.126605 53.342400
3 92.091886 57.630110
6 1 81.747070 54.339218
2 70.971205 54.687287
3 100.968344 54.010081
7 1 61.576332 38.844274
2 61.733510 48.209013
3 71.688795 37.595638
8 1 62.741922 34.618153
2 91.774627 49.790202
3 73.936856 60.773900
For more details and examples see the groupby documentation.
reshape / reshape2#
meltarray#
An expression using a 3 dimensional array called a in R where you want to
melt it into a data.frame:
a <- array(c(1:23, NA), c(2,3,4))
data.frame(melt(a))
In Python, a is a NumPy array, and you can melt it with a simple list comprehension over np.ndenumerate.
In [28]: a = np.array(list(range(1, 24)) + [np.NAN]).reshape(2, 3, 4)
In [29]: pd.DataFrame([tuple(list(x) + [val]) for x, val in np.ndenumerate(a)])
Out[29]:
0 1 2 3
0 0 0 0 1.0
1 0 0 1 2.0
2 0 0 2 3.0
3 0 0 3 4.0
4 0 1 0 5.0
.. .. .. .. ...
19 1 1 3 20.0
20 1 2 0 21.0
21 1 2 1 22.0
22 1 2 2 23.0
23 1 2 3 NaN
[24 rows x 4 columns]
meltlist#
An expression using a list called a in R where you want to melt it
into a data.frame:
a <- as.list(c(1:4, NA))
data.frame(melt(a))
In Python, this list would be a list of tuples, so the
DataFrame() constructor would convert it to a DataFrame as required.
In [30]: a = list(enumerate(list(range(1, 5)) + [np.NAN]))
In [31]: pd.DataFrame(a)
Out[31]:
0 1
0 0 1.0
1 1 2.0
2 2 3.0
3 3 4.0
4 4 NaN
For more details and examples see the Intro to Data Structures
documentation.
meltdf#
An expression using a data.frame called cheese in R where you want to
reshape the data.frame:
cheese <- data.frame(
first = c('John', 'Mary'),
last = c('Doe', 'Bo'),
height = c(5.5, 6.0),
weight = c(130, 150)
)
melt(cheese, id=c("first", "last"))
In Python, the melt() method is the R equivalent:
In [32]: cheese = pd.DataFrame(
....: {
....: "first": ["John", "Mary"],
....: "last": ["Doe", "Bo"],
....: "height": [5.5, 6.0],
....: "weight": [130, 150],
....: }
....: )
....:
In [33]: pd.melt(cheese, id_vars=["first", "last"])
Out[33]:
first last variable value
0 John Doe height 5.5
1 Mary Bo height 6.0
2 John Doe weight 130.0
3 Mary Bo weight 150.0
In [34]: cheese.set_index(["first", "last"]).stack() # alternative way
Out[34]:
first last
John Doe height 5.5
weight 130.0
Mary Bo height 6.0
weight 150.0
dtype: float64
For more details and examples see the reshaping documentation.
cast#
In R, acast is used to cast a data.frame called df into a higher-dimensional array:
df <- data.frame(
x = runif(12, 1, 168),
y = runif(12, 7, 334),
z = runif(12, 1.7, 20.7),
month = rep(c(5,6,7),4),
week = rep(c(1,2), 6)
)
mdf <- melt(df, id=c("month", "week"))
acast(mdf, week ~ month ~ variable, mean)
In Python the best way is to make use of pivot_table():
In [35]: df = pd.DataFrame(
....: {
....: "x": np.random.uniform(1.0, 168.0, 12),
....: "y": np.random.uniform(7.0, 334.0, 12),
....: "z": np.random.uniform(1.7, 20.7, 12),
....: "month": [5, 6, 7] * 4,
....: "week": [1, 2] * 6,
....: }
....: )
....:
In [36]: mdf = pd.melt(df, id_vars=["month", "week"])
In [37]: pd.pivot_table(
....: mdf,
....: values="value",
....: index=["variable", "week"],
....: columns=["month"],
....: aggfunc=np.mean,
....: )
....:
Out[37]:
month 5 6 7
variable week
x 1 93.888747 98.762034 55.219673
2 94.391427 38.112932 83.942781
y 1 94.306912 279.454811 227.840449
2 87.392662 193.028166 173.899260
z 1 11.016009 10.079307 16.170549
2 8.476111 17.638509 19.003494
Similarly for dcast which uses a data.frame called df in R to
aggregate information based on Animal and FeedType:
df <- data.frame(
Animal = c('Animal1', 'Animal2', 'Animal3', 'Animal2', 'Animal1',
'Animal2', 'Animal3'),
FeedType = c('A', 'B', 'A', 'A', 'B', 'B', 'A'),
Amount = c(10, 7, 4, 2, 5, 6, 2)
)
dcast(df, Animal ~ FeedType, sum, fill=NaN)
# Alternative method using base R
with(df, tapply(Amount, list(Animal, FeedType), sum))
Python can approach this in two different ways. Firstly, similar to above
using pivot_table():
In [38]: df = pd.DataFrame(
....: {
....: "Animal": [
....: "Animal1",
....: "Animal2",
....: "Animal3",
....: "Animal2",
....: "Animal1",
....: "Animal2",
....: "Animal3",
....: ],
....: "FeedType": ["A", "B", "A", "A", "B", "B", "A"],
....: "Amount": [10, 7, 4, 2, 5, 6, 2],
....: }
....: )
....:
In [39]: df.pivot_table(values="Amount", index="Animal", columns="FeedType", aggfunc="sum")
Out[39]:
FeedType A B
Animal
Animal1 10.0 5.0
Animal2 2.0 13.0
Animal3 6.0 NaN
The second approach is to use the groupby() method:
In [40]: df.groupby(["Animal", "FeedType"])["Amount"].sum()
Out[40]:
Animal FeedType
Animal1 A 10
B 5
Animal2 A 2
B 13
Animal3 A 6
Name: Amount, dtype: int64
For more details and examples see the reshaping documentation or the groupby documentation.
factor#
pandas has a data type for categorical data.
cut(c(1,2,3,4,5,6), 3)
factor(c(1,2,3,2,2,3))
In pandas this is accomplished with pd.cut and astype("category"):
In [41]: pd.cut(pd.Series([1, 2, 3, 4, 5, 6]), 3)
Out[41]:
0 (0.995, 2.667]
1 (0.995, 2.667]
2 (2.667, 4.333]
3 (2.667, 4.333]
4 (4.333, 6.0]
5 (4.333, 6.0]
dtype: category
Categories (3, interval[float64, right]): [(0.995, 2.667] < (2.667, 4.333] < (4.333, 6.0]]
In [42]: pd.Series([1, 2, 3, 2, 2, 3]).astype("category")
Out[42]:
0 1
1 2
2 3
3 2
4 2
5 3
dtype: category
Categories (3, int64): [1, 2, 3]
For more details and examples see categorical introduction and the
API documentation. There is also documentation regarding the
differences to R’s factor.
| getting_started/comparison/comparison_with_r.html |
pandas.DataFrame.xs | `pandas.DataFrame.xs`
Return cross-section from the Series/DataFrame.
```
>>> d = {'num_legs': [4, 4, 2, 2],
... 'num_wings': [0, 0, 2, 2],
... 'class': ['mammal', 'mammal', 'mammal', 'bird'],
... 'animal': ['cat', 'dog', 'bat', 'penguin'],
... 'locomotion': ['walks', 'walks', 'flies', 'walks']}
>>> df = pd.DataFrame(data=d)
>>> df = df.set_index(['class', 'animal', 'locomotion'])
>>> df
num_legs num_wings
class animal locomotion
mammal cat walks 4 0
dog walks 4 0
bat flies 2 2
bird penguin walks 2 2
``` | DataFrame.xs(key, axis=0, level=None, drop_level=True)[source]#
Return cross-section from the Series/DataFrame.
This method takes a key argument to select data at a particular
level of a MultiIndex.
Parameters
key : label or tuple of label
Label contained in the index, or partially in a MultiIndex.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0
Axis to retrieve cross-section on.
level : object, defaults to first n levels (n=1 or len(key))
In case of a key partially contained in a MultiIndex, indicate which levels are used. Levels can be referred by label or position.
drop_level : bool, default True
If False, returns object with same levels as self.
Returns
Series or DataFrame
Cross-section from the original Series or DataFrame corresponding to the selected index levels.
See also
DataFrame.loc : Access a group of rows and columns by label(s) or a boolean array.
DataFrame.iloc : Purely integer-location based indexing for selection by position.
Notes
xs can not be used to set values.
MultiIndex Slicers is a generic way to get/set values on
any level or levels.
It is a superset of xs functionality, see
MultiIndex Slicers.
Examples
>>> d = {'num_legs': [4, 4, 2, 2],
... 'num_wings': [0, 0, 2, 2],
... 'class': ['mammal', 'mammal', 'mammal', 'bird'],
... 'animal': ['cat', 'dog', 'bat', 'penguin'],
... 'locomotion': ['walks', 'walks', 'flies', 'walks']}
>>> df = pd.DataFrame(data=d)
>>> df = df.set_index(['class', 'animal', 'locomotion'])
>>> df
num_legs num_wings
class animal locomotion
mammal cat walks 4 0
dog walks 4 0
bat flies 2 2
bird penguin walks 2 2
Get values at specified index
>>> df.xs('mammal')
num_legs num_wings
animal locomotion
cat walks 4 0
dog walks 4 0
bat flies 2 2
Get values at several indexes
>>> df.xs(('mammal', 'dog'))
num_legs num_wings
locomotion
walks 4 0
Get values at specified index and level
>>> df.xs('cat', level=1)
num_legs num_wings
class locomotion
mammal walks 4 0
Get values at several indexes and levels
>>> df.xs(('bird', 'walks'),
... level=[0, 'locomotion'])
num_legs num_wings
animal
penguin 2 2
Get values at specified column and axis
>>> df.xs('num_wings', axis=1)
class animal locomotion
mammal cat walks 0
dog walks 0
bat flies 2
bird penguin walks 2
Name: num_wings, dtype: int64
| reference/api/pandas.DataFrame.xs.html |
pandas.Timestamp.tzname | `pandas.Timestamp.tzname`
Return self.tzinfo.tzname(self). | Timestamp.tzname()#
Return self.tzinfo.tzname(self).
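For example (a small illustrative sketch; tzname() returns None on tz-naive timestamps):
>>> pd.Timestamp('2023-01-01', tz='Europe/Brussels').tzname()
'CET'
>>> pd.Timestamp('2023-01-01').tzname() is None
True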
| reference/api/pandas.Timestamp.tzname.html |
pandas.tseries.offsets.YearBegin.copy | `pandas.tseries.offsets.YearBegin.copy`
Return a copy of the frequency.
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
``` | YearBegin.copy()#
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
| reference/api/pandas.tseries.offsets.YearBegin.copy.html |
pandas.tseries.offsets.YearEnd.rule_code | pandas.tseries.offsets.YearEnd.rule_code | YearEnd.rule_code#
| reference/api/pandas.tseries.offsets.YearEnd.rule_code.html |
Input/output | Input/output | Pickling#
read_pickle(filepath_or_buffer[, ...])
Load pickled pandas object (or any object) from file.
DataFrame.to_pickle(path[, compression, ...])
Pickle (serialize) object to file.
Flat file#
read_table(filepath_or_buffer, *[, sep, ...])
Read general delimited file into DataFrame.
read_csv(filepath_or_buffer, *[, sep, ...])
Read a comma-separated values (csv) file into DataFrame.
DataFrame.to_csv([path_or_buf, sep, na_rep, ...])
Write object to a comma-separated values (csv) file.
read_fwf(filepath_or_buffer, *[, colspecs, ...])
Read a table of fixed-width formatted lines into DataFrame.
Clipboard#
read_clipboard([sep])
Read text from clipboard and pass to read_csv.
DataFrame.to_clipboard([excel, sep])
Copy object to the system clipboard.
Excel#
read_excel(io[, sheet_name, header, names, ...])
Read an Excel file into a pandas DataFrame.
DataFrame.to_excel(excel_writer[, ...])
Write object to an Excel sheet.
ExcelFile.parse([sheet_name, header, names, ...])
Parse specified sheet(s) into a DataFrame.
Styler.to_excel(excel_writer[, sheet_name, ...])
Write Styler to an Excel sheet.
ExcelWriter(path[, engine, date_format, ...])
Class for writing DataFrame objects into excel sheets.
JSON#
read_json(path_or_buf, *[, orient, typ, ...])
Convert a JSON string to pandas object.
json_normalize(data[, record_path, meta, ...])
Normalize semi-structured JSON data into a flat table.
DataFrame.to_json([path_or_buf, orient, ...])
Convert the object to a JSON string.
build_table_schema(data[, index, ...])
Create a Table schema from data.
HTML#
read_html(io, *[, match, flavor, header, ...])
Read HTML tables into a list of DataFrame objects.
DataFrame.to_html([buf, columns, col_space, ...])
Render a DataFrame as an HTML table.
Styler.to_html([buf, table_uuid, ...])
Write Styler to a file, buffer or string in HTML-CSS format.
XML#
read_xml(path_or_buffer, *[, xpath, ...])
Read XML document into a DataFrame object.
DataFrame.to_xml([path_or_buffer, index, ...])
Render a DataFrame to an XML document.
Latex#
DataFrame.to_latex([buf, columns, ...])
Render object to a LaTeX tabular, longtable, or nested table.
Styler.to_latex([buf, column_format, ...])
Write Styler to a file, buffer or string in LaTeX format.
HDFStore: PyTables (HDF5)#
read_hdf(path_or_buf[, key, mode, errors, ...])
Read from the store, close it if we opened it.
HDFStore.put(key, value[, format, index, ...])
Store object in HDFStore.
HDFStore.append(key, value[, format, axes, ...])
Append to Table in file.
HDFStore.get(key)
Retrieve pandas object stored in file.
HDFStore.select(key[, where, start, stop, ...])
Retrieve pandas object stored in file, optionally based on where criteria.
HDFStore.info()
Print detailed information on the store.
HDFStore.keys([include])
Return a list of keys corresponding to objects stored in HDFStore.
HDFStore.groups()
Return a list of all the top-level nodes.
HDFStore.walk([where])
Walk the pytables group hierarchy for pandas objects.
Warning
One can store a subclass of DataFrame or Series to HDF5,
but the type of the subclass is lost upon storing.
Feather#
read_feather(path[, columns, use_threads, ...])
Load a feather-format object from the file path.
DataFrame.to_feather(path, **kwargs)
Write a DataFrame to the binary Feather format.
Parquet#
read_parquet(path[, engine, columns, ...])
Load a parquet object from the file path, returning a DataFrame.
DataFrame.to_parquet([path, engine, ...])
Write a DataFrame to the binary parquet format.
ORC#
read_orc(path[, columns])
Load an ORC object from the file path, returning a DataFrame.
DataFrame.to_orc([path, engine, index, ...])
Write a DataFrame to the ORC format.
SAS#
read_sas(filepath_or_buffer, *[, format, ...])
Read SAS files stored as either XPORT or SAS7BDAT format files.
SPSS#
read_spss(path[, usecols, convert_categoricals])
Load an SPSS file from the file path, returning a DataFrame.
SQL#
read_sql_table(table_name, con[, schema, ...])
Read SQL database table into a DataFrame.
read_sql_query(sql, con[, index_col, ...])
Read SQL query into a DataFrame.
read_sql(sql, con[, index_col, ...])
Read SQL query or database table into a DataFrame.
DataFrame.to_sql(name, con[, schema, ...])
Write records stored in a DataFrame to a SQL database.
Google BigQuery#
read_gbq(query[, project_id, index_col, ...])
Load data from Google BigQuery.
STATA#
read_stata(filepath_or_buffer, *[, ...])
Read Stata file into DataFrame.
DataFrame.to_stata(path, *[, convert_dates, ...])
Export DataFrame object to Stata dta format.
StataReader.data_label
Return data label of Stata file.
StataReader.value_labels()
Return a nested dict associating each variable name to its value and label.
StataReader.variable_labels()
Return a dict associating each variable name with corresponding label.
StataWriter.write_file()
Export DataFrame object to Stata dta format.
| reference/io.html |
pandas.tseries.offsets.QuarterEnd.is_month_end | `pandas.tseries.offsets.QuarterEnd.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
``` | QuarterEnd.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
| reference/api/pandas.tseries.offsets.QuarterEnd.is_month_end.html |
pandas.DataFrame.loc | `pandas.DataFrame.loc`
Access a group of rows and columns by label(s) or a boolean array.
.loc[] is primarily label based, but may also be used with a
boolean array.
```
>>> df = pd.DataFrame([[1, 2], [4, 5], [7, 8]],
... index=['cobra', 'viper', 'sidewinder'],
... columns=['max_speed', 'shield'])
>>> df
max_speed shield
cobra 1 2
viper 4 5
sidewinder 7 8
``` | property DataFrame.loc[source]#
Access a group of rows and columns by label(s) or a boolean array.
.loc[] is primarily label based, but may also be used with a
boolean array.
Allowed inputs are:
A single label, e.g. 5 or 'a', (note that 5 is
interpreted as a label of the index, and never as an
integer position along the index).
A list or array of labels, e.g. ['a', 'b', 'c'].
A slice object with labels, e.g. 'a':'f'.
Warning
Note that contrary to usual python slices, both the
start and the stop are included
A boolean array of the same length as the axis being sliced,
e.g. [True, False, True].
An alignable boolean Series. The index of the key will be aligned before
masking.
An alignable Index. The Index of the returned selection will be the input.
A callable function with one argument (the calling Series or
DataFrame) and that returns valid output for indexing (one of the above)
See more at Selection by Label.
Raises
KeyErrorIf any items are not found.
IndexingErrorIf an indexed key is passed and its index is unalignable to the frame index.
See also
DataFrame.atAccess a single value for a row/column label pair.
DataFrame.ilocAccess group of rows and columns by integer position(s).
DataFrame.xsReturns a cross-section (row(s) or column(s)) from the Series/DataFrame.
Series.locAccess group of values using labels.
Examples
Getting values
>>> df = pd.DataFrame([[1, 2], [4, 5], [7, 8]],
... index=['cobra', 'viper', 'sidewinder'],
... columns=['max_speed', 'shield'])
>>> df
max_speed shield
cobra 1 2
viper 4 5
sidewinder 7 8
Single label. Note this returns the row as a Series.
>>> df.loc['viper']
max_speed 4
shield 5
Name: viper, dtype: int64
List of labels. Note using [[]] returns a DataFrame.
>>> df.loc[['viper', 'sidewinder']]
max_speed shield
viper 4 5
sidewinder 7 8
Single label for row and column
>>> df.loc['cobra', 'shield']
2
Slice with labels for row and single label for column. As mentioned
above, note that both the start and stop of the slice are included.
>>> df.loc['cobra':'viper', 'max_speed']
cobra 1
viper 4
Name: max_speed, dtype: int64
Boolean list with the same length as the row axis
>>> df.loc[[False, False, True]]
max_speed shield
sidewinder 7 8
Alignable boolean Series:
>>> df.loc[pd.Series([False, True, False],
... index=['viper', 'sidewinder', 'cobra'])]
max_speed shield
sidewinder 7 8
Index (same behavior as df.reindex)
>>> df.loc[pd.Index(["cobra", "viper"], name="foo")]
max_speed shield
foo
cobra 1 2
viper 4 5
Conditional that returns a boolean Series
>>> df.loc[df['shield'] > 6]
max_speed shield
sidewinder 7 8
Conditional that returns a boolean Series with column labels specified
>>> df.loc[df['shield'] > 6, ['max_speed']]
max_speed
sidewinder 7
Callable that returns a boolean Series
>>> df.loc[lambda df: df['shield'] == 8]
max_speed shield
sidewinder 7 8
Setting values
Set value for all items matching the list of labels
>>> df.loc[['viper', 'sidewinder'], ['shield']] = 50
>>> df
max_speed shield
cobra 1 2
viper 4 50
sidewinder 7 50
Set value for an entire row
>>> df.loc['cobra'] = 10
>>> df
max_speed shield
cobra 10 10
viper 4 50
sidewinder 7 50
Set value for an entire column
>>> df.loc[:, 'max_speed'] = 30
>>> df
max_speed shield
cobra 30 10
viper 30 50
sidewinder 30 50
Set value for rows matching callable condition
>>> df.loc[df['shield'] > 35] = 0
>>> df
max_speed shield
cobra 30 10
viper 0 0
sidewinder 0 0
Getting values on a DataFrame with an index that has integer labels
Another example using integers for the index
>>> df = pd.DataFrame([[1, 2], [4, 5], [7, 8]],
... index=[7, 8, 9], columns=['max_speed', 'shield'])
>>> df
max_speed shield
7 1 2
8 4 5
9 7 8
Slice with integer labels for rows. As mentioned above, note that both
the start and stop of the slice are included.
>>> df.loc[7:9]
max_speed shield
7 1 2
8 4 5
9 7 8
Getting values with a MultiIndex
A number of examples using a DataFrame with a MultiIndex
>>> tuples = [
... ('cobra', 'mark i'), ('cobra', 'mark ii'),
... ('sidewinder', 'mark i'), ('sidewinder', 'mark ii'),
... ('viper', 'mark ii'), ('viper', 'mark iii')
... ]
>>> index = pd.MultiIndex.from_tuples(tuples)
>>> values = [[12, 2], [0, 4], [10, 20],
... [1, 4], [7, 1], [16, 36]]
>>> df = pd.DataFrame(values, columns=['max_speed', 'shield'], index=index)
>>> df
max_speed shield
cobra mark i 12 2
mark ii 0 4
sidewinder mark i 10 20
mark ii 1 4
viper mark ii 7 1
mark iii 16 36
Single label. Note this returns a DataFrame with a single index.
>>> df.loc['cobra']
max_speed shield
mark i 12 2
mark ii 0 4
Single index tuple. Note this returns a Series.
>>> df.loc[('cobra', 'mark ii')]
max_speed 0
shield 4
Name: (cobra, mark ii), dtype: int64
Single label for row and column. Similar to passing in a tuple, this
returns a Series.
>>> df.loc['cobra', 'mark i']
max_speed 12
shield 2
Name: (cobra, mark i), dtype: int64
Single tuple. Note using [[]] returns a DataFrame.
>>> df.loc[[('cobra', 'mark ii')]]
max_speed shield
cobra mark ii 0 4
Single tuple for the index with a single label for the column
>>> df.loc[('cobra', 'mark i'), 'shield']
2
Slice from index tuple to single label
>>> df.loc[('cobra', 'mark i'):'viper']
max_speed shield
cobra mark i 12 2
mark ii 0 4
sidewinder mark i 10 20
mark ii 1 4
viper mark ii 7 1
mark iii 16 36
Slice from index tuple to index tuple
>>> df.loc[('cobra', 'mark i'):('viper', 'mark ii')]
max_speed shield
cobra mark i 12 2
mark ii 0 4
sidewinder mark i 10 20
mark ii 1 4
viper mark ii 7 1
Please see the user guide
for more details and explanations of advanced indexing.
| reference/api/pandas.DataFrame.loc.html |
pandas.Series.max | `pandas.Series.max`
Return the maximum of the values over the requested axis.
If you want the index of the maximum, use idxmax. This is the equivalent of the numpy.ndarray method argmax.
```
>>> idx = pd.MultiIndex.from_arrays([
... ['warm', 'warm', 'cold', 'cold'],
... ['dog', 'falcon', 'fish', 'spider']],
... names=['blooded', 'animal'])
>>> s = pd.Series([4, 2, 0, 8], name='legs', index=idx)
>>> s
blooded animal
warm dog 4
falcon 2
cold fish 0
spider 8
Name: legs, dtype: int64
``` | Series.max(axis=_NoDefault.no_default, skipna=True, level=None, numeric_only=None, **kwargs)[source]#
Return the maximum of the values over the requested axis.
If you want the index of the maximum, use idxmax. This is the equivalent of the numpy.ndarray method argmax.
Parameters
axis{index (0)}Axis for the function to be applied on.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values when computing the result.
levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a scalar.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
numeric_onlybool, default NoneInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data. Not implemented for Series.
Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be
False in a future version of pandas.
**kwargsAdditional keyword arguments to be passed to the function.
Returns
scalar or Series (if level specified)
See also
Series.sumReturn the sum.
Series.minReturn the minimum.
Series.maxReturn the maximum.
Series.idxminReturn the index of the minimum.
Series.idxmaxReturn the index of the maximum.
DataFrame.sumReturn the sum over the requested axis.
DataFrame.minReturn the minimum over the requested axis.
DataFrame.maxReturn the maximum over the requested axis.
DataFrame.idxminReturn the index of the minimum over the requested axis.
DataFrame.idxmaxReturn the index of the maximum over the requested axis.
Examples
>>> idx = pd.MultiIndex.from_arrays([
... ['warm', 'warm', 'cold', 'cold'],
... ['dog', 'falcon', 'fish', 'spider']],
... names=['blooded', 'animal'])
>>> s = pd.Series([4, 2, 0, 8], name='legs', index=idx)
>>> s
blooded animal
warm dog 4
falcon 2
cold fish 0
spider 8
Name: legs, dtype: int64
>>> s.max()
8
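Because the level keyword is deprecated, the same per-level maximum is obtained with groupby; a minimal sketch reusing the Series above:
>>> s.groupby(level='blooded').max()
blooded
cold    8
warm    4
Name: legs, dtype: int64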
| reference/api/pandas.Series.max.html |
pandas.api.extensions.ExtensionArray.isin | `pandas.api.extensions.ExtensionArray.isin`
Pointwise comparison for set containment in the given values.
Roughly equivalent to np.array([x in values for x in self]) | ExtensionArray.isin(values)[source]#
Pointwise comparison for set containment in the given values.
Roughly equivalent to np.array([x in values for x in self])
Parameters
valuesSequence
Returns
np.ndarray[bool]
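A minimal sketch using a Categorical (one concrete ExtensionArray) to show the element-wise containment check:
>>> cat = pd.Categorical(['a', 'b', 'c'])
>>> cat.isin(['a', 'c'])
array([ True, False,  True])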
| reference/api/pandas.api.extensions.ExtensionArray.isin.html |
pandas.tseries.offsets.CustomBusinessMonthEnd.offset | `pandas.tseries.offsets.CustomBusinessMonthEnd.offset`
Alias for self._offset. | CustomBusinessMonthEnd.offset#
Alias for self._offset.
| reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.offset.html |
pandas.Timestamp.daysinmonth | `pandas.Timestamp.daysinmonth`
Return the number of days in the month.
Examples
```
>>> ts = pd.Timestamp(2020, 3, 14)
>>> ts.days_in_month
31
``` | Timestamp.daysinmonth#
Return the number of days in the month.
Examples
>>> ts = pd.Timestamp(2020, 3, 14)
>>> ts.days_in_month
31
| reference/api/pandas.Timestamp.daysinmonth.html |
pandas.Series.str.cat | `pandas.Series.str.cat`
Concatenate strings in the Series/Index with given separator.
```
>>> s = pd.Series(['a', 'b', np.nan, 'd'])
>>> s.str.cat(sep=' ')
'a b d'
``` | Series.str.cat(others=None, sep=None, na_rep=None, join='left')[source]#
Concatenate strings in the Series/Index with given separator.
If others is specified, this function concatenates the Series/Index
and elements of others element-wise.
If others is not passed, then all values in the Series/Index are
concatenated into a single string with a given sep.
Parameters
othersSeries, Index, DataFrame, np.ndarray or list-likeSeries, Index, DataFrame, np.ndarray (one- or two-dimensional) and
other list-likes of strings must have the same length as the
calling Series/Index, with the exception of indexed objects (i.e.
Series/Index/DataFrame) if join is not None.
If others is a list-like that contains a combination of Series,
Index or np.ndarray (1-dim), then all elements will be unpacked and
must satisfy the above criteria individually.
If others is None, the method returns the concatenation of all
strings in the calling Series/Index.
sepstr, default ‘’The separator between the different elements/columns. By default
the empty string ‘’ is used.
na_repstr or None, default NoneRepresentation that is inserted for all missing values:
If na_rep is None, and others is None, missing values in the
Series/Index are omitted from the result.
If na_rep is None, and others is not None, a row containing a
missing value in any of the columns (before concatenation) will
have a missing value in the result.
join{‘left’, ‘right’, ‘outer’, ‘inner’}, default ‘left’Determines the join-style between the calling Series/Index and any
Series/Index/DataFrame in others (objects without an index need
to match the length of the calling Series/Index). To disable
alignment, use .values on any Series/Index/DataFrame in others.
New in version 0.23.0.
Changed in version 1.0.0: Changed default of join from None to ‘left’.
Returns
str, Series or IndexIf others is None, str is returned, otherwise a Series/Index
(same type as caller) of objects is returned.
See also
splitSplit each string in the Series/Index.
joinJoin lists contained as elements in the Series/Index.
Examples
When not passing others, all values are concatenated into a single
string:
>>> s = pd.Series(['a', 'b', np.nan, 'd'])
>>> s.str.cat(sep=' ')
'a b d'
By default, NA values in the Series are ignored. Using na_rep, they
can be given a representation:
>>> s.str.cat(sep=' ', na_rep='?')
'a b ? d'
If others is specified, corresponding values are concatenated with
the separator. Result will be a Series of strings.
>>> s.str.cat(['A', 'B', 'C', 'D'], sep=',')
0 a,A
1 b,B
2 NaN
3 d,D
dtype: object
Missing values will remain missing in the result, but can again be
represented using na_rep
>>> s.str.cat(['A', 'B', 'C', 'D'], sep=',', na_rep='-')
0 a,A
1 b,B
2 -,C
3 d,D
dtype: object
If sep is not specified, the values are concatenated without
separation.
>>> s.str.cat(['A', 'B', 'C', 'D'], na_rep='-')
0 aA
1 bB
2 -C
3 dD
dtype: object
Series with different indexes can be aligned before concatenation. The
join-keyword works as in other methods.
>>> t = pd.Series(['d', 'a', 'e', 'c'], index=[3, 0, 4, 2])
>>> s.str.cat(t, join='left', na_rep='-')
0 aa
1 b-
2 -c
3 dd
dtype: object
>>>
>>> s.str.cat(t, join='outer', na_rep='-')
0 aa
1 b-
2 -c
3 dd
4 -e
dtype: object
>>>
>>> s.str.cat(t, join='inner', na_rep='-')
0 aa
2 -c
3 dd
dtype: object
>>>
>>> s.str.cat(t, join='right', na_rep='-')
3 dd
0 aa
4 -e
2 -c
dtype: object
For more examples, see here.
| reference/api/pandas.Series.str.cat.html |
pandas.DataFrame.lookup | `pandas.DataFrame.lookup`
Label-based “fancy indexing” function for DataFrame.
Deprecated since version 1.2.0: DataFrame.lookup is deprecated,
use pandas.factorize and NumPy indexing instead.
For further details see
Looking up values by index/column labels. | DataFrame.lookup(row_labels, col_labels)[source]#
Label-based “fancy indexing” function for DataFrame.
Deprecated since version 1.2.0: DataFrame.lookup is deprecated,
use pandas.factorize and NumPy indexing instead.
For further details see
Looking up values by index/column labels.
Given equal-length arrays of row and column labels, return an
array of the values corresponding to each (row, col) pair.
Parameters
row_labelssequenceThe row labels to use for lookup.
col_labelssequenceThe column labels to use for lookup.
Returns
numpy.ndarrayThe found values.
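A hedged sketch of the recommended replacement using pandas.factorize and NumPy indexing (the toy frame and column names are illustrative):
>>> df = pd.DataFrame({'col': ['A', 'B'], 'A': [1, 2], 'B': [3, 4]})
>>> idx, cols = pd.factorize(df['col'])
>>> df.reindex(cols, axis=1).to_numpy()[np.arange(len(df)), idx]
array([1, 4])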
| reference/api/pandas.DataFrame.lookup.html |
pandas.CategoricalIndex.categories | `pandas.CategoricalIndex.categories`
The categories of this categorical. | property CategoricalIndex.categories[source]#
The categories of this categorical.
Setting assigns new values to each category (effectively a rename of
each individual category).
The assigned value has to be a list-like object. All items must be
unique and the number of items in the new categories must be the same
as the number of items in the old categories.
Assigning to categories is an in-place operation!
Raises
ValueErrorIf the new categories do not validate as categories or if the
number of new categories is unequal to the number of old categories.
See also
rename_categoriesRename categories.
reorder_categoriesReorder categories.
add_categoriesAdd new categories.
remove_categoriesRemove the specified categories.
remove_unused_categoriesRemove categories which are not used.
set_categoriesSet the categories to the specified ones.
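A minimal sketch showing how the categories are read, and how rename_categories (listed above) returns a relabelled index:
>>> ci = pd.CategoricalIndex(['a', 'b', 'a'])
>>> ci.categories
Index(['a', 'b'], dtype='object')
>>> ci.rename_categories(['x', 'y'])
CategoricalIndex(['x', 'y', 'x'], categories=['x', 'y'], ordered=False, dtype='category')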
| reference/api/pandas.CategoricalIndex.categories.html |
General functions | General functions | Data manipulations#
melt(frame[, id_vars, value_vars, var_name, ...])
Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
pivot(data, *[, index, columns, values])
Return reshaped DataFrame organized by given index / column values.
pivot_table(data[, values, index, columns, ...])
Create a spreadsheet-style pivot table as a DataFrame.
crosstab(index, columns[, values, rownames, ...])
Compute a simple cross tabulation of two (or more) factors.
cut(x, bins[, right, labels, retbins, ...])
Bin values into discrete intervals.
qcut(x, q[, labels, retbins, precision, ...])
Quantile-based discretization function.
merge(left, right[, how, on, left_on, ...])
Merge DataFrame or named Series objects with a database-style join.
merge_ordered(left, right[, on, left_on, ...])
Perform a merge for ordered data with optional filling/interpolation.
merge_asof(left, right[, on, left_on, ...])
Perform a merge by key distance.
concat(objs, *[, axis, join, ignore_index, ...])
Concatenate pandas objects along a particular axis.
get_dummies(data[, prefix, prefix_sep, ...])
Convert categorical variable into dummy/indicator variables.
from_dummies(data[, sep, default_category])
Create a categorical DataFrame from a DataFrame of dummy variables.
factorize(values[, sort, na_sentinel, ...])
Encode the object as an enumerated type or categorical variable.
unique(values)
Return unique values based on a hash table.
wide_to_long(df, stubnames, i, j[, sep, suffix])
Unpivot a DataFrame from wide to long format.
Top-level missing data#
isna(obj)
Detect missing values for an array-like object.
isnull(obj)
Detect missing values for an array-like object.
notna(obj)
Detect non-missing values for an array-like object.
notnull(obj)
Detect non-missing values for an array-like object.
Top-level dealing with numeric data#
to_numeric(arg[, errors, downcast])
Convert argument to a numeric type.
Top-level dealing with datetimelike data#
to_datetime(arg[, errors, dayfirst, ...])
Convert argument to datetime.
to_timedelta(arg[, unit, errors])
Convert argument to timedelta.
date_range([start, end, periods, freq, tz, ...])
Return a fixed frequency DatetimeIndex.
bdate_range([start, end, periods, freq, tz, ...])
Return a fixed frequency DatetimeIndex with business day as the default.
period_range([start, end, periods, freq, name])
Return a fixed frequency PeriodIndex.
timedelta_range([start, end, periods, freq, ...])
Return a fixed frequency TimedeltaIndex with day as the default.
infer_freq(index[, warn])
Infer the most likely frequency given the input index.
Top-level dealing with Interval data#
interval_range([start, end, periods, freq, ...])
Return a fixed frequency IntervalIndex.
Top-level evaluation#
eval(expr[, parser, engine, truediv, ...])
Evaluate a Python expression as a string using various backends.
Hashing#
util.hash_array(vals[, encoding, hash_key, ...])
Given a 1d array, return an array of deterministic integers.
util.hash_pandas_object(obj[, index, ...])
Return a data hash of the Index/Series/DataFrame.
Importing from other DataFrame libraries#
api.interchange.from_dataframe(df[, allow_copy])
Build a pd.DataFrame from any DataFrame supporting the interchange protocol.
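For orientation, a minimal sketch of one of the reshaping functions listed above (melt), using an illustrative toy frame:
>>> df = pd.DataFrame({'id': [1, 2], 'x': [10, 20], 'y': [30, 40]})
>>> pd.melt(df, id_vars='id', value_vars=['x', 'y'])
   id variable  value
0   1        x     10
1   2        x     20
2   1        y     30
3   2        y     40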
| reference/general_functions.html |
pandas.tseries.offsets.Micro.delta | pandas.tseries.offsets.Micro.delta | Micro.delta#
| reference/api/pandas.tseries.offsets.Micro.delta.html |
pandas.plotting.scatter_matrix | `pandas.plotting.scatter_matrix`
Draw a matrix of scatter plots.
Amount of transparency applied.
```
>>> df = pd.DataFrame(np.random.randn(1000, 4), columns=['A','B','C','D'])
>>> pd.plotting.scatter_matrix(df, alpha=0.2)
array([[<AxesSubplot: xlabel='A', ylabel='A'>,
<AxesSubplot: xlabel='B', ylabel='A'>,
<AxesSubplot: xlabel='C', ylabel='A'>,
<AxesSubplot: xlabel='D', ylabel='A'>],
[<AxesSubplot: xlabel='A', ylabel='B'>,
<AxesSubplot: xlabel='B', ylabel='B'>,
<AxesSubplot: xlabel='C', ylabel='B'>,
<AxesSubplot: xlabel='D', ylabel='B'>],
[<AxesSubplot: xlabel='A', ylabel='C'>,
<AxesSubplot: xlabel='B', ylabel='C'>,
<AxesSubplot: xlabel='C', ylabel='C'>,
<AxesSubplot: xlabel='D', ylabel='C'>],
[<AxesSubplot: xlabel='A', ylabel='D'>,
<AxesSubplot: xlabel='B', ylabel='D'>,
<AxesSubplot: xlabel='C', ylabel='D'>,
<AxesSubplot: xlabel='D', ylabel='D'>]], dtype=object)
``` | pandas.plotting.scatter_matrix(frame, alpha=0.5, figsize=None, ax=None, grid=False, diagonal='hist', marker='.', density_kwds=None, hist_kwds=None, range_padding=0.05, **kwargs)[source]#
Draw a matrix of scatter plots.
Parameters
frameDataFrame
alphafloat, optionalAmount of transparency applied.
figsize(float,float), optionalA tuple (width, height) in inches.
axMatplotlib axis object, optional
gridbool, optionalSetting this to True will show the grid.
diagonal{‘hist’, ‘kde’}Pick between ‘kde’ and ‘hist’ for either Kernel Density Estimation or
Histogram plot in the diagonal.
markerstr, optionalMatplotlib marker type, default ‘.’.
density_kwdskeywordsKeyword arguments to be passed to kernel density estimate plot.
hist_kwdskeywordsKeyword arguments to be passed to hist function.
range_paddingfloat, default 0.05Relative extension of axis range in x and y with respect to
(x_max - x_min) or (y_max - y_min).
**kwargsKeyword arguments to be passed to scatter function.
Returns
numpy.ndarrayA matrix of scatter plots.
Examples
>>> df = pd.DataFrame(np.random.randn(1000, 4), columns=['A','B','C','D'])
>>> pd.plotting.scatter_matrix(df, alpha=0.2)
array([[<AxesSubplot: xlabel='A', ylabel='A'>,
<AxesSubplot: xlabel='B', ylabel='A'>,
<AxesSubplot: xlabel='C', ylabel='A'>,
<AxesSubplot: xlabel='D', ylabel='A'>],
[<AxesSubplot: xlabel='A', ylabel='B'>,
<AxesSubplot: xlabel='B', ylabel='B'>,
<AxesSubplot: xlabel='C', ylabel='B'>,
<AxesSubplot: xlabel='D', ylabel='B'>],
[<AxesSubplot: xlabel='A', ylabel='C'>,
<AxesSubplot: xlabel='B', ylabel='C'>,
<AxesSubplot: xlabel='C', ylabel='C'>,
<AxesSubplot: xlabel='D', ylabel='C'>],
[<AxesSubplot: xlabel='A', ylabel='D'>,
<AxesSubplot: xlabel='B', ylabel='D'>,
<AxesSubplot: xlabel='C', ylabel='D'>,
<AxesSubplot: xlabel='D', ylabel='D'>]], dtype=object)
| reference/api/pandas.plotting.scatter_matrix.html |
Sparse data structures | Sparse data structures
pandas provides data structures for efficiently storing sparse data.
These are not necessarily sparse in the typical “mostly 0” sense. Rather, you can view these
objects as being “compressed” where any data matching a specific value (NaN / missing value, though any value
can be chosen, including 0) is omitted. The compressed values are not actually stored in the array.
Notice the dtype, Sparse[float64, nan]. The nan means that elements in the
array that are nan aren’t actually stored, only the non-nan elements are.
Those non-nan elements have a float64 dtype.
The sparse objects exist for memory efficiency reasons. Suppose you had a
large, mostly NA DataFrame:
As you can see, the density (% of values that have not been “compressed”) is
extremely low. This sparse object takes up much less memory on disk (pickled)
and in the Python interpreter.
Functionally, their behavior should be nearly
identical to their dense counterparts. | pandas provides data structures for efficiently storing sparse data.
These are not necessarily sparse in the typical “mostly 0” sense. Rather, you can view these
objects as being “compressed” where any data matching a specific value (NaN / missing value, though any value
can be chosen, including 0) is omitted. The compressed values are not actually stored in the array.
In [1]: arr = np.random.randn(10)
In [2]: arr[2:-2] = np.nan
In [3]: ts = pd.Series(pd.arrays.SparseArray(arr))
In [4]: ts
Out[4]:
0 0.469112
1 -0.282863
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
8 -0.861849
9 -2.104569
dtype: Sparse[float64, nan]
Notice the dtype, Sparse[float64, nan]. The nan means that elements in the
array that are nan aren’t actually stored, only the non-nan elements are.
Those non-nan elements have a float64 dtype.
The sparse objects exist for memory efficiency reasons. Suppose you had a
large, mostly NA DataFrame:
In [5]: df = pd.DataFrame(np.random.randn(10000, 4))
In [6]: df.iloc[:9998] = np.nan
In [7]: sdf = df.astype(pd.SparseDtype("float", np.nan))
In [8]: sdf.head()
Out[8]:
0 1 2 3
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
2 NaN NaN NaN NaN
3 NaN NaN NaN NaN
4 NaN NaN NaN NaN
In [9]: sdf.dtypes
Out[9]:
0 Sparse[float64, nan]
1 Sparse[float64, nan]
2 Sparse[float64, nan]
3 Sparse[float64, nan]
dtype: object
In [10]: sdf.sparse.density
Out[10]: 0.0002
As you can see, the density (% of values that have not been “compressed”) is
extremely low. This sparse object takes up much less memory on disk (pickled)
and in the Python interpreter.
In [11]: 'dense : {:0.2f} KB'.format(df.memory_usage().sum() / 1e3)
Out[11]: 'dense : 320.13 KB'
In [12]: 'sparse: {:0.2f} KB'.format(sdf.memory_usage().sum() / 1e3)
Out[12]: 'sparse: 0.22 KB'
Functionally, their behavior should be nearly
identical to their dense counterparts.
SparseArray#
arrays.SparseArray is a ExtensionArray
for storing an array of sparse values (see dtypes for more
on extension arrays). It is a 1-dimensional ndarray-like object storing
only values distinct from the fill_value:
In [13]: arr = np.random.randn(10)
In [14]: arr[2:5] = np.nan
In [15]: arr[7:8] = np.nan
In [16]: sparr = pd.arrays.SparseArray(arr)
In [17]: sparr
Out[17]:
[-1.9556635297215477, -1.6588664275960427, nan, nan, nan, 1.1589328886422277, 0.14529711373305043, nan, 0.6060271905134522, 1.3342113401317768]
Fill: nan
IntIndex
Indices: array([0, 1, 5, 6, 8, 9], dtype=int32)
A sparse array can be converted to a regular (dense) ndarray with numpy.asarray()
In [18]: np.asarray(sparr)
Out[18]:
array([-1.9557, -1.6589, nan, nan, nan, 1.1589, 0.1453,
nan, 0.606 , 1.3342])
SparseDtype#
The SparseArray.dtype property stores two pieces of information
The dtype of the non-sparse values
The scalar fill value
In [19]: sparr.dtype
Out[19]: Sparse[float64, nan]
A SparseDtype may be constructed by passing only a dtype
In [20]: pd.SparseDtype(np.dtype('datetime64[ns]'))
Out[20]: Sparse[datetime64[ns], numpy.datetime64('NaT')]
in which case a default fill value will be used (for NumPy dtypes this is often the
“missing” value for that dtype). To override this default an explicit fill value may be
passed instead
In [21]: pd.SparseDtype(np.dtype('datetime64[ns]'),
....: fill_value=pd.Timestamp('2017-01-01'))
....:
Out[21]: Sparse[datetime64[ns], Timestamp('2017-01-01 00:00:00')]
Finally, the string alias 'Sparse[dtype]' may be used to specify a sparse dtype
in many places
In [22]: pd.array([1, 0, 0, 2], dtype='Sparse[int]')
Out[22]:
[1, 0, 0, 2]
Fill: 0
IntIndex
Indices: array([0, 3], dtype=int32)
Sparse accessor#
pandas provides a .sparse accessor, similar to .str for string data, .cat
for categorical data, and .dt for datetime-like data. This namespace provides
attributes and methods that are specific to sparse data.
In [23]: s = pd.Series([0, 0, 1, 2], dtype="Sparse[int]")
In [24]: s.sparse.density
Out[24]: 0.5
In [25]: s.sparse.fill_value
Out[25]: 0
This accessor is available only on data with SparseDtype, and on the Series
class itself for creating a Series with sparse data from a scipy COO matrix with Series.sparse.from_coo().
New in version 0.25.0.
A .sparse accessor has been added for DataFrame as well.
See Sparse accessor for more.
Sparse calculation#
You can apply NumPy ufuncs
to arrays.SparseArray and get a arrays.SparseArray as a result.
In [26]: arr = pd.arrays.SparseArray([1., np.nan, np.nan, -2., np.nan])
In [27]: np.abs(arr)
Out[27]:
[1.0, nan, nan, 2.0, nan]
Fill: nan
IntIndex
Indices: array([0, 3], dtype=int32)
The ufunc is also applied to fill_value. This is needed to get
the correct dense result.
In [28]: arr = pd.arrays.SparseArray([1., -1, -1, -2., -1], fill_value=-1)
In [29]: np.abs(arr)
Out[29]:
[1, 1, 1, 2.0, 1]
Fill: 1
IntIndex
Indices: array([3], dtype=int32)
In [30]: np.abs(arr).to_dense()
Out[30]: array([1., 1., 1., 2., 1.])
Migrating#
Note
SparseSeries and SparseDataFrame were removed in pandas 1.0.0. This migration
guide is present to aid in migrating from previous versions.
In older versions of pandas, the SparseSeries and SparseDataFrame classes (documented below)
were the preferred way to work with sparse data. With the advent of extension arrays, these subclasses
are no longer needed. Their purpose is better served by using a regular Series or DataFrame with
sparse values instead.
Note
There’s no performance or memory penalty to using a Series or DataFrame with sparse values,
rather than a SparseSeries or SparseDataFrame.
This section provides some guidance on migrating your code to the new style. As a reminder,
you can use the Python warnings module to control warnings. But we recommend modifying
your code, rather than ignoring the warning.
Construction
From an array-like, use the regular Series or
DataFrame constructors with arrays.SparseArray values.
# Previous way
>>> pd.SparseDataFrame({"A": [0, 1]})
# New way
In [31]: pd.DataFrame({"A": pd.arrays.SparseArray([0, 1])})
Out[31]:
A
0 0
1 1
From a SciPy sparse matrix, use DataFrame.sparse.from_spmatrix(),
# Previous way
>>> from scipy import sparse
>>> mat = sparse.eye(3)
>>> df = pd.SparseDataFrame(mat, columns=['A', 'B', 'C'])
# New way
In [32]: from scipy import sparse
In [33]: mat = sparse.eye(3)
In [34]: df = pd.DataFrame.sparse.from_spmatrix(mat, columns=['A', 'B', 'C'])
In [35]: df.dtypes
Out[35]:
A Sparse[float64, 0]
B Sparse[float64, 0]
C Sparse[float64, 0]
dtype: object
Conversion
From sparse to dense, use the .sparse accessors
In [36]: df.sparse.to_dense()
Out[36]:
A B C
0 1.0 0.0 0.0
1 0.0 1.0 0.0
2 0.0 0.0 1.0
In [37]: df.sparse.to_coo()
Out[37]:
<3x3 sparse matrix of type '<class 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
From dense to sparse, use DataFrame.astype() with a SparseDtype.
In [38]: dense = pd.DataFrame({"A": [1, 0, 0, 1]})
In [39]: dtype = pd.SparseDtype(int, fill_value=0)
In [40]: dense.astype(dtype)
Out[40]:
A
0 1
1 0
2 0
3 1
Sparse Properties
Sparse-specific properties, like density, are available on the .sparse accessor.
In [41]: df.sparse.density
Out[41]: 0.3333333333333333
General differences
In a SparseDataFrame, all columns were sparse. A DataFrame can have a mixture of
sparse and dense columns. As a consequence, assigning new columns to a DataFrame with sparse
values will not automatically convert the input to be sparse.
# Previous Way
>>> df = pd.SparseDataFrame({"A": [0, 1]})
>>> df['B'] = [0, 0] # implicitly becomes Sparse
>>> df['B'].dtype
Sparse[int64, nan]
Instead, you’ll need to ensure that the values being assigned are sparse
In [42]: df = pd.DataFrame({"A": pd.arrays.SparseArray([0, 1])})
In [43]: df['B'] = [0, 0] # remains dense
In [44]: df['B'].dtype
Out[44]: dtype('int64')
In [45]: df['B'] = pd.arrays.SparseArray([0, 0])
In [46]: df['B'].dtype
Out[46]: Sparse[int64, 0]
The SparseDataFrame.default_kind and SparseDataFrame.default_fill_value attributes
have no replacement.
Interaction with scipy.sparse#
Use DataFrame.sparse.from_spmatrix() to create a DataFrame with sparse values from a sparse matrix.
New in version 0.25.0.
In [47]: from scipy.sparse import csr_matrix
In [48]: arr = np.random.random(size=(1000, 5))
In [49]: arr[arr < .9] = 0
In [50]: sp_arr = csr_matrix(arr)
In [51]: sp_arr
Out[51]:
<1000x5 sparse matrix of type '<class 'numpy.float64'>'
with 517 stored elements in Compressed Sparse Row format>
In [52]: sdf = pd.DataFrame.sparse.from_spmatrix(sp_arr)
In [53]: sdf.head()
Out[53]:
0 1 2 3 4
0 0.956380 0.0 0.0 0.000000 0.0
1 0.000000 0.0 0.0 0.000000 0.0
2 0.000000 0.0 0.0 0.000000 0.0
3 0.000000 0.0 0.0 0.000000 0.0
4 0.999552 0.0 0.0 0.956153 0.0
In [54]: sdf.dtypes
Out[54]:
0 Sparse[float64, 0]
1 Sparse[float64, 0]
2 Sparse[float64, 0]
3 Sparse[float64, 0]
4 Sparse[float64, 0]
dtype: object
All sparse formats are supported, but matrices that are not in COOrdinate format will be converted, copying data as needed.
To convert back to sparse SciPy matrix in COO format, you can use the DataFrame.sparse.to_coo() method:
In [55]: sdf.sparse.to_coo()
Out[55]:
<1000x5 sparse matrix of type '<class 'numpy.float64'>'
with 517 stored elements in COOrdinate format>
Series.sparse.to_coo() is implemented for transforming a Series with sparse values indexed by a MultiIndex to a scipy.sparse.coo_matrix.
The method requires a MultiIndex with two or more levels.
In [56]: s = pd.Series([3.0, np.nan, 1.0, 3.0, np.nan, np.nan])
In [57]: s.index = pd.MultiIndex.from_tuples(
....: [
....: (1, 2, "a", 0),
....: (1, 2, "a", 1),
....: (1, 1, "b", 0),
....: (1, 1, "b", 1),
....: (2, 1, "b", 0),
....: (2, 1, "b", 1),
....: ],
....: names=["A", "B", "C", "D"],
....: )
....:
In [58]: ss = s.astype('Sparse')
In [59]: ss
Out[59]:
A B C D
1 2 a 0 3.0
1 NaN
1 b 0 1.0
1 3.0
2 1 b 0 NaN
1 NaN
dtype: Sparse[float64, nan]
In the example below, we transform the Series to a sparse representation of a 2-d array by specifying that the first and second MultiIndex levels define labels for the rows and the third and fourth levels define labels for the columns. We also specify that the column and row labels should be sorted in the final sparse representation.
In [60]: A, rows, columns = ss.sparse.to_coo(
....: row_levels=["A", "B"], column_levels=["C", "D"], sort_labels=True
....: )
....:
In [61]: A
Out[61]:
<3x4 sparse matrix of type '<class 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
In [62]: A.todense()
Out[62]:
matrix([[0., 0., 1., 3.],
[3., 0., 0., 0.],
[0., 0., 0., 0.]])
In [63]: rows
Out[63]: [(1, 1), (1, 2), (2, 1)]
In [64]: columns
Out[64]: [('a', 0), ('a', 1), ('b', 0), ('b', 1)]
Specifying different row and column labels (and not sorting them) yields a different sparse matrix:
In [65]: A, rows, columns = ss.sparse.to_coo(
....: row_levels=["A", "B", "C"], column_levels=["D"], sort_labels=False
....: )
....:
In [66]: A
Out[66]:
<3x2 sparse matrix of type '<class 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
In [67]: A.todense()
Out[67]:
matrix([[3., 0.],
[1., 3.],
[0., 0.]])
In [68]: rows
Out[68]: [(1, 2, 'a'), (1, 1, 'b'), (2, 1, 'b')]
In [69]: columns
Out[69]: [(0,), (1,)]
A convenience method Series.sparse.from_coo() is implemented for creating a Series with sparse values from a scipy.sparse.coo_matrix.
In [70]: from scipy import sparse
In [71]: A = sparse.coo_matrix(([3.0, 1.0, 2.0], ([1, 0, 0], [0, 2, 3])), shape=(3, 4))
In [72]: A
Out[72]:
<3x4 sparse matrix of type '<class 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
In [73]: A.todense()
Out[73]:
matrix([[0., 0., 1., 2.],
[3., 0., 0., 0.],
[0., 0., 0., 0.]])
The default behaviour (with dense_index=False) simply returns a Series containing
only the non-null entries.
In [74]: ss = pd.Series.sparse.from_coo(A)
In [75]: ss
Out[75]:
0 2 1.0
3 2.0
1 0 3.0
dtype: Sparse[float64, nan]
Specifying dense_index=True will result in an index that is the Cartesian product of the
row and columns coordinates of the matrix. Note that this will consume a significant amount of memory
(relative to dense_index=False) if the sparse matrix is large (and sparse) enough.
In [76]: ss_dense = pd.Series.sparse.from_coo(A, dense_index=True)
In [77]: ss_dense
Out[77]:
0 0 NaN
1 NaN
2 1.0
3 2.0
1 0 3.0
1 NaN
2 NaN
3 NaN
2 0 NaN
1 NaN
2 NaN
3 NaN
dtype: Sparse[float64, nan]
| user_guide/sparse.html |
pandas.DatetimeIndex.minute | `pandas.DatetimeIndex.minute`
The minutes of the datetime.
```
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="T")
... )
>>> datetime_series
0 2000-01-01 00:00:00
1 2000-01-01 00:01:00
2 2000-01-01 00:02:00
dtype: datetime64[ns]
>>> datetime_series.dt.minute
0 0
1 1
2 2
dtype: int64
``` | property DatetimeIndex.minute[source]#
The minutes of the datetime.
Examples
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="T")
... )
>>> datetime_series
0 2000-01-01 00:00:00
1 2000-01-01 00:01:00
2 2000-01-01 00:02:00
dtype: datetime64[ns]
>>> datetime_series.dt.minute
0 0
1 1
2 2
dtype: int64
| reference/api/pandas.DatetimeIndex.minute.html |
pandas.DataFrame.transform | `pandas.DataFrame.transform`
Call func on self producing a DataFrame with the same axis shape as self.
```
>>> df = pd.DataFrame({'A': range(3), 'B': range(1, 4)})
>>> df
A B
0 0 1
1 1 2
2 2 3
>>> df.transform(lambda x: x + 1)
A B
0 1 2
1 2 3
2 3 4
``` | DataFrame.transform(func, axis=0, *args, **kwargs)[source]#
Call func on self producing a DataFrame with the same axis shape as self.
Parameters
funcfunction, str, list-like or dict-likeFunction to use for transforming the data. If a function, must either
work when passed a DataFrame or when passed to DataFrame.apply. If func
is both list-like and dict-like, dict-like behavior takes precedence.
Accepted combinations are:
function
string function name
list-like of functions and/or function names, e.g. [np.exp, 'sqrt']
dict-like of axis labels -> functions, function names or list-like of such.
axis{0 or ‘index’, 1 or ‘columns’}, default 0If 0 or ‘index’: apply function to each column.
If 1 or ‘columns’: apply function to each row.
*argsPositional arguments to pass to func.
**kwargsKeyword arguments to pass to func.
Returns
DataFrameA DataFrame that must have the same length as self.
Raises
ValueErrorIf the returned DataFrame has a different length than self.
See also
DataFrame.aggOnly perform aggregating type operations.
DataFrame.applyInvoke function on a DataFrame.
Notes
Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods
for more details.
Examples
>>> df = pd.DataFrame({'A': range(3), 'B': range(1, 4)})
>>> df
A B
0 0 1
1 1 2
2 2 3
>>> df.transform(lambda x: x + 1)
A B
0 1 2
1 2 3
2 3 4
Even though the resulting DataFrame must have the same length as the
input DataFrame, it is possible to provide several input functions:
>>> s = pd.Series(range(3))
>>> s
0 0
1 1
2 2
dtype: int64
>>> s.transform([np.sqrt, np.exp])
sqrt exp
0 0.000000 1.000000
1 1.000000 2.718282
2 1.414214 7.389056
You can call transform on a GroupBy object:
>>> df = pd.DataFrame({
... "Date": [
... "2015-05-08", "2015-05-07", "2015-05-06", "2015-05-05",
... "2015-05-08", "2015-05-07", "2015-05-06", "2015-05-05"],
... "Data": [5, 8, 6, 1, 50, 100, 60, 120],
... })
>>> df
Date Data
0 2015-05-08 5
1 2015-05-07 8
2 2015-05-06 6
3 2015-05-05 1
4 2015-05-08 50
5 2015-05-07 100
6 2015-05-06 60
7 2015-05-05 120
>>> df.groupby('Date')['Data'].transform('sum')
0 55
1 108
2 66
3 121
4 55
5 108
6 66
7 121
Name: Data, dtype: int64
>>> df = pd.DataFrame({
... "c": [1, 1, 1, 2, 2, 2, 2],
... "type": ["m", "n", "o", "m", "m", "n", "n"]
... })
>>> df
c type
0 1 m
1 1 n
2 1 o
3 2 m
4 2 m
5 2 n
6 2 n
>>> df['size'] = df.groupby('c')['type'].transform(len)
>>> df
c type size
0 1 m 3
1 1 n 3
2 1 o 3
3 2 m 4
4 2 m 4
5 2 n 4
6 2 n 4
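The dict-like form applies a different function to each column; a small sketch (functions chosen for illustration only):
>>> df = pd.DataFrame({'A': [1, 4], 'B': [1, 2]})
>>> df.transform({'A': np.sqrt, 'B': np.exp})
     A         B
0  1.0  2.718282
1  2.0  7.389056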
| reference/api/pandas.DataFrame.transform.html |
pandas.tseries.offsets.YearBegin.onOffset | pandas.tseries.offsets.YearBegin.onOffset | YearBegin.onOffset()#
| reference/api/pandas.tseries.offsets.YearBegin.onOffset.html |
pandas.core.window.rolling.Rolling.sum | `pandas.core.window.rolling.Rolling.sum`
Calculate the rolling sum.
```
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s
0 1
1 2
2 3
3 4
4 5
dtype: int64
``` | Rolling.sum(numeric_only=False, *args, engine=None, engine_kwargs=None, **kwargs)[source]#
Calculate the rolling sum.
Parameters
numeric_onlybool, default FalseInclude only float, int, boolean columns.
New in version 1.5.0.
*argsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
enginestr, default None
'cython' : Runs the operation through C-extensions from cython.
'numba' : Runs the operation through JIT compiled code from numba.
None : Defaults to 'cython' or globally setting compute.use_numba
New in version 1.3.0.
engine_kwargsdict, default None
For 'cython' engine, there are no accepted engine_kwargs
For 'numba' engine, the engine can accept nopython, nogil
and parallel dictionary keys. The values must either be True or
False. The default engine_kwargs for the 'numba' engine is
{'nopython': True, 'nogil': False, 'parallel': False}
New in version 1.3.0.
**kwargsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
Returns
Series or DataFrameReturn type is the same as the original object with np.float64 dtype.
See also
pandas.Series.rollingCalling rolling with Series data.
pandas.DataFrame.rollingCalling rolling with DataFrames.
pandas.Series.sumAggregating sum for Series.
pandas.DataFrame.sumAggregating sum for DataFrame.
Notes
See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine.
Examples
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s
0 1
1 2
2 3
3 4
4 5
dtype: int64
>>> s.rolling(3).sum()
0 NaN
1 NaN
2 6.0
3 9.0
4 12.0
dtype: float64
>>> s.rolling(3, center=True).sum()
0 NaN
1 6.0
2 9.0
3 12.0
4 NaN
dtype: float64
For DataFrame, each sum is computed column-wise.
>>> df = pd.DataFrame({"A": s, "B": s ** 2})
>>> df
A B
0 1 1
1 2 4
2 3 9
3 4 16
4 5 25
>>> df.rolling(3).sum()
A B
0 NaN NaN
1 NaN NaN
2 6.0 14.0
3 9.0 29.0
4 12.0 50.0
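When the optional numba dependency is installed, the same computation can be routed through the JIT-compiled engine; a hedged sketch:
>>> s.rolling(3).sum(engine="numba", engine_kwargs={"parallel": False})
0     NaN
1     NaN
2     6.0
3     9.0
4    12.0
dtype: float64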
| reference/api/pandas.core.window.rolling.Rolling.sum.html |