repo | instance_id | base_commit | patch | test_patch | problem_statement | hints_text | created_at | version | FAIL_TO_PASS | PASS_TO_PASS | environment_setup_commit | traceback | __index_level_0__
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pandas-dev/pandas | pandas-dev__pandas-28498 | 0a082d40c4a750655a7f1ce127c58ca26cd5905e | pandas._libs.parsers.TextReader('中文文件名-chinese-in-filename.csv') OSError: Initializing from file failed
kwargs={'delimiter': ',', 'doublequote': True, 'escapechar': None, 'quotechar': '"', 'quoting': 0, 'skipinitialspace': False, 'lineterminator': None, 'header': 0, 'index_col': None, 'names': None, 'skiprows': None, 'na_values': {'nan', '', 'NaN', 'N/A', '-NaN', 'null', '#N/A N/A', 'n/a', '-nan', '-1.#QNAN', 'NA', '1.#IND', '-1.#IND', '1.#QNAN', 'NULL', '#NA', '#N/A'}, 'true_values': None, 'false_values': None, 'converters': {}, 'dtype': None, 'keep_default_na': True, 'thousands': None, 'comment': None, 'decimal': b'.', 'usecols': None, 'verbose': False, 'encoding': 'gb2312', 'compression': None, 'mangle_dupe_cols': True, 'tupleize_cols': False, 'skip_blank_lines': True, 'delim_whitespace': False, 'na_filter': True, 'low_memory': True, 'memory_map': False, 'error_bad_lines': True, 'warn_bad_lines': True, 'float_precision': None, 'na_fvalues': set(), 'allow_leading_cols': True}
parsers.TextReader('中文文件名-chinese-in-filename.csv',**kwargs)
Traceback (most recent call last)
<ipython-input-6-8864dc3e024e> in <module>()
----> 1 parsers.TextReader(f,**_2)
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.__cinit__()
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._setup_parser_source()
OSError: Initializing from file failed
| This isn't part of the API - what error are you getting from user-facing methods?
@willweil
```
In [70]: pd.read_csv(f, delimiter=',',encoding='gb2312')
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-70-4d1e02136d2f> in <module>()
----> 1 pd.read_csv(f, delimiter=',',encoding='gb2312')
E:\QGB\Anaconda3\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, doublequote, delim_whitespace, low_memory, memory_map, float_precision)
676 skip_blank_lines=skip_blank_lines)
677
--> 678 return _read(filepath_or_buffer, kwds)
679
680 parser_f.__name__ = name
E:\QGB\Anaconda3\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds)
438
439 # Create the parser.
--> 440 parser = TextFileReader(filepath_or_buffer, **kwds)
441
442 if chunksize or iterator:
E:\QGB\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, f, engine, **kwds)
785 self.options['has_index_names'] = kwds['has_index_names']
786
--> 787 self._make_engine(self.engine)
788
789 def close(self):
E:\QGB\Anaconda3\lib\site-packages\pandas\io\parsers.py in _make_engine(self, engine)
1012 def _make_engine(self, engine='c'):
1013 if engine == 'c':
-> 1014 self._engine = CParserWrapper(self.f, **self.options)
1015 else:
1016 if engine == 'python':
E:\QGB\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, src, **kwds)
1706 kwds['usecols'] = self.usecols
1707
-> 1708 self._reader = parsers.TextReader(src, **kwds)
1709
1710 passed_names = self.names is None
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.__cinit__()
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._setup_parser_source()
OSError: Initializing from file failed
```
Hmm still not able to reproduce. Can you post the output of pd.show_versions? Have you tried on master?
@QGB
It is a bug in a previous version. I ran into it in 0.23.4.
I can't reproduce it in 0.24.2 or in current master.
```
In [4]: import pandas as pd
...: df = pd.DataFrame({"A":[1]})
...: df.to_csv("./中文.csv")
...: pd.read_csv("./中文.csv")
In [5]: pd.__version__
Out[5]: '0.24.2'
```
Works Fine.
In 0.23.4, you can still work around it by using
``pd.read_csv(f, engine="python")``
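Besides `engine="python"`, another workaround often suggested for this class of bug was to open the file yourself and pass the handle to `pd.read_csv`, since the failure sat in the C engine's filename handling rather than in Python's. A stdlib-only sketch (pandas deliberately not assumed here) showing that a file with a non-ASCII name reads fine once you hold an open handle:

```python
import csv
import os
import tempfile

# Create a CSV whose name contains non-ASCII (Chinese) characters.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "中文文件名.csv")

with open(path, "w", encoding="utf-8", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["A"])
    writer.writerow(["1"])

# Reading through an open file object is the same trick the
# pd.read_csv(open(path, encoding=...)) workaround relies on.
with open(path, "r", encoding="utf-8", newline="") as fh:
    rows = list(csv.reader(fh))

print(rows)  # [['A'], ['1']]
```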
I think if someone wanted to add a test case for this we can close it out | 2019-09-18T14:19:29Z | [] | [] |
Traceback (most recent call last)
<ipython-input-6-8864dc3e024e> in <module>()
----> 1 parsers.TextReader(f,**_2)
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.__cinit__()
| 13,029 |
||||
pandas-dev/pandas | pandas-dev__pandas-2857 | 16f791b141fe40564dfb1ab16ec845fa3ff8a8e9 | pandas.tests.test_graphics.TestDataFramePlots test_unsorted_index failure
``` python
======================================================================
FAIL: test_unsorted_index (pandas.tests.test_graphics.TestDataFramePlots)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<...>/pandas/tests/test_graphics.py", line 312, in test_unsorted_index
tm.assert_series_equal(rs, df.y)
File "<...>/pandas/util/testing.py", line 166, in assert_series_equal
assert(left.dtype == right.dtype)
AssertionError
...
(Pdb) p left.dtype
dtype('int32')
(Pdb) p right.dtype
dtype('int64')
(Pdb) p pd.__version__
'0.11.0.dev-dad367e'
```
yeh...the comparison series is built with dtype=int, which is platform dependent, while the rhs side defaults to int64 (as it wasn't specified), so we either need to change to np.int64 in the Series construction or set check_dtype=False in the assert
(prior to 0.11, the Series would always upcast, now that dtypes is supported #2708, can get test failures like this)
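For context, the platform dependence behind `dtype=int` can be seen without numpy at all: numpy's default integer historically followed the platform C `long`, which is 4 bytes on 32-bit builds (and on 64-bit Windows) but 8 bytes on 64-bit Linux/macOS. A small `ctypes` sketch (the numpy mapping is stated from memory here, not from this thread):

```python
import ctypes

# The width of the platform C long varies by OS and architecture;
# this is what made the test's dtype=int comparison platform dependent.
print("C long :", ctypes.sizeof(ctypes.c_long), "bytes")

# An explicitly sized 64-bit integer is the same width everywhere,
# which is why building the expected Series with np.int64 fixes the test.
print("c_int64:", ctypes.sizeof(ctypes.c_int64), "bytes")
```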
this doesn't fail on 64-bit and isn't tested on travis (as it requires matplotlib I guess...and don't build with that), should we?
mpl is available in the FULL_DEPS build. Check the bottom of the log for installed versions:
https://travis-ci.org/pydata/pandas/jobs/4745031
are you sure the tests are not run?
oh...didn't think they were running because this test has not failed on travis....and given what other tests I had to change on 32-bit I think this WOULD fail....odd
@lodagro which arch is this failing on?
```
Linux ubuntu 3.2.0-37-generic #58-Ubuntu SMP Thu Jan 24 15:28:57 UTC 2013 i686 i686 i386 GNU/Linux
```
Test is ok if I set dtype to `np.int64` instead of `int`.
| 2013-02-12T21:37:28Z | [] | [] |
Traceback (most recent call last):
File "<...>/pandas/tests/test_graphics.py", line 312, in test_unsorted_index
tm.assert_series_equal(rs, df.y)
File "<...>/pandas/util/testing.py", line 166, in assert_series_equal
assert(left.dtype == right.dtype)
AssertionError
| 13,032 |
||||
pandas-dev/pandas | pandas-dev__pandas-28834 | cd013b41bbde9752bbf561eba4abfbc439e83c7a | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -465,6 +465,7 @@ Other
- :meth:`Series.append` will no longer raise a ``TypeError`` when passed a tuple of ``Series`` (:issue:`28410`)
- :meth:`SeriesGroupBy.value_counts` will be able to handle the case even when the :class:`Grouper` makes empty groups (:issue: 28479)
- Fix corrupted error message when calling ``pandas.libs._json.encode()`` on a 0d array (:issue:`18878`)
+- Bug in :meth:`DataFrame.append` that raised ``IndexError`` when appending with empty list (:issue:`28769`)
- Fix :class:`AbstractHolidayCalendar` to return correct results for
years after 2030 (now goes up to 2200) (:issue:`27790`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -6943,10 +6943,13 @@ def append(self, other, ignore_index=False, verify_integrity=False, sort=None):
other = other._convert(datetime=True, timedelta=True)
if not self.columns.equals(combined_columns):
self = self.reindex(columns=combined_columns)
- elif isinstance(other, list) and not isinstance(other[0], DataFrame):
- other = DataFrame(other)
- if (self.columns.get_indexer(other.columns) >= 0).all():
- other = other.reindex(columns=self.columns)
+ elif isinstance(other, list):
+ if not other:
+ pass
+ elif not isinstance(other[0], DataFrame):
+ other = DataFrame(other)
+ if (self.columns.get_indexer(other.columns) >= 0).all():
+ other = other.reindex(columns=self.columns)
from pandas.core.reshape.concat import concat
| DataFrame.append with empty list raises IndexError
#### Code Sample
```python
>>> import pandas
>>> pandas.DataFrame().append([])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".env\lib\site-packages\pandas\core\frame.py", line 7108, in append
elif isinstance(other, list) and not isinstance(other[0], DataFrame):
IndexError: list index out of range
>>> pandas.__version__
'0.25.1'
```
#### Problem description
Crash when passing empty sequence to `DataFrame.append`
#### Expected Output
No crash.
The source DataFrame is returned intact.
#### Version
Version 0.25.1. Happens in master.
Problem line
https://github.com/pandas-dev/pandas/blob/master/pandas/core/frame.py#L7014
| What output are you hoping to see here? Still an empty DataFrame?
I guess the source DataFrame is expected, since an empty sequence is just empty and there is nothing to append.
Like this
```
pandas.DataFrame({"a":[1,2]}).append([pandas.DataFrame()])
```
Seems reasonable. A PR would be welcome. | 2019-10-08T05:26:25Z | [] | [] |
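The fix in the patch above boils down to checking for the empty list before `other[0]` is ever evaluated. The guard can be sketched in plain Python (a hypothetical helper for illustration, not a pandas API; a `dict` stands in for the `DataFrame` check):

```python
def normalize_append_arg(other):
    """Sketch of the guard DataFrame.append gained for list inputs.

    An empty list means "nothing to append" and passes through untouched;
    only a non-empty list of non-DataFrame rows gets converted.
    """
    if isinstance(other, list):
        if not other:
            # Previously execution fell straight through to other[0],
            # raising IndexError for an empty list.
            return other
        if not isinstance(other[0], dict):  # stand-in for the DataFrame check
            return [dict(enumerate(row)) for row in other]
    return other

print(normalize_append_arg([]))        # [] -- no IndexError any more
print(normalize_append_arg([[1, 2]]))  # [{0: 1, 1: 2}]
```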
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".env\lib\site-packages\pandas\core\frame.py", line 7108, in append
elif isinstance(other, list) and not isinstance(other[0], DataFrame):
IndexError: list index out of range
| 13,071 |
|||
pandas-dev/pandas | pandas-dev__pandas-28945 | 8830e85c35d25cc1763bd7a342ea1638c9b75dfc | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -838,6 +838,7 @@ ExtensionArray
^^^^^^^^^^^^^^
- Bug in :class:`arrays.PandasArray` when setting a scalar string (:issue:`28118`, :issue:`28150`).
+- Bug where nullable integers could not be compared to strings (:issue:`28930`)
-
diff --git a/pandas/conftest.py b/pandas/conftest.py
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -654,6 +654,24 @@ def any_int_dtype(request):
return request.param
+@pytest.fixture(params=ALL_EA_INT_DTYPES)
+def any_nullable_int_dtype(request):
+ """
+ Parameterized fixture for any nullable integer dtype.
+
+ * 'UInt8'
+ * 'Int8'
+ * 'UInt16'
+ * 'Int16'
+ * 'UInt32'
+ * 'Int32'
+ * 'UInt64'
+ * 'Int64'
+ """
+
+ return request.param
+
+
@pytest.fixture(params=ALL_REAL_DTYPES)
def any_real_dtype(request):
"""
diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py
--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -26,6 +26,7 @@
from pandas.core import nanops, ops
from pandas.core.algorithms import take
from pandas.core.arrays import ExtensionArray, ExtensionOpsMixin
+from pandas.core.ops import invalid_comparison
from pandas.core.ops.common import unpack_zerodim_and_defer
from pandas.core.tools.numeric import to_numeric
@@ -646,7 +647,11 @@ def cmp_method(self, other):
with warnings.catch_warnings():
warnings.filterwarnings("ignore", "elementwise", FutureWarning)
with np.errstate(all="ignore"):
- result = op(self._data, other)
+ method = getattr(self._data, f"__{op_name}__")
+ result = method(other)
+
+ if result is NotImplemented:
+ result = invalid_comparison(self._data, other, op)
# nans propagate
if mask is None:
| Comparison to string is broken for nullable int
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
df = pd.DataFrame({'x': [1, None]})
df.x.astype(pd.Int64Dtype()) == 'a'
```
#### Problem description
The above raises TypeError on line 627 in integer.py:
```python
result[mask] = op_name == "ne"
```
Here, `result` is a plain `bool` scalar, so the item assignment fails. Here is the exception:
```
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "C:\Python37\lib\site-packages\pandas\core\ops.py", line 1731, in wrapper
    return dispatch_to_extension_op(op, self, other)
  File "C:\Python37\lib\site-packages\pandas\core\ops.py", line 1220, in dispatch_to_extension_op
    res_values = op(new_left, new_right)
  File "C:\Python37\lib\site-packages\pandas\core\arrays\integer.py", line 564, in cmp_method
    result[mask] = True if op_name == 'ne' else False
TypeError: 'bool' object does not support item assignment
```
The above works as expected for float64 and other numeric types.
#### Expected Output
```
0    False
1    False
Name: x, dtype: bool
```
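The patch above routes NumPy's `NotImplemented` result through `pandas.core.ops.invalid_comparison`, which maps an unsupported comparison to element-wise `False` for `==`, element-wise `True` for `!=`, and a `TypeError` for the ordering operators. A minimal plain-Python sketch of that fallback (a stand-in for illustration, not the pandas implementation):

```python
import operator

def invalid_comparison_sketch(values, other, op):
    """Fallback for comparisons between incomparable operands."""
    if op is operator.eq:
        return [False] * len(values)
    if op is operator.ne:
        return [True] * len(values)
    raise TypeError(
        f"Invalid comparison between {type(values).__name__} and {type(other).__name__}"
    )

print(invalid_comparison_sketch([1, 2], "a", operator.eq))  # [False, False]
print(invalid_comparison_sketch([1, 2], "a", operator.ne))  # [True, True]
```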
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.0.final.0
python-bits : 64
OS : Windows
OS-release : 10
machine : AMD64
processor : Intel64 Family 6 Model 158 Stepping 9, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.None
pandas : 0.25.1
numpy : 1.16.4
pytz : 2019.1
dateutil : 2.8.0
pip : 19.2.3
setuptools : 39.0.1
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.3.4
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.3.4
matplotlib : 3.1.1
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : 1.3.0
sqlalchemy : 1.3.5
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
</details>
| 2019-10-12T21:09:02Z | [] | [] |
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "C:\Python37\lib\site-packages\pandas\core\ops.py", line 1731, in wrapper
return dispatch_to_extension_op(op, self, other)
File "C:\Python37\lib\site-packages\pandas\core\ops.py", line 1220, in dispatch_to_extension_op
res_values = op(new_left, new_right)
File "C:\Python37\lib\site-packages\pandas\core\arrays\integer.py", line 564, in cmp_method
result[mask] = True if op_name == 'ne' else False
TypeError: 'bool' object does not support item assignment
| 13,084 |
||||
pandas-dev/pandas | pandas-dev__pandas-28993 | 6d35836ec25b33990e6d962aff52e388652f65ce | diff --git a/pandas/util/testing.py b/pandas/util/testing.py
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -1156,7 +1156,9 @@ def assert_series_equal(
):
pass
else:
- assert_attr_equal("dtype", left, right)
+ assert_attr_equal(
+ "dtype", left, right, obj="Attributes of {obj}".format(obj=obj)
+ )
if check_exact:
assert_numpy_array_equal(
@@ -1315,8 +1317,9 @@ def assert_frame_equal(
>>> assert_frame_equal(df1, df2)
Traceback (most recent call last):
- AssertionError: Attributes are different
...
+ AssertionError: Attributes of DataFrame.iloc[:, 1] are different
+
Attribute "dtype" are different
[left]: int64
[right]: float64
| assert_frame_equal: "AssertionError: Attributes are different" is un-informative for DataFrames
#### Example from the docstring of `assert_frame_equal`
```
>>> from pandas.util.testing import assert_frame_equal
>>> df1 = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
>>> df2 = pd.DataFrame({'a': [1, 2], 'b': [3.0, 4.0]})
>>> assert_frame_equal(df1, df2)
Traceback (most recent call last):
AssertionError: Attributes are different
...
Attribute "dtype" are different
[left]: int64
[right]: float64
```
#### Problem description
The `AssertionError` is un-informative in that it does not state the column for which the error happens.
#### Expected Output
Something like:
```
AssertionError: Attributes of "DataFrame.iloc[:, 1]" are different
...
Attribute "dtype" are different.
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.6.7.final.0
python-bits : 64
OS : Linux
OS-release : 5.0.0-31-generic
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : de_DE.UTF-8
LOCALE : de_DE.UTF-8
pandas : 0.25.0
numpy : 1.17.2
pytz : 2019.3
dateutil : 2.8.0
pip : 19.2.3
setuptools : 41.4.0
Cython : 0.29.13
pytest : 5.2.1
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : 2.8.3 (dt dec pq3 ext lo64)
jinja2 : 2.10.3
IPython : 7.7.0
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : 2.7.0
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : 1.3.1
sqlalchemy : 1.3.6
tables : 3.5.2
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
</details>
#### Remedy
- `assert_frame_equal` passes `obj="{obj}.iloc[:, {idx}]".format(obj=obj, idx=i)` to `assert_series_equal`
- `assert_series_equal` should pass `obj="Attributes of {obj}".format(obj=obj)` to `assert_attr_equal`
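The remedy amounts to threading a column-qualified `obj` string down into `assert_attr_equal`, so the raised message names the offending column. A small sketch of the message construction (a hypothetical helper mirroring the desired format, not pandas code):

```python
def attr_mismatch_message(obj, attr, left, right):
    """Build the qualified failure message; `obj` already carries the
    positional context supplied by assert_frame_equal,
    e.g. "DataFrame.iloc[:, 1]"."""
    return (
        f"Attributes of {obj} are different\n\n"
        f'Attribute "{attr}" are different\n'
        f"[left]:  {left}\n"
        f"[right]: {right}"
    )

msg = attr_mismatch_message("DataFrame.iloc[:, 1]", "dtype", "int64", "float64")
print(msg.splitlines()[0])  # Attributes of DataFrame.iloc[:, 1] are different
```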
| 2019-10-15T09:18:26Z | [] | [] |
Traceback (most recent call last):
AssertionError: Attributes are different
...
Attribute "dtype" are different
| 13,091 |
||||
pandas-dev/pandas | pandas-dev__pandas-29063 | 2701f524661a82cbcb205e377ebe91d02fc66cb4 | Failure in groupby-apply if aggregating timedelta and datetime columns
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
fname = 'foo.csv'
sample_data = '''clientid,datetime
A,2017-02-01 00:00:00
B,2017-02-01 00:00:00
C,2017-02-01 00:00:00'''
open(fname, 'w').write(sample_data)
df = pd.read_csv(fname)
df['datetime'] = pd.to_datetime(df.datetime)
df['time_delta_zero'] = df.datetime - df.datetime
# Next line fails with the error message below
print(df.groupby('clientid').apply(
    lambda ddf: pd.Series(dict(
        clientid_age=ddf.time_delta_zero.min(),
        date=ddf.datetime.min(),
    ))
))
```
#### Problem description
The current behavior is that the groupby-apply line fails, with the error message indicated below.
#### Expected Output
```
  clientid                date  clientid_age
0        A 2017-02-01 00:00:00        0 days
1        B 2017-02-01 00:00:00        0 days
2        C 2017-02-01 00:00:00        0 days
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.10.final.0
python-bits: 64
OS: Darwin
OS-release: 16.4.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
pandas: 0.18.1
nose: 1.3.7
pip: 9.0.1
setuptools: 27.2.0
Cython: 0.24.1
numpy: 1.11.3
scipy: 0.18.1
statsmodels: 0.6.1
xarray: None
IPython: 5.1.0
sphinx: 1.4.6
patsy: 0.4.1
dateutil: 2.5.3
pytz: 2016.6.1
blosc: None
bottleneck: 1.1.0
tables: 3.2.3.1
numexpr: 2.6.1
matplotlib: 1.5.3
openpyxl: 2.3.2
xlrd: 1.0.0
xlwt: 1.1.2
xlsxwriter: 0.9.3
lxml: 3.6.4
bs4: 4.5.1
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: 1.0.13
pymysql: None
psycopg2: 2.6.2 (dt dec pq3 ext lo64)
jinja2: 2.8
boto: 2.42.0
pandas_datareader: None
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/Users/fieldc/anaconda/lib/python2.7/site-packages/pandas/core/groupby.py", line 651, in apply
return self._python_apply_general(f)
File "/Users/fieldc/anaconda/lib/python2.7/site-packages/pandas/core/groupby.py", line 660, in _python_apply_general
not_indexed_same=mutated or self.mutated)
File "/Users/fieldc/anaconda/lib/python2.7/site-packages/pandas/core/groupby.py", line 3343, in _wrap_applied_output
axis=self.axis).unstack()
File "/Users/fieldc/anaconda/lib/python2.7/site-packages/pandas/core/series.py", line 2043, in unstack
return unstack(self, level, fill_value)
File "/Users/fieldc/anaconda/lib/python2.7/site-packages/pandas/core/reshape.py", line 408, in unstack
return unstacker.get_result()
File "/Users/fieldc/anaconda/lib/python2.7/site-packages/pandas/core/reshape.py", line 169, in get_result
return DataFrame(values, index=index, columns=columns)
File "/Users/fieldc/anaconda/lib/python2.7/site-packages/pandas/core/frame.py", line 255, in __init__
copy=copy)
File "/Users/fieldc/anaconda/lib/python2.7/site-packages/pandas/core/frame.py", line 432, in _init_ndarray
return create_block_manager_from_blocks([values], [columns, index])
File "/Users/fieldc/anaconda/lib/python2.7/site-packages/pandas/core/internals.py", line 3993, in create_block_manager_from_blocks
construction_error(tot_items, blocks[0].shape[1:], axes, e)
File "/Users/fieldc/anaconda/lib/python2.7/site-packages/pandas/core/internals.py", line 3967, in construction_error
if block_shape[0] == 0:
IndexError: tuple index out of range
</details>
What you are doing is completely non-performant and way less readable than this idiomatic version:
```
In [31]: df.groupby('clientid').min()
Out[31]:
datetime time_delta_zero
clientid
A 2017-02-01 0 days
B 2017-02-01 0 days
C 2017-02-01 0 days
```
I'll mark it as a bug, though; it shouldn't error (as you are giving back a Series; the inference logic is already quite complex).
A PR to fix is welcome, though.
Oh yes, this is definitely horrible pandas code to write! It's a simplified example from a much more complex script doing things more complicated than min().
I found a work-around where I return a DataFrame from the applied function rather than a Series.
@field-cady my point is you can compute ``.min()`` or other functions via ``.apply``. don't compute them all at once, simply do:
```
result = df.groupby(...).agg(['min', my_really_hard_to_compute_function])
```
or you can be even more specific via
```
result1 = df.groupby(..).this_column.agg(....)
result2 = df.groupby(..).that_column.agg(...)
result = pd.concat([result1, result2], axis=1)
```
is also quite idiomatic / readable.
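For what it's worth, the per-group minimum those idiomatic calls compute can be sketched with the stdlib alone (sort, then `itertools.groupby`), which shows the shape of the computation independent of pandas:

```python
from itertools import groupby
from operator import itemgetter

rows = [
    ("A", "2017-02-01 00:00:00"),
    ("B", "2017-02-01 00:00:00"),
    ("A", "2017-01-15 00:00:00"),
]

# itertools.groupby only merges adjacent keys, so sort by the key first.
rows.sort(key=itemgetter(0))

mins = {
    client: min(dt for _, dt in group)
    for client, group in groupby(rows, key=itemgetter(0))
}
print(mins)  # {'A': '2017-01-15 00:00:00', 'B': '2017-02-01 00:00:00'}
```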
It seems that the error comes from the creation of a `DataFrame` with n-dimensional `timedelta/datetime` values; pandas extracts and flattens the embedded n-dimensional array here:
https://github.com/pandas-dev/pandas/blob/master/pandas/core/frame.py#L469-L473
Ex. Here we our initial `values` is an 2-dimensional array like
`[[datetime.timedelta(0), datetime.timedelta(0), datetime.timedelta(0)], [datetime.datetime(2017, 3, 16), datetime.datetime(2017, 3, 16), datetime.datetime(2017, 3, 16)]]`
and it was flattened to an 1-dimensional array:
`[datetime.timedelta(0), datetime.timedelta(0), datetime.timedelta(0), datetime.datetime(2017, 3, 16), datetime.datetime(2017, 3, 16), datetime.datetime(2017, 3, 16)]`
I didn't quite get why we want to extract a datetime-like array here?
so the purpose is to infer from a bunch of objects whether they are datetimelikes (e.g. datetime, datetime w/tz or timedeltas). This goes through some logic to see if it's possible, then tries to convert, backing away if things are not fully convertible (IOW there are mixed, differently-typed things that are not NaNs/NaTs). IOW you will get a datetime64[ns] or datetime64[tz-aware] or timedelta64[ns], or back what you started with.
The only thing is I think I originally made this work regardless of the passed-in shape (see the ravel). This is wrong: it should preserve the shape and return a list of array-likes if ndim > 1, or an array-like if ndim == 1. The array-likes are the converted objects, or the original array if you cannot convert successfully.
So this would fix this issue (and another one, ill find the reference). happy to have you put up a patch!
@PnPie this is the same cause as this: https://github.com/pandas-dev/pandas/issues/13287
Looks to work on master. Could use a test:
```
In [41]: df.groupby('clientid').apply(
   ...:     lambda ddf: pd.Series(dict(
   ...:         clientid_age = ddf.time_delta_zero.min(),
   ...:         date = ddf.datetime.min()
   ...:     ))
   ...: )
Out[41]:
clientid_age date
clientid
A 0 days 2017-02-01
B 0 days 2017-02-01
C 0 days 2017-02-01
In [42]: pd.__version__
Out[42]: '0.26.0.dev0+533.gd8f9be7e3'
```
Works with the released version (0.25.1), too. Can be closed.
Care to contribute a regression test @qudade?
@mroeschke Sure, will do! | 2019-10-17T21:19:13Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/Users/fieldc/anaconda/lib/python2.7/site-packages/pandas/core/groupby.py", line 651, in apply
return self._python_apply_general(f)
File "/Users/fieldc/anaconda/lib/python2.7/site-packages/pandas/core/groupby.py", line 660, in _python_apply_general
not_indexed_same=mutated or self.mutated)
File "/Users/fieldc/anaconda/lib/python2.7/site-packages/pandas/core/groupby.py", line 3343, in _wrap_applied_output
axis=self.axis).unstack()
File "/Users/fieldc/anaconda/lib/python2.7/site-packages/pandas/core/series.py", line 2043, in unstack
return unstack(self, level, fill_value)
File "/Users/fieldc/anaconda/lib/python2.7/site-packages/pandas/core/reshape.py", line 408, in unstack
return unstacker.get_result()
File "/Users/fieldc/anaconda/lib/python2.7/site-packages/pandas/core/reshape.py", line 169, in get_result
return DataFrame(values, index=index, columns=columns)
File "/Users/fieldc/anaconda/lib/python2.7/site-packages/pandas/core/frame.py", line 255, in __init__
copy=copy)
File "/Users/fieldc/anaconda/lib/python2.7/site-packages/pandas/core/frame.py", line 432, in _init_ndarray
return create_block_manager_from_blocks([values], [columns, index])
File "/Users/fieldc/anaconda/lib/python2.7/site-packages/pandas/core/internals.py", line 3993, in create_block_manager_from_blocks
construction_error(tot_items, blocks[0].shape[1:], axes, e)
File "/Users/fieldc/anaconda/lib/python2.7/site-packages/pandas/core/internals.py", line 3967, in construction_error
if block_shape[0] == 0:
IndexError: tuple index out of range
| 13,094 |
||||
pandas-dev/pandas | pandas-dev__pandas-29140 | 05406717740be3c49f0752547d750ceff34bb3b9 | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -292,6 +292,71 @@ New repr for :class:`~pandas.arrays.IntervalArray`
pd.arrays.IntervalArray.from_tuples([(0, 1), (2, 3)])
+``DataFrame.rename`` now only accepts one positional argument
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+- :meth:`DataFrame.rename` would previously accept positional arguments that would lead
+ to ambiguous or undefined behavior. From pandas 1.0, only the very first argument, which
+ maps labels to their new names along the default axis, is allowed to be passed by position
+ (:issue:`29136`).
+
+*pandas 0.25.x*
+
+.. code-block:: ipython
+
+ In [1]: df = pd.DataFrame([[1]])
+ In [2]: df.rename({0: 1}, {0: 2})
+ FutureWarning: ...Use named arguments to resolve ambiguity...
+ Out[2]:
+ 2
+ 1 1
+
+*pandas 1.0.0*
+
+.. ipython:: python
+ :okexcept:
+
+ df.rename({0: 1}, {0: 2})
+
+Note that errors will now be raised when conflicting or potentially ambiguous arguments are provided.
+
+*pandas 0.25.x*
+
+.. code-block:: ipython
+
+ In [1]: df.rename({0: 1}, index={0: 2})
+ Out[1]:
+ 0
+ 1 1
+
+ In [2]: df.rename(mapper={0: 1}, index={0: 2})
+ Out[2]:
+ 0
+ 2 1
+
+*pandas 1.0.0*
+
+.. ipython:: python
+ :okexcept:
+
+ df.rename({0: 1}, index={0: 2})
+ df.rename(mapper={0: 1}, index={0: 2})
+
+You can still change the axis along which the first positional argument is applied by
+supplying the ``axis`` keyword argument.
+
+.. ipython:: python
+
+ df.rename({0: 1})
+ df.rename({0: 1}, axis=1)
+
+If you would like to update both the index and column labels, be sure to use the respective
+keywords.
+
+.. ipython:: python
+
+ df.rename(index={0: 1}, columns={0: 2})
+
Extended verbose info output for :class:`~pandas.DataFrame`
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -555,7 +620,6 @@ Optional libraries below the lowest tested version may still work, but are not c
See :ref:`install.dependencies` and :ref:`install.optional_dependencies` for more.
-
.. _whatsnew_100.api.other:
Other API changes
diff --git a/pandas/_typing.py b/pandas/_typing.py
--- a/pandas/_typing.py
+++ b/pandas/_typing.py
@@ -2,10 +2,14 @@
from typing import (
IO,
TYPE_CHECKING,
+ Any,
AnyStr,
+ Callable,
Collection,
Dict,
+ Hashable,
List,
+ Mapping,
Optional,
TypeVar,
Union,
@@ -56,9 +60,14 @@
FrameOrSeries = TypeVar("FrameOrSeries", bound="NDFrame")
Axis = Union[str, int]
+Label = Optional[Hashable]
+Level = Union[Label, int]
Ordered = Optional[bool]
JSONSerializable = Union[PythonScalar, List, Dict]
Axes = Collection
+# For functions like rename that convert one label to another
+Renamer = Union[Mapping[Label, Any], Callable[[Label], Label]]
+
# to maintain type information across generic functions and parametrization
T = TypeVar("T")
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -38,7 +38,7 @@
from pandas._config import get_option
from pandas._libs import algos as libalgos, lib
-from pandas._typing import Axes, Dtype, FilePathOrBuffer
+from pandas._typing import Axes, Axis, Dtype, FilePathOrBuffer, Level, Renamer
from pandas.compat import PY37
from pandas.compat._optional import import_optional_dependency
from pandas.compat.numpy import function as nv
@@ -3986,7 +3986,19 @@ def drop(
"mapper",
[("copy", True), ("inplace", False), ("level", None), ("errors", "ignore")],
)
- def rename(self, *args, **kwargs):
+ def rename(
+ self,
+ mapper: Optional[Renamer] = None,
+ *,
+ index: Optional[Renamer] = None,
+ columns: Optional[Renamer] = None,
+ axis: Optional[Axis] = None,
+ copy: bool = True,
+ inplace: bool = False,
+ level: Optional[Level] = None,
+ errors: str = "ignore",
+ ) -> Optional["DataFrame"]:
+
"""
Alter axes labels.
@@ -4095,12 +4107,16 @@ def rename(self, *args, **kwargs):
2 2 5
4 3 6
"""
- axes = validate_axis_style_args(self, args, kwargs, "mapper", "rename")
- kwargs.update(axes)
- # Pop these, since the values are in `kwargs` under different names
- kwargs.pop("axis", None)
- kwargs.pop("mapper", None)
- return super().rename(**kwargs)
+ return super().rename(
+ mapper=mapper,
+ index=index,
+ columns=columns,
+ axis=axis,
+ copy=copy,
+ inplace=inplace,
+ level=level,
+ errors=errors,
+ )
@Substitution(**_shared_doc_kwargs)
@Appender(NDFrame.fillna.__doc__)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -30,7 +30,15 @@
from pandas._config import config
from pandas._libs import Timestamp, iNaT, lib, properties
-from pandas._typing import Dtype, FilePathOrBuffer, FrameOrSeries, JSONSerializable
+from pandas._typing import (
+ Axis,
+ Dtype,
+ FilePathOrBuffer,
+ FrameOrSeries,
+ JSONSerializable,
+ Level,
+ Renamer,
+)
from pandas.compat import set_function_name
from pandas.compat._optional import import_optional_dependency
from pandas.compat.numpy import function as nv
@@ -921,7 +929,18 @@ def swaplevel(self: FrameOrSeries, i=-2, j=-1, axis=0) -> FrameOrSeries:
# ----------------------------------------------------------------------
# Rename
- def rename(self, *args, **kwargs):
+ def rename(
+ self: FrameOrSeries,
+ mapper: Optional[Renamer] = None,
+ *,
+ index: Optional[Renamer] = None,
+ columns: Optional[Renamer] = None,
+ axis: Optional[Axis] = None,
+ copy: bool = True,
+ inplace: bool = False,
+ level: Optional[Level] = None,
+ errors: str = "ignore",
+ ) -> Optional[FrameOrSeries]:
"""
Alter axes input function or functions. Function / dict values must be
unique (1-to-1). Labels not contained in a dict / Series will be left
@@ -1034,44 +1053,46 @@ def rename(self, *args, **kwargs):
See the :ref:`user guide <basics.rename>` for more.
"""
- axes, kwargs = self._construct_axes_from_arguments(args, kwargs)
- copy = kwargs.pop("copy", True)
- inplace = kwargs.pop("inplace", False)
- level = kwargs.pop("level", None)
- axis = kwargs.pop("axis", None)
- errors = kwargs.pop("errors", "ignore")
- if axis is not None:
- # Validate the axis
- self._get_axis_number(axis)
-
- if kwargs:
- raise TypeError(
- "rename() got an unexpected keyword "
- f'argument "{list(kwargs.keys())[0]}"'
- )
-
- if com.count_not_none(*axes.values()) == 0:
+ if mapper is None and index is None and columns is None:
raise TypeError("must pass an index to rename")
- self._consolidate_inplace()
+ if index is not None or columns is not None:
+ if axis is not None:
+ raise TypeError(
+ "Cannot specify both 'axis' and any of 'index' or 'columns'"
+ )
+ elif mapper is not None:
+ raise TypeError(
+ "Cannot specify both 'mapper' and any of 'index' or 'columns'"
+ )
+ else:
+ # use the mapper argument
+ if axis and self._get_axis_number(axis) == 1:
+ columns = mapper
+ else:
+ index = mapper
+
result = self if inplace else self.copy(deep=copy)
- # start in the axis order to eliminate too many copies
- for axis in range(self._AXIS_LEN):
- v = axes.get(self._AXIS_NAMES[axis])
- if v is None:
+ for axis_no, replacements in enumerate((index, columns)):
+ if replacements is None:
continue
- f = com.get_rename_function(v)
- baxis = self._get_block_manager_axis(axis)
+
+ ax = self._get_axis(axis_no)
+ baxis = self._get_block_manager_axis(axis_no)
+ f = com.get_rename_function(replacements)
+
if level is not None:
- level = self.axes[axis]._get_level_number(level)
+ level = ax._get_level_number(level)
# GH 13473
- if not callable(v):
- indexer = self.axes[axis].get_indexer_for(v)
+ if not callable(replacements):
+ indexer = ax.get_indexer_for(replacements)
if errors == "raise" and len(indexer[indexer == -1]):
missing_labels = [
- label for index, label in enumerate(v) if indexer[index] == -1
+ label
+ for index, label in enumerate(replacements)
+ if indexer[index] == -1
]
raise KeyError(f"{missing_labels} not found in axis")
@@ -1082,6 +1103,7 @@ def rename(self, *args, **kwargs):
if inplace:
self._update_inplace(result._data)
+ return None
else:
return result.__finalize__(self)
@@ -4036,7 +4058,7 @@ def add_prefix(self: FrameOrSeries, prefix: str) -> FrameOrSeries:
f = functools.partial("{prefix}{}".format, prefix=prefix)
mapper = {self._info_axis_name: f}
- return self.rename(**mapper)
+ return self.rename(**mapper) # type: ignore
def add_suffix(self: FrameOrSeries, suffix: str) -> FrameOrSeries:
"""
@@ -4095,7 +4117,7 @@ def add_suffix(self: FrameOrSeries, suffix: str) -> FrameOrSeries:
f = functools.partial("{}{suffix}".format, suffix=suffix)
mapper = {self._info_axis_name: f}
- return self.rename(**mapper)
+ return self.rename(**mapper) # type: ignore
def sort_values(
self,
diff --git a/pandas/core/series.py b/pandas/core/series.py
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -3124,7 +3124,7 @@ def argsort(self, axis=0, kind="quicksort", order=None):
Parameters
----------
- axis : int
+ axis : {0 or "index"}
Has no effect but is accepted for compatibility with numpy.
kind : {'mergesort', 'quicksort', 'heapsort'}, default 'quicksort'
Choice of sorting algorithm. See np.sort for more
@@ -3893,7 +3893,16 @@ def align(
broadcast_axis=broadcast_axis,
)
- def rename(self, index=None, **kwargs):
+ def rename(
+ self,
+ index=None,
+ *,
+ axis=None,
+ copy=True,
+ inplace=False,
+ level=None,
+ errors="ignore",
+ ):
"""
Alter Series index labels or name.
@@ -3907,6 +3916,8 @@ def rename(self, index=None, **kwargs):
Parameters
----------
+ axis : {0 or "index"}
+ Unused. Accepted for compatability with DataFrame method only.
index : scalar, hashable sequence, dict-like or function, optional
Functions or dict-like are transformations to apply to
the index.
@@ -3924,6 +3935,7 @@ def rename(self, index=None, **kwargs):
See Also
--------
+ DataFrame.rename : Corresponding DataFrame method.
Series.rename_axis : Set the name of the axis.
Examples
@@ -3950,12 +3962,12 @@ def rename(self, index=None, **kwargs):
5 3
dtype: int64
"""
- kwargs["inplace"] = validate_bool_kwarg(kwargs.get("inplace", False), "inplace")
-
if callable(index) or is_dict_like(index):
- return super().rename(index=index, **kwargs)
+ return super().rename(
+ index, copy=copy, inplace=inplace, level=level, errors=errors
+ )
else:
- return self._set_name(index, inplace=kwargs.get("inplace"))
+ return self._set_name(index, inplace=inplace)
@Substitution(**_shared_doc_kwargs)
@Appender(generic.NDFrame.reindex.__doc__)
| DataFrame.rename only validates column arguments
Found this while trying to clean up axis handling in core.generic
This fails as you would hope
```python
>>> df = pd.DataFrame([[1]])
>>> df.rename({0: 1}, columns={0: 2}, axis=1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/williamayd/clones/pandas/pandas/util/_decorators.py", line 235, in wrapper
return func(*args, **kwargs)
File "/Users/williamayd/clones/pandas/pandas/core/frame.py", line 4143, in rename
axes = validate_axis_style_args(self, args, kwargs, "mapper", "rename")
File "/Users/williamayd/clones/pandas/pandas/util/_validators.py", line 287, in validate_axis_style_args
raise TypeError(msg)
TypeError: Cannot specify both 'axis' and any of 'index' or 'columns'.
```
This doesn't
```python
>>> df.rename({0: 1}, index={0: 2})
0
1 1
```
And perhaps even more surprising is that you will get a different result depending on whether the first argument is passed by position or keyword
```python
>>> df.rename(mapper={0: 1}, index={0: 2})
0
2 1
```
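The stricter, symmetric validation can be sketched as a small standalone helper (hypothetical name; the real logic lives on `NDFrame.rename` and also handles `copy`/`inplace`/`level`/`errors`):

```python
def resolve_rename_args(mapper=None, index=None, columns=None, axis=None):
    # Sketch: reject ambiguous combinations for BOTH axes, instead of
    # only validating the 'columns' path as before.
    if mapper is None and index is None and columns is None:
        raise TypeError("must pass an index to rename")
    if index is not None or columns is not None:
        if axis is not None:
            raise TypeError(
                "Cannot specify both 'axis' and any of 'index' or 'columns'"
            )
        if mapper is not None:
            raise TypeError(
                "Cannot specify both 'mapper' and any of 'index' or 'columns'"
            )
        return index, columns
    # Only 'mapper' was given: route it to the axis named by 'axis'.
    if axis in (1, "columns"):
        return None, mapper
    return mapper, None
```

With this shape, `rename({0: 1}, index={0: 2})` raises the same `TypeError` as the `axis=1` variant, removing the positional/keyword asymmetry shown above.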
| 2019-10-21T21:11:20Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/williamayd/clones/pandas/pandas/util/_decorators.py", line 235, in wrapper
return func(*args, **kwargs)
File "/Users/williamayd/clones/pandas/pandas/core/frame.py", line 4143, in rename
axes = validate_axis_style_args(self, args, kwargs, "mapper", "rename")
File "/Users/williamayd/clones/pandas/pandas/util/_validators.py", line 287, in validate_axis_style_args
raise TypeError(msg)
TypeError: Cannot specify both 'axis' and any of 'index' or 'columns'.
| 13,104 |
||||
pandas-dev/pandas | pandas-dev__pandas-29142 | ef77b5700d0829b944c332b1dbeb810797a38461 | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -248,7 +248,6 @@ Performance improvements
Bug fixes
~~~~~~~~~
-- Bug in :meth:`DataFrame.to_html` when using ``formatters=<list>`` and ``max_cols`` together. (:issue:`25955`)
Categorical
^^^^^^^^^^^
@@ -296,6 +295,7 @@ Numeric
- Bug in :meth:`DataFrame.quantile` with zero-column :class:`DataFrame` incorrectly raising (:issue:`23925`)
- :class:`DataFrame` flex inequality comparisons methods (:meth:`DataFrame.lt`, :meth:`DataFrame.le`, :meth:`DataFrame.gt`, :meth: `DataFrame.ge`) with object-dtype and ``complex`` entries failing to raise ``TypeError`` like their :class:`Series` counterparts (:issue:`28079`)
- Bug in :class:`DataFrame` logical operations (`&`, `|`, `^`) not matching :class:`Series` behavior by filling NA values (:issue:`28741`)
+- Bug in :meth:`DataFrame.interpolate` where specifying axis by name references variable before it is assigned (:issue:`29142`)
-
Conversion
@@ -356,6 +356,7 @@ I/O
- Bug in :meth:`DataFrame.read_json` where using ``orient="index"`` would not maintain the order (:issue:`28557`)
- Bug in :meth:`DataFrame.to_html` where the length of the ``formatters`` argument was not verified (:issue:`28469`)
- Bug in :meth:`pandas.io.formats.style.Styler` formatting for floating values not displaying decimals correctly (:issue:`13257`)
+- Bug in :meth:`DataFrame.to_html` when using ``formatters=<list>`` and ``max_cols`` together. (:issue:`25955`)
Plotting
^^^^^^^^
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -7023,14 +7023,15 @@ def interpolate(
"""
inplace = validate_bool_kwarg(inplace, "inplace")
+ axis = self._get_axis_number(axis)
+
if axis == 0:
ax = self._info_axis_name
_maybe_transposed_self = self
elif axis == 1:
_maybe_transposed_self = self.T
ax = 1
- else:
- _maybe_transposed_self = self
+
ax = _maybe_transposed_self._get_axis_number(ax)
if _maybe_transposed_self.ndim == 2:
| NDFrame.interpolate(): variable 'ax' not assigned when axis='index'
The `NDFrame.interpolate` function fails when passing a string as axis. Example:
```python
>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame(np.zeros((3,2)), columns=['a','b'])
>>> df.iloc[1] = np.nan
>>> df
a b
0 0.0 0.0
1 NaN NaN
2 0.0 0.0
>>> df.interpolate(axis=0)
a b
0 0.0 0.0
1 0.0 0.0
2 0.0 0.0
>>> df.interpolate(axis='index')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/lehanson/anaconda3/envs/plots/lib/python3.7/site-packages/pandas/core/generic.py", line 7006, in interpolate
ax = _maybe_transposed_self._get_axis_number(ax)
UnboundLocalError: local variable 'ax' referenced before assignment
```
From the documentation and from the function itself, it looks like `df.interpolate(axis='index')` was intended to work, but perhaps someone accidentally deleted a line in generic.py? The function seems to work properly if I add `ax = axis` in the else block here:
https://github.com/pandas-dev/pandas/blob/171c71611886aab8549a8620c5b0071a129ad685/pandas/core/generic.py#L6998-L7006
I am using pandas version 0.25.1
| Thanks for the report, confirmed on master.
We should have an `axis = self._get_axis_number(axis)` at the start of `NDFrame.interpolate` (and a whatsnew note & test to ensure we don't regress).
@TomAugspurger do you mean something like this? Also: there doesn't seem to be a whatsnew file since 0.25.2, does this go into v0.25.3.rst?
```
axis = self._get_axis_number(axis)
if axis == 0:
ax = self._info_axis_name
_maybe_transposed_self = self
elif axis == 1:
_maybe_transposed_self = self.T
ax = 1
else:
ax = axis
_maybe_transposed_self = self
ax = _maybe_transposed_self._get_axis_number(ax)
```
Edit: sorry @gabriellm1 I didn't see your comment!
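A minimal regression check for the fix (sketch; assumes a pandas build where the axis normalization is in place) would just assert that the two axis spellings agree:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.zeros((3, 2)), columns=["a", "b"])
df.iloc[1] = np.nan

# Both spellings of the axis should take the same code path and fill the NaNs.
by_number = df.interpolate(axis=0)
by_name = df.interpolate(axis="index")
pd.testing.assert_frame_equal(by_number, by_name)
```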
whatsnew will go in v1.0.0.rst
| 2019-10-22T02:35:48Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/lehanson/anaconda3/envs/plots/lib/python3.7/site-packages/pandas/core/generic.py", line 7006, in interpolate
ax = _maybe_transposed_self._get_axis_number(ax)
UnboundLocalError: local variable 'ax' referenced before assignment
| 13,105 |
|||
pandas-dev/pandas | pandas-dev__pandas-29358 | 221c8a7a623bfabdeccfb62212e091ed37b4af5e | Apply method broken for empty integer series with datetime index
#### Code Sample, a copy-pastable example if possible
```python
>>> pd.Series([], index=pd.date_range(start="2018-01-01", periods=0), dtype=int).apply(lambda x: x)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/.../lib/python3.6/site-packages/pandas/core/series.py", line 2526, in apply
index=self.index).__finalize__(self)
File "/.../lib/python3.6/site-packages/pandas/core/series.py", line 264, in __init__
raise_cast_failure=True)
File "/.../lib/python3.6/site-packages/pandas/core/series.py", line 3228, in _sanitize_array
subarr = _try_cast(data, False)
File "/.../lib/python3.6/site-packages/pandas/core/series.py", line 3163, in _try_cast
subarr = np.array(subarr, dtype=dtype, copy=copy)
ValueError: cannot convert float NaN to integer
```
#### Problem description
For some reason, the `apply` method doesn't work for an empty series with a datetime index and dtype int. Obviously, it should work.
#### Expected Output
```python
>>> pd.Series([], index=pd.date_range(start="2018-01-01", periods=0), dtype=int).apply(lambda x: x)
Series([], Freq: D, dtype: int64)
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.5.final.0
python-bits: 64
OS: Linux
OS-release: 4.14.42
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
pandas: 0.22.0
pytest: None
pip: None
setuptools: 39.0.1
Cython: 0.28.1
numpy: 1.14.2
scipy: 1.0.1
pyarrow: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2018.3
blosc: None
bottleneck: 1.2.1
tables: 3.4.2
numexpr: 2.6.4
feather: None
matplotlib: None
openpyxl: 2.5.2
xlrd: 0.9.4
xlwt: 1.3.0
xlsxwriter: None
lxml: 4.2.1
bs4: 4.6.0
html5lib: 1.0.1
sqlalchemy: 1.2.6
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| Not sure how much this issue is related, but it might be: https://github.com/pandas-dev/pandas/issues/21192
This has been fixed in the latest release.
```
In [14]: pd.__version__
Out[14]: u'0.23.0'
In [15]: pd.Series([], index=pd.date_range(start="2018-01-01", periods=0), dtype=int).apply(lambda x: x)
Out[15]: Series([], Freq: D, dtype: int64)
```
@jreback do we have this edge case tested?
Hi,
I would like to write a test case for this as part of the PyCon India '19 dev sprint.
@mroeschke Can you please elaborate on the edge test case?
We would just need a test verifying that the original issue was fixed, namely that the example in https://github.com/pandas-dev/pandas/issues/21245#issue-327300484 gives the stated desired output | 2019-11-02T15:23:05Z | [] | [] |
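That regression test could look roughly like this (sketch, hypothetical test name):

```python
import pandas as pd

def test_apply_empty_integer_series_with_datetime_index():
    # GH 21245: apply on an empty int64 series with a DatetimeIndex
    # used to raise "cannot convert float NaN to integer".
    s = pd.Series([], index=pd.date_range(start="2018-01-01", periods=0), dtype=int)
    result = s.apply(lambda x: x)
    assert result.dtype == s.dtype
    assert len(result) == 0
    assert result.index.equals(s.index)

test_apply_empty_integer_series_with_datetime_index()
```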
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/.../lib/python3.6/site-packages/pandas/core/series.py", line 2526, in apply
index=self.index).__finalize__(self)
File "/.../lib/python3.6/site-packages/pandas/core/series.py", line 264, in __init__
raise_cast_failure=True)
File "/.../lib/python3.6/site-packages/pandas/core/series.py", line 3228, in _sanitize_array
subarr = _try_cast(data, False)
File "/.../lib/python3.6/site-packages/pandas/core/series.py", line 3163, in _try_cast
subarr = np.array(subarr, dtype=dtype, copy=copy)
ValueError: cannot convert float NaN to integer
| 13,140 |
||||
pandas-dev/pandas | pandas-dev__pandas-29470 | d3461c14b1d38edb823e675ef130876677bd3cf1 | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2731,38 +2731,6 @@ def transpose(self, *args, **kwargs):
T = property(transpose)
- # ----------------------------------------------------------------------
- # Picklability
-
- # legacy pickle formats
- def _unpickle_frame_compat(self, state): # pragma: no cover
- if len(state) == 2: # pragma: no cover
- series, idx = state
- columns = sorted(series)
- else:
- series, cols, idx = state
- columns = com._unpickle_array(cols)
-
- index = com._unpickle_array(idx)
- self._data = self._init_dict(series, index, columns, None)
-
- def _unpickle_matrix_compat(self, state): # pragma: no cover
- # old unpickling
- (vals, idx, cols), object_state = state
-
- index = com._unpickle_array(idx)
- dm = DataFrame(vals, index=index, columns=com._unpickle_array(cols), copy=False)
-
- if object_state is not None:
- ovals, _, ocols = object_state
- objects = DataFrame(
- ovals, index=index, columns=com._unpickle_array(ocols), copy=False
- )
-
- dm = dm.join(objects)
-
- self._data = dm._data
-
# ----------------------------------------------------------------------
# Indexing Methods
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2089,18 +2089,8 @@ def __setstate__(self, state):
else:
self._unpickle_series_compat(state)
- elif isinstance(state[0], dict):
- if len(state) == 5:
- self._unpickle_sparse_frame_compat(state)
- else:
- self._unpickle_frame_compat(state)
- elif len(state) == 4:
- self._unpickle_panel_compat(state)
elif len(state) == 2:
self._unpickle_series_compat(state)
- else: # pragma: no cover
- # old pickling format, for compatibility
- self._unpickle_matrix_compat(state)
self._item_cache = {}
diff --git a/pandas/io/pickle.py b/pandas/io/pickle.py
--- a/pandas/io/pickle.py
+++ b/pandas/io/pickle.py
@@ -1,10 +1,7 @@
""" pickle compat """
-from io import BytesIO
import pickle
import warnings
-from numpy.lib.format import read_array
-
from pandas.compat import PY36, pickle_compat as pc
from pandas.io.common import _get_handle, _stringify_path
@@ -164,12 +161,3 @@ def read_pickle(path, compression="infer"):
f.close()
for _f in fh:
_f.close()
-
-
-# compat with sparse pickle / unpickle
-
-
-def _unpickle_array(bytes):
- arr = read_array(BytesIO(bytes))
-
- return arr
| AttributeError: module 'pandas.core.common' has no attribute '_unpickle_array'
mypy errors
```
pandas\core\frame.py:2742:23: error: Module has no attribute "_unpickle_array" [attr-defined]
pandas\core\frame.py:2744:17: error: Module has no attribute "_unpickle_array" [attr-defined]
pandas\core\frame.py:2751:17: error: Module has no attribute "_unpickle_array" [attr-defined]
pandas\core\frame.py:2752:51: error: Module has no attribute "_unpickle_array" [attr-defined]
pandas\core\frame.py:2757:45: error: Module has no attribute "_unpickle_array" [attr-defined]
```
```python
>>> import pandas as pd
>>>
>>> pd.__version__
'0.26.0.dev0+521.g93183bab1'
>>>
>>> df = pd.DataFrame()
>>>
>>> state = df.__getstate__()
>>>
>>> state
{'_data': BlockManager
Items: Index([], dtype='object')
Axis 1: Index([], dtype='object'), '_typ': 'dataframe', '_metadata': []}
>>>
>>> df._unpickle_frame_compat(state)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\simon\pandas-simonjayhawkins\pandas\core\frame.py", line 2717, in _unpickle_frame_compat
columns = com._unpickle_array(cols)
AttributeError: module 'pandas.core.common' has no attribute '_unpickle_array'
>>>
```
we don't have any tests hitting DataFrame._unpickle_frame_compat or DataFrame._unpickle_matrix_compat
https://github.com/pandas-dev/pandas/blob/f625730eb1e729a3f58b213cb31ccf625307b198/pandas/core/frame.py#L2707-L2738
can this code be removed?
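A quick sanity check (sketch): a present-day pickle round-trip produces a dict-shaped state — as the `__getstate__` output above already shows — which suggests the tuple-handling `*_compat` branches only matter for very old pickles:

```python
import pickle

import pandas as pd

# Round-trip a current DataFrame; modern pandas serializes a dict-shaped
# state, so the legacy tuple-based *_compat paths are not exercised here.
df = pd.DataFrame({"a": [1, 2]})
restored = pickle.loads(pickle.dumps(df))
assert restored.equals(df)
assert isinstance(df.__getstate__(), dict)
```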
| Looks like there exists such a function in io.pickle. I'd be pretty happy to see this untested code removed
the one in io.pickle is preceded with `# compat with sparse pickle / unpickle`
Also, in `NDFrame.__setstate__` there is a `self._unpickle_panel_compat(state)` which seems iffy, so I think `NDFrame.__setstate__` needs a bit of a clean-up.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\simon\pandas-simonjayhawkins\pandas\core\frame.py", line 2717, in _unpickle_frame_compat
columns = com._unpickle_array(cols)
AttributeError: module 'pandas.core.common' has no attribute '_unpickle_array'
| 13,167 |
|||
pandas-dev/pandas | pandas-dev__pandas-29792 | f93e4df02588e4ae1c2d338cfeedaee0a88fac4b | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -535,6 +535,7 @@ Reshaping
- Bug :meth:`Series.pct_change` where supplying an anchored frequency would throw a ValueError (:issue:`28664`)
- Bug where :meth:`DataFrame.equals` returned True incorrectly in some cases when two DataFrames had the same columns in different orders (:issue:`28839`)
- Bug in :meth:`DataFrame.replace` that caused non-numeric replacer's dtype not respected (:issue:`26632`)
+- Bug in :func:`melt` where supplying mixed strings and numeric values for ``id_vars`` or ``value_vars`` would incorrectly raise a ``ValueError`` (:issue:`29718`)
Sparse
^^^^^^
diff --git a/pandas/core/reshape/melt.py b/pandas/core/reshape/melt.py
--- a/pandas/core/reshape/melt.py
+++ b/pandas/core/reshape/melt.py
@@ -11,6 +11,7 @@
from pandas.core.dtypes.missing import notna
from pandas.core.arrays import Categorical
+import pandas.core.common as com
from pandas.core.frame import DataFrame, _shared_docs
from pandas.core.indexes.base import Index
from pandas.core.reshape.concat import concat
@@ -47,7 +48,7 @@ def melt(
else:
# Check that `id_vars` are in frame
id_vars = list(id_vars)
- missing = Index(np.ravel(id_vars)).difference(cols)
+ missing = Index(com.flatten(id_vars)).difference(cols)
if not missing.empty:
raise KeyError(
"The following 'id_vars' are not present"
@@ -69,7 +70,7 @@ def melt(
else:
value_vars = list(value_vars)
# Check that `value_vars` are in frame
- missing = Index(np.ravel(value_vars)).difference(cols)
+ missing = Index(com.flatten(value_vars)).difference(cols)
if not missing.empty:
raise KeyError(
"The following 'value_vars' are not present in"
| melt does not recognize numeric column names
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
df = pd.DataFrame(columns=[1, "string"])
pd.melt(df, id_vars=[1, "string"])
```
#### Problem description
The shown example fails with
```
Traceback (most recent call last):
File "test.py", line 5, in <module>
pd.melt(df, id_vars=[1, "string"])
File "/home/nils/projects/tsfresh/venv/lib/python3.6/site-packages/pandas/core/reshape/melt.py", line 52, in melt
"".format(missing=list(missing))
KeyError: "The following 'id_vars' are not present in the DataFrame: ['1']"
```
and I guess the reason is that the call of
```python
Index(np.ravel(id_vars))
```
in `pd.melt` somehow casts the numerical column name `1` to the string `"1"`.
I am not sure if this is intended behavior or if the case of numerical column names is just not supported, but at least in older pandas versions (e.g. 0.23.4) this still worked.
Thanks for looking into this! I am also fine if this is closed with "won't fix" :-)
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.6.8.final.0
python-bits : 64
OS : Linux
OS-release : 4.15.0-65-generic
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 0.25.3
numpy : 1.17.4
pytz : 2019.3
dateutil : 2.8.1
pip : 19.3.1
setuptools : 41.6.0
Cython : None
pytest : 5.2.4
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.4.1
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.10.3
IPython : 7.9.0
pandas_datareader: 0.8.1
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.4.1
matplotlib : 3.1.1
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : 1.3.2
sqlalchemy : None
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
</details>
| This looks like a bug to me. Thanks for the report! Investigations and PR's welcome.
So concerning investigation: the root cause is that
```python3
>>> np.ravel(["string", 1])
array(['string', '1'], dtype='<U6')
```
will give an `np.array` just of strings.
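A pure-Python flatten avoids that dtype coercion entirely, since it never builds an intermediate array (minimal illustration; `pandas.core.common.flatten` is generator-based in a similar spirit):

```python
def flatten(seq):
    # Recursively yield scalars from nested lists/tuples, so mixed
    # labels like [1, "string"] keep their original Python types.
    for item in seq:
        if isinstance(item, (list, tuple)):
            yield from flatten(item)
        else:
            yield item

print(list(flatten([1, "string", [2, "other"]])))  # -> [1, 'string', 2, 'other']
```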
Concerning PRs: the check for having the columns could also be implemented via just using the `Index` directly without the `np.ravel`. Do you (or someone) know why it was introduced?
I see it was added here https://github.com/pandas-dev/pandas/pull/23575/commits/fba641fdb53beb3ec1b952cf9329544af1c378e5 to cope with multiindex. Maybe there is another way to do this?
Maybe use `pandas.core.common.flatten` instead?
```diff
diff --git a/pandas/core/reshape/melt.py b/pandas/core/reshape/melt.py
index 16c044548..2d3a40fdb 100644
--- a/pandas/core/reshape/melt.py
+++ b/pandas/core/reshape/melt.py
@@ -10,6 +10,7 @@ from pandas.core.dtypes.generic import ABCMultiIndex
from pandas.core.dtypes.missing import notna
from pandas.core.arrays import Categorical
+import pandas.core.common as com
from pandas.core.frame import _shared_docs
from pandas.core.indexes.base import Index
from pandas.core.reshape.concat import concat
@@ -45,7 +46,7 @@ def melt(
else:
# Check that `id_vars` are in frame
id_vars = list(id_vars)
- missing = Index(np.ravel(id_vars)).difference(cols)
+ missing = Index(com.flatten(id_vars)).difference(cols)
if not missing.empty:
raise KeyError(
"The following 'id_vars' are not present"
```
Which seems to make things work:
```python
In [1]: import pandas as pd
In [2]: df = pd.DataFrame(columns=[1, "string"])
In [3]: pd.melt(df, id_vars=[1, "string"])
Out[3]:
Empty DataFrame
Columns: [1, string, variable, value]
Index: []
``` | 2019-11-22T16:13:16Z | [] | [] |
Traceback (most recent call last):
File "test.py", line 5, in <module>
pd.melt(df, id_vars=[1, "string"])
File "/home/nils/projects/tsfresh/venv/lib/python3.6/site-packages/pandas/core/reshape/melt.py", line 52, in melt
"".format(missing=list(missing))
KeyError: "The following 'id_vars' are not present in the DataFrame: ['1']"
| 13,208 |
|||
pandas-dev/pandas | pandas-dev__pandas-30246 | cc5b41757327054cb38f7b37525c52244ce8bede | diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -657,7 +657,7 @@ cdef class TextReader:
if isinstance(source, str):
encoding = sys.getfilesystemencoding() or "utf-8"
-
+ usource = source
source = source.encode(encoding)
if self.memory_map:
@@ -677,10 +677,11 @@ cdef class TextReader:
if ptr == NULL:
if not os.path.exists(source):
+
raise FileNotFoundError(
ENOENT,
- f'File {source} does not exist',
- source)
+ f'File {usource} does not exist',
+ usource)
raise IOError('Initializing from file failed')
self.parser.source = ptr
| read_csv encode operation on source
#### Code Sample, a copy-pastable example if possible
```python
>>> import pandas as pd
>>> df = pd.read_csv("historical_dataset.csv")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/pandas/io/parsers.py", line 709, in parser_f
return _read(filepath_or_buffer, kwds)
File "/usr/lib/python3/dist-packages/pandas/io/parsers.py", line 449, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "/usr/lib/python3/dist-packages/pandas/io/parsers.py", line 818, in __init__
self._make_engine(self.engine)
File "/usr/lib/python3/dist-packages/pandas/io/parsers.py", line 1049, in _make_engine
self._engine = CParserWrapper(self.f, **self.options)
File "/usr/lib/python3/dist-packages/pandas/io/parsers.py", line 1695, in __init__
self._reader = parsers.TextReader(src, **kwds)
File "pandas/_libs/parsers.pyx", line 402, in pandas._libs.parsers.TextReader.__cinit__
File "pandas/_libs/parsers.pyx", line 718, in pandas._libs.parsers.TextReader._setup_parser_source
FileNotFoundError: File b'historical_dataset.csv' does not exist
```
#### Problem description
Noticed that when the source does not exist, there is a leading 'b' on the source in the error message. For Python 3, is it necessary to do the encode operation on line 667 of https://github.com/pandas-dev/pandas/blob/master/pandas/_libs/parsers.pyx? It works fine for Python 2.
#### Output of ``pd.show_versions()``
<details>
```
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.4.final.0
python-bits : 64
OS : Linux
OS-release : 5.0.0-31-generic
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : en_US.UTF-8
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 0.25.1
numpy : 1.14.5
pytz : 2018.5
dateutil : 2.8.0
pip : 19.2.3
setuptools : 41.2.0
Cython : 0.29.13
pytest : None
hypothesis : None
sphinx : 1.7.6
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.2.3
html5lib : None
pymysql : 0.9.3
psycopg2 : None
jinja2 : 2.10
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.2.3
matplotlib : 2.2.2
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : 1.1.0
sqlalchemy : 1.3.8
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
```
</details>
| @abellerarj : The reason for the `b` is that we encode the name of the file to account for special characters cross-OS (xref: https://github.com/pandas-dev/pandas/pull/24758). In Python2, the visual representation of encoded and non-encoded strings looked one and the same, whereas in Python3, a distinction is made visually.
Now for the error message, we don't necessarily have to pass in the encoded version. We could use the unencoded one instead. You are welcome to investigate.
@gfyoung to get the non-prefixed filename, would we need to reverse the encoding done in #24758? that is now done in a .c file that i dont want to futz with
> would we need to reverse the encoding done in #24758
That would be a logical first step to investigate.
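A small sketch of the eventual approach — keep the original string around for the user-facing message, while the encoded bytes go to the C layer (hypothetical helper name):

```python
import sys

def missing_file_message(path: str) -> str:
    # The C parser needs encoded bytes, but the user-facing error
    # should show the original string, not its bytes repr.
    encoding = sys.getfilesystemencoding() or "utf-8"
    encoded = path.encode(encoding)  # what gets handed to the C layer
    assert isinstance(encoded, bytes)
    return f"File {path} does not exist"

print(missing_file_message("historical_dataset.csv"))
```

This mirrors the merged patch above, where `usource` preserves the unencoded name just for the `FileNotFoundError` text.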
Fat fingers, my bad. | 2019-12-12T21:51:36Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/pandas/io/parsers.py", line 709, in parser_f
return _read(filepath_or_buffer, kwds)
File "/usr/lib/python3/dist-packages/pandas/io/parsers.py", line 449, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "/usr/lib/python3/dist-packages/pandas/io/parsers.py", line 818, in __init__
self._make_engine(self.engine)
File "/usr/lib/python3/dist-packages/pandas/io/parsers.py", line 1049, in _make_engine
self._engine = CParserWrapper(self.f, **self.options)
File "/usr/lib/python3/dist-packages/pandas/io/parsers.py", line 1695, in __init__
self._reader = parsers.TextReader(src, **kwds)
File "pandas/_libs/parsers.pyx", line 402, in pandas._libs.parsers.TextReader.__cinit__
File "pandas/_libs/parsers.pyx", line 718, in pandas._libs.parsers.TextReader._setup_parser_source
FileNotFoundError: File b'historical_dataset.csv' does not exist
| 13,249 |
|||
pandas-dev/pandas | pandas-dev__pandas-30329 | bbcda98c7974ba5320174ba6be117d399c15603e | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -132,7 +132,7 @@ MultiIndex
I/O
^^^
-
+- Bug in :meth:`read_json` where integer overflow was occuring when json contains big number strings. (:issue:`30320`)
-
-
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -942,7 +942,7 @@ def _try_convert_data(self, name, data, use_dtypes=True, convert_dates=True):
if (new_data == data).all():
data = new_data
result = True
- except (TypeError, ValueError):
+ except (TypeError, ValueError, OverflowError):
pass
# coerce ints to 64
| Read_json overflow error when json contains big number strings
#### Code Sample, a copy-pastable example if possible
```python
import json
import pandas as pd
test_data = [{"col": "31900441201190696999"}, {"col": "Text"}]
test_json = json.dumps(test_data)
pd.read_json(test_json)
```
#### Problem description
The current behaviour doesn't return a dataframe for a valid JSON. Note that when the number is smaller, it works fine. It also works when only big numbers are present. It would be cool to have it work with big numbers as it does for small numbers.
#### Expected Output
A dataframe with a number and string
```
col
0 3.190044e+19
1 Text
```
#### Output of ``pd.read_json()``
<details>
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../.venv/lib/python3.6/site-packages/pandas/io/json/_json.py", line 592, in read_json
result = json_reader.read()
File ".../.venv/lib/python3.6/site-packages/pandas/io/json/_json.py", line 717, in read
obj = self._get_object_parser(self.data)
File ".../.venv/lib/python3.6/site-packages/pandas/io/json/_json.py", line 739, in _get_object_parser
obj = FrameParser(json, **kwargs).parse()
File ".../.venv/lib/python3.6/site-packages/pandas/io/json/_json.py", line 855, in parse
self._try_convert_types()
File ".../.venv/lib/python3.6/site-packages/pandas/io/json/_json.py", line 1151, in _try_convert_types
lambda col, c: self._try_convert_data(col, c, convert_dates=False)
File ".../.venv/lib/python3.6/site-packages/pandas/io/json/_json.py", line 1131, in _process_converter
new_data, result = f(col, c)
File ".../.venv/lib/python3.6/site-packages/pandas/io/json/_json.py", line 1151, in <lambda>
lambda col, c: self._try_convert_data(col, c, convert_dates=False)
File ".../.venv/lib/python3.6/site-packages/pandas/io/json/_json.py", line 927, in _try_convert_data
new_data = data.astype("int64")
File ".../.venv/lib/python3.6/site-packages/pandas/core/generic.py", line 5882, in astype
dtype=dtype, copy=copy, errors=errors, **kwargs
File ".../.venv/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 581, in astype
return self.apply("astype", dtype=dtype, **kwargs)
File ".../.venv/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 438, in apply
applied = getattr(b, f)(**kwargs)
File ".../.venv/lib/python3.6/site-packages/pandas/core/internals/blocks.py", line 559, in astype
return self._astype(dtype, copy=copy, errors=errors, values=values, **kwargs)
File ".../.venv/lib/python3.6/site-packages/pandas/core/internals/blocks.py", line 643, in _astype
values = astype_nansafe(vals1d, dtype, copy=True, **kwargs)
File ".../.venv/lib/python3.6/site-packages/pandas/core/dtypes/cast.py", line 707, in astype_nansafe
return lib.astype_intsafe(arr.ravel(), dtype).reshape(arr.shape)
File "pandas/_libs/lib.pyx", line 547, in pandas._libs.lib.astype_intsafe
OverflowError: Python int too large to convert to C long
</details>
| take
I'm new to Open Source contributions so please bear with me. It seems that we are coercing ints wherever possible while parsing JSON. The code between lines 943 and 950 in file [pandas/io/json/_json.py](https://github.com/pandas-dev/pandas/blob/4e807a2923804eb231eb9b7c991273e860c25726/pandas/io/json/_json.py#L943) is what is causing the problem. The int coercion is attempted inside a try/except which only catches TypeErrors and ValueErrors. If it catches an OverflowError too, things work as intended. Will submit a PR regarding this soon. | 2019-12-18T19:24:42Z | [] | [] |
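The fix described in the hint above can be sketched in plain Python. The `coerce_to_int64` helper below is a hypothetical stand-in for pandas' `_try_convert_data`, not the real implementation; it only mirrors the shape of the try/except, where an out-of-range integer surfaces as `OverflowError` and falls back to the original data:

```python
def coerce_to_int64(values):
    """Attempt int64 coercion of parsed JSON values, keeping the
    original data on any failure (TypeError, ValueError, or the
    OverflowError raised for ints too large for a C long)."""
    INT64_MIN, INT64_MAX = -2**63, 2**63 - 1
    try:
        out = []
        for v in values:
            i = int(v)  # raises ValueError for non-numeric strings
            if not INT64_MIN <= i <= INT64_MAX:
                raise OverflowError("Python int too large to convert to C long")
            out.append(i)
        return out, True
    except (TypeError, ValueError, OverflowError):
        return values, False


# The big-number string no longer blows up; the column is left as-is.
print(coerce_to_int64(["31900441201190696999", "Text"]))  # (['31900441201190696999', 'Text'], False)
print(coerce_to_int64(["1", "256"]))                      # ([1, 256], True)
```

With the widened except clause, the oversized value simply fails the int64 coercion and stays a string, which is the behaviour the report asked for.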
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../.venv/lib/python3.6/site-packages/pandas/io/json/_json.py", line 592, in read_json
result = json_reader.read()
File ".../.venv/lib/python3.6/site-packages/pandas/io/json/_json.py", line 717, in read
obj = self._get_object_parser(self.data)
File ".../.venv/lib/python3.6/site-packages/pandas/io/json/_json.py", line 739, in _get_object_parser
obj = FrameParser(json, **kwargs).parse()
File ".../.venv/lib/python3.6/site-packages/pandas/io/json/_json.py", line 855, in parse
self._try_convert_types()
File ".../.venv/lib/python3.6/site-packages/pandas/io/json/_json.py", line 1151, in _try_convert_types
lambda col, c: self._try_convert_data(col, c, convert_dates=False)
File ".../.venv/lib/python3.6/site-packages/pandas/io/json/_json.py", line 1131, in _process_converter
new_data, result = f(col, c)
File ".../.venv/lib/python3.6/site-packages/pandas/io/json/_json.py", line 1151, in <lambda>
lambda col, c: self._try_convert_data(col, c, convert_dates=False)
File ".../.venv/lib/python3.6/site-packages/pandas/io/json/_json.py", line 927, in _try_convert_data
new_data = data.astype("int64")
File ".../.venv/lib/python3.6/site-packages/pandas/core/generic.py", line 5882, in astype
dtype=dtype, copy=copy, errors=errors, **kwargs
File ".../.venv/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 581, in astype
return self.apply("astype", dtype=dtype, **kwargs)
File ".../.venv/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 438, in apply
applied = getattr(b, f)(**kwargs)
File ".../.venv/lib/python3.6/site-packages/pandas/core/internals/blocks.py", line 559, in astype
return self._astype(dtype, copy=copy, errors=errors, values=values, **kwargs)
File ".../.venv/lib/python3.6/site-packages/pandas/core/internals/blocks.py", line 643, in _astype
values = astype_nansafe(vals1d, dtype, copy=True, **kwargs)
File ".../.venv/lib/python3.6/site-packages/pandas/core/dtypes/cast.py", line 707, in astype_nansafe
return lib.astype_intsafe(arr.ravel(), dtype).reshape(arr.shape)
File "pandas/_libs/lib.pyx", line 547, in pandas._libs.lib.astype_intsafe
OverflowError: Python int too large to convert to C long
| 13,262 |
|||
pandas-dev/pandas | pandas-dev__pandas-30336 | f36eac1718ef784ead396118aec6893d17e0e5e8 | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -711,7 +711,7 @@ Datetimelike
- Bug in :func:`pandas.to_datetime` when called with ``None`` raising ``TypeError`` instead of returning ``NaT`` (:issue:`30011`)
- Bug in :func:`pandas.to_datetime` failing for `deques` when using ``cache=True`` (the default) (:issue:`29403`)
- Bug in :meth:`Series.item` with ``datetime64`` or ``timedelta64`` dtype, :meth:`DatetimeIndex.item`, and :meth:`TimedeltaIndex.item` returning an integer instead of a :class:`Timestamp` or :class:`Timedelta` (:issue:`30175`)
--
+- Bug in :class:`DatetimeIndex` addition when adding a non-optimized :class:`DateOffset` incorrectly dropping timezone information (:issue:`30336`)
Timedelta
^^^^^^^^^
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -794,9 +794,7 @@ def _add_offset(self, offset):
values = self.tz_localize(None)
else:
values = self
- result = offset.apply_index(values)
- if self.tz is not None:
- result = result.tz_localize(self.tz)
+ result = offset.apply_index(values).tz_localize(self.tz)
except NotImplementedError:
warnings.warn(
@@ -804,6 +802,9 @@ def _add_offset(self, offset):
PerformanceWarning,
)
result = self.astype("O") + offset
+ if len(self) == 0:
+ # _from_sequence won't be able to infer self.tz
+ return type(self)._from_sequence(result).tz_localize(self.tz)
return type(self)._from_sequence(result, freq="infer")
| BUG: Cannot add non-vectorized DateOffset to empty DatetimeIndex
After the holidays refactor in pandas 0.17.0+, I'm seeing errors when calling the AbstractHolidayCalendar.holidays() method with a date range that is before the declared Holiday rule. Consider the example below, I'm declaring a custom MLK holiday for CME equities market with a specific start date of 1998-01-01, with a non-vectorised DateOffset (third monday of January).
If I call calendar.holidays(start='2015-01-01', end='2015-12-31'), it blows up with an exception:
```
Traceback (most recent call last):
File "/tests/test_pandas_features.py", line 35, in test_no_holidays_generated_if_not_in_range
calendar.holidays(start='1995-01-01', end='1995-12-31'))
File "/python3/lib/python3.5/site-packages/pandas/tseries/holiday.py", line 377, in holidays
rule_holidays = rule.dates(start, end, return_name=True)
File "/python3/lib/python3.5/site-packages/pandas/tseries/holiday.py", line 209, in dates
holiday_dates = self._apply_rule(dates)
File "/python3/lib/python3.5/site-packages/pandas/tseries/holiday.py", line 278, in _apply_rule
dates += offset
File "/python3/lib/python3.5/site-packages/pandas/tseries/base.py", line 412, in __add__
return self._add_delta(other)
File "/python3/lib/python3.5/site-packages/pandas/tseries/index.py", line 736, in _add_delta
result = DatetimeIndex(new_values, tz=tz, name=name, freq='infer')
File "/python3/lib/python3.5/site-packages/pandas/util/decorators.py", line 89, in wrapper
return func(*args, **kwargs)
File "/python3/lib/python3.5/site-packages/pandas/tseries/index.py", line 231, in __new__
raise ValueError("Must provide freq argument if no data is "
ValueError: Must provide freq argument if no data is supplied
```
After some digging, it appears that if the supplied date range is a year before the declared holiday rule start date, the DatetimeIndex constructed in tseries/holiday.py is an empty index (a reasonable optimization), but when a non-vectorized DateOffset is being applied to an empty DatetimeIndex, the _add_offset method in tseries/index.py returns a plain empty Index, rather than DatetimeIndex, on which .asi8 is subsequently called and returns None instead of an empty numpy i8 array.
So the underlying problem is that an empty DatetimeIndex doesn't like a non-vectorized DateOffset applied to it.
Observe the testcase below, which reproduces the issue.
#### Test case
``` python
import unittest
import pandas as pd
import pandas.util.testing as pdt
from dateutil.relativedelta import MO
from pandas.tseries.holiday import AbstractHolidayCalendar, Holiday
from pandas.tseries.offsets import DateOffset
CME1998StartUSMartinLutherKingJr = Holiday('Dr. Martin Luther King Jr.', month=1, day=1, offset=DateOffset(weekday=MO(3)), start_date='1998-01-01')
class CustomHolidayCalendar(AbstractHolidayCalendar):
rules = [CME1998StartUSMartinLutherKingJr]
class TestPandasIndexFeatures(unittest.TestCase):
def test_empty_datetime_index_added_to_non_vectorized_date_offset(self):
empty_ts_index = pd.DatetimeIndex([], freq='infer')
# This blows up
new_ts_index = empty_ts_index + DateOffset(weekday=MO(3))
self.assertEqual(0, len(new_ts_index))
def test_no_holidays_generated_if_not_in_range(self):
calendar = CustomHolidayCalendar()
# This blows up
pdt.assert_index_equal(
pd.DatetimeIndex([]),
calendar.holidays(start='1995-01-01', end='1995-12-31'))
def test_holidays_generated_if_in_range(self):
calendar = CustomHolidayCalendar()
pdt.assert_index_equal(
pd.DatetimeIndex(['2015-01-19']),
calendar.holidays(start='2015-01-01', end='2015-12-31'))
```
#### Output
The following tests fail:
- test_empty_datetime_index_added_to_non_vectorized_date_offset
- test_no_holidays_generated_if_not_in_range
#### output of `pd.show_versions()`
```
INSTALLED VERSIONS
------------------
commit: 5870731f32ae569e01e3c0a8972cdd2c6e0301f8
python: 3.5.1.final.0
python-bits: 64
OS: Darwin
OS-release: 15.4.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
pandas: 0.18.0+44.g5870731.dirty
nose: 1.3.7
pip: 8.1.1
setuptools: 20.3.1
Cython: 0.23.4
numpy: 1.10.4
scipy: 0.17.0
statsmodels: 0.6.1
xarray: None
IPython: 4.1.2
sphinx: None
patsy: 0.4.1
dateutil: 2.5.1
pytz: 2016.1
blosc: None
bottleneck: None
tables: None
numexpr: None
matplotlib: 1.5.1
openpyxl: None
xlrd: 0.9.4
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: 1.0.12
pymysql: None
psycopg2: 2.6.1 (dt dec pq3 ext lo64)
jinja2: 2.8
boto: None
```
| Currently, I'm using the following patch as a temporarily workaround. The workaround is to simply return the index if the length is 0, which should be ok as the index is immutable.
``` python
def _add_offset(self, offset):
import warnings
try:
from pandas.io.common import PerformanceWarning
except:
from pandas.core.common import PerformanceWarning
try:
if self.tz is not None:
values = self.tz_localize(None)
else:
values = self
result = offset.apply_index(values)
if self.tz is not None:
result = result.tz_localize(self.tz)
return result
except NotImplementedError:
warnings.warn("Non-vectorized DateOffset being applied to Series "
"or DatetimeIndex", PerformanceWarning)
if len(self) == 0:
return self
return self.astype('O') + offset
def patch_pandas_index():
pd.DatetimeIndex._add_offset = _add_offset
# Patch pandas empty datetimeindex and offset operations
patch_pandas_index()
```
not really sure this is supported, meaning this kind of `dateutil.relativedelta` offset. What are you intending this to actually do?
This definitely used to be a supported feature, as least in 0.16.x
The custom MLK rule in my testcase is simply a modified example taken from tseries/holiday.py. The holiday.py module is shipped with pandas and it uses dateutil.relativedelta, so it must be a supported feature.
``` python
USMartinLutherKingJr = Holiday('Dr. Martin Luther King Jr.', start_date=datetime(1986,1,1), month=1, day=1,
offset=DateOffset(weekday=MO(3)))
```
This kind of holiday rule is quite common for exchanges across the world, e.g. as new national holidays introduced they would all have a start date. With this bug generating holidays before the starting date would not be possible. one would simply expect an empty index returned if rule does not yield any holidays.
cc @chris-b1
cc @rockg
hmm, maybe something broke. Though the [19] is expected, to be honest; it's not really clear what to do with this. You are trying to add a frequency-aware object to something with no frequency, so what should you do? Coerce or raise? I think this error message is fine; maybe something needs an adjustment in the holiday logic.
```
In [18]: DatetimeIndex(['20160101'],freq='D') + DateOffset(weekday=MO(3))
Out[18]: DatetimeIndex(['2016-01-18'], dtype='datetime64[ns]', freq=None)
In [19]: DatetimeIndex([],freq='D') + DateOffset(weekday=MO(3))
ValueError: Must provide freq argument if no data is supplied
```
in [19], I think the expected output should be:
```
DatetimeIndex([], dtype='datetime64[ns]', freq='D')
```
i.e. the same index as before. I think that was the behaviour in pandas 0.16.x (I haven't had time to check)
I think the behaviour for empty DateTimeIndex should be consistent with other empty indexes, as well as empty numpy arrays.
```
In [11]: Index([1,2,3]) + 1
Out[11]: Int64Index([2, 3, 4], dtype='int64')
In [12]: Index([]) + 1
Out[12]: Index([], dtype='object')
```
It is probably easier to fix this within the holiday.py module, maybe DatetimeIndex _should be_ special.
@yerikzheng that example is not the same. Those don't have frequencies, so datetimelikes ARE special. Adding a frequency aware object to a non-freq aware is an explicit error.
Yes, thinking about it again I agree with you. However, I do expect AbstractHolidayCalendar.holidays() method to handle the case for which no holiday is generated by the rule, so this must have broke since 0.17.0 release as it did work correctly before.
@yerikzheng would love for a PR for this.
Funny that I just came across this post. I found this error today with the same exact issue (MLK for start of 1998). I have begun looking into a work around but this definitely used to be supported. It should be fixed in the next release.
This looks fixed on master (Jeff's example). Guess this could use a test
```
In [62]: DatetimeIndex([],freq='D') + DateOffset(weekday=MO(3))
/Users/matthewroeschke/pandas-mroeschke/pandas/core/arrays/datetimes.py:835: PerformanceWarning: Non-vectorized DateOffset being applied to Series or DatetimeIndex
PerformanceWarning,
Out[62]: DatetimeIndex([], dtype='datetime64[ns]', freq=None)
In [63]: pd.__version__
Out[63]: '0.26.0.dev0+576.gde67bb72e'
``` | 2019-12-18T23:58:18Z | [] | [] |
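As the last comment notes, the empty case goes through on current builds. A minimal check of both reported symptoms is sketched below; it assumes a pandas version that includes the fix, and silences the `PerformanceWarning` a non-vectorized offset may emit:

```python
import warnings

import pandas as pd
from dateutil.relativedelta import MO

with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # non-vectorized offsets may warn
    empty = pd.DatetimeIndex([], tz="US/Eastern")
    shifted = empty + pd.DateOffset(weekday=MO(3))

# No ValueError, the result is still empty, and (per the patch above)
# the timezone survives the object-dtype fallback.
assert len(shifted) == 0
assert shifted.tz is not None
```

This covers the holiday-calendar use case too: a rule whose `start_date` is after the requested range produces an empty index, and adding the rule's offset to it must stay a no-op rather than raise.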
Traceback (most recent call last):
File "/tests/test_pandas_features.py", line 35, in test_no_holidays_generated_if_not_in_range
calendar.holidays(start='1995-01-01', end='1995-12-31'))
File "/python3/lib/python3.5/site-packages/pandas/tseries/holiday.py", line 377, in holidays
rule_holidays = rule.dates(start, end, return_name=True)
File "/python3/lib/python3.5/site-packages/pandas/tseries/holiday.py", line 209, in dates
holiday_dates = self._apply_rule(dates)
File "/python3/lib/python3.5/site-packages/pandas/tseries/holiday.py", line 278, in _apply_rule
dates += offset
File "/python3/lib/python3.5/site-packages/pandas/tseries/base.py", line 412, in __add__
return self._add_delta(other)
File "/python3/lib/python3.5/site-packages/pandas/tseries/index.py", line 736, in _add_delta
result = DatetimeIndex(new_values, tz=tz, name=name, freq='infer')
File "/python3/lib/python3.5/site-packages/pandas/util/decorators.py", line 89, in wrapper
return func(*args, **kwargs)
File "/python3/lib/python3.5/site-packages/pandas/tseries/index.py", line 231, in __new__
raise ValueError("Must provide freq argument if no data is "
ValueError: Must provide freq argument if no data is supplied
| 13,264 |
|||
pandas-dev/pandas | pandas-dev__pandas-30494 | ad2790c0043772baf7386c0ae151c0c91f5934fa | diff --git a/pandas/core/series.py b/pandas/core/series.py
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -235,6 +235,13 @@ def __init__(
copy = False
elif isinstance(data, np.ndarray):
+ if len(data.dtype):
+ # GH#13296 we are dealing with a compound dtype, which
+ # should be treated as 2D
+ raise ValueError(
+ "Cannot construct a Series from an ndarray with "
+ "compound dtype. Use DataFrame instead."
+ )
pass
elif isinstance(data, ABCSeries):
if name is None:
 | ERR: Series must have a singular dtype otherwise should raise
When constructing a `Series` object using a numpy structured data array, if you try and cast it to a `str` (or print it), it throws:
```
TypeError(ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'')
```
You can print a single value from the series, but not the whole series.
#### Code Sample, a copy-pastable example if possible
``` python
import pandas as pd
import numpy as np
c_dtype = np.dtype([('a', 'i8'), ('b', 'f4')])
cdt_arr = np.array([(1, 0.4), (256, -13)], dtype=c_dtype)
pds = pd.Series(cdt_arr, index=['A', 'B'])
print('pds.iloc[0]: {}'.format(str(pds.iloc[0]))) # (1, 0.4000000059604645)
print('pds.iloc[1]: {}'.format(str(pds.iloc[1]))) # (256, -13.0)
print('pds.loc["A"]: {}'.format(str(pds.loc['A']))) # Works
print('pds.loc["B"]: {}'.format(str(pds.loc['B']))) # Works
def print_error(x):
try:
o = str(x) # repr(x) also causes the same errors
print(o)
except TypeError as e:
print('TypeError({})'.format(e.args[0]))
a = pds.iloc[0:1]
b = pds.loc[['A', 'B']]
print('pds.iloc[0:1]:')
print_error(a)
print('pds.loc["A", "B"]:')
print_error(b)
print('pds:')
print_error(pds)
print('pd.DataFrame([pds]).T:')
print_error(pd.DataFrame([pds]).T)
print('pds2:')
cdt_arr_2 = np.array([(1, 0.4)], dtype=c_dtype)
pds2 = pd.Series(cdt_arr_2, index=['A'])
print_error(pds2)
```
#### Output (actual):
```
$ python demo_index_bug.py
pds.iloc[0]: (1, 0.4000000059604645)
pds.iloc[1]: (256, -13.0)
pds.loc["A"]: (1, 0.4000000059604645)
pds.loc["B"]: (256, -13.0)
pds.iloc[0:1]:
TypeError(ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'')
pds.loc["A", "B"]:
TypeError(ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'')
pds:
TypeError(ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'')
pd.DataFrame([pds]).T:
0
A (1, 0.4000000059604645)
B (256, -13.0)
pds2:
TypeError(ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'')
```
#### output of `pd.show_versions()`:
```
INSTALLED VERSIONS
------------------
commit: None
python: 3.5.1.final.0
python-bits: 64
OS: Linux
OS-release: 4.5.2-1-ARCH
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
pandas: 0.18.1
nose: None
pip: 8.1.2
setuptools: 21.0.0
Cython: 0.24
numpy: 1.11.0
scipy: 0.17.1
statsmodels: None
xarray: None
IPython: 4.2.0
sphinx: 1.4.1
patsy: None
dateutil: 2.5.3
pytz: 2016.4
blosc: None
bottleneck: None
tables: None
numexpr: None
matplotlib: 1.5.1
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.8
boto: None
pandas_datareader: None
```
#### Stack Trace
I swallowed the stack traces to show where this was failing, so here's the traceback for that last error:
```
Traceback (most recent call last):
File "demo_dtype_bug.py", line 37, in <module>
print(pds2)
File "~/.local/lib/python3.5/site-packages/pandas/core/base.py", line 46, in __str__
return self.__unicode__()
File "~/.local/lib/python3.5/site-packages/pandas/core/series.py", line 984, in __unicode__
max_rows=max_rows)
File "~/.local/lib/python3.5/site-packages/pandas/core/series.py", line 1025, in to_string
dtype=dtype, name=name, max_rows=max_rows)
File "~/.local/lib/python3.5/site-packages/pandas/core/series.py", line 1053, in _get_repr
result = formatter.to_string()
File "~/.local/lib/python3.5/site-packages/pandas/formats/format.py", line 225, in to_string
fmt_values = self._get_formatted_values()
File "~/.local/lib/python3.5/site-packages/pandas/formats/format.py", line 215, in _get_formatted_values
float_format=self.float_format, na_rep=self.na_rep)
File "~/.local/lib/python3.5/site-packages/pandas/formats/format.py", line 2007, in format_array
return fmt_obj.get_result()
File "~/.local/lib/python3.5/site-packages/pandas/formats/format.py", line 2026, in get_result
fmt_values = self._format_strings()
File "~/.local/lib/python3.5/site-packages/pandas/formats/format.py", line 2059, in _format_strings
is_float = lib.map_infer(vals, com.is_float) & notnull(vals)
File "~/.local/lib/python3.5/site-packages/pandas/core/common.py", line 250, in notnull
res = isnull(obj)
File "~/.local/lib/python3.5/site-packages/pandas/core/common.py", line 91, in isnull
return _isnull(obj)
File "~/.local/lib/python3.5/site-packages/pandas/core/common.py", line 101, in _isnull_new
return _isnull_ndarraylike(obj)
File "~/.local/lib/python3.5/site-packages/pandas/core/common.py", line 192, in _isnull_ndarraylike
result = np.isnan(values)
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
```
| ref https://github.com/pydata/xarray/issues/861
You are wanting to construct a `DataFrame`. xarray is a bit extreme for this case. Series is by-definition a single dtyped structure.
```
In [5]: DataFrame.from_records(cdt_arr)
Out[5]:
a b
0 1 0.4
1 256 -13.0
In [6]: DataFrame.from_records(cdt_arr).dtypes
Out[6]:
a int64
b float32
dtype: object
```
@jreback Whether or not my purposes would be better served with a dataframe in this case, I think this is still a valid bug, considering that you can _construct_ and _use_ `Series` perfectly well using a compound datatype, but it crashes when printed.
As for why you might want to do something like this - occasionally there are uses where the semantics are much easier when you can treat a single value as a scalars rather than multiple columns. One toy example would be operations on coordinate systems:
``` python
import pandas as pd
import numpy as np
three_vec = np.dtype([('x', 'f8'), ('y', 'f8'), ('z', 'f8')])
def rotate_coordinates(x, u, theta):
I = np.identity(3)
ux = np.array([
[ 0, -u['z'], u['y']],
[ u['z'], 0, -u['x']],
[-u['y'], u['x'], 0]
])
uu = np.array([
[ u['x'] ** 2, u['x'] * u['y'], u['x'] * u['z']],
[u['x'] * u['y'], u['y'] ** 2, u['y'] * u['z']],
[u['x'] * u['z'], u['y'] * u['z'], u['z'] ** 2]
])
R = np.cos(theta) * I + np.sin(theta) * ux + (1 - np.cos(theta)) * uu
xx = x.view(np.float64).reshape(x.shape + (-1,)).T
out_array = (R @ xx).round(15)
return np.core.records.fromarrays(out_array, dtype=three_vec)
# Rotate these arrays about z
z = np.array([(0, 0, 1)], dtype=three_vec)[0]
v1 = np.array([(0, 1, 0), (1, 0, 0)], dtype=three_vec)
vp = rotate_coordinates(v1, z, np.pi / 2)
print(v1)
print(vp)
```
Now imagine that I wanted a `pd.DataFrame` containing the start and end of some motion. I could represent it as a `DataFrame` with columns `'start_x'`, `'end_x'`, `'start_y'`, `'end_y'`, etc, and if I wanted to rotate all the coordinates to a new coordinate system, either manually group the columns, then manually re-distribute them, or I could use a compound datatype `three_vec`, have a dataframe with columns `'start'` and `'end'`, then do `df.apply(partial(rotate_coordinates, u=z, theta=np.pi/2), axis=1)`. This is a much cleaner way to both store the data and operate on it. It's similar in principle to the idea that if a first-class `datetime` data type didn't exist, you wouldn't suggest just using a `DataFrame` with columns `'year'`, `'month'`, `'day'`, etc.
@pganssle you are violating the guarantees of a Series. it is by-definition a singular dtype. The bug is that it accepts (a non-singular one) in the first place. I'll reopen for that purpose. There is NO support for a Series with the use-case you describe. EIther use a DataFrame or xarray.
@jreback My suggestion is that compound types _are_ a single type in the same way that a `datetime` is a single type. Complex numbers are also a single type because they have native numpy support, but what about quarternions and other hypercomplex numbers? I think it's reasonable to use records to define the base unit of a scalar, given that it's already supported by numpy.
@pganssle a compound dtype is simply not supported, nor do I think should be. Sure an extension type that is innately a compound type is fine because it singular. But a structured dtype is NOT. it has sub-dtypes. This is just making an already complicated structure WAY more complex.
as I said, for now this should simply raise `NotImplementedError`. If you want to investigate whether this _could_ be supported w/o major restructuring, then great. If it's trivial, sure. But I suspect it's not.
@jreback Does pandas support custom dtypes? I'm not sure that I've ever seen someone create one, other than `pandas`.
https://github.com/pydata/pandas/blob/master/pandas/types/dtypes.py
But these required a lot of support to integrate properly. These are fundamental types. I suppose a Coordinate _could_ also be in that category. But as I said its a MAJOR effort to properly handle things.
Principally the issue is efficient storage. What you are suggesting is NOT stored efficiently and that's the problem.
I have NEVER seen a good use of `.apply`, and that's what you are suggesting here. That is SO completely inefficient.
It's just a toy example of why the semantics would be useful. You could achieve the same thing with `applymap` or even just `df[:] = rotate_coordinates(df.values, z, theta)`. I don't have any particular understanding of the underlying efficiency of how these things are stored, I was just demonstrating the concept of compound data types that are "logical" scalars.
I think it's fine to consider my suggestion a "low reward / high effort" enhancement - it may be fundamentally difficult to deal with this sort of thing and not something that comes up a lot, I just think it's worth considering as a "nice to have", since, if possible, it would be better to have first-class support for complex datatypes than not.
When I have a bit of time I will be happy to look into the underlying details and see if I can get a better understanding of difficulty and/or propose an alternate approach. Probably it will be a while, though, since I have quite a backlog of other stuff to get to.
In the meantime, I would think this could be profitably handled by just converting compound datatypes to tuple on import, possibly with a warning about the inefficiency of this approach. At least this would allow people who are less performance sensitive to write some wrapper functions to allow the use of normal semantics.
@pganssle if you have time for this great. But I don't have time for every enhancement (actually most of them). So if you'd like to propose something great. However the very simplest thing is to raise an error.
If you or someone wants to implement a better solution, great.
| 2019-12-26T20:03:49Z | [] | [] |
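The patch above detects compound input with `len(data.dtype)`. For NumPy dtypes, `len` returns the number of named fields (0 for a plain scalar dtype), which makes the check cheap; a minimal illustration:

```python
import numpy as np

plain = np.dtype("i8")
compound = np.dtype([("a", "i8"), ("b", "f4")])

# len(dtype) counts named fields: 0 for scalar dtypes, so any truthy
# value flags a structured (compound) dtype.
print(len(plain))     # 0
print(len(compound))  # 2
```

On releases that include this patch, constructing `pd.Series` from the report's `cdt_arr` raises a `ValueError` pointing the user at `DataFrame` up front, instead of failing later inside `repr`/`isnull`.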
Traceback (most recent call last):
File "demo_dtype_bug.py", line 37, in <module>
print(pds2)
File "~/.local/lib/python3.5/site-packages/pandas/core/base.py", line 46, in __str__
return self.__unicode__()
File "~/.local/lib/python3.5/site-packages/pandas/core/series.py", line 984, in __unicode__
max_rows=max_rows)
File "~/.local/lib/python3.5/site-packages/pandas/core/series.py", line 1025, in to_string
dtype=dtype, name=name, max_rows=max_rows)
File "~/.local/lib/python3.5/site-packages/pandas/core/series.py", line 1053, in _get_repr
result = formatter.to_string()
File "~/.local/lib/python3.5/site-packages/pandas/formats/format.py", line 225, in to_string
fmt_values = self._get_formatted_values()
File "~/.local/lib/python3.5/site-packages/pandas/formats/format.py", line 215, in _get_formatted_values
float_format=self.float_format, na_rep=self.na_rep)
File "~/.local/lib/python3.5/site-packages/pandas/formats/format.py", line 2007, in format_array
return fmt_obj.get_result()
File "~/.local/lib/python3.5/site-packages/pandas/formats/format.py", line 2026, in get_result
fmt_values = self._format_strings()
File "~/.local/lib/python3.5/site-packages/pandas/formats/format.py", line 2059, in _format_strings
is_float = lib.map_infer(vals, com.is_float) & notnull(vals)
File "~/.local/lib/python3.5/site-packages/pandas/core/common.py", line 250, in notnull
res = isnull(obj)
File "~/.local/lib/python3.5/site-packages/pandas/core/common.py", line 91, in isnull
return _isnull(obj)
File "~/.local/lib/python3.5/site-packages/pandas/core/common.py", line 101, in _isnull_new
return _isnull_ndarraylike(obj)
File "~/.local/lib/python3.5/site-packages/pandas/core/common.py", line 192, in _isnull_ndarraylike
result = np.isnan(values)
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
| 13,290 |
|||
pandas-dev/pandas | pandas-dev__pandas-30498 | 710df2140555030e4d86e669d6df2deb852bcaf5 | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -504,6 +504,10 @@ def maybe_cythonize(extensions, *args, **kwargs):
# See https://github.com/cython/cython/issues/1495
return extensions
+ elif not cython:
+ # GH#28836 raise a helfpul error message
+ raise RuntimeError("Cannot cythonize without Cython installed.")
+
numpy_incl = pkg_resources.resource_filename("numpy", "core/include")
# TODO: Is this really necessary here?
for ext in extensions:
| NameError: name 'tempita' is not defined when building extensions with old cython
#### Code Sample, a copy-pastable example if possible
```python
pietro@debiousci:~/nobackup/repo/pandas_temp$ python3 setup.py build_ext --inplace
Traceback (most recent call last):
File "setup.py", line 813, in <module>
ext_modules=maybe_cythonize(extensions, compiler_directives=directives),
File "setup.py", line 541, in maybe_cythonize
build_ext.render_templates(_pxifiles)
File "setup.py", line 127, in render_templates
pyxcontent = tempita.sub(tmpl)
NameError: name 'tempita' is not defined
```
#### Problem description
If cython is too old/missing, the nice error reporting of the setup.py does not work.
Until a few minutes ago I had never heard about ``tempita``, but my understanding is that it is required (used in ``build_ext.render_templates``) while ``cython`` is not:
https://github.com/pandas-dev/pandas/blob/bee17d5f4c99e6d1e451cf9a7b6a1780aa25988d/setup.py#L74
Hence, whether the check for ``tempita`` is made should not depend on whether ``cython`` is found: instead, this is what currently happens, given that ``tempita`` is checked in the ``else`` clause of the ``try...except`` check of ``cython``:
https://github.com/pandas-dev/pandas/blob/bee17d5f4c99e6d1e451cf9a7b6a1780aa25988d/setup.py#L76
Notice that this means the above error pops out despite ``tempita`` being installed in my system.
#### Expected Output
None
#### Output of ``pd.show_versions()``
<details>
Commit: bee17d5f4c99e6d1e451cf9a7b6a1780aa25988d
</details>
| (By the way: my command above will fail anyway because of the missing cython, but my understanding is that, because of this bug, ``setup.py`` will fail even when cython files are provided - and hence cython is not needed)
we bumped cython recently; conda update cython should be enough
this is a dev-only dep, and only needed when creating a new env from scratch
if you don’t have cython it actually would work
Yep, sorry for not mentioning I had already solved my local problem.
I suppose this is caused by https://github.com/pandas-dev/pandas/pull/28374, where we now require cython for sdists. Before, we only called cythonize inside the `if cython` branch, but now we assume cython is installed in `maybe_cythonize`. We can add a check there if cython/tempita is installed and otherwise raise a better error message?
I _think_ (but I'm quite ignorant of how the ``setup.py`` works) that just moving the following ``try..except``
https://github.com/pandas-dev/pandas/blob/bee17d5f4c99e6d1e451cf9a7b6a1780aa25988d/setup.py#L76
... out of the ``else`` should fix this...
I think that block could be simplified - the `pip install Tempita` command I don't believe is valid any longer.
I generally think anything to simplify setup.py would be great. I would be hesitant to add more logic
@toobaz I don't think that should be moved out of the else clause, as in theory you don't need tempita to run `setup.py` (eg sdist does not need it).
But I think @WillAyd is correct that the `import tempita` / `pip install tempita` is no longer needed, it might stem from a time that we still supported cython versions that did not include tempita
> @toobaz I don't think that should be moved out of the else clause, as in theory you don't need tempita to run `setup.py` (eg sdist does not need it).
Right, it would need also a variable where we remember whether tempita is present, like for cython.
> But I think @WillAyd is correct that the `import tempita` / `pip install tempita` is no longer needed, it might stem from a time that we still supported cython versions that did not include tempita
Again, I'm not an expert, but my understanding is that if you have ``tempita`` but not ``cython``, the ``setup.py`` still tries to allow you to take a pandas provided with ``cython`` files and compile them to C. But then, this is a very marginal use case, so maybe not worth the effort.
> if you have tempita but not cython, the setup.py still tried to allow you to take a pandas provided with cython files and compile them to C
tempita itself is not enough to compile cython file to C (tempita only generates cython files from the templates, not C files), so that use case doesn't sound possible to me.
So I think it should be fine to just rely on tempita provided by cython?
> tempita itself is not enough to compile cython file to C (tempita only generates cython files from the templates, not C files), so that use case doesn't sound possible to me.
OK sorry, now I understand, and sure, checking ``tempita`` is useless. | 2019-12-27T01:04:42Z | [] | []
Traceback (most recent call last):
File "setup.py", line 813, in <module>
ext_modules=maybe_cythonize(extensions, compiler_directives=directives),
File "setup.py", line 541, in maybe_cythonize
build_ext.render_templates(_pxifiles)
File "setup.py", line 127, in render_templates
pyxcontent = tempita.sub(tmpl)
NameError: name 'tempita' is not defined
| 13,292 |
|||
pandas-dev/pandas | pandas-dev__pandas-30519 | 0913ed04d6652ef5daf59a1319325f28d7c0fd72 | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -900,6 +900,7 @@ I/O
- Bug in :class:`PythonParser` where str and bytes were being mixed when dealing with the decimal field (:issue:`29650`)
- :meth:`read_gbq` now accepts ``progress_bar_type`` to display progress bar while the data downloads. (:issue:`29857`)
- Bug in :func:`pandas.io.json.json_normalize` where a missing value in the location specified by `record_path` would raise a ``TypeError`` (:issue:`30148`)
+- :func:`read_excel` now accepts binary data (:issue:`15914`)
Plotting
^^^^^^^^
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -40,7 +40,7 @@
Parameters
----------
-io : str, ExcelFile, xlrd.Book, path object or file-like object
+io : str, bytes, ExcelFile, xlrd.Book, path object, or file-like object
Any valid string path is acceptable. The string could be a URL. Valid
URL schemes include http, ftp, s3, and file. For file URLs, a host is
expected. A local file could be: ``file://localhost/path/to/table.xlsx``.
@@ -350,6 +350,8 @@ def __init__(self, filepath_or_buffer):
self.book = self.load_workbook(filepath_or_buffer)
elif isinstance(filepath_or_buffer, str):
self.book = self.load_workbook(filepath_or_buffer)
+ elif isinstance(filepath_or_buffer, bytes):
+ self.book = self.load_workbook(BytesIO(filepath_or_buffer))
else:
raise ValueError(
"Must explicitly set engine if not passing in buffer or path for io."
| read_excel does not work on excel file binary text or buffered binary text object
Python 3.5, Pandas 0.19.2
I have the following Excel file, and I am trying to read its binary content and then have read_excel parse that content, but this is not working; there may be a bug somewhere, since I have explicitly specified the engine for reading.
>>> import pandas as pd
>>> f = open("Test_Prom_Data.xlsx", "rb")
>>> df = pd.read_excel(f.read(), engine = "xlrd")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/alexander/anaconda3/lib/python3.5/site-packages/pandas/io/excel.py", line 191, in read_excel
io = ExcelFile(io, engine=engine)
File "/home/alexander/anaconda3/lib/python3.5/site-packages/pandas/io/excel.py", line 251, in __init__
raise ValueError('Must explicitly set engine if not passing in'
ValueError: Must explicitly set engine if not passing in buffer or path for io.
[Test_Prom_Data.xlsx](https://github.com/pandas-dev/pandas/files/901685/Test_Prom_Data.xlsx)
| I don't think that's expected to work, from the docstring:
```
io : string, path object (pathlib.Path or py._path.local.LocalPath),
file-like object, pandas ExcelFile, or xlrd workbook.
The string could be a URL. Valid URL schemes include http, ftp, s3,
and file. For file URLs, a host is expected. For instance, a local
file could be file://localhost/path/to/workbook.xlsx
```
We could have a better error message though, similar to `read_csv`:
```python
In [9]: pd.read_csv(b"abc,123", encoding='utf-8')
```
```pytb
Traceback (most recent call last):
File "<ipython-input-9-d793693413f5>", line 1, in <module>
pd.read_csv(b"abc,123", encoding='utf-8')
File "/Users/taugspurger/Envs/pandas-dev/lib/python3.6/site-packages/pandas-0.19.0+733.gca8ef494d-py3.6-macosx-10.12-x86_64.egg/pandas/io/parsers.py", line 656, in parser_f
return _read(filepath_or_buffer, kwds)
File "/Users/taugspurger/Envs/pandas-dev/lib/python3.6/site-packages/pandas-0.19.0+733.gca8ef494d-py3.6-macosx-10.12-x86_64.egg/pandas/io/parsers.py", line 404, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "/Users/taugspurger/Envs/pandas-dev/lib/python3.6/site-packages/pandas-0.19.0+733.gca8ef494d-py3.6-macosx-10.12-x86_64.egg/pandas/io/parsers.py", line 763, in __init__
self._make_engine(self.engine)
File "/Users/taugspurger/Envs/pandas-dev/lib/python3.6/site-packages/pandas-0.19.0+733.gca8ef494d-py3.6-macosx-10.12-x86_64.egg/pandas/io/parsers.py", line 967, in _make_engine
self._engine = CParserWrapper(self.f, **self.options)
File "/Users/taugspurger/Envs/pandas-dev/lib/python3.6/site-packages/pandas-0.19.0+733.gca8ef494d-py3.6-macosx-10.12-x86_64.egg/pandas/io/parsers.py", line 1552, in __init__
self._reader = libparsers.TextReader(src, **kwds)
File "as/pandas/io/parsers.pyx", line 393, in pandas.io.libparsers.TextReader.__cinit__ (pandas/io/parsers.c:4209)
File "as/pandas/io/parsers.pyx", line 727, in pandas.io.libparsers.TextReader._setup_parser_source (pandas/io/parsers.c:9050)
OSError: Expected file path name or file-like object, got <class 'bytes'> type
```
@alexanderwhatley interested in submitting a fix?
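As a caller-side workaround (an illustrative sketch: the payload below is fake, not a real workbook), raw bytes can be wrapped in a file-like object before being handed to `read_excel`, which is the same thing the eventual fix does internally with `BytesIO`:

```python
from io import BytesIO

raw = b"PK\x03\x04 ...rest of the .xlsx bytes..."  # e.g. open("book.xlsx", "rb").read()

# bytes objects have no .read()/.seek(), which is why the engine rejects them
assert not hasattr(raw, "read")

buf = BytesIO(raw)           # file-like wrapper around the same bytes
assert buf.read(2) == b"PK"  # .xlsx files are zip archives, so they start with "PK"
# pd.read_excel(buf) now takes the buffer path instead of raising
```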
# Read and write to excel
dataFileUrl = r"D:\real_names.xlsx"
data = pd.read_excel(dataFileUrl) | 2019-12-27T22:46:28Z | [] | []
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/alexander/anaconda3/lib/python3.5/site-packages/pandas/io/excel.py", line 191, in read_excel
io = ExcelFile(io, engine=engine)
File "/home/alexander/anaconda3/lib/python3.5/site-packages/pandas/io/excel.py", line 251, in __init__
raise ValueError('Must explicitly set engine if not passing in'
ValueError: Must explicitly set engine if not passing in buffer or path for io.
| 13,299 |
|||
pandas-dev/pandas | pandas-dev__pandas-30653 | f937843bf1cd56ce2294772e924590efe3da7158 | BUG: merge_asof raises when grouping on multiple columns with a categorical
#### Code Sample, a copy-pastable example if possible
```python
>>> import pandas as pd
>>> x = pd.DataFrame(dict(x=[0],y=[0],z=pd.Categorical([0])))
>>> pd.merge_asof(x, x, on='x', by=['y', 'z'])
Traceback (most recent call last):
File "bug.py", line 10, in <module>
pd.merge_asof(x, x, on='x', by=['y', 'z'])
File "~/anaconda3/envs/pantheon/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 486, in merge_asof
return op.get_result()
File "~/anaconda3/envs/pantheon/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 1019, in get_result
join_index, left_indexer, right_indexer = self._get_join_info()
File "~/anaconda3/envs/pantheon/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 734, in _get_join_info
right_indexer) = self._get_join_indexers()
File "~/anaconda3/envs/pantheon/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 1269, in _get_join_indexers
left_by_values = flip(left_by_values)
File "~/anaconda3/envs/pantheon/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 1231, in flip
return np.array(lzip(*xs), labeled_dtypes)
File "~/anaconda3/envs/pantheon/lib/python3.6/site-packages/pandas/core/dtypes/dtypes.py", line 62, in __repr__
return str(self)
File "~/anaconda3/envs/pantheon/lib/python3.6/site-packages/pandas/core/dtypes/dtypes.py", line 41, in __str__
return self.__unicode__()
SystemError: PyEval_EvalFrameEx returned a result with an error set
```
#### Problem description
`merge_asof` takes a `by` argument which defines the groups to merge between. When `by` is any single column, or multiple non-categorical columns, the merge succeeds. When `by` includes multiple columns, at least one of which is categorical, an error is raised.
#### Expected Output
```python
x y z
0 0 0 0
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.1.final.0
python-bits: 64
OS: Linux
OS-release: 4.10.10-100.fc24.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: C
LANG: C
LOCALE: None.None
pandas: 0.20.1
pytest: None
pip: 9.0.1
setuptools: 27.2.0
Cython: 0.25.2
numpy: 1.12.1
scipy: 0.19.0
xarray: None
IPython: 4.2.1
sphinx: None
patsy: 0.4.1
dateutil: 2.6.0
pytz: 2017.2
blosc: None
bottleneck: 1.2.0
tables: None
numexpr: 2.6.2
feather: None
matplotlib: 2.0.0
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: 0.999999999
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
pandas_gbq: None
pandas_datareader: None
</details>
| cc @chrisaycock should this be supported? If not we can raise a better error message.
this could work, but is not implemented atm because of how the multiple grouping is done.
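The multiple-grouping step zips the `by` columns into a single NumPy structured array. A simplified sketch of that helper (modeled on `flip` in `pandas/core/reshape/merge.py`; names and details abridged) shows where an extension dtype breaks things:

```python
import numpy as np

def flip(xs):
    """Zip several key columns into one structured array (simplified sketch)."""
    labels = [f"_{i}" for i in range(len(xs))]
    dtypes = [x.dtype for x in xs]
    return np.array(list(zip(*xs)), list(zip(labels, dtypes)))

keys = flip([np.array([0, 1]), np.array([10, 11])])
# Fine for plain NumPy dtypes. With a categorical key column, `dtypes`
# contains a pandas CategoricalDtype, which np.array cannot interpret,
# hence the error raised above.
```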
I also ran into this:
```python
import pandas as pd
def convert_to_cat(df):
return df.assign(cat1=pd.Categorical(df.cat1),
cat2=pd.Categorical(df.cat2))
left = pd.DataFrame({'time': [1, 2, 3, 6, 7],
'cat1': ['a', 'a', 'b', 'b', 'b'],
'cat2': ['x', 'y', 'x', 'y', 'x'],
'left': [0, 1, 2, 3, 4]})
right = pd.DataFrame({'time': [1, 5, 10],
'cat1': ['a', 'b', 'b'],
'cat2': ['x', 'y', 'x'],
'right': [0, 1, 2]})
left_cat = convert_to_cat(left)
right_cat = convert_to_cat(right)
# This works: multiple by= columns, with object dtype.
result = pd.merge_asof(left, right, on='time', by=['cat1', 'cat2'])
# This also works: one by= column, with category dtype.
result_1cat = pd.merge_asof(left_cat, right_cat, on='time', by='cat1')
# This raises SystemError: multiple by= columns, with category dtype.
result_2cats = pd.merge_asof(left_cat, right_cat, on='time', by=['cat1', 'cat2'])
```
Here is the backtrace produced by the last line, if that helps:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
TypeError: data type not understood
The above exception was the direct cause of the following exception:
SystemError Traceback (most recent call last)
<ipython-input-29-0d159f08de94> in <module>
26
27 # This
---> 28 result_2cats = pd.merge_asof(left_cat, right_cat, on='time', by=['cat1', 'cat2'])
~/.local/share/virtualenvs/pandas-bugs-QyHl3rh2/lib/python3.7/site-packages/pandas/core/reshape/merge.py in merge_asof(left, right, on, left_on, right_on, left_index, right_index, by, left_by, right_by, suffixes, tolerance, allow_exact_matches, direction)
460 allow_exact_matches=allow_exact_matches,
461 direction=direction)
--> 462 return op.get_result()
463
464
~/.local/share/virtualenvs/pandas-bugs-QyHl3rh2/lib/python3.7/site-packages/pandas/core/reshape/merge.py in get_result(self)
1254
1255 def get_result(self):
-> 1256 join_index, left_indexer, right_indexer = self._get_join_info()
1257
1258 # this is a bit kludgy
~/.local/share/virtualenvs/pandas-bugs-QyHl3rh2/lib/python3.7/site-packages/pandas/core/reshape/merge.py in _get_join_info(self)
754 else:
755 (left_indexer,
--> 756 right_indexer) = self._get_join_indexers()
757
758 if self.right_index:
~/.local/share/virtualenvs/pandas-bugs-QyHl3rh2/lib/python3.7/site-packages/pandas/core/reshape/merge.py in _get_join_indexers(self)
1502 right_by_values = right_by_values[0]
1503 else:
-> 1504 left_by_values = flip(left_by_values)
1505 right_by_values = flip(right_by_values)
1506
~/.local/share/virtualenvs/pandas-bugs-QyHl3rh2/lib/python3.7/site-packages/pandas/core/reshape/merge.py in flip(xs)
1455 dtypes = [x.dtype for x in xs]
1456 labeled_dtypes = list(zip(labels, dtypes))
-> 1457 return np.array(lzip(*xs), labeled_dtypes)
1458
1459 # values to compare
~/.local/share/virtualenvs/pandas-bugs-QyHl3rh2/lib/python3.7/site-packages/pandas/core/dtypes/dtypes.py in __repr__(self)
393 def __repr__(self):
394 tpl = u'CategoricalDtype(categories={}ordered={})'
--> 395 if self.categories is None:
396 data = u"None, "
397 else:
SystemError: PyEval_EvalFrameEx returned a result with an error set
```
This is on Pandas 0.24.2.
This looks to work on master. Could use a regression test.
```
In [162]: >>> import pandas as pd
...: >>> x = pd.DataFrame(dict(x=[0],y=[0],z=pd.Categorical([0])))
...: >>> pd.merge_asof(x, x, on='x', by=['y', 'z'])
Out[162]:
x y z
0 0 0 0
In [163]: pd.__version__
Out[163]: '0.26.0.dev0+555.gf7d162b18'
``` | 2020-01-03T18:37:31Z | [] | [] |
Traceback (most recent call last):
File "bug.py", line 10, in <module>
pd.merge_asof(x, x, on='x', by=['y', 'z'])
File "~/anaconda3/envs/pantheon/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 486, in merge_asof
return op.get_result()
File "~/anaconda3/envs/pantheon/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 1019, in get_result
join_index, left_indexer, right_indexer = self._get_join_info()
File "~/anaconda3/envs/pantheon/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 734, in _get_join_info
right_indexer) = self._get_join_indexers()
File "~/anaconda3/envs/pantheon/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 1269, in _get_join_indexers
left_by_values = flip(left_by_values)
File "~/anaconda3/envs/pantheon/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 1231, in flip
return np.array(lzip(*xs), labeled_dtypes)
File "~/anaconda3/envs/pantheon/lib/python3.6/site-packages/pandas/core/dtypes/dtypes.py", line 62, in __repr__
return str(self)
File "~/anaconda3/envs/pantheon/lib/python3.6/site-packages/pandas/core/dtypes/dtypes.py", line 41, in __str__
return self.__unicode__()
SystemError: PyEval_EvalFrameEx returned a result with an error set
| 13,326 |
||||
pandas-dev/pandas | pandas-dev__pandas-30769 | 6600b5b5222b96dbc4391cf7d8557708d08d6f47 | Series __finalized__ not correctly called in binary operators
```
#!/bin/env python
"""
Example bug in derived Pandas Series.
__finalize__ is not called in arithmetic binary operators, but it is in some boolean cases.
>>> m = MySeries([1, 2, 3], name='test')
>>> m.x = 42
>>> n=m[:2]
>>> n
0 1
1 2
dtype: int64
>>> n.x
42
>>> o=n+1
>>> o
0 2
1 3
dtype: int64
>>> o.x
Traceback (most recent call last):
...
AttributeError: 'MySeries' object has no attribute 'x'
>>> m = MySeries([True, False, True], name='test2')
>>> m.x = 42
>>> n=m[:2]
>>> n
0 True
1 False
dtype: bool
>>> n.x
42
>>> o=n ^ True
>>> o
0 False
1 True
dtype: bool
>>> o.x
42
>>> p = n ^ o
>>> p
0 True
1 True
dtype: bool
>>> p.x
42
"""
import pandas as pd
class MySeries(pd.Series):
_metadata = ['x']
@property
def _constructor(self):
return MySeries
if __name__ == "__main__":
import doctest
doctest.testmod()
```
#### Expected Output
In all cases, the metadata 'x' should be transferred from the passed values when applying binary operators.
When the right-hand value is a constant, the left-hand value's metadata should be used in `__finalize__` for arithmetic operators, just like it is for Boolean binary operators.
When two series are used in binary operators, some resolution should be possible in `__finalize__`.
I would pass the second (right-hand) value by calling `__finalize__(self, other=other)`, leaving the resolution to the derived class implementer, but there might be a smarter approach.
#### output of `pd.show_versions()`
pd.show_versions()
## INSTALLED VERSIONS
commit: None
python: 2.7.6.final.0
python-bits: 64
OS: Linux
OS-release: 3.19.0-59-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
pandas: 0.18.1
nose: 1.3.7
pip: None
setuptools: 20.2.2
Cython: 0.24
numpy: 1.11.0
scipy: 0.17.0
statsmodels: 0.6.1
xarray: None
IPython: 4.0.1
sphinx: 1.3.1
patsy: 0.4.0
dateutil: 2.4.2
pytz: 2015.7
blosc: None
bottleneck: 1.0.0
tables: 3.2.2
numexpr: 2.5.2
matplotlib: 1.5.0
openpyxl: 2.2.6
xlrd: 0.9.4
xlwt: 1.0.0
xlsxwriter: 0.7.7
lxml: 3.4.4
bs4: 4.4.1
html5lib: 0.9999999
httplib2: None
apiclient: None
sqlalchemy: 1.0.9
pymysql: None
psycopg2: None
jinja2: 2.8
boto: 2.38.0
pandas_datareader: None
| yeah it appears some paths don't call finalize; though you can't just call finalize for example on binary operations between 2 Series. This would have to be handled in the sub-class itself. We don't have the machinery to ignore this though.
So I am suggesting that we DO call `__finalize__` but maybe pass a parameter that will by default propagate (but on binary functions this parameter can be set to `False` and prevent propagation / unless overridden)
pull-requests welcome!
@jreback I would like to attempt it. Based on what you have written, I have located the function def of `__finalize__` in generic.py.
I'm thinking of adding a parameter with default value as True just as you mentioned above (which can be set to False on binary functions).
However I'm not able to understand what you mean by "DO call `__finalize__`". Does that mean that something has to be changed in the process during operations between two Series?
@pfrcks I think it means you need to change pandas/core/ops.py so that binary operations now call your brand new `__finalize__`
Each call to self._constructor should be followed by a call to __finalize__ to make sure the metadata is propagated.
Also, adding a named argument to `__finalize__` will break backward compatibility, but that might be the only way to do it right.
@jdfekete That makes more sense. Thanks. Will look into it.
so to clarify `x` and `y` are `NDFrames` (e.g. `Series/DataFrame`)
`x + 1` -> `__finalize__(other)`
`x + y` -> `__finalize__(other, method = '__add__')`
would prob work. So the result in both cases is an `NDFrame`; they can be distinguished (and possibly operated on by a sub-class) when `method` is `not None`
I am not sure. Looking in ops.py on the `wrapper` methods, maybe:
`x + 1` -> `__finalize__(self)`
`x + y` -> `__finalize__(self, method='__add__', other=other)`
So far, the operators are not passed to the `__finalize__` function. It could be handy in general, but that's a separate issue I think. Adding `method=operator` is useful but does not break the API (I think). Adding the `other=other` parameter does change it a bit, although the current `__finalize__` method will take it in its `**kwargs`.
Also, I have seen references to possible cycles/loops in some exchanges but I don't quite understand when they happen and whether my solution solves them.
> So far, the operators are not passed to the `__finalize__` function. It could be handy in general, but that's a separate issue I think. Adding `method=operator` is useful but does not break the API (I think). Adding the `other=other` parameter does change it a bit, although the current `__finalize__` method will take it in its `**kwargs`.
that's exactly what needs to be done, `__finalize__` NEEDS to be called. There is no API change.
## Same issue, different context
#### Code Sample
```python
import pandas
# subclass series and define property 'meta' to be passed on
class S(pandas.Series):
_metadata = ['meta']
@property
def _constructor(self):
return S
# create new instance of Series subclass and set custom property
x = S(range(3))
x.meta = 'test'
# 'meta' gets passed on to slice, as expected
print(x[1:].meta)
# calculation results
print((x * 2).meta)
```
#### Problem description
The documentation states:
```Define _metadata for normal properties which will be passed to manipulation results```
See http://pandas.pydata.org/pandas-docs/stable/internals.html#define-original-properties.
I think multiplication / adding etc. are also manipulation results.
This should be discussed with others who are already subclassing Pandas classes and are making use of _metadata.
#### Expected Output
The property is expected to be passed on to the calculation result.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.5.1.final.0
python-bits: 64
OS: Darwin
OS-release: 16.6.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.20.3
pytest: 3.2.1
pip: 9.0.1
setuptools: 36.2.0
Cython: None
numpy: 1.13.1
scipy: 0.19.1
xarray: None
IPython: 6.1.0
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2017.2
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: 0.999999999
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
pandas_gbq: None
pandas_datareader: None
</details>
My workaround: Always call `__finalize__` when constructing a new Series:
```python
@property
def _constructor(self):
def f(*args, **kwargs):
# workaround for https://github.com/pandas-dev/pandas/issues/13208
return MySubclass(*args, **kwargs).__finalize__(self)
return f
```
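For illustration, that workaround can be exercised end to end (a sketch only; how far binary ops propagate metadata varies across pandas versions):

```python
import pandas as pd

class MySubclass(pd.Series):
    _metadata = ["meta"]

    @property
    def _constructor(self):
        def f(*args, **kwargs):
            # workaround for GH13208: always re-attach metadata
            return MySubclass(*args, **kwargs).__finalize__(self)
        return f

s = MySubclass(range(3))
s.meta = "test"
assert s[1:].meta == "test"    # slicing already propagated metadata
assert (s * 2).meta == "test"  # arithmetic results now carry it too
```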
Which module contains the operations part, like `__add__`? I would love to be able to contribute.
This is fixed in master, may need a dedicated test | 2020-01-07T05:50:58Z | [] | [] |
Traceback (most recent call last):
...
AttributeError: 'MySeries' object has no attribute 'x'
| 13,338 |
||||
pandas-dev/pandas | pandas-dev__pandas-30882 | 7d280409c40302437c37b520945615e5a1f90ffc | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -161,7 +161,8 @@ ExtensionArray
Other
^^^^^
--
+- Appending a dictionary to a :class:`DataFrame` without passing ``ignore_index=True`` will raise ``TypeError: Can only append a dict if ignore_index=True``
+ instead of ``TypeError: Can only append a Series if ignore_index=True or if the Series has a name`` (:issue:`30871`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -7062,6 +7062,8 @@ def append(
"""
if isinstance(other, (Series, dict)):
if isinstance(other, dict):
+ if not ignore_index:
+ raise TypeError("Can only append a dict if ignore_index=True")
other = Series(other)
if other.name is None and not ignore_index:
raise TypeError(
| Improve error message for DataFrame.append(<dict-like>)
#### Code Sample, a copy-pastable example if possible
```python
>>> import pandas as pd
>>>
>>> pd.__version__
'0.26.0.dev0+1731.g3ddd495e4'
>>>
>>> df = pd.DataFrame({"Name": ["Alice"], "Gender": ["F"]})
>>> df
Name Gender
0 Alice F
>>>
>>> df.append({"Name": "Bob", "Gender": "M"})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\simon\pandas\pandas\core\frame.py", line 7013, in append
"Can only append a Series if ignore_index=True "
TypeError: Can only append a Series if ignore_index=True or if the Series has a name
>>>
```
#### Problem description
dict-like is a valid type for the `other` parameter of DataFrame.append, but the error message only mentions Series
#### Expected Output
ValueError: Can only append a dict-like if ignore_index=True
| 2020-01-10T13:39:36Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\simon\pandas\pandas\core\frame.py", line 7013, in append
"Can only append a Series if ignore_index=True "
TypeError: Can only append a Series if ignore_index=True or if the Series has a name
| 13,352 |
||||
pandas-dev/pandas | pandas-dev__pandas-30977 | bc9d329ba83795845be6aa455178e2f8d753542b | diff --git a/pandas/_libs/src/ujson/python/objToJSON.c b/pandas/_libs/src/ujson/python/objToJSON.c
--- a/pandas/_libs/src/ujson/python/objToJSON.c
+++ b/pandas/_libs/src/ujson/python/objToJSON.c
@@ -456,8 +456,8 @@ static char *PyDateTimeToIso(PyDateTime_Date *obj, NPY_DATETIMEUNIT base,
static char *PyDateTimeToIsoCallback(JSOBJ obj, JSONTypeContext *tc,
size_t *len) {
- if (!PyDateTime_Check(obj)) {
- PyErr_SetString(PyExc_TypeError, "Expected datetime object");
+ if (!PyDate_Check(obj)) {
+ PyErr_SetString(PyExc_TypeError, "Expected date object");
return NULL;
}
@@ -469,7 +469,7 @@ static npy_datetime PyDateTimeToEpoch(PyObject *obj, NPY_DATETIMEUNIT base) {
npy_datetimestruct dts;
int ret;
- if (!PyDateTime_Check(obj)) {
+ if (!PyDate_Check(obj)) {
// TODO: raise TypeError
}
PyDateTime_Date *dt = (PyDateTime_Date *)obj;
@@ -1504,6 +1504,7 @@ char **NpyArr_encodeLabels(PyArrayObject *labels, PyObjectEncoder *enc,
char **ret;
char *dataptr, *cLabel;
int type_num;
+ NPY_DATETIMEUNIT base = enc->datetimeUnit;
PRINTMARK();
if (!labels) {
@@ -1541,32 +1542,10 @@ char **NpyArr_encodeLabels(PyArrayObject *labels, PyObjectEncoder *enc,
break;
}
- // TODO: vectorized timedelta solution
- if (enc->datetimeIso &&
- (type_num == NPY_TIMEDELTA || PyDelta_Check(item))) {
- PyObject *td = PyObject_CallFunction(cls_timedelta, "(O)", item);
- if (td == NULL) {
- Py_DECREF(item);
- NpyArr_freeLabels(ret, num);
- ret = 0;
- break;
- }
-
- PyObject *iso = PyObject_CallMethod(td, "isoformat", NULL);
- Py_DECREF(td);
- if (iso == NULL) {
- Py_DECREF(item);
- NpyArr_freeLabels(ret, num);
- ret = 0;
- break;
- }
-
- cLabel = (char *)PyUnicode_AsUTF8(iso);
- Py_DECREF(iso);
- len = strlen(cLabel);
- } else if (PyTypeNum_ISDATETIME(type_num)) {
- NPY_DATETIMEUNIT base = enc->datetimeUnit;
- npy_int64 longVal;
+ int is_datetimelike = 0;
+ npy_int64 nanosecVal;
+ if (PyTypeNum_ISDATETIME(type_num)) {
+ is_datetimelike = 1;
PyArray_VectorUnaryFunc *castfunc =
PyArray_GetCastFunc(PyArray_DescrFromType(type_num), NPY_INT64);
if (!castfunc) {
@@ -1574,27 +1553,74 @@ char **NpyArr_encodeLabels(PyArrayObject *labels, PyObjectEncoder *enc,
"Cannot cast numpy dtype %d to long",
enc->npyType);
}
- castfunc(dataptr, &longVal, 1, NULL, NULL);
- if (enc->datetimeIso) {
- cLabel = int64ToIso(longVal, base, &len);
+ castfunc(dataptr, &nanosecVal, 1, NULL, NULL);
+ } else if (PyDate_Check(item) || PyDelta_Check(item)) {
+ is_datetimelike = 1;
+ if (PyObject_HasAttrString(item, "value")) {
+ nanosecVal = get_long_attr(item, "value");
} else {
- if (!scaleNanosecToUnit(&longVal, base)) {
- // TODO: This gets hit but somehow doesn't cause errors
- // need to clean up (elsewhere in module as well)
+ if (PyDelta_Check(item)) {
+ nanosecVal = total_seconds(item) *
+ 1000000000LL; // nanoseconds per second
+ } else {
+ // datetime.* objects don't follow above rules
+ nanosecVal = PyDateTimeToEpoch(item, NPY_FR_ns);
}
- cLabel = PyObject_Malloc(21); // 21 chars for int64
- sprintf(cLabel, "%" NPY_INT64_FMT, longVal);
- len = strlen(cLabel);
}
- } else if (PyDateTime_Check(item) || PyDate_Check(item)) {
- NPY_DATETIMEUNIT base = enc->datetimeUnit;
- if (enc->datetimeIso) {
- cLabel = PyDateTimeToIso((PyDateTime_Date *)item, base, &len);
+ }
+
+ if (is_datetimelike) {
+ if (nanosecVal == get_nat()) {
+ len = 5; // TODO: shouldn't require extra space for terminator
+ cLabel = PyObject_Malloc(len);
+ strncpy(cLabel, "null", len);
} else {
- cLabel = PyObject_Malloc(21); // 21 chars for int64
- sprintf(cLabel, "%" NPY_DATETIME_FMT,
- PyDateTimeToEpoch(item, base));
- len = strlen(cLabel);
+ if (enc->datetimeIso) {
+ // TODO: Vectorized Timedelta function
+ if ((type_num == NPY_TIMEDELTA) || (PyDelta_Check(item))) {
+ PyObject *td =
+ PyObject_CallFunction(cls_timedelta, "(O)", item);
+ if (td == NULL) {
+ Py_DECREF(item);
+ NpyArr_freeLabels(ret, num);
+ ret = 0;
+ break;
+ }
+
+ PyObject *iso =
+ PyObject_CallMethod(td, "isoformat", NULL);
+ Py_DECREF(td);
+ if (iso == NULL) {
+ Py_DECREF(item);
+ NpyArr_freeLabels(ret, num);
+ ret = 0;
+ break;
+ }
+
+ len = strlen(PyUnicode_AsUTF8(iso));
+ cLabel = PyObject_Malloc(len + 1);
+ memcpy(cLabel, PyUnicode_AsUTF8(iso), len + 1);
+ Py_DECREF(iso);
+ } else {
+ if (type_num == NPY_DATETIME) {
+ cLabel = int64ToIso(nanosecVal, base, &len);
+ } else {
+ cLabel = PyDateTimeToIso((PyDateTime_Date *)item,
+ base, &len);
+ }
+ }
+ if (cLabel == NULL) {
+ Py_DECREF(item);
+ NpyArr_freeLabels(ret, num);
+ ret = 0;
+ break;
+ }
+ } else {
+ cLabel = PyObject_Malloc(21); // 21 chars for int64
+ sprintf(cLabel, "%" NPY_DATETIME_FMT,
+ NpyDateTimeToEpoch(nanosecVal, base));
+ len = strlen(cLabel);
+ }
}
} else { // Fallback to string representation
PyObject *str = PyObject_Str(item);
@@ -1615,6 +1641,10 @@ char **NpyArr_encodeLabels(PyArrayObject *labels, PyObjectEncoder *enc,
ret[i] = PyObject_Malloc(len + 1);
memcpy(ret[i], cLabel, len + 1);
+ if (is_datetimelike) {
+ PyObject_Free(cLabel);
+ }
+
if (PyErr_Occurred()) {
NpyArr_freeLabels(ret, num);
ret = 0;
to_json doesn't work with datetime.date
This is a regression between 0.25.3 and 1.0
```python
>>> import pandas as pd
>>> import datetime
>>> data = [datetime.date(year=2020, month=1, day=1), "a"]
>>> pd.Series(data).to_json(date_format="iso")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/williamayd/clones/pandas/pandas/core/generic.py", line 2363, in to_json
indent=indent,
File "/Users/williamayd/clones/pandas/pandas/io/json/_json.py", line 85, in to_json
indent=indent,
File "/Users/williamayd/clones/pandas/pandas/io/json/_json.py", line 145, in write
self.indent,
File "/Users/williamayd/clones/pandas/pandas/io/json/_json.py", line 199, in _write
indent,
File "/Users/williamayd/clones/pandas/pandas/io/json/_json.py", line 167, in _write
indent=indent,
TypeError: Expected datetime object
```
Noticed while working on #30903
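Until the fix lands, a caller-side workaround (illustrative, not from the issue) is to pre-convert plain `datetime.date` objects in object columns to ISO strings so the C serializer never sees them:

```python
import datetime
import json
import pandas as pd

data = [datetime.date(year=2020, month=1, day=1), "a"]
s = pd.Series(data)

# note: datetime.datetime is a subclass of datetime.date, so this converts both
safe = s.map(lambda v: v.isoformat() if isinstance(v, datetime.date) else v)
out = safe.to_json(date_format="iso")
assert json.loads(out) == {"0": "2020-01-01", "1": "a"}
```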
| 2020-01-13T18:13:51Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/williamayd/clones/pandas/pandas/core/generic.py", line 2363, in to_json
indent=indent,
File "/Users/williamayd/clones/pandas/pandas/io/json/_json.py", line 85, in to_json
indent=indent,
File "/Users/williamayd/clones/pandas/pandas/io/json/_json.py", line 145, in write
self.indent,
File "/Users/williamayd/clones/pandas/pandas/io/json/_json.py", line 199, in _write
indent,
File "/Users/williamayd/clones/pandas/pandas/io/json/_json.py", line 167, in _write
indent=indent,
TypeError: Expected datetime object
| 13,368 |
||||
pandas-dev/pandas | pandas-dev__pandas-31017 | e31c5ad5464e9f6a72bc9f99dfa7a4b095f9ca5d | diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -2472,6 +2472,7 @@ class Fixed:
"""
pandas_kind: str
+ format_type: str = "fixed" # GH#30962 needed by dask
obj_type: Type[Union[DataFrame, Series]]
ndim: int
encoding: str
@@ -3129,6 +3130,7 @@ class Table(Fixed):
"""
pandas_kind = "wide_table"
+ format_type: str = "table" # GH#30962 needed by dask
table_type: str
levels = 1
is_table = True
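The restored attribute is a plain class-level default that subclasses override, so it is available on every storer instance without touching `__init__`; a minimal standalone sketch of that pattern:

```python
class Fixed:
    # Class-level default, mirroring the attribute restored by the patch.
    format_type = "fixed"


class Table(Fixed):
    # Subclasses simply shadow the default.
    format_type = "table"


print(Fixed().format_type, Table().format_type)
```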
| AttributeError: 'FrameFixed' object has no attribute 'format_type' on 1.0.0rc0
```python
import pandas as pd
df = pd.DataFrame({"A": [1, 2]})
df.to_hdf("foo.h5", "foo")
with pd.HDFStore("foo.h5") as hdf:
for key in hdf.keys():
storer = hdf.get_storer(key)
print(type(storer), storer.format_type)
```
On 0.25.3, we have
```python
<class 'pandas.io.pytables.FrameFixed'> fixed
```
On 1.0.0rc0
```pytb
Traceback (most recent call last):
File "bug.py", line 10, in <module>
print(storer.format_type)
AttributeError: 'FrameFixed' object has no attribute 'format_type'
```
Didn't see anything about this in the release notes. cc @jbrockmendel (if this was part of your tables cleanup).
| > (if this was part of your tables cleanup).
This seems likely. AFAICT it was never used, so it got stripped out. Does it need to be restored?
I think so. It's available by only public APIs and was used in downstream projects like Dask: https://travis-ci.org/dask/dask/jobs/636393300#L1315 | 2020-01-14T20:43:43Z | [] | [] |
Traceback (most recent call last):
File "bug.py", line 10, in <module>
print(storer.format_type)
AttributeError: 'FrameFixed' object has no attribute 'format_type'
| 13,373 |
|||
pandas-dev/pandas | pandas-dev__pandas-31159 | 15bacea86844237a9e5290446612ebe3ea712d84 | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -144,7 +144,7 @@ type dedicated to boolean data that can hold missing values. The default
``bool`` data type based on a bool-dtype NumPy array, the column can only hold
``True`` or ``False``, and not missing values. This new :class:`~arrays.BooleanArray`
can store missing values as well by keeping track of this in a separate mask.
-(:issue:`29555`, :issue:`30095`)
+(:issue:`29555`, :issue:`30095`, :issue:`31131`)
.. ipython:: python
diff --git a/pandas/core/arrays/boolean.py b/pandas/core/arrays/boolean.py
--- a/pandas/core/arrays/boolean.py
+++ b/pandas/core/arrays/boolean.py
@@ -1,5 +1,5 @@
import numbers
-from typing import TYPE_CHECKING, Any, Tuple, Type
+from typing import TYPE_CHECKING, Any, List, Tuple, Type
import warnings
import numpy as np
@@ -286,6 +286,23 @@ def _from_sequence(cls, scalars, dtype=None, copy: bool = False):
values, mask = coerce_to_array(scalars, copy=copy)
return BooleanArray(values, mask)
+ @classmethod
+ def _from_sequence_of_strings(
+ cls, strings: List[str], dtype=None, copy: bool = False
+ ):
+ def map_string(s):
+ if isna(s):
+ return s
+ elif s in ["True", "TRUE", "true"]:
+ return True
+ elif s in ["False", "FALSE", "false"]:
+ return False
+ else:
+ raise ValueError(f"{s} cannot be cast to bool")
+
+ scalars = [map_string(x) for x in strings]
+ return cls._from_sequence(scalars, dtype, copy)
+
def _values_for_factorize(self) -> Tuple[np.ndarray, Any]:
data = self._data.astype("int8")
data[self._mask] = -1
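The string-to-bool mapping added in the patch above can be exercised standalone; this sketch uses `None` in place of pandas' NA scalars:

```python
def map_string(s):
    # Standalone copy of the mapping from _from_sequence_of_strings,
    # with None standing in for pandas' missing-value scalars.
    if s is None:
        return s
    if s in ("True", "TRUE", "true"):
        return True
    if s in ("False", "FALSE", "false"):
        return False
    raise ValueError(f"{s} cannot be cast to bool")


print([map_string(x) for x in ["true", "FALSE", None]])
```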
| ENH: Implement CSV reading for BooleanArray
#### Code Sample, a copy-pastable example if possible
In 1.0.0 rc0, `pd.read_csv` returns this error when reading NA with specifying dtype of 'boolean'
```python
# import pandas as pd
# import io
>>> txt = """
X1,X2,X3
1,a,True
2,b,False
NA,NA,NA
"""
>>> df1 = pd.read_csv(io.StringIO(txt), dtype={'X3': 'boolean'}) # `pd.BooleanDtype()` also fails
Traceback (most recent call last):
File "pandas/_libs/parsers.pyx", line 1191, in pandas._libs.parsers.TextReader._convert_with_dtype
File "/usr/local/lib/python3.7/site-packages/pandas/core/arrays/base.py", line 232, in _from_sequence_of_strings
raise AbstractMethodError(cls)
pandas.errors.AbstractMethodError: This method must be defined in the concrete class type
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/site-packages/pandas/io/parsers.py", line 676, in parser_f
return _read(filepath_or_buffer, kwds)
File "/usr/local/lib/python3.7/site-packages/pandas/io/parsers.py", line 454, in _read
data = parser.read(nrows)
File "/usr/local/lib/python3.7/site-packages/pandas/io/parsers.py", line 1133, in read
ret = self._engine.read(nrows)
File "/usr/local/lib/python3.7/site-packages/pandas/io/parsers.py", line 2037, in read
data = self._reader.read(nrows)
File "pandas/_libs/parsers.pyx", line 859, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 874, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas/_libs/parsers.pyx", line 951, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 1083, in pandas._libs.parsers.TextReader._convert_column_data
File "pandas/_libs/parsers.pyx", line 1114, in pandas._libs.parsers.TextReader._convert_tokens
File "pandas/_libs/parsers.pyx", line 1194, in pandas._libs.parsers.TextReader._convert_with_dtype
NotImplementedError: Extension Array: <class 'pandas.core.arrays.boolean.BooleanArray'> must implement _from_sequence_of_strings in order to be used in parser methods
```
while "Int64" and "string" with NA can be correctly recognized.
```python
>>> df2 = pd.read_csv(io.StringIO(txt), dtype={'X1': 'Int64', 'X2': 'string'})
>>> df2
X1 X2 X3
0 1 a True
1 2 b False
2 <NA> <NA> NaN
>>> df2.dtypes
X1 Int64
X2 string
X3 object
dtype: object
```
#### Problem description
NA literal for boolean is not parsed by `pd.read_csv`
#### Expected Output
```python
df3 = pd.read_csv(io.StringIO(txt), dtype={'X3': 'boolean'})
>>> df3
X1 X2 X3
0 1.0 a True
1 2.0 b False
2 NaN NaN <NA>
>>> df3.dtypes
X1 float64
X2 object
X3 boolean
dtype: object
# and
df4 = pd.read_csv(io.StringIO(txt), dtype={'X1': 'Int64', 'X2': 'string', 'X3': 'boolean'})
>>> df4
X1 X2 X3
0 1 a True
1 2 b False
2 <NA> <NA> <NA>
>>> df4.dtypes
X1 Int64
X2 string
X3 boolean
dtype: object
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.6.final.0
python-bits : 64
OS : Linux
OS-release : 4.9.184-linuxkit
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.0.0rc0
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 19.3.1
setuptools : 44.0.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None
</details>
| @koizumihiroo try this
`df = pd.read_csv(io.StringIO(txt), dtype={'X3': 'bool'})`
@koizumihiroo We need to implement `BooleanArray._from_sequence_of_strings`. Are you interested in doing that?
@sach99in bool and boolean are different dtypes.
| 2020-01-20T18:30:01Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/site-packages/pandas/io/parsers.py", line 676, in parser_f
return _read(filepath_or_buffer, kwds)
File "/usr/local/lib/python3.7/site-packages/pandas/io/parsers.py", line 454, in _read
data = parser.read(nrows)
File "/usr/local/lib/python3.7/site-packages/pandas/io/parsers.py", line 1133, in read
ret = self._engine.read(nrows)
File "/usr/local/lib/python3.7/site-packages/pandas/io/parsers.py", line 2037, in read
data = self._reader.read(nrows)
File "pandas/_libs/parsers.pyx", line 859, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 874, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas/_libs/parsers.pyx", line 951, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 1083, in pandas._libs.parsers.TextReader._convert_column_data
File "pandas/_libs/parsers.pyx", line 1114, in pandas._libs.parsers.TextReader._convert_tokens
File "pandas/_libs/parsers.pyx", line 1194, in pandas._libs.parsers.TextReader._convert_with_dtype
NotImplementedError: Extension Array: <class 'pandas.core.arrays.boolean.BooleanArray'> must implement _from_sequence_of_strings in order to be used in parser methods
| 13,388 |
|||
pandas-dev/pandas | pandas-dev__pandas-31416 | feee467ff47c3af7f24e761d1abb78a161af1f12 | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -223,6 +223,8 @@ Other
- Appending a dictionary to a :class:`DataFrame` without passing ``ignore_index=True`` will raise ``TypeError: Can only append a dict if ignore_index=True``
instead of ``TypeError: Can only append a Series if ignore_index=True or if the Series has a name`` (:issue:`30871`)
- Set operations on an object-dtype :class:`Index` now always return object-dtype results (:issue:`31401`)
+- Bug in :meth:`AbstractHolidayCalendar.holidays` when no rules were defined (:issue:`31415`)
+-
.. ---------------------------------------------------------------------------
diff --git a/pandas/tseries/holiday.py b/pandas/tseries/holiday.py
--- a/pandas/tseries/holiday.py
+++ b/pandas/tseries/holiday.py
@@ -7,7 +7,7 @@
from pandas.errors import PerformanceWarning
-from pandas import DateOffset, Series, Timestamp, date_range
+from pandas import DateOffset, DatetimeIndex, Series, Timestamp, concat, date_range
from pandas.tseries.offsets import Day, Easter
@@ -406,17 +406,14 @@ def holidays(self, start=None, end=None, return_name=False):
start = Timestamp(start)
end = Timestamp(end)
- holidays = None
# If we don't have a cache or the dates are outside the prior cache, we
# get them again
if self._cache is None or start < self._cache[0] or end > self._cache[1]:
- for rule in self.rules:
- rule_holidays = rule.dates(start, end, return_name=True)
-
- if holidays is None:
- holidays = rule_holidays
- else:
- holidays = holidays.append(rule_holidays)
+ holidays = [rule.dates(start, end, return_name=True) for rule in self.rules]
+ if holidays:
+ holidays = concat(holidays)
+ else:
+ holidays = Series(index=DatetimeIndex([]), dtype=object)
self._cache = (start, end, holidays.sort_index())
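The shape of the fix (collect per-rule results into a list, concatenate when the list is non-empty, otherwise substitute an empty container) can be sketched without pandas:

```python
def collect(rules):
    # Analog of the fixed holidays() body: gather per-rule results,
    # then fall back to an empty result when there are no rules at all,
    # instead of leaving a None behind.
    parts = [list(rule) for rule in rules]
    if parts:
        combined = [x for part in parts for x in part]
    else:
        combined = []
    return sorted(combined)


print(collect([]))              # no rules: empty result, no AttributeError
print(collect([[2, 1], [3]]))
```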
| Bug in AbstractHolidayCalendar.holidays
...when there are no rules.
```python
from pandas.tseries.holiday import AbstractHolidayCalendar
class ExampleCalendar(AbstractHolidayCalendar):
pass
cal = ExampleCalendar()
```
```python
In [58]: cal.holidays(pd.Timestamp('01-Jan-2020'), pd.Timestamp('01-Jan-2021'))
Traceback (most recent call last):
File "<ipython-input-58-022244d4e794>", line 1, in <module>
cal.holidays(pd.Timestamp('01-Jan-2020'), pd.Timestamp('01-Jan-2021'))
File "C:\Users\dhirschf\envs\dev\lib\site-packages\pandas\tseries\holiday.py", line 422, in holidays
self._cache = (start, end, holidays.sort_index())
AttributeError: 'NoneType' object has no attribute 'sort_index'
In [59]: pd.__version__
Out[59]: '0.25.3'
```
| As can be seen below:
https://github.com/pandas-dev/pandas/blob/ec0996c6751326eed17a0bb456fe1c550689a618/pandas/tseries/holiday.py#L409-L421
when `self.rules` is empty
```python
In [61]: cal.rules
Out[61]: []
```
You drop through the loop to L421:
```python
self._cache = (start, end, holidays.sort_index())
```
where `holidays` remains unchanged from being set to `None` before the loop on L409. | 2020-01-29T12:19:32Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-58-022244d4e794>", line 1, in <module>
cal.holidays(pd.Timestamp('01-Jan-2020'), pd.Timestamp('01-Jan-2021'))
File "C:\Users\dhirschf\envs\dev\lib\site-packages\pandas\tseries\holiday.py", line 422, in holidays
self._cache = (start, end, holidays.sort_index())
AttributeError: 'NoneType' object has no attribute 'sort_index'
| 13,427 |
|||
pandas-dev/pandas | pandas-dev__pandas-31477 | 79633f94f912a083389f76f7f8c876fe4f755ca1 | diff --git a/doc/source/whatsnew/v1.0.1.rst b/doc/source/whatsnew/v1.0.1.rst
--- a/doc/source/whatsnew/v1.0.1.rst
+++ b/doc/source/whatsnew/v1.0.1.rst
@@ -20,6 +20,7 @@ Fixed regressions
- Fixed regression in ``DataFrame.__setitem__`` raising an ``AttributeError`` with a :class:`MultiIndex` and a non-monotonic indexer (:issue:`31449`)
- Fixed regression in :class:`Series` multiplication when multiplying a numeric :class:`Series` with >10000 elements with a timedelta-like scalar (:issue:`31457`)
- Fixed regression in :meth:`GroupBy.apply` if called with a function which returned a non-pandas non-scalar object (e.g. a list or numpy array) (:issue:`31441`)
+- Fixed regression in :meth:`DataFrame.groupby` whereby taking the minimum or maximum of a column with period dtype would raise a ``TypeError``. (:issue:`31471`)
- Fixed regression in :meth:`to_datetime` when parsing non-nanosecond resolution datetimes (:issue:`31491`)
- Fixed regression in :meth:`~DataFrame.to_csv` where specifying an ``na_rep`` might truncate the values written (:issue:`31447`)
- Fixed regression in :class:`Categorical` construction with ``numpy.str_`` categories (:issue:`31499`)
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -31,6 +31,7 @@
is_extension_array_dtype,
is_integer_dtype,
is_numeric_dtype,
+ is_period_dtype,
is_sparse,
is_timedelta64_dtype,
needs_i8_conversion,
@@ -567,7 +568,12 @@ def _cython_operation(
if swapped:
result = result.swapaxes(0, axis)
- if is_datetime64tz_dtype(orig_values.dtype):
+ if is_datetime64tz_dtype(orig_values.dtype) or is_period_dtype(
+ orig_values.dtype
+ ):
+ # We need to use the constructors directly for these dtypes
+ # since numpy won't recognize them
+ # https://github.com/pandas-dev/pandas/issues/31471
result = type(orig_values)(result.astype(np.int64), dtype=orig_values.dtype)
elif is_datetimelike and kind == "aggregate":
result = result.astype(orig_values.dtype)
| TypeError when calculating min/max of period column using groupby
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
periods = pd.period_range(start="2019-01", periods=4, freq="M")
groups = [1, 1, 2, 2]
df = pd.DataFrame({"periods": periods, "groups": groups})
result = df.groupby("groups")["periods"].min()
```
#### Problem description
The last line of the example throws a _TypeError: data type not understood_.
<details>
Traceback (most recent call last):
File "test.py", line 6, in <module>
result = df.groupby("groups")["periods"].min()
File "...\site-packages\pandas\core\groupby\groupby.py", line 1378, in f
return self._cython_agg_general(alias, alt=npfunc, **kwargs)
File "...\site-packages\pandas\core\groupby\groupby.py", line 889, in _cython_agg_general
result, agg_names = self.grouper.aggregate(
File "...\site-packages\pandas\core\groupby\ops.py", line 580, in aggregate
return self._cython_operation(
File "...\site-packages\pandas\core\groupby\ops.py", line 573, in _cython_operation
result = result.astype(orig_values.dtype)
TypeError: data type not understood
</details>
#### Expected Output
The `result` DataFrame should contain the earliest period (in this case 2019-01) for each grouping slice.
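Until a fixed release is available, one workaround is to aggregate through `apply`, which sidesteps the cython path that mishandles period dtype (a hypothetical workaround, not taken from the issue thread; assumes pandas is importable):

```python
import pandas as pd

periods = pd.period_range(start="2019-01", periods=4, freq="M")
df = pd.DataFrame({"periods": periods, "groups": [1, 1, 2, 2]})
# apply(min) evaluates each group in Python rather than cython.
result = df.groupby("groups")["periods"].apply(min)
print(result)
```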
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.8.1.final.0
python-bits : 64
OS : Windows
OS-release : 10
machine : AMD64
processor : Intel64 Family 6 Model 142 Stepping 10, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : English_United Kingdom.1252
pandas : 1.0.0
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 45.1.0.post20200127
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None
</details>
| 2020-01-31T02:06:23Z | [] | [] |
Traceback (most recent call last):
File "test.py", line 6, in <module>
result = df.groupby("groups")["periods"].min()
File "...\site-packages\pandas\core\groupby\groupby.py", line 1378, in f
return self._cython_agg_general(alias, alt=npfunc, **kwargs)
File "...\site-packages\pandas\core\groupby\groupby.py", line 889, in _cython_agg_general
result, agg_names = self.grouper.aggregate(
File "...\site-packages\pandas\core\groupby\ops.py", line 580, in aggregate
return self._cython_operation(
File "...\site-packages\pandas\core\groupby\ops.py", line 573, in _cython_operation
result = result.astype(orig_values.dtype)
TypeError: data type not understood
| 13,432 |
||||
pandas-dev/pandas | pandas-dev__pandas-3148 | 70974fbd9f95a1ff7a5e61494d80ae432cca9487 | diff --git a/RELEASE.rst b/RELEASE.rst
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -179,7 +179,7 @@ pandas 0.11.0
- Series ops with a Timestamp on the rhs was throwing an exception (GH2898_)
added tests for Series ops with datetimes,timedeltas,Timestamps, and datelike
Series on both lhs and rhs
- - Fixed subtle timedelta64 inference issue on py3
+ - Fixed subtle timedelta64 inference issue on py3 & numpy 1.7.0 (GH3094_)
- Fixed some formatting issues on timedelta when negative
- Support null checking on timedelta64, representing (and formatting) with NaT
- Support setitem with np.nan value, converts to NaT
@@ -293,6 +293,7 @@ pandas 0.11.0
.. _GH3115: https://github.com/pydata/pandas/issues/3115
.. _GH3070: https://github.com/pydata/pandas/issues/3070
.. _GH3075: https://github.com/pydata/pandas/issues/3075
+.. _GH3094: https://github.com/pydata/pandas/issues/3094
.. _GH3130: https://github.com/pydata/pandas/issues/3130
pandas 0.10.1
diff --git a/pandas/core/series.py b/pandas/core/series.py
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -104,6 +104,11 @@ def convert_to_array(values):
pass
else:
values = com._possibly_cast_to_timedelta(values)
+ elif inferred_type in set(['integer']):
+ if values.dtype == 'timedelta64[ns]':
+ pass
+ elif values.dtype.kind == 'm':
+ values = values.astype('timedelta64[ns]')
else:
values = pa.array(values)
return values
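The dtype check used in the fix, `values.dtype.kind == 'm'`, matches every timedelta64 unit, and `astype('timedelta64[ns]')` normalizes the unit; both are easy to verify directly (assumes numpy is importable):

```python
import numpy as np

values = np.array([1, 2], dtype="timedelta64[s]")
print(values.dtype.kind)                 # 'm' for any timedelta64 unit
converted = values.astype("timedelta64[ns]")
print(converted.dtype)
```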
| 0.11.0.dev-4b22372.win-amd64-py2.7: nosetests error
## ERROR: test_operators_timedelta64 (pandas.tests.test_series.TestSeries)
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\pandas\tests\test_series.py", line 1752, in test_operators_timedelta64
result = resultb + df['A']
File "C:\Python27\lib\site-packages\pandas\core\series.py", line 144, in wrapper
lvalues = lvalues.values
AttributeError: 'numpy.ndarray' object has no attribute 'values'
---
Ran 3113 tests in 241.670s
FAILED (SKIP=94, errors=1)
| I've run the full test suite several times today with no errros. can you
try it on the latest git master and tell us if there's still a problem?
I don't think any of the devs are on windows, we might be missing a problem.
I just ran on windows x64/python 2.7.3, with latest master, test_series/frame passes
maybe try a later binary (think they are generated 5pm?)
I usually grab binaries on http://pandas.pydata.org/pandas-build/dev/, and the latest is dated March 14. That's the one I used. If you guys say that the latest code works fine, I'll just wait for the new binary.
yep...i think the box those are built on is in the process of moving.....check back periodically...
hmm, i don't know what's up but i just got this exact error too (ubuntu, 32-bit...fresh installing on different machine than my normal one from master)...have not investigated yet
@bluefir looks like the binaries are updated....can you give a test?
@jreback replying here instead
yep, same exact error as above, on two different machines
`pip freeze` output:
```
Cython==0.18
Pygments==1.6
SQLAlchemy==0.8.0
argparse==1.2.1
ipython==0.13.1
logilab-astng==0.24.2
logilab-common==0.59.0
matplotlib==1.2.0
nose==1.2.1
numexpr==2.0.1
numpy==1.7.0
openpyxl==1.6.1
pandas==0.10.1
pylint==0.27.0
python-dateutil==1.5
pytz==2013b
pyzmq==13.0.0
scipy==0.11.0
six==1.3.0
statsmodels==0.4.3
vbench==0.1
wsgiref==0.1.2
xlrd==0.9.0
xlwt==0.7.4
yolk==0.4.3
```
some of that might be relevant
@bluefir can you conform that you are on numpy 1.7.0?
@y-p can you make that new build (for slow tests, which uses 2.7, also use numpy 1.7?),
I believe we test 1.6.2 on the main 2.7 one
| 2013-03-23T17:36:49Z | [] | [] |
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\pandas\tests\test_series.py", line 1752, in test_operators_timedelta64
| 13,433 |
|||
pandas-dev/pandas | pandas-dev__pandas-31524 | a77ad8bf04da4bb4e5a30fc375b4b59f2d0860ab | diff --git a/doc/source/whatsnew/v1.0.2.rst b/doc/source/whatsnew/v1.0.2.rst
--- a/doc/source/whatsnew/v1.0.2.rst
+++ b/doc/source/whatsnew/v1.0.2.rst
@@ -75,6 +75,7 @@ Bug fixes
**I/O**
- Using ``pd.NA`` with :meth:`DataFrame.to_json` now correctly outputs a null value instead of an empty object (:issue:`31615`)
+- Bug in :meth:`pandas.json_normalize` when value in meta path is not iterable (:issue:`31507`)
- Fixed pickling of ``pandas.NA``. Previously a new object was returned, which broke computations relying on ``NA`` being a singleton (:issue:`31847`)
- Fixed bug in parquet roundtrip with nullable unsigned integer dtypes (:issue:`31896`).
diff --git a/pandas/io/json/_normalize.py b/pandas/io/json/_normalize.py
--- a/pandas/io/json/_normalize.py
+++ b/pandas/io/json/_normalize.py
@@ -8,6 +8,7 @@
import numpy as np
from pandas._libs.writers import convert_json_to_lines
+from pandas._typing import Scalar
from pandas.util._decorators import deprecate
import pandas as pd
@@ -226,14 +227,28 @@ def _json_normalize(
Returns normalized data with columns prefixed with the given string.
"""
- def _pull_field(js: Dict[str, Any], spec: Union[List, str]) -> Iterable:
+ def _pull_field(
+ js: Dict[str, Any], spec: Union[List, str]
+ ) -> Union[Scalar, Iterable]:
+ """Internal function to pull field"""
result = js # type: ignore
if isinstance(spec, list):
for field in spec:
result = result[field]
else:
result = result[spec]
+ return result
+
+ def _pull_records(js: Dict[str, Any], spec: Union[List, str]) -> Iterable:
+ """
+        Internal function to pull field for records; similar to
+        _pull_field, but requires the result to be Iterable and raises
+        a TypeError for non-iterable, non-null values.
+ """
+ result = _pull_field(js, spec)
+ # GH 31507 GH 30145, if result is not Iterable, raise TypeError if not
+ # null, otherwise return an empty list
if not isinstance(result, Iterable):
if pd.isnull(result):
result = [] # type: ignore
@@ -242,7 +257,6 @@ def _pull_field(js: Dict[str, Any], spec: Union[List, str]) -> Iterable:
f"{js} has non iterable value {result} for path {spec}. "
"Must be iterable or null."
)
-
return result
if isinstance(data, list) and not data:
@@ -292,7 +306,7 @@ def _recursive_extract(data, path, seen_meta, level=0):
_recursive_extract(obj[path[0]], path[1:], seen_meta, level=level + 1)
else:
for obj in data:
- recs = _pull_field(obj, path[0])
+ recs = _pull_records(obj, path[0])
recs = [
nested_to_record(r, sep=sep, max_level=max_level)
if isinstance(r, dict)
| json_normalize in 1.0.0 with meta path specified - expects iterable
#### Code Sample, a copy-pastable example if possible
```python
import json
from pandas.io.json import json_normalize
the_json = """
[{"id": 99,
"data": [{"one": 1, "two": 2}]
}]
"""
print(json_normalize(json.loads(the_json),
record_path=['data'], meta=['id']))
```
#### Problem description
Through 0.25.3, this program generates a DataFrame with one row. In 1.0.0 it fails with an exception:
```
Traceback (most recent call last):
File "foo.py", line 11, in <module>
record_path=['data'], meta=['id']))
File "/home/dataczar/venvs/test/lib/python3.7/site-packages/pandas/util/_decorators.py", line 66, in wrapper
return alternative(*args, **kwargs)
File "/home/dataczar/venvs/test/lib/python3.7/site-packages/pandas/io/json/_normalize.py", line 327, in _json_normalize
_recursive_extract(data, record_path, {}, level=0)
File "/home/dataczar/venvs/test/lib/python3.7/site-packages/pandas/io/json/_normalize.py", line 314, in _recursive_extract
meta_val = _pull_field(obj, val[level:])
File "/home/dataczar/venvs/test/lib/python3.7/site-packages/pandas/io/json/_normalize.py", line 246, in _pull_field
f"{js} has non iterable value {result} for path {spec}. "
TypeError: {'id': 99, 'data': [{'one': 1, 'two': 2}]} has non iterable value 99 for path ['id']. Must be iterable or null.
```
I don't see any documentation changes that suggest a backwards-incompatible change. All my calls to `json_normalize` that don't use `meta` function as before.
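The distinction the fix draws is that meta lookups such as `['id']` may legitimately return a scalar, while record-path lookups must return something iterable. The nested-path lookup itself can be sketched standalone:

```python
def pull_field(js, spec):
    # Simplified stand-alone version of the nested-path lookup.
    result = js
    if isinstance(spec, list):
        for field in spec:
            result = result[field]
    else:
        result = result[spec]
    return result


record = {"id": 99, "data": [{"one": 1, "two": 2}]}
print(pull_field(record, ["id"]))   # scalar meta value: fine, no iterable check
print(pull_field(record, "data"))   # record path: must be iterable
```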
#### Expected Output
Through 0.25.3, the output was:
```
one two id
0 1 2 99
```
#### Output of ``pd.show_versions()``
From my virtualenv with pandas 1.0.0:
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.4.final.0
python-bits : 64
OS : Linux
OS-release : 4.2.0-042stab120.16
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_GB.UTF-8
LOCALE : en_GB.UTF-8
pandas : 1.0.0
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 19.0.3
setuptools : 40.8.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.5.0
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 7.11.1
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.5.0
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : 1.3.13
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None
</details>
From my virtualenv with 0.25.x:
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.4.final.0
python-bits : 64
OS : Linux
OS-release : 4.2.0-042stab120.16
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_GB.UTF-8
LOCALE : en_GB.UTF-8
pandas : 0.25.0
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 19.0.3
setuptools : 40.8.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.4.1
html5lib : 1.0.1
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 7.7.0
pandas_datareader: None
bs4 : 4.8.0
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.4.1
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : None
sqlalchemy : 1.3.7
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
</details>
| 2020-01-31T23:14:26Z | [] | [] |
Traceback (most recent call last):
File "foo.py", line 11, in <module>
record_path=['data'], meta=['id']))
File "/home/dataczar/venvs/test/lib/python3.7/site-packages/pandas/util/_decorators.py", line 66, in wrapper
return alternative(*args, **kwargs)
File "/home/dataczar/venvs/test/lib/python3.7/site-packages/pandas/io/json/_normalize.py", line 327, in _json_normalize
_recursive_extract(data, record_path, {}, level=0)
File "/home/dataczar/venvs/test/lib/python3.7/site-packages/pandas/io/json/_normalize.py", line 314, in _recursive_extract
meta_val = _pull_field(obj, val[level:])
File "/home/dataczar/venvs/test/lib/python3.7/site-packages/pandas/io/json/_normalize.py", line 246, in _pull_field
f"{js} has non iterable value {result} for path {spec}. "
TypeError: {'id': 99, 'data': [{'one': 1, 'two': 2}]} has non iterable value 99 for path ['id']. Must be iterable or null.
| 13,441 |
||||
pandas-dev/pandas | pandas-dev__pandas-31528 | a2721fd602e43128314d4efd056dae56a89197bf | diff --git a/doc/source/whatsnew/v1.0.1.rst b/doc/source/whatsnew/v1.0.1.rst
--- a/doc/source/whatsnew/v1.0.1.rst
+++ b/doc/source/whatsnew/v1.0.1.rst
@@ -21,6 +21,7 @@ Fixed regressions
- Fixed regression in :meth:`GroupBy.apply` if called with a function which returned a non-pandas non-scalar object (e.g. a list or numpy array) (:issue:`31441`)
- Fixed regression in :meth:`to_datetime` when parsing non-nanosecond resolution datetimes (:issue:`31491`)
- Fixed regression in :meth:`~DataFrame.to_csv` where specifying an ``na_rep`` might truncate the values written (:issue:`31447`)
+- Fixed regression in :class:`Categorical` construction with ``numpy.str_`` categories (:issue:`31499`)
- Fixed regression where setting :attr:`pd.options.display.max_colwidth` was not accepting negative integer. In addition, this behavior has been deprecated in favor of using ``None`` (:issue:`31532`)
- Fixed regression in objTOJSON.c fix return-type warning (:issue:`31463`)
- Fixed regression in :meth:`qcut` when passed a nullable integer. (:issue:`31389`)
diff --git a/pandas/_libs/hashtable_class_helper.pxi.in b/pandas/_libs/hashtable_class_helper.pxi.in
--- a/pandas/_libs/hashtable_class_helper.pxi.in
+++ b/pandas/_libs/hashtable_class_helper.pxi.in
@@ -670,7 +670,9 @@ cdef class StringHashTable(HashTable):
val = values[i]
if isinstance(val, str):
- v = get_c_string(val)
+ # GH#31499 if we have a np.str_ get_c_string wont recognize
+ # it as a str, even though isinstance does.
+ v = get_c_string(<str>val)
else:
v = get_c_string(self.na_string_sentinel)
vecs[i] = v
@@ -703,7 +705,9 @@ cdef class StringHashTable(HashTable):
val = values[i]
if isinstance(val, str):
- v = get_c_string(val)
+ # GH#31499 if we have a np.str_ get_c_string wont recognize
+ # it as a str, even though isinstance does.
+ v = get_c_string(<str>val)
else:
v = get_c_string(self.na_string_sentinel)
vecs[i] = v
| Pandas 1.0 no longer handles `numpy.str_`s as catgories
#### Code Sample
```python
import numpy as np
import pandas as pd

pd.Categorical(['1', '0', '1'], [np.str_('0'), np.str_('1')])
```
```pytb
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/angerer/Dev/Python/venvs/env-pandas-1/lib/python3.8/site-packages/pandas/core/arrays/categorical.py", line 385, in __init__
codes = _get_codes_for_values(values, dtype.categories)
File "/home/angerer/Dev/Python/venvs/env-pandas-1/lib/python3.8/site-packages/pandas/core/arrays/categorical.py", line 2576, in _get_codes_for_values
t.map_locations(cats)
File "pandas/_libs/hashtable_class_helper.pxi", line 1403, in pandas._libs.hashtable.StringHashTable.map_locations
TypeError: Expected unicode, got numpy.str_
```
#### Problem description
I know that having a list of `numpy.str_`s seems weird, but it easily happens when you use non-numpy algorithms on numpy arrays (e.g. `natsort.natsorted` in our case), or via comprehensions or so:
```py
>>> np.array(['1', '0'])[0].__class__
<class 'numpy.str_'>
>>> [type(s) for s in np.array(['1', '0'])]
[<class 'numpy.str_'>, <class 'numpy.str_'>]
```
#### Expected Output
A normal pd.Categorical
#### Pandas version
pandas 1.0
| This changed from 0.25.3?
Are you able to pin down what change caused it?
Yes, in 0.25.3 this worked.
At least, it gives a categorical with object categories with those numpy strings (but with Series constructor, we also preserve the numpy strings, and don't convert to python strings, so that seems the "expected" behaviour).
My guess is that it's related to https://github.com/pandas-dev/pandas/pull/30419 which changed `get_c_string` implementation (which is used in StringHashTable to get the c string from the string object) cc @jbrockmendel
maybe if we're lucky it will be good enough to change L706 in hashtable_class_helper.pxi.in from `v = get_c_string(val)` to `v = get_c_string(<str>val)`, but this is really a PITA because the previous line is precisely a check for `isinstance(val, str)` which is True for np.str_ objects
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/angerer/Dev/Python/venvs/env-pandas-1/lib/python3.8/site-packages/pandas/core/arrays/categorical.py", line 385, in __init__
codes = _get_codes_for_values(values, dtype.categories)
File "/home/angerer/Dev/Python/venvs/env-pandas-1/lib/python3.8/site-packages/pandas/core/arrays/categorical.py", line 2576, in _get_codes_for_values
t.map_locations(cats)
File "pandas/_libs/hashtable_class_helper.pxi", line 1403, in pandas._libs.hashtable.StringHashTable.map_locations
TypeError: Expected unicode, got numpy.str_
| 13,442 |
|||
pandas-dev/pandas | pandas-dev__pandas-31666 | c3e32d739271355757f8cdba54c0daab2bca8226 | diff --git a/doc/source/whatsnew/v1.0.1.rst b/doc/source/whatsnew/v1.0.1.rst
--- a/doc/source/whatsnew/v1.0.1.rst
+++ b/doc/source/whatsnew/v1.0.1.rst
@@ -24,6 +24,7 @@ Fixed regressions
- Fixed regression in :meth:`to_datetime` when parsing non-nanosecond resolution datetimes (:issue:`31491`)
- Fixed regression in :meth:`~DataFrame.to_csv` where specifying an ``na_rep`` might truncate the values written (:issue:`31447`)
- Fixed regression in :class:`Categorical` construction with ``numpy.str_`` categories (:issue:`31499`)
+- Fixed regression in :meth:`DataFrame.loc` and :meth:`DataFrame.iloc` when selecting a row containing a single ``datetime64`` or ``timedelta64`` column (:issue:`31649`)
- Fixed regression where setting :attr:`pd.options.display.max_colwidth` was not accepting negative integer. In addition, this behavior has been deprecated in favor of using ``None`` (:issue:`31532`)
- Fixed regression in objTOJSON.c fix return-type warning (:issue:`31463`)
- Fixed regression in :meth:`qcut` when passed a nullable integer. (:issue:`31389`)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -7,7 +7,7 @@
import numpy as np
-from pandas._libs import NaT, algos as libalgos, lib, tslib, writers
+from pandas._libs import NaT, Timestamp, algos as libalgos, lib, tslib, writers
from pandas._libs.index import convert_scalar
import pandas._libs.internals as libinternals
from pandas._libs.tslibs import Timedelta, conversion
@@ -2158,6 +2158,16 @@ def internal_values(self):
# Override to return DatetimeArray and TimedeltaArray
return self.array_values()
+ def iget(self, key):
+ # GH#31649 we need to wrap scalars in Timestamp/Timedelta
+ # TODO: this can be removed if we ever have 2D EA
+ result = super().iget(key)
+ if isinstance(result, np.datetime64):
+ result = Timestamp(result)
+ elif isinstance(result, np.timedelta64):
+ result = Timedelta(result)
+ return result
+
class DatetimeBlock(DatetimeLikeBlockMixin, Block):
__slots__ = ()
| to_datetime returning numpy.datetime64
#### Code Sample, a copy-pastable example if possible
This code:
```python
>>> df = pd.DataFrame({'date': ['Aug2020', 'November 2020']})
>>> df['parsed'] = df['date'].apply(pd.to_datetime)
>>> end = df.loc[df['parsed'].idxmax()]
>>> end['parsed'].replace(day=2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'numpy.datetime64' object has no attribute 'replace'
```
worked in Pandas 0.25.3, but raises since 1.0.0.
I think there might be an issue with unboxing values when there are mixed types in the dataframe:
```python
>>> df = pd.DataFrame({'date': ['Aug2020', 'November 2020']})
>>> new = (
... df
... .assign(
... parsed=lambda x: x['date'].apply(pd.to_datetime),
... parsed2 = lambda x: x['date'].apply(pd.to_datetime)
... )
... )
>>> new['parsed'].iloc[0]
Timestamp('2020-08-01 00:00:00')
>>> new.iloc[0]['parsed']
numpy.datetime64('2020-08-01T00:00:00.000000000') # unboxed type
>>> new2 = new.drop(columns=['date'])
>>> new2['parsed'].iloc[0]
Timestamp('2020-08-01 00:00:00')
>>> new2.iloc[0]['parsed']
Timestamp('2020-08-01 00:00:00') # boxed type now that we've dropped the string column
```
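Based on the behavior shown above, selecting the column before the row sidesteps the unboxed scalar until the regression is fixed, since column access goes through the datetime block. A hedged workaround sketch, reusing the frame from the example:

```python
import pandas as pd

df = pd.DataFrame({'date': ['Aug2020', 'November 2020']})
df['parsed'] = df['date'].apply(pd.to_datetime)

# Column-first access returns a properly boxed Timestamp even in a
# mixed-dtype frame, unlike row-first access (df.loc[row]['parsed']).
ts = df['parsed'].loc[df['parsed'].idxmax()]
shifted = ts.replace(day=2)
```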
#### Problem description
to_datetime can "sometimes" result in a np.datetime64 return type.
np.datetime64 is not a valid return type for to_datetime (https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html); it should always be a datetime-like.
#### Expected Output
As in previous versions of Pandas:
```python
>>> df = pd.DataFrame({'date': ['Aug2020', 'November 2020']})
>>> df['parsed'] = df['date'].apply(pd.to_datetime)
>>> end = df.loc[df['parsed'].idxmax()]
>>> end['parsed'].replace(day=2)
Timestamp('2020-11-02 00:00:00')
```
#### Output of ``pd.show_versions()``
<details>
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.6.final.0
python-bits : 64
OS : Windows
OS-release : 10
machine : AMD64
processor : Intel64 Family 6 Model 94 Stepping 3, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en_GB.UTF-8
LOCALE : None.None
pandas : 1.0.0
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 19.3.1
setuptools : 41.2.0
Cython : 0.29.14
pytest : 5.3.5
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.5.0
html5lib : None
pymysql : None
psycopg2 : 2.8.4 (dt dec pq3 ext lo64)
jinja2 : 2.10.3
IPython : 7.11.1
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.5.0
matplotlib : 3.1.3
numexpr : None
odfpy : None
openpyxl : 3.0.0
pandas_gbq : None
pyarrow : None
pytables : None
pytest : 5.3.5
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : 1.2.0
xlwt : None
xlsxwriter : None
numba : None
</details>
| A bit simpler example
```python
In [23]: df = pd.DataFrame({"A": [1, 2], "B": pd.date_range('2000', periods=2)})
In [24]: type(df.loc[0].values[1])
Out[24]: numpy.datetime64
```
@jbrockmendel this sounds similar to https://github.com/pandas-dev/pandas/pull/31630, but I don't think that quite fixes it.
Yah, #31630 should lead to us boxing in _fewer_ occasions, not more. I'll take a look at this.
We stopped using Block._try_coerce_result which was previously responsible for handling this in `Block.iget`. The quick fix is to patch DatetimeBlock.iget to box if it is returning a scalar. The long term fix is backing DatetimeBlock by 2D DTA. | 2020-02-04T20:26:39Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'numpy.datetime64' object has no attribute 'replace'
| 13,460 |
|||
pandas-dev/pandas | pandas-dev__pandas-31794 | 50ebb24880d9d516a6dacf9a28117289fb9eae97 | diff --git a/doc/source/whatsnew/v1.0.2.rst b/doc/source/whatsnew/v1.0.2.rst
--- a/doc/source/whatsnew/v1.0.2.rst
+++ b/doc/source/whatsnew/v1.0.2.rst
@@ -28,6 +28,10 @@ Fixed regressions
Bug fixes
~~~~~~~~~
+**Categorical**
+
+- Fixed bug where :meth:`Categorical.from_codes` improperly raised a ``ValueError`` when passed nullable integer codes. (:issue:`31779`)
+
**I/O**
- Using ``pd.NA`` with :meth:`DataFrame.to_json` now correctly outputs a null value instead of an empty object (:issue:`31615`)
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -644,7 +644,13 @@ def from_codes(cls, codes, categories=None, ordered=None, dtype=None):
)
raise ValueError(msg)
- codes = np.asarray(codes) # #21767
+ if is_extension_array_dtype(codes) and is_integer_dtype(codes):
+ # Avoid the implicit conversion of Int to object
+ if isna(codes).any():
+ raise ValueError("codes cannot contain NA values")
+ codes = codes.to_numpy(dtype=np.int64)
+ else:
+ codes = np.asarray(codes)
if len(codes) and not is_integer_dtype(codes):
raise ValueError("codes need to be array-like integers")
| Categorical.from_codes fails for the (new nullable) Int64 dtype
#### Code Sample, a copy-pastable example if possible
```python
>>> import pandas as pd
>>> codes = pd.Series([1, 0], dtype="Int64")
>>> pd.Categorical.from_codes(codes, categories=["foo", "bar"])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../lib/python3.7/site-packages/pandas/core/arrays/categorical.py", line 649, in from_codes
raise ValueError("codes need to be array-like integers")
ValueError: codes need to be array-like integers
```
#### Problem description
`Categorical.from_codes` works with a Series of the NumPy `"int64"` dtype.
```python
>>> codes = pd.Series([1, 0])
>>> pd.Categorical.from_codes(codes, categories=["foo", "bar"])
[bar, foo]
Categories (2, object): [foo, bar]
```
I would expect that it will work with the new Pandas `"Int64"` dtype.
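In the meantime, a minimal workaround sketch is to materialize the nullable codes as a plain NumPy integer array before calling `from_codes` (assuming the codes contain no missing values, which `from_codes` would reject anyway):

```python
import pandas as pd

codes = pd.Series([1, 0], dtype="Int64")
if codes.isna().any():
    raise ValueError("codes cannot contain NA values")
# Convert the nullable Int64 codes to a plain numpy int64 array
cat = pd.Categorical.from_codes(codes.to_numpy(dtype="int64"),
                                categories=["foo", "bar"])
```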
#### Expected Output
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.3.final.0
python-bits : 64
OS : Darwin
OS-release : 18.7.0
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.0.1
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 45.1.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None
</details>
| 2020-02-07T21:37:33Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../lib/python3.7/site-packages/pandas/core/arrays/categorical.py", line 649, in from_codes
raise ValueError("codes need to be array-like integers")
ValueError: codes need to be array-like integers
| 13,473 |
||||
pandas-dev/pandas | pandas-dev__pandas-31910 | 361a938dd82fdb5fdc1f7b1fff97de39326421e7 | diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -571,6 +571,8 @@ def array_equivalent_object(left: object[:], right: object[:]) -> bool:
if PyArray_Check(x) and PyArray_Check(y):
if not array_equivalent_object(x, y):
return False
+ elif (x is C_NA) ^ (y is C_NA):
+ return False
elif not (PyObject_RichCompareBool(x, y, Py_EQ) or
(x is None or is_nan(x)) and (y is None or is_nan(y))):
return False
| assert_numpy_array_equal raises TypeError for pd.NA
Discovered in #31799
```python
>>> import numpy as np
>>> import pandas as pd
>>> import pandas._testing as tm
>>> arr1 = np.array([True, False])
>>> arr2 = np.array([True, pd.NA])
>>> tm.assert_numpy_array_equal(arr1, arr2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/williamayd/clones/pandas/pandas/_testing.py", line 1001, in assert_numpy_array_equal
if not array_equivalent(left, right, strict_nan=strict_nan):
File "/Users/williamayd/clones/pandas/pandas/core/dtypes/missing.py", line 447, in array_equivalent
ensure_object(left.ravel()), ensure_object(right.ravel())
File "pandas/_libs/lib.pyx", line 583, in pandas._libs.lib.array_equivalent_object
raise
File "pandas/_libs/lib.pyx", line 574, in pandas._libs.lib.array_equivalent_object
elif not (PyObject_RichCompareBool(x, y, Py_EQ) or
File "pandas/_libs/missing.pyx", line 360, in pandas._libs.missing.NAType.__bool__
raise TypeError("boolean value of NA is ambiguous")
TypeError: boolean value of NA is ambiguous
```
Should yield an AssertionError instead of a TypeError
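The eventual fix amounts to an NA-aware scalar comparison: treat two values as unequal when exactly one of them is `pd.NA`, before ever evaluating a rich comparison whose truth value would be ambiguous. A minimal sketch of that check (`na_safe_equal` is a made-up helper name, not a pandas API):

```python
import pandas as pd

def na_safe_equal(x, y):
    """Compare two scalars without tripping over pd.NA's ambiguous bool."""
    if (x is pd.NA) ^ (y is pd.NA):
        return False  # exactly one side is NA -> unequal
    if x is pd.NA:    # both sides are NA
        return True
    return bool(x == y)
```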
| 2020-02-12T03:29:38Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/williamayd/clones/pandas/pandas/_testing.py", line 1001, in assert_numpy_array_equal
if not array_equivalent(left, right, strict_nan=strict_nan):
File "/Users/williamayd/clones/pandas/pandas/core/dtypes/missing.py", line 447, in array_equivalent
ensure_object(left.ravel()), ensure_object(right.ravel())
File "pandas/_libs/lib.pyx", line 583, in pandas._libs.lib.array_equivalent_object
raise
File "pandas/_libs/lib.pyx", line 574, in pandas._libs.lib.array_equivalent_object
elif not (PyObject_RichCompareBool(x, y, Py_EQ) or
File "pandas/_libs/missing.pyx", line 360, in pandas._libs.missing.NAType.__bool__
raise TypeError("boolean value of NA is ambiguous")
TypeError: boolean value of NA is ambiguous
| 13,487 |
||||
pandas-dev/pandas | pandas-dev__pandas-32090 | 4018550c796f9b23369267565b8ab8c87b9d14ad | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -327,6 +327,7 @@ Reshaping
- Bug in :func:`crosstab` when inputs are two Series and have tuple names, the output will keep dummy MultiIndex as columns. (:issue:`18321`)
- :meth:`DataFrame.pivot` can now take lists for ``index`` and ``columns`` arguments (:issue:`21425`)
- Bug in :func:`concat` where the resulting indices are not copied when ``copy=True`` (:issue:`29879`)
+- :meth:`Series.append` will now raise a ``TypeError`` when passed a DataFrame or a sequence containing Dataframe (:issue:`31413`)
- :meth:`DataFrame.replace` and :meth:`Series.replace` will raise a ``TypeError`` if ``to_replace`` is not an expected type. Previously the ``replace`` would fail silently (:issue:`18634`)
@@ -349,7 +350,6 @@ Other
instead of ``TypeError: Can only append a Series if ignore_index=True or if the Series has a name`` (:issue:`30871`)
- Set operations on an object-dtype :class:`Index` now always return object-dtype results (:issue:`31401`)
- Bug in :meth:`AbstractHolidayCalendar.holidays` when no rules were defined (:issue:`31415`)
--
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/series.py b/pandas/core/series.py
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2535,6 +2535,12 @@ def append(self, to_append, ignore_index=False, verify_integrity=False):
to_concat.extend(to_append)
else:
to_concat = [self, to_append]
+ if any(isinstance(x, (ABCDataFrame,)) for x in to_concat[1:]):
+ msg = (
+ f"to_append should be a Series or list/tuple of Series, "
+ f"got DataFrame"
+ )
+ raise TypeError(msg)
return concat(
to_concat, ignore_index=ignore_index, verify_integrity=verify_integrity
)
| Series.append(DataFrame) should throw TypeError
#### Code Sample, a copy-pastable example if possible
```python
>>> import pandas as pd
>>> pd.__version__
'1.0.0rc0+233.gec0996c67'
>>> df = pd.DataFrame({'A':[1,2]})
>>> df.A.append(df)
0 A
0 1.0 NaN
1 2.0 NaN
0 NaN 1.0
1 NaN 2.0
```
#### Problem description
The behavior of `append` is the same as in 0.25.3 when a DataFrame is appended to a Series, but it should actually throw a TypeError when passed a DataFrame.
We clearly document that the elements passed to append should be Series or list/tuple of Series.
Also according to [this](https://github.com/pandas-dev/pandas/issues/30975), a TypeError seems appropriate. The change can be implemented for 1.0.1 or 1.1 after [this](https://github.com/pandas-dev/pandas/pull/31036#issuecomment-577248825) discussion.
#### Expected Output
```python
>>> df = pd.DataFrame({'A':[1, 2]})
>>> df.A.append([df])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\Work\Git projects\pandas\pandas\core\series.py", line 2572, in append
raise TypeError(msg)
TypeError: to_append should be a Series or list/tuple of Series, got DataFrame
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : ec0996c6751326eed17a0bb456fe1c550689a618
python : 3.7.6.final.0
python-bits : 64
OS : Windows
OS-release : 10
machine : AMD64
processor : Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.None
pandas : 1.0.0rc0+233.gec0996c67.dirty
numpy : 1.17.0
pytz : 2018.9
dateutil : 2.8.0
pip : 19.3.1
setuptools : 44.0.0.post20200106
Cython : 0.29.14
pytest : 4.3.1
hypothesis : 5.1.5
sphinx : 1.8.5
blosc : None
feather : 0.4.0
xlsxwriter : 1.1.5
lxml.etree : 4.3.2
html5lib : 1.0.1
pymysql : 0.9.3
psycopg2 : None
jinja2 : 2.10
IPython : 7.4.0
pandas_datareader: None
bs4 : 4.6.3
bottleneck : 1.2.1
fastparquet : None
gcsfs : None
lxml.etree : 4.3.2
matplotlib : 3.0.3
numexpr : 2.6.9
odfpy : None
openpyxl : 2.6.1
pandas_gbq : None
pyarrow : 0.15.1
pytables : None
pytest : 4.3.1
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : 1.3.1
tables : 3.5.1
tabulate : 0.8.6
xarray : None
xlrd : 1.2.0
xlwt : 1.3.0
xlsxwriter : 1.1.5
numba : 0.43.1
</details>
| Thanks @hvardhan20 - are you interested in submitting a pull request?
Yes @MarcoGorelli | 2020-02-19T02:18:53Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\Work\Git projects\pandas\pandas\core\series.py", line 2572, in append
raise TypeError(msg)
TypeError: to_append should be a Series or list/tuple of Series, got DataFrame
| 13,502 |
|||
pandas-dev/pandas | pandas-dev__pandas-32107 | 428791c5e01453ff6979b43d37c39c7315c0aaa2 | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -525,6 +525,7 @@ Indexing
- Bug in :class:`Index` constructor where an unhelpful error message was raised for ``numpy`` scalars (:issue:`33017`)
- Bug in :meth:`DataFrame.lookup` incorrectly raising an ``AttributeError`` when ``frame.index`` or ``frame.columns`` is not unique; this will now raise a ``ValueError`` with a helpful error message (:issue:`33041`)
- Bug in :meth:`DataFrame.iloc.__setitem__` creating a new array instead of overwriting ``Categorical`` values in-place (:issue:`32831`)
+- Bug in :class:`Interval` where a :class:`Timedelta` could not be added or subtracted from a :class:`Timestamp` interval (:issue:`32023`)
- Bug in :meth:`DataFrame.copy` _item_cache not invalidated after copy causes post-copy value updates to not be reflected (:issue:`31784`)
- Bug in `Series.__getitem__` with an integer key and a :class:`MultiIndex` with leading integer level failing to raise ``KeyError`` if the key is not present in the first level (:issue:`33355`)
- Bug in :meth:`DataFrame.iloc` when slicing a single column-:class:`DataFrame`` with ``ExtensionDtype`` (e.g. ``df.iloc[:, :1]``) returning an invalid result (:issue:`32957`)
diff --git a/pandas/_libs/interval.pyx b/pandas/_libs/interval.pyx
--- a/pandas/_libs/interval.pyx
+++ b/pandas/_libs/interval.pyx
@@ -1,6 +1,9 @@
import numbers
from operator import le, lt
+from cpython.datetime cimport PyDelta_Check, PyDateTime_IMPORT
+PyDateTime_IMPORT
+
from cpython.object cimport (
Py_EQ,
Py_GE,
@@ -11,7 +14,6 @@ from cpython.object cimport (
PyObject_RichCompare,
)
-
import cython
from cython import Py_ssize_t
@@ -34,7 +36,11 @@ cnp.import_array()
cimport pandas._libs.util as util
from pandas._libs.hashtable cimport Int64Vector
-from pandas._libs.tslibs.util cimport is_integer_object, is_float_object
+from pandas._libs.tslibs.util cimport (
+ is_integer_object,
+ is_float_object,
+ is_timedelta64_object,
+)
from pandas._libs.tslibs import Timestamp
from pandas._libs.tslibs.timedeltas import Timedelta
@@ -294,6 +300,7 @@ cdef class Interval(IntervalMixin):
True
"""
_typ = "interval"
+ __array_priority__ = 1000
cdef readonly object left
"""
@@ -398,14 +405,29 @@ cdef class Interval(IntervalMixin):
return f'{start_symbol}{left}, {right}{end_symbol}'
def __add__(self, y):
- if isinstance(y, numbers.Number):
+ if (
+ isinstance(y, numbers.Number)
+ or PyDelta_Check(y)
+ or is_timedelta64_object(y)
+ ):
return Interval(self.left + y, self.right + y, closed=self.closed)
- elif isinstance(y, Interval) and isinstance(self, numbers.Number):
+ elif (
+ isinstance(y, Interval)
+ and (
+ isinstance(self, numbers.Number)
+ or PyDelta_Check(self)
+ or is_timedelta64_object(self)
+ )
+ ):
return Interval(y.left + self, y.right + self, closed=y.closed)
return NotImplemented
def __sub__(self, y):
- if isinstance(y, numbers.Number):
+ if (
+ isinstance(y, numbers.Number)
+ or PyDelta_Check(y)
+ or is_timedelta64_object(y)
+ ):
return Interval(self.left - y, self.right - y, closed=self.closed)
return NotImplemented
| Unable to add Timedelta to a Timestamp Interval
#### Code Sample, a copy-pastable example if possible
```python
>>> import pandas as pd
>>> year_2017 = pd.Interval(pd.Timestamp('2017-01-01 00:00:00'),
pd.Timestamp('2018-01-01 00:00:00'))
>>> year_2017
Interval('2017-01-01', '2018-01-01', closed='right')
>>> year_2017 + pd.Timedelta(days=7)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'pandas._libs.interval.Interval' and 'Timedelta'
```
#### Problem description
From the [pd.Interval documentation](https://pandas.pydata.org/pandas-docs/version/1.0.1/reference/api/pandas.Interval.html):
> You can operate with + and * over an Interval and the operation is applied to each of its bounds, so the result depends on the type of the bound elements
However, we can apply the + operation manually to each of its bounds:
```python
>>> (year_2017.left + pd.Timedelta(days=7), year_2017.right + pd.Timedelta(days=7) )
(Timestamp('2017-01-08 00:00:00'), Timestamp('2018-01-08 00:00:00'))
```
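Until `Interval.__add__` accepts timedeltas, the same bound-wise operation can be wrapped in a small helper. A sketch (`shift_interval` is a made-up name, not a pandas API):

```python
import pandas as pd

def shift_interval(iv, delta):
    # Apply the shift to each bound, preserving which side is closed
    return pd.Interval(iv.left + delta, iv.right + delta, closed=iv.closed)

year_2017 = pd.Interval(pd.Timestamp('2017-01-01'), pd.Timestamp('2018-01-01'))
shifted = shift_interval(year_2017, pd.Timedelta(days=7))
```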
#### Expected Output
The same as the manual operation on each bounds:
```python
>>> pd.Interval(year_2017.left + pd.Timedelta(days=7),
year_2017.right + pd.Timedelta(days=7) )
Interval('2017-01-08', '2018-01-08', closed='right')
```
#### Output of ``pd.show_versions()``
pandas 1.0.1
<details>
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.6.final.0
python-bits : 64
OS : Linux
OS-release : 4.4.0-146-generic
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : fr_FR.UTF-8
LOCALE : fr_FR.UTF-8
pandas : 1.0.1
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.0
pip : 19.3.1
setuptools : 41.4.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.5.0
html5lib : None
pymysql : None
psycopg2 : 2.8.4 (dt dec pq3 ext lo64)
jinja2 : 2.10.3
IPython : 7.9.0
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.5.0
matplotlib : 3.1.2
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : 1.3.1
sqlalchemy : 1.3.10
tables : None
tabulate : None
xarray : 0.15.0
xlrd : None
xlwt : None
xlsxwriter : None
numba : 0.46.0
</details>
| 2020-02-19T14:53:31Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'pandas._libs.interval.Interval' and 'Timedelta'
| 13,503 |
||||
pandas-dev/pandas | pandas-dev__pandas-32152 | cb4f739c1363833044c4e794c431f86288f5bcdd | Pandas 1.0.1 fails get_loc on timeindex (0.25.3 works)
```python
import pandas as pd

findtime = pd.Timestamp('2019-12-12 10:19:25', tz='US/Eastern')
start = pd.Timestamp('2019-12-12 0:0:0', tz='US/Eastern')
end = pd.Timestamp('2019-12-13 0:0:0', tz='US/Eastern')
testindex = pd.date_range(start, end, freq='5s')
testindex.get_loc(findtime, method='nearest')
```
#### Problem description
With pandas 1.0.1, python 3.8.1
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Program Files\Python38\lib\site-packages\pandas\core\indexes\datetimes.py", line 699, in get_loc
return Index.get_loc(self, key, method, tolerance)
File "C:\Program Files\Python38\lib\site-packages\pandas\core\indexes\base.py", line 2649, in get_loc
indexer = self.get_indexer([key], method=method, tolerance=tolerance)
File "C:\Program Files\Python38\lib\site-packages\pandas\core\indexes\base.py", line 2740, in get_indexer
indexer = self._get_nearest_indexer(target, limit, tolerance)
File "C:\Program Files\Python38\lib\site-packages\pandas\core\indexes\base.py", line 2821, in _get_nearest_indexer
left_distances = abs(self.values[left_indexer] - target)
numpy.core._exceptions.UFuncTypeError: ufunc 'subtract' cannot use operands with types dtype('<M8[ns]') and dtype('O')
```
With 0.25.3, this returns index location 7433 as expected.
I've tested this on Windows 10 1909, and Ubuntu WSL, and get the same error on both.
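On the affected release, the nearest position can be computed by hand from the timedelta distances; a workaround sketch using the variables from the report above:

```python
import numpy as np
import pandas as pd

findtime = pd.Timestamp('2019-12-12 10:19:25', tz='US/Eastern')
start = pd.Timestamp('2019-12-12 0:0:0', tz='US/Eastern')
end = pd.Timestamp('2019-12-13 0:0:0', tz='US/Eastern')
testindex = pd.date_range(start, end, freq='5s')

# A tz-aware DatetimeIndex minus a tz-aware Timestamp gives a TimedeltaIndex;
# the position with the smallest absolute distance is the nearest match.
deltas = (testindex - findtime).to_numpy()
pos = int(np.argmin(np.abs(deltas)))
```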
#### Output of ``pd.show_versions()``
```
INSTALLED VERSIONS
------------------
commit : None
python : 3.8.1.final.0
python-bits : 64
OS : Windows
OS-release : 10
machine : AMD64
processor : AMD64 Family 23 Model 8 Stepping 2, AuthenticAMD
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_Canada.1252
pandas : 1.0.1
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 41.2.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.5.0
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.1
IPython : 7.12.0
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.5.0
matplotlib : 3.1.3
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None
```
| @AndrewMoscoe Thanks for the report!
This is already working again on master. But so we should verify it if also works on the 1.0.x branch (to become 1.0.2), or if there is something we need to backport.
@jorisvandenbossche This bug was noticed in https://github.com/pandas-dev/pandas/issues/26683#issuecomment-580515579 and fixed in #31511.
so we need to backport that PR. and move the whatsnew. any particular order? Also, do we need to add another test for this?
The order is not that important. I think the easiest is to backport the PR (can first try asking MeeseeksDev, in case there is no conflict), and update the branch with moving the whatsnew. And then a separate PR moving the whatsnew just in master.
I would also still add a test, since the case seems a bit different.
xref #31964
OK will
1. backport #31511 with whatsnew moved.
2. PR to mater with whatsnew move
3. PR to master with this code sample as test case (no need to backport?) to close this issue
4. PR to master with code sample from #31964 as test case to close that issue
Sounds good, thanks!
> (can first try asking MeeseeksDev, in case there is no conflict)
had already cherry-picked locally to test. so just created backport manually. | 2020-02-21T14:32:40Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Program Files\Python38\lib\site-packages\pandas\core\indexes\datetimes.py", line 699, in get_loc
return Index.get_loc(self, key, method, tolerance)
File "C:\Program Files\Python38\lib\site-packages\pandas\core\indexes\base.py", line 2649, in get_loc
indexer = self.get_indexer([key], method=method, tolerance=tolerance)
File "C:\Program Files\Python38\lib\site-packages\pandas\core\indexes\base.py", line 2740, in get_indexer
indexer = self._get_nearest_indexer(target, limit, tolerance)
File "C:\Program Files\Python38\lib\site-packages\pandas\core\indexes\base.py", line 2821, in _get_nearest_indexer
left_distances = abs(self.values[left_indexer] - target)
numpy.core._exceptions.UFuncTypeError: ufunc 'subtract' cannot use operands with types dtype('<M8[ns]') and dtype('O')
| 13,510 |
||||
pandas-dev/pandas | pandas-dev__pandas-32155 | 7d37ab85c9df9561653c659f29c5d7fca1454c67 | Reindex on nearest datetime gives UFuncTypeError - subtract with dtype('<M8[ns]') and dtype('O')
#### Code Sample
```python
import pandas as pd
print(pd.__version__)
i1 = pd.DatetimeIndex(['2016-06-26 14:27:26+00:00'])
i2 = pd.DatetimeIndex(['2016-07-04 14:00:59+00:00'])
f2 = pd.DataFrame(index=i2)
f2.reindex(i1, method='nearest')
```
#### Problem description
Running the above code gives:
```
Python 3.7.4 (default, Jul 27 2019, 21:25:02)
[GCC 7.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> print(pd.__version__)
1.0.1
>>> i1 = pd.DatetimeIndex(['2016-06-26 14:27:26+00:00'])
>>> i2 = pd.DatetimeIndex(['2016-07-04 14:00:59+00:00'])
>>> f2 = pd.DataFrame(index=i2)
>>> f2.reindex(i1, method='nearest')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/andrew/project/ch2/choochoo/py/env/lib/python3.7/site-packages/pandas/util/_decorators.py", line 227, in wrapper
return func(*args, **kwargs)
File "/home/andrew/project/ch2/choochoo/py/env/lib/python3.7/site-packages/pandas/core/frame.py", line 3856, in reindex
return self._ensure_type(super().reindex(**kwargs))
File "/home/andrew/project/ch2/choochoo/py/env/lib/python3.7/site-packages/pandas/core/generic.py", line 4544, in reindex
axes, level, limit, tolerance, method, fill_value, copy
File "/home/andrew/project/ch2/choochoo/py/env/lib/python3.7/site-packages/pandas/core/frame.py", line 3744, in _reindex_axes
index, method, copy, level, fill_value, limit, tolerance
File "/home/andrew/project/ch2/choochoo/py/env/lib/python3.7/site-packages/pandas/core/frame.py", line 3760, in _reindex_index
new_index, method=method, level=level, limit=limit, tolerance=tolerance
File "/home/andrew/project/ch2/choochoo/py/env/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 3145, in reindex
target, method=method, limit=limit, tolerance=tolerance
File "/home/andrew/project/ch2/choochoo/py/env/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 2740, in get_indexer
indexer = self._get_nearest_indexer(target, limit, tolerance)
File "/home/andrew/project/ch2/choochoo/py/env/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 2821, in _get_nearest_indexer
left_distances = abs(self.values[left_indexer] - target)
numpy.core._exceptions.UFuncTypeError: ufunc 'subtract' cannot use operands with types dtype('<M8[ns]') and dtype('O')
```
This is a problem because it occurs in a larger dataset that I need to reindex. It used to work fine with earlier versions of Pandas (although I do not know which, sorry).
#### Expected Output
Expected output would be something like
```
>>> f2.reindex(i1)
Empty DataFrame
Columns: []
Index: [2016-06-26 14:27:26+00:00]
```
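As a workaround on the affected versions, the nearest-row lookup can be done manually; a sketch using the indexes from the report (the `'v'` column is added here only so the frame has some data):

```python
import numpy as np
import pandas as pd

i1 = pd.DatetimeIndex(['2016-06-26 14:27:26+00:00'])
i2 = pd.DatetimeIndex(['2016-07-04 14:00:59+00:00'])
f2 = pd.DataFrame({'v': [1]}, index=i2)

# For each target timestamp, take the row whose index label is closest in time
pos = [int(np.argmin(np.abs((i2 - ts).to_numpy()))) for ts in i1]
result = f2.iloc[pos].copy()
result.index = i1
```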
#### Output of ``pd.show_versions()``
<details>
```
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.4.final.0
python-bits : 64
OS : Linux
OS-release : 4.12.14-lp151.28.36-default
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_GB.UTF-8
LOCALE : en_GB.UTF-8
pandas : 1.0.1
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 40.8.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.1
IPython : 7.12.0
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : 3.1.3
numexpr : None
odfpy : None
openpyxl : 3.0.3
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : 1.3.13
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None
```
</details>
| I just checked and the same error was present in 1.0.0 but not in 0.25.3 (which instead gives a warning about TZ related issues that I think is not directly relevant here?).
This doesn't raise an error anymore on master. But didn't check the 1.0.x branch yet. So we should check that, or otherwise find the commit on master that fixed it (and backport that), and add a test for it.
Is this fixed by https://github.com/pandas-dev/pandas/pull/31511? That wasn't backported to 1.0.x.
yes backporting #31511 will fix this. do we need to add a test? | 2020-02-21T14:56:29Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/andrew/project/ch2/choochoo/py/env/lib/python3.7/site-packages/pandas/util/_decorators.py", line 227, in wrapper
return func(*args, **kwargs)
File "/home/andrew/project/ch2/choochoo/py/env/lib/python3.7/site-packages/pandas/core/frame.py", line 3856, in reindex
return self._ensure_type(super().reindex(**kwargs))
File "/home/andrew/project/ch2/choochoo/py/env/lib/python3.7/site-packages/pandas/core/generic.py", line 4544, in reindex
axes, level, limit, tolerance, method, fill_value, copy
File "/home/andrew/project/ch2/choochoo/py/env/lib/python3.7/site-packages/pandas/core/frame.py", line 3744, in _reindex_axes
index, method, copy, level, fill_value, limit, tolerance
File "/home/andrew/project/ch2/choochoo/py/env/lib/python3.7/site-packages/pandas/core/frame.py", line 3760, in _reindex_index
new_index, method=method, level=level, limit=limit, tolerance=tolerance
File "/home/andrew/project/ch2/choochoo/py/env/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 3145, in reindex
target, method=method, limit=limit, tolerance=tolerance
File "/home/andrew/project/ch2/choochoo/py/env/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 2740, in get_indexer
indexer = self._get_nearest_indexer(target, limit, tolerance)
File "/home/andrew/project/ch2/choochoo/py/env/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 2821, in _get_nearest_indexer
left_distances = abs(self.values[left_indexer] - target)
numpy.core._exceptions.UFuncTypeError: ufunc 'subtract' cannot use operands with types dtype('<M8[ns]') and dtype('O')
| 13,511 |
||||
pandas-dev/pandas | pandas-dev__pandas-32241 | 54b400196c4663def2f51720d52fc3408a65e32a | diff --git a/ci/setup_env.sh b/ci/setup_env.sh
--- a/ci/setup_env.sh
+++ b/ci/setup_env.sh
@@ -50,7 +50,7 @@ echo
echo "update conda"
conda config --set ssl_verify false
conda config --set quiet true --set always_yes true --set changeps1 false
-conda install pip # create conda to create a historical artifact for pip & setuptools
+conda install pip conda # create conda to create a historical artifact for pip & setuptools
conda update -n base conda
echo "conda info -a"
| 32 bit CI Failures
Haven't had time to debug but seeing this in the logs when setting up the 32 bit linux environment:
```sh
Traceback (most recent call last):
File "/home/vsts/miniconda3/bin/conda", line 7, in <module>
from conda.cli import main
ImportError: No module named conda.cli
```
| just disable the build for now (and leave this issue open) | 2020-02-25T16:57:42Z | [] | [] |
Traceback (most recent call last):
File "/home/vsts/miniconda3/bin/conda", line 7, in <module>
from conda.cli import main
ImportError: No module named conda.cli
| 13,518 |
|||
pandas-dev/pandas | pandas-dev__pandas-32320 | 9e7cb7c102655d0ba92d2561c178da9254d5cef5 | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -305,6 +305,7 @@ I/O
timestamps with ``version="2.0"`` (:issue:`31652`).
- Bug in :meth:`read_csv` was raising `TypeError` when `sep=None` was used in combination with `comment` keyword (:issue:`31396`)
- Bug in :class:`HDFStore` that caused it to set to ``int64`` the dtype of a ``datetime64`` column when reading a DataFrame in Python 3 from fixed format written in Python 2 (:issue:`31750`)
+- :func:`read_csv` will raise a ``ValueError`` when the column names passed in `parse_dates` are missing in the Dataframe (:issue:`31251`)
- Bug in :meth:`read_excel` where a UTF-8 string with a high surrogate would cause a segmentation violation (:issue:`23809`)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -6,10 +6,11 @@
import csv
import datetime
from io import StringIO, TextIOWrapper
+import itertools
import re
import sys
from textwrap import fill
-from typing import Any, Dict, Set
+from typing import Any, Dict, Iterable, List, Set
import warnings
import numpy as np
@@ -34,6 +35,7 @@
ensure_str,
is_bool_dtype,
is_categorical_dtype,
+ is_dict_like,
is_dtype_equal,
is_extension_array_dtype,
is_file_like,
@@ -1421,6 +1423,54 @@ def __init__(self, kwds):
# keep references to file handles opened by the parser itself
self.handles = []
+ def _validate_parse_dates_presence(self, columns: List[str]) -> None:
+ """
+ Check if parse_dates are in columns.
+
+ If user has provided names for parse_dates, check if those columns
+ are available.
+
+ Parameters
+ ----------
+ columns : list
+ List of names of the dataframe.
+
+ Raises
+ ------
+ ValueError
+ If column to parse_date is not in dataframe.
+
+ """
+ cols_needed: Iterable
+ if is_dict_like(self.parse_dates):
+ cols_needed = itertools.chain(*self.parse_dates.values())
+ elif is_list_like(self.parse_dates):
+ # a column in parse_dates could be represented
+ # ColReference = Union[int, str]
+ # DateGroups = List[ColReference]
+ # ParseDates = Union[DateGroups, List[DateGroups],
+ # Dict[ColReference, DateGroups]]
+ cols_needed = itertools.chain.from_iterable(
+ col if is_list_like(col) else [col] for col in self.parse_dates
+ )
+ else:
+ cols_needed = []
+
+ # get only columns that are references using names (str), not by index
+ missing_cols = ", ".join(
+ sorted(
+ {
+ col
+ for col in cols_needed
+ if isinstance(col, str) and col not in columns
+ }
+ )
+ )
+ if missing_cols:
+ raise ValueError(
+ f"Missing column provided to 'parse_dates': '{missing_cols}'"
+ )
+
def close(self):
for f in self.handles:
f.close()
@@ -1940,6 +1990,7 @@ def __init__(self, src, **kwds):
if len(self.names) < len(usecols):
_validate_usecols_names(usecols, self.names)
+ self._validate_parse_dates_presence(self.names)
self._set_noconvert_columns()
self.orig_names = self.names
@@ -2310,6 +2361,7 @@ def __init__(self, f, **kwds):
if self.index_names is None:
self.index_names = index_names
+ self._validate_parse_dates_presence(self.columns)
if self.parse_dates:
self._no_thousands_columns = self._set_no_thousands_columns()
else:
| read_csv: if parse_dates dont appear in use_cols, we get a trace
#### Code Sample, a copy-pastable example if possible
```python
# Your code here
import pandas as pd
import io
content = io.StringIO('''
time,val
212.23, 32
''')
date_cols = ['time']
df = pd.read_csv(
content,
sep=',',
usecols=['val'],
dtype= { 'val': int },
parse_dates=date_cols,
)
```
#### Problem description
triggers
```
Traceback (most recent call last):
File "test.py", line 16, in <module>
parse_dates=date_cols,
File "/nix/store/k4fd48jzsyafvcifa6wi6pk4vaprnw36-python3.7-pandas-0.25.3/lib/python3.7/site-packages/pandas/io/parsers.py",
line 685, in parser_f
return _read(filepath_or_buffer, kwds)
File "/nix/store/k4fd48jzsyafvcifa6wi6pk4vaprnw36-python3.7-pandas-0.25.3/lib/python3.7/site-packages/pandas/io/parsers.py",
line 463, in _read
data = parser.read(nrows)
File "/nix/store/k4fd48jzsyafvcifa6wi6pk4vaprnw36-python3.7-pandas-0.25.3/lib/python3.7/site-packages/pandas/io/parsers.py",
line 1154, in read
ret = self._engine.read(nrows)
File "/nix/store/k4fd48jzsyafvcifa6wi6pk4vaprnw36-python3.7-pandas-0.25.3/lib/python3.7/site-packages/pandas/io/parsers.py",
line 2134, in read
names, data = self._do_date_conversions(names, data)
File "/nix/store/k4fd48jzsyafvcifa6wi6pk4vaprnw36-python3.7-pandas-0.25.3/lib/python3.7/site-packages/pandas/io/parsers.py",
line 1885, in _do_date_conversions
keep_date_col=self.keep_date_col,
File "/nix/store/k4fd48jzsyafvcifa6wi6pk4vaprnw36-python3.7-pandas-0.25.3/lib/python3.7/site-packages/pandas/io/parsers.py",
line 3335, in _process_date_conversion
data_dict[colspec] = converter(data_dict[colspec])
KeyError: 'time'
```
i.e., if you use columns in parse_dates that dont appear in use_cols, then you are screwed.
Either parse_dates should be added to use_cols, or the documentation should state this precisely. An assertion could make the error more understandable too.
pandas 0.25.3
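On versions that include the validation added in the patch above, the same call fails fast with a `ValueError` naming the missing column, instead of a `KeyError` deep in date conversion; listing the date column in `usecols` keeps parsing working. A sketch reusing the issue's `time`/`val` layout (with a parseable date value substituted for `212.23`):

```python
import io
import pandas as pd

data = "time,val\n2020-01-01,32\n"

# With a parse_dates column missing from usecols, newer versions raise a
# clear ValueError up front rather than a KeyError during date conversion.
raised = False
try:
    pd.read_csv(io.StringIO(data), usecols=["val"], parse_dates=["time"])
except ValueError:
    raised = True

# Workaround that also works on older versions: include the date column
# in usecols so parse_dates can find it.
df = pd.read_csv(io.StringIO(data), usecols=["time", "val"], parse_dates=["time"])
```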
| I think this might have been fixed. I'm not able to reproduce that error in 1.0.0rc.
Looks fixed on master as well. Could use a regression test since it doesn't appear explicitly fixed in the whatsnew notes for 1.0.0rc
I am getting the same error as well. (Python 3.8.1)
<details>
INSTALLED VERSIONS
------------------
commit : aad377dd7f001568955982040d8701e004869259
python : 3.8.1.final.0
python-bits : 64
OS : Linux
OS-release : 5.4.13.a-1-hardened
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.0.0rc0+190.gaad377dd7
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.1
setuptools : 45.1.0
Cython : 0.29.14
pytest : 5.3.4
hypothesis : 5.3.0
sphinx : 2.3.1
blosc : None
feather : None
xlsxwriter : 1.2.7
lxml.etree : 4.4.2
html5lib : 1.0.1
pymysql : None
psycopg2 : None
jinja2 : 2.10.3
IPython : 7.11.1
pandas_datareader: None
bs4 : 4.8.2
bottleneck : 1.3.1
fastparquet : 0.3.2
gcsfs : None
lxml.etree : 4.4.2
matplotlib : 3.1.2
numexpr : 2.7.1
odfpy : None
openpyxl : 3.0.1
pandas_gbq : None
pyarrow : None
pytables : None
pytest : 5.3.4
pyxlsb : None
s3fs : 0.4.0
scipy : 1.4.1
sqlalchemy : 1.3.13
tables : 3.6.1
tabulate : 0.8.6
xarray : 0.14.1
xlrd : 1.2.0
xlwt : 1.3.0
xlsxwriter : 1.2.7
numba : 0.47.0
</details>
Thanks for looking into it.
One thing I don't get in the API is why dates are considered different from converters. Is it because they are multi-column? We could have multi-column converters, or ignore dates altogether and rely on multi-column converters (I mention this just in case 1.0 is allowed to break the API).
Traceback (most recent call last):
File "test.py", line 16, in <module>
parse_dates=date_cols,
File "/nix/store/k4fd48jzsyafvcifa6wi6pk4vaprnw36-python3.7-pandas-0.25.3/lib/python3.7/site-packages/pandas/io/parsers.py",
line 685, in parser_f
| 13,528 |
|||
pandas-dev/pandas | pandas-dev__pandas-32512 | 74f6579941fbe71cf7c033f53977047ac872e469 | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -268,6 +268,7 @@ Numeric
^^^^^^^
- Bug in :meth:`DataFrame.floordiv` with ``axis=0`` not treating division-by-zero like :meth:`Series.floordiv` (:issue:`31271`)
- Bug in :meth:`to_numeric` with string argument ``"uint64"`` and ``errors="coerce"`` silently fails (:issue:`32394`)
+- Bug in :meth:`to_numeric` with ``downcast="unsigned"`` fails for empty data (:issue:`32493`)
-
Conversion
diff --git a/pandas/core/tools/numeric.py b/pandas/core/tools/numeric.py
--- a/pandas/core/tools/numeric.py
+++ b/pandas/core/tools/numeric.py
@@ -162,7 +162,7 @@ def to_numeric(arg, errors="raise", downcast=None):
if downcast in ("integer", "signed"):
typecodes = np.typecodes["Integer"]
- elif downcast == "unsigned" and np.min(values) >= 0:
+ elif downcast == "unsigned" and (not len(values) or np.min(values) >= 0):
typecodes = np.typecodes["UnsignedInteger"]
elif downcast == "float":
typecodes = np.typecodes["Float"]
| to_numeric fails with empty data and downcast="unsigned"
#### Code Sample, a copy-pastable example if possible
```python
pd.to_numeric([], downcast="unsigned")
```
#### Problem description
The code currently calls `np.min()` on the data which fails when the data is empty. This does not happen for any other downcast.
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/.../lib/python3.7/site-packages/pandas/core/tools/numeric.py", line 163, in to_numeric
elif downcast == "unsigned" and np.min(values) >= 0:
File "<__array_function__ internals>", line 6, in amin
File "/.../lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 2793, in amin
keepdims=keepdims, initial=initial, where=where)
File "/.../lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 90, in _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: zero-size array to reduction operation minimum which has no identity
```
#### Expected Output
array([], dtype=uint8)
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.6.final.0
python-bits : 64
OS : Darwin
OS-release : 19.3.0
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_GB.UTF-8
LOCALE : en_GB.UTF-8
pandas : 1.0.1
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 41.2.0
Cython : None
pytest : 5.3.5
hypothesis : None
sphinx : None
blosc : None
feather : 0.4.0
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.1
IPython : 7.13.0
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : 0.6.0
lxml.etree : None
matplotlib : 3.1.3
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 0.16.0
pytables : None
pytest : 5.3.5
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : 0.48.0
</details>
| Thanks for the report. I think it's safe to include an `initial=0` in that call to `np.min` in https://github.com/pandas-dev/pandas/blob/970499d9660a4f4a1df3e1f8ce3015336ffb4d41/pandas/core/tools/numeric.py#L165-L166.
| 2020-03-07T04:52:51Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/.../lib/python3.7/site-packages/pandas/core/tools/numeric.py", line 163, in to_numeric
elif downcast == "unsigned" and np.min(values) >= 0:
File "<__array_function__ internals>", line 6, in amin
File "/.../lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 2793, in amin
keepdims=keepdims, initial=initial, where=where)
File "/.../lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 90, in _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: zero-size array to reduction operation minimum which has no identity
| 13,545 |
|||
pandas-dev/pandas | pandas-dev__pandas-32520 | 1ce9f0ca9228833f07c9fdbc2e3c186094b2cdd8 | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -731,6 +731,7 @@ Indexing
- Bug in :meth:`Series.__getitem__` allowing missing labels with ``np.ndarray``, :class:`Index`, :class:`Series` indexers but not ``list``, these now all raise ``KeyError`` (:issue:`33646`)
- Bug in :meth:`DataFrame.truncate` and :meth:`Series.truncate` where index was assumed to be monotone increasing (:issue:`33756`)
- Indexing with a list of strings representing datetimes failed on :class:`DatetimeIndex` or :class:`PeriodIndex`(:issue:`11278`)
+- Bug in :meth:`Series.at` when used with a :class:`MultiIndex` would raise an exception on valid inputs (:issue:`26989`)
Missing
^^^^^^^
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -2016,10 +2016,10 @@ def __setitem__(self, key, value):
if not isinstance(key, tuple):
key = _tuplify(self.ndim, key)
+ key = list(self._convert_key(key, is_setter=True))
if len(key) != self.ndim:
raise ValueError("Not enough indexers for scalar access (setting)!")
- key = list(self._convert_key(key, is_setter=True))
self.obj._set_value(*key, value=value, takeable=self._takeable)
@@ -2032,6 +2032,12 @@ def _convert_key(self, key, is_setter: bool = False):
Require they keys to be the same type as the index. (so we don't
fallback)
"""
+ # GH 26989
+ # For series, unpacking key needs to result in the label.
+ # This is already the case for len(key) == 1; e.g. (1,)
+ if self.ndim == 1 and len(key) > 1:
+ key = (key,)
+
# allow arbitrary setting
if is_setter:
return list(key)
| Series with MultiIndex "at" function raises "TypeError"
#### Code Sample
```python
# Your code here
import pandas as pd
s = pd.Series(index=pd.MultiIndex.from_tuples([('a', 0)]))
s.loc['a', 0] # loc function works fine, no complaints here
s.at['a', 0] # raises TypeError
```
#### Stack Trace:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/site-packages/pandas/core/indexing.py", line 2270, in __getitem__
return self.obj._get_value(*key, takeable=self._takeable)
TypeError: _get_value() got multiple values for argument 'takeable'
```
#### Problem description
I would expect the `at` function to either return the values given by the indexers or raise a `KeyError`. In the above example, the index value exists, so `at` should return the value from the series at that index (`nan`).
The `at` function works fine for series with normal indexes. There is nothing in the documentation indicating it should not work for multi-indexes.
#### Expected Output
```
nan
nan
```
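With the fix in place (the `_convert_key` change in the patch above re-wraps multi-part keys for Series), both accessors agree. A small check, using a non-NaN value so the result is easy to assert (an adjustment to the original example):

```python
import pandas as pd

s = pd.Series([1.5], index=pd.MultiIndex.from_tuples([("a", 0)]))

# .loc always worked here; .at raised TypeError before the fix.
assert s.loc["a", 0] == s.at["a", 0] == 1.5

# The setter path (which previously raised "Not enough indexers for
# scalar access (setting)!") works as well.
s.at["a", 0] = 2.5
```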
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.3.final.0
python-bits: 64
OS: Linux
OS-release: 4.15.0-50-generic
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: C.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.24.2
pytest: 3.4.2
pip: 19.1.1
setuptools: 41.0.1
Cython: 0.29.10
numpy: 1.16.4
scipy: 1.3.0
pyarrow: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.7.3
pytz: 2019.1
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 2.2.3
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml.etree: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.10.1
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
gcsfs: None
</details>
| Makes sense - if you'd like to take a look and submit a PR would certainly be welcome
try this option:
s.loc['a'].at[0]
@Seemachopra22 That seems like a reasonable workaround, but I still think this is a bug that should be fixed. I'll work on a PR if I have time.
I stumbled upon the same issue and did a little digging into the code. Please forgive any errors that are to follow - I am quite new to looking at the Pandas internals.
It seems like the issue is line 2087 of this block of code within _ScalarAccessIndexer (similar issues for the getter methods):
https://github.com/pandas-dev/pandas/blob/844dc4a4fb8d213303085709aa4a3649400ed51a/pandas/core/indexing.py#L2085-L2091
When I run this on a series with a multi index, e.g.
s = pd.DataFrame([['x', 'y', 1], ['z', 'w', 2]], columns=['a', 'b', 'c']).set_index(['a', 'b']).c
s.at['x', 'y'] = 5
self.ndim is 1 whereas key is ('x', 'y'), which leads to raising the ValueError.
If self.ndim is the same as [pd.Series.ndim](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.ndim.html), then self.ndim must be 1 and I believe this check can be removed. | 2020-03-07T17:00:57Z | [] | [] |
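The merged fix (visible in the patch above) keeps the ndim check but re-wraps the key first: for a Series (`ndim == 1`), a multi-part key is treated as a single MultiIndex label. The logic in isolation:

```python
def convert_key(ndim, key):
    # Mirrors the fixed _convert_key: for a Series, ('x', 'y') is one
    # MultiIndex label, not two separate indexers.
    if ndim == 1 and len(key) > 1:
        key = (key,)
    return list(key)

assert convert_key(1, ("x", "y")) == [("x", "y")]  # one label for a Series
assert convert_key(2, ("x", "y")) == ["x", "y"]    # row/col pair for a DataFrame
```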
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/site-packages/pandas/core/indexing.py", line 2270, in __getitem__
return self.obj._get_value(*key, takeable=self._takeable)
TypeError: _get_value() got multiple values for argument 'takeable'
| 13,547 |
|||
pandas-dev/pandas | pandas-dev__pandas-32544 | 9bc3ee0e42218c1ea154835587edf52d0debce48 | diff --git a/doc/source/whatsnew/v1.0.2.rst b/doc/source/whatsnew/v1.0.2.rst
--- a/doc/source/whatsnew/v1.0.2.rst
+++ b/doc/source/whatsnew/v1.0.2.rst
@@ -28,6 +28,7 @@ Fixed regressions
- Fixed regression in the repr of an object-dtype :class:`Index` with bools and missing values (:issue:`32146`)
- Fixed regression in :meth:`read_csv` in which the ``encoding`` option was not recognized with certain file-like objects (:issue:`31819`)
- Fixed regression in :meth:`DataFrame.reindex` and :meth:`Series.reindex` when reindexing with (tz-aware) index and ``method=nearest`` (:issue:`26683`)
+- Fixed regression in :class:`ExcelFile` where the stream passed into the function was closed by the destructor. (:issue:`31467`)
- Fixed regression in :meth:`DataFrame.reindex_like` on a :class:`DataFrame` subclass raised an ``AssertionError`` (:issue:`31925`)
- Fixed regression in :meth:`Series.shift` with ``datetime64`` dtype when passing an integer ``fill_value`` (:issue:`32591`)
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -366,6 +366,9 @@ def _workbook_class(self):
def load_workbook(self, filepath_or_buffer):
pass
+ def close(self):
+ pass
+
@property
@abc.abstractmethod
def sheet_names(self):
@@ -895,14 +898,7 @@ def sheet_names(self):
def close(self):
"""close io if necessary"""
- if self.engine == "openpyxl":
- # https://stackoverflow.com/questions/31416842/
- # openpyxl-does-not-close-excel-workbook-in-read-only-mode
- wb = self.book
- wb._archive.close()
-
- if hasattr(self.io, "close"):
- self.io.close()
+ self._reader.close()
def __enter__(self):
return self
diff --git a/pandas/io/excel/_openpyxl.py b/pandas/io/excel/_openpyxl.py
--- a/pandas/io/excel/_openpyxl.py
+++ b/pandas/io/excel/_openpyxl.py
@@ -492,6 +492,11 @@ def load_workbook(self, filepath_or_buffer: FilePathOrBuffer):
filepath_or_buffer, read_only=True, data_only=True, keep_links=False
)
+ def close(self):
+ # https://stackoverflow.com/questions/31416842/
+ # openpyxl-does-not-close-excel-workbook-in-read-only-mode
+ self.book.close()
+
@property
def sheet_names(self) -> List[str]:
return self.book.sheetnames
| pd.ExcelFile closes stream on destruction in pandas 1.0.0
#### Code Sample, a copy-pastable example if possible
```python3
import pandas as pd
print(pd.__version__)
a = open('some_file.xlsx', 'rb')
x = pd.ExcelFile(a)
del x
print(a.read())
```
#### Problem description
Above script behaves in different way in pandas 0.25.3 and 1.0.0:
```bash
$ python3 t1.py
0.25.3
b''
$ sudo pip3 install pandas --upgrade --quiet
$ python3 t1.py
1.0.0
Traceback (most recent call last):
File "t1.py", line 6, in <module>
print(a.read())
ValueError: read of closed file
```
It seems that the stream is closed when the ExcelFile is destroyed - and I don't see why.
#### Expected Output
I'd expect either notice in release notes, or the same output in 0.25.3 and 1.0.0.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.6.8.final.0
python-bits : 64
OS : Linux
OS-release : 5.0.0-1028-gcp
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.0.0
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 9.0.1
setuptools : 39.0.1
Cython : None
pytest : 4.3.0
hypothesis : None
sphinx : 1.8.5
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.4.1
html5lib : 0.999999999
pymysql : None
psycopg2 : 2.8.3 (dt dec pq3 ext lo64)
jinja2 : 2.10.1
IPython : None
pandas_datareader: None
bs4 : 4.8.0
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.4.1
matplotlib : None
numexpr : None
odfpy : None
openpyxl : 2.5.14
pandas_gbq : None
pyarrow : None
pytables : None
pytest : 4.3.0
pyxlsb : None
s3fs : None
scipy : 1.2.0
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : 1.2.0
xlwt : None
xlsxwriter : None
numba : None
</details>
| Possibly related to https://github.com/pandas-dev/pandas/pull/30096 since that added a `__del__` (cc @jbrockmendel), although I would assume that the `ExcelFile.close()` should not necessarily close the file handle (in which case it is not related to that PR).
@johny-b if you do `x.close()` instead of `del x`, is the file stream also closed?
`x.close()` closes stream **on both pandas versions**.
I don’t think this is a bug. del simply decrements a reference count but makes no guarantees around when the GC will actually destroy the object.
in read_csv where we track if an open stream is given (as opposed to a file), then can close *only* things that we opened. this might take a bit of work to fix though.
if someone wants to do this for 1.0.1 great, but won't consider this a blocker.
Yes, the "bug" (or missing feature, how you want to call it) is that that `close()` should only close the file handle if it was opened by pandas itself.
The fact that now `__del__` closes the file handle is only because `__del__` now calls `close()`, which was a corect fix, but it surfaced the close issue.
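The comment above describes the underlying contract: close only the handles that pandas itself opened. A generic sketch of that pattern (illustrative only, not the actual pandas implementation, which delegates `close()` to the engine-specific reader):

```python
import io

class BufferOwningReader:
    """Illustrative only: close the handle only if this object opened it."""

    def __init__(self, source):
        if isinstance(source, str):
            self.handle = open(source, "rb")
            self._should_close = True   # we opened it, so we close it
        else:
            self.handle = source
            self._should_close = False  # caller's stream stays open

    def close(self):
        if self._should_close:
            self.handle.close()

buf = io.BytesIO(b"payload")
reader = BufferOwningReader(buf)
reader.close()
assert not buf.closed  # the caller's stream survives close()
```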
take | 2020-03-08T19:14:23Z | [] | [] |
Traceback (most recent call last):
File "t1.py", line 6, in <module>
print(a.read())
ValueError: read of closed file
| 13,552 |
|||
pandas-dev/pandas | pandas-dev__pandas-32839 | fddaa993540ef2894adad40db98d060688ff249d | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -349,6 +349,7 @@ I/O
- Bug in :meth:`read_csv` was causing a file descriptor leak on an empty file (:issue:`31488`)
- Bug in :meth:`read_csv` was causing a segfault when there were blank lines between the header and data rows (:issue:`28071`)
- Bug in :meth:`read_csv` was raising a misleading exception on a permissions issue (:issue:`23784`)
+- Bug in :meth:`read_csv` was raising an ``IndexError`` when header=None and 2 extra data columns
Plotting
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -1316,8 +1316,8 @@ cdef class TextReader:
else:
if self.header is not None:
j = i - self.leading_cols
- # hack for #2442
- if j == len(self.header[0]):
+ # generate extra (bogus) headers if there are more columns than headers
+ if j >= len(self.header[0]):
return j
else:
return self.header[0][j]
| read_csv() crashes if engine='c', header=None, and 2+ extra columns
#### Code Sample, a copy-pastable example if possible
```python
import io
import pandas as pd
# Create a CSV with a single row:
stream = io.StringIO(u'foo,bar,baz,bam,blah')
# Succeeds, returns a DataFrame with four columns and ignores the fifth.
pd.read_csv(
stream,
header=None,
names=['one', 'two', 'three', 'four'],
index_col=False,
engine='c'
)
# Change the engine to 'python' and you get the same result.
stream.seek(0)
pd.read_csv(
stream,
header=None,
names=['one', 'two', 'three', 'four'],
index_col=False,
engine='python'
)
# Succeeds, returns a DataFrame with three columns and ignores the extra two.
stream.seek(0)
pd.read_csv(
stream,
header=None,
names=['one', 'two', 'three'],
index_col=False,
engine='python'
)
# Change the engine to 'c' and it crashes:
stream.seek(0)
pd.read_csv(
stream,
header=None,
names=['one', 'two', 'three'],
index_col=False,
engine='c'
)
```
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/tux/.pyenv/versions/2.7.15/envs/gds/lib/python2.7/site-packages/pandas/io/parsers.py", line 702, in parser_f
return _read(filepath_or_buffer, kwds)
File "/Users/tux/.pyenv/versions/2.7.15/envs/gds/lib/python2.7/site-packages/pandas/io/parsers.py", line 435, in _read
data = parser.read(nrows)
File "/Users/tux/.pyenv/versions/2.7.15/envs/gds/lib/python2.7/site-packages/pandas/io/parsers.py", line 1139, in read
ret = self._engine.read(nrows)
File "/Users/tux/.pyenv/versions/2.7.15/envs/gds/lib/python2.7/site-packages/pandas/io/parsers.py", line 1995, in read
data = self._reader.read(nrows)
File "pandas/_libs/parsers.pyx", line 899, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 914, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas/_libs/parsers.pyx", line 991, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 1067, in pandas._libs.parsers.TextReader._convert_column_data
File "pandas/_libs/parsers.pyx", line 1387, in pandas._libs.parsers.TextReader._get_column_name
IndexError: list index out of range
```
#### Problem description
The normal behavior for `read_csv()` is to ignore extra columns if it's given `names`. However, if the CSV has _two_ or more extra columns and the `engine` is `c` then it crashes. The exact same CSV can be read correctly if `engine` is `python`.
#### Expected Output
Behavior of reading a CSV with the C and Python engines should be identical.
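The eventual fix (shown in the patch above) only touches the header lookup: any column index at or past the end of the header list falls back to the integer position instead of indexing out of range. The logic in plain Python:

```python
header = ["one", "two", "three"]  # names passed by the user

def get_column_name(j, header):
    # The pre-fix logic special-cased only j == len(header), so a second
    # extra column (j == len(header) + 1) raised
    # "IndexError: list index out of range".
    if j >= len(header):
        return j  # generate a bogus integer header for extra data columns
    return header[j]

names = [get_column_name(j, header) for j in range(5)]  # 5 data columns
```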
#### Output of ``pd.show_versions()``
<details>
```
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.15.final.0
python-bits: 64
OS: Darwin
OS-release: 17.7.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: None.None
pandas: 0.24.2
pytest: 4.3.1
pip: 19.0.3
setuptools: 41.0.1
Cython: 0.29.5
numpy: 1.14.5
scipy: 1.1.0
pyarrow: 0.10.0
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2016.6.1
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: 2.5.14
xlrd: 1.1.0
xlwt: None
xlsxwriter: None
lxml.etree: None
bs4: None
html5lib: None
sqlalchemy: 1.0.15
pymysql: None
psycopg2: 2.7.3.2 (dt dec pq3 ext lo64)
jinja2: 2.10
s3fs: 0.1.6
fastparquet: None
pandas_gbq: None
pandas_datareader: None
gcsfs: 0.2.1
```
I tested this on Python 3.7.3 as well but for some reason `pd.show_versions()` keeps blowing up with:
```
Assertion failed: (PassInf && "Expected all immutable passes to be initialized"), function addImmutablePass, file /Users/buildbot/miniconda3/conda-bld/llvmdev_1545076115094/work/lib/IR/LegacyPassManager.cpp, line 812.
Abort trap: 6 (core dumped)
```
</details>
| Specifying the columns to use here would give you the result you want:
```python
stream.seek(0)
pd.read_csv(
stream,
header=None,
names=['one', 'two', 'three'],
index_col=False,
engine='c',
usecols=[0, 1, 2]
)
```
Behavior should be consistent, though I'm not sure dropping the remaining columns is really the correct approach regardless. Let's see what others think
> though I'm not sure dropping the remaining columns is really the correct approach regardless
It isn't, it's a known bug that's been hanging around for months. See #22144.
OK thanks. Linked issue seems related though not quite the same. If you'd like to take a look and submit a PR for this or the other would certainly be appreciated!
I think I found the problem. Do I fork off of `master` or `0.24.x`? Is 0.24 still taking bugfixes or no?
Fork off master | 2020-03-19T21:11:33Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/tux/.pyenv/versions/2.7.15/envs/gds/lib/python2.7/site-packages/pandas/io/parsers.py", line 702, in parser_f
return _read(filepath_or_buffer, kwds)
File "/Users/tux/.pyenv/versions/2.7.15/envs/gds/lib/python2.7/site-packages/pandas/io/parsers.py", line 435, in _read
data = parser.read(nrows)
File "/Users/tux/.pyenv/versions/2.7.15/envs/gds/lib/python2.7/site-packages/pandas/io/parsers.py", line 1139, in read
ret = self._engine.read(nrows)
File "/Users/tux/.pyenv/versions/2.7.15/envs/gds/lib/python2.7/site-packages/pandas/io/parsers.py", line 1995, in read
data = self._reader.read(nrows)
File "pandas/_libs/parsers.pyx", line 899, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 914, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas/_libs/parsers.pyx", line 991, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 1067, in pandas._libs.parsers.TextReader._convert_column_data
File "pandas/_libs/parsers.pyx", line 1387, in pandas._libs.parsers.TextReader._get_column_name
IndexError: list index out of range
| 13,583 |
|||
pandas-dev/pandas | pandas-dev__pandas-33070 | 92478d51f9b4316a66614afe20486ca247759221 | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -407,6 +407,7 @@ I/O
- Bug in :meth:`read_csv` was causing a segfault when there were blank lines between the header and data rows (:issue:`28071`)
- Bug in :meth:`read_csv` was raising a misleading exception on a permissions issue (:issue:`23784`)
- Bug in :meth:`read_csv` was raising an ``IndexError`` when header=None and 2 extra data columns
+- Bug in :meth:`read_sas` was raising an ``AttributeError`` when reading files from Google Cloud Storage (issue:`33069`)
- Bug in :meth:`DataFrame.to_sql` where an ``AttributeError`` was raised when saving an out of bounds date (:issue:`26761`)
Plotting
diff --git a/pandas/io/sas/sas_xport.py b/pandas/io/sas/sas_xport.py
--- a/pandas/io/sas/sas_xport.py
+++ b/pandas/io/sas/sas_xport.py
@@ -9,7 +9,6 @@
"""
from collections import abc
from datetime import datetime
-from io import BytesIO
import struct
import warnings
@@ -263,13 +262,9 @@ def __init__(
if isinstance(filepath_or_buffer, (str, bytes)):
self.filepath_or_buffer = open(filepath_or_buffer, "rb")
else:
- # Copy to BytesIO, and ensure no encoding
- contents = filepath_or_buffer.read()
- try:
- contents = contents.encode(self._encoding)
- except UnicodeEncodeError:
- pass
- self.filepath_or_buffer = BytesIO(contents)
+ # Since xport files include non-text byte sequences, xport files
+ # should already be opened in binary mode in Python 3.
+ self.filepath_or_buffer = filepath_or_buffer
self._read_header()
| read_sas fails when passed a file object from GCSFS
#### Code Sample, a copy-pastable example if possible
From https://stackoverflow.com/q/60848250/101923
```bash
export BUCKET_NAME=swast-scratch-us
curl -L https://wwwn.cdc.gov/Nchs/Nhanes/2017-2018/DEMO_J.XPT | gsutil cp - gs://${BUCKET_NAME}/sas_sample/Nchs/Nhanes/2017-2018/DEMO_J.XPT
```
```python
import pandas as pd
import gcsfs
bucket_name = "swast-scratch-us"
project_id = "swast-scratch"
fs = gcsfs.GCSFileSystem(project=project_id)
with fs.open(
"{}/sas_sample/Nchs/Nhanes/2017-2018/DEMO_J.XPT".format(bucket_name),
"rb"
) as f:
df = pd.read_sas(f, format="xport")
print(df)
```
#### Problem description
This throws the following exception:
```
Traceback (most recent call last):
File "after.py", line 15, in <module>
df = pd.read_sas(f, format="xport")
File "/Users/swast/miniconda3/envs/scratch/lib/python3.7/site-packages/pandas/io/sas/sasreader.py", line 70, in read_sas
filepath_or_buffer, index=index, encoding=encoding, chunksize=chunksize
File "/Users/swast/miniconda3/envs/scratch/lib/python3.7/site-packages/pandas/io/sas/sas_xport.py", line 280, in __init__
contents = contents.encode(self._encoding)
AttributeError: 'bytes' object has no attribute 'encode'
(scratch)
```
#### Expected Output
```
SEQN SDDSRVYR RIDSTATR RIAGENDR ... SDMVSTRA INDHHIN2 INDFMIN2 INDFMPIR
0 93703.0 10.0 2.0 2.0 ... 145.0 15.0 15.0 5.00
1 93704.0 10.0 2.0 1.0 ... 143.0 15.0 15.0 5.00
2 93705.0 10.0 2.0 2.0 ... 145.0 3.0 3.0 0.82
3 93706.0 10.0 2.0 1.0 ... 134.0 NaN NaN NaN
4 93707.0 10.0 2.0 1.0 ... 138.0 10.0 10.0 1.88
... ... ... ... ... ... ... ... ... ...
9249 102952.0 10.0 2.0 2.0 ... 138.0 4.0 4.0 0.95
9250 102953.0 10.0 2.0 1.0 ... 137.0 12.0 12.0 NaN
9251 102954.0 10.0 2.0 2.0 ... 144.0 10.0 10.0 1.18
9252 102955.0 10.0 2.0 2.0 ... 136.0 9.0 9.0 2.24
9253 102956.0 10.0 2.0 1.0 ... 142.0 7.0 7.0 1.56
[9254 rows x 46 columns]
```
Note: the expected output **is** printed when a local file is read.
#### Output of ``pd.show_versions()``
<details>
Python 3.7.3 | packaged by conda-forge | (default, Jul 1 2019, 14:38:56)
[Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.3.final.0
python-bits : 64
OS : Darwin
OS-release : 19.4.0
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 0.25.1
numpy : 1.18.1
pytz : 2019.2
dateutil : 2.8.1
pip : 20.0.2
setuptools : 46.0.0.post20200311
Cython : None
pytest : 5.0.1
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.5.0
html5lib : 1.0.1
pymysql : None
psycopg2 : None
jinja2 : 2.11.1
IPython : 7.7.0
pandas_datareader: None
bs4 : 4.8.0
bottleneck : None
fastparquet : None
gcsfs : 0.6.0
lxml.etree : 4.5.0
matplotlib : 3.1.1
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : 0.11.0
pyarrow : 0.15.1
pytables : None
s3fs : None
scipy : 1.4.1
sqlalchemy : None
tables : None
xarray : 0.12.3
xlrd : None
xlwt : None
xlsxwriter : None
</details>
| 2020-03-27T15:14:19Z | [] | [] |
Traceback (most recent call last):
File "after.py", line 15, in <module>
df = pd.read_sas(f, format="xport")
File "/Users/swast/miniconda3/envs/scratch/lib/python3.7/site-packages/pandas/io/sas/sasreader.py", line 70, in read_sas
filepath_or_buffer, index=index, encoding=encoding, chunksize=chunksize
File "/Users/swast/miniconda3/envs/scratch/lib/python3.7/site-packages/pandas/io/sas/sas_xport.py", line 280, in __init__
contents = contents.encode(self._encoding)
AttributeError: 'bytes' object has no attribute 'encode'
| 13,601 |
||||
pandas-dev/pandas | pandas-dev__pandas-33185 | 6a8aca97a1f049763c03af8e3b8b0495d61d2586 | Series.searchsorted with different timezones
#### Code Sample, a copy-pastable example if possible
`DatetimeIndex.searchsorted` works fine with mixed timezones:
```python
>>> series_dt = pd.date_range('2015-02-19', periods=2, freq='1d', tz='UTC')
>>> dt = pd.to_datetime('2015-02-19T12:34:56').tz_localize('America/New_York')
>>> series_dt.searchsorted(dt)
1
```
However, `Series.searchsorted` does not:
```python
>>> from io import StringIO
>>> df = pd.read_csv(StringIO("datetime\n2015-02-19T00:00:00Z\n2015-02-20T00:00:00Z"), parse_dates=['datetime'])
>>> df.datetime.searchsorted(dt)
Traceback (most recent call last):
File "/Users/kwilliams/Library/Application Support/IntelliJIdea2019.2/python/helpers/pydev/_pydevd_bundle/pydevd_exec2.py", line 3, in Exec
exec(exp, global_vars, local_vars)
File "<input>", line 1, in <module>
File "venv/lib/python3.7/site-packages/pandas/core/series.py", line 2694, in searchsorted
return algorithms.searchsorted(self._values, value, side=side, sorter=sorter)
File "venv/lib/python3.7/site-packages/pandas/core/algorithms.py", line 1887, in searchsorted
result = arr.searchsorted(value, side=side, sorter=sorter)
File "venv/lib/python3.7/site-packages/pandas/core/arrays/datetimelike.py", line 666, in searchsorted
self._check_compatible_with(value)
File "venv/lib/python3.7/site-packages/pandas/core/arrays/datetimes.py", line 591, in _check_compatible_with
own=self.tz, other=other.tz
ValueError: Timezones don't match. 'UTC != America/New_York'
```
Ostensibly, the underlying data vector is the same in both cases:
```python
>>> series_dt.dtype
datetime64[ns, UTC]
>>> df.datetime.dtype
datetime64[ns, UTC]
```
#### Problem description
I believe the `DatetimeIndex` is correct (or at least more useful), because even if timezones don't agree, the underlying instants are well-ordered and compare fine. In fact, both versions compare fine using a simple `>` comparison:
```python
>>> dt > series_dt
array([ True, False])
>>> dt > df.datetime
0 True
1 False
Name: datetime, dtype: bool
```
#### Expected Output
```python
>>> df.datetime.searchsorted(dt)
1
```
A workaround is to wrap the column using `pd.DatetimeIndex()`:
```python
>>> pd.DatetimeIndex(df.datetime).searchsorted(dt)
1
```
#### Output of ``pd.show_versions()``
<details>
<pre>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.3.final.0
python-bits : 64
OS : Darwin
OS-release : 18.0.0
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : None
LOCALE : en_US.UTF-8
pandas : 0.25.1
numpy : 1.17.2
pytz : 2019.3
dateutil : 2.8.0
pip : 19.3.1
setuptools : 41.4.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather           : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.10.3
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : 0.3.5
scipy : 1.3.1
sqlalchemy : None
tables : None
xarray : None
xlrd              : None
xlwt : None
xlsxwriter : None
</pre>
</details>
| This now works on master for me, can you confirm
Confirmed. Could use a test | 2020-03-31T17:17:12Z | [] | [] |
Traceback (most recent call last):
File "/Users/kwilliams/Library/Application Support/IntelliJIdea2019.2/python/helpers/pydev/_pydevd_bundle/pydevd_exec2.py", line 3, in Exec
exec(exp, global_vars, local_vars)
File "<input>", line 1, in <module>
File "venv/lib/python3.7/site-packages/pandas/core/series.py", line 2694, in searchsorted
return algorithms.searchsorted(self._values, value, side=side, sorter=sorter)
File "venv/lib/python3.7/site-packages/pandas/core/algorithms.py", line 1887, in searchsorted
result = arr.searchsorted(value, side=side, sorter=sorter)
File "venv/lib/python3.7/site-packages/pandas/core/arrays/datetimelike.py", line 666, in searchsorted
self._check_compatible_with(value)
File "venv/lib/python3.7/site-packages/pandas/core/arrays/datetimes.py", line 591, in _check_compatible_with
own=self.tz, other=other.tz
ValueError: Timezones don't match. 'UTC != America/New_York'
| 13,617 |
||||
pandas-dev/pandas | pandas-dev__pandas-33265 | cad602e16c8e93ed887c41ce8531dc734eef90a3 | Reindexing two tz-aware (UTC) indices gives 'DatetimeArray subtraction must have the same timezones or no timezones'
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
import datetime as dt
print(pd.__version__)
a = pd.date_range('2010-01-01', '2010-01-02', periods=24, tz='utc')
b = pd.date_range('2010-01-01', '2010-01-02', periods=23, tz='utc')
a.reindex(b, method='nearest', tolerance=dt.timedelta(seconds=20))
```
#### Problem description
Output is:
```
Python 3.7.4 (default, Jul 27 2019, 21:25:02)
[GCC 7.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> import datetime as dt
>>>
>>> print(pd.__version__)
1.0.2
>>> a = pd.date_range('2010-01-01', '2010-01-02', periods=24, tz='utc')
>>> b = pd.date_range('2010-01-01', '2010-01-02', periods=23, tz='utc')
>>> a.reindex(b, method='nearest', tolerance=dt.timedelta(seconds=20))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/andrew/project/ch2/choochoo-1/py/env/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 3144, in reindex
target, method=method, limit=limit, tolerance=tolerance
File "/home/andrew/project/ch2/choochoo-1/py/env/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 2740, in get_indexer
indexer = self._get_nearest_indexer(target, limit, tolerance)
File "/home/andrew/project/ch2/choochoo-1/py/env/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 2830, in _get_nearest_indexer
indexer = self._filter_indexer_tolerance(target, indexer, tolerance)
File "/home/andrew/project/ch2/choochoo-1/py/env/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 2834, in _filter_indexer_tolerance
distance = abs(self.values[indexer] - target)
File "/home/andrew/project/ch2/choochoo-1/py/env/lib/python3.7/site-packages/pandas/core/indexes/extension.py", line 147, in method
result = meth(_maybe_unwrap_index(other))
File "/home/andrew/project/ch2/choochoo-1/py/env/lib/python3.7/site-packages/pandas/core/arrays/datetimelike.py", line 1458, in __rsub__
return -(self - other)
File "/home/andrew/project/ch2/choochoo-1/py/env/lib/python3.7/site-packages/pandas/core/ops/common.py", line 64, in new_method
return method(self, other)
File "/home/andrew/project/ch2/choochoo-1/py/env/lib/python3.7/site-packages/pandas/core/arrays/datetimelike.py", line 1406, in __sub__
result = self._sub_datetime_arraylike(other)
File "/home/andrew/project/ch2/choochoo-1/py/env/lib/python3.7/site-packages/pandas/core/arrays/datetimes.py", line 667, in _sub_datetime_arraylike
f"{type(self).__name__} subtraction must have the same "
TypeError: DatetimeArray subtraction must have the same timezones or no timezones
```
The problem is that I don't see how to reindex, and/or the error message appears to be nonsensical, since both indices have UTC timezones.
#### Expected Output
With 0.25.3 I see:
```
(DatetimeIndex([ '2010-01-01 00:00:00+00:00',
'2010-01-01 01:05:27.272727272+00:00',
'2010-01-01 02:10:54.545454545+00:00',
'2010-01-01 03:16:21.818181818+00:00',
'2010-01-01 04:21:49.090909090+00:00',
'2010-01-01 05:27:16.363636363+00:00',
'2010-01-01 06:32:43.636363636+00:00',
'2010-01-01 07:38:10.909090909+00:00',
'2010-01-01 08:43:38.181818181+00:00',
'2010-01-01 09:49:05.454545454+00:00',
'2010-01-01 10:54:32.727272727+00:00',
'2010-01-01 12:00:00+00:00',
'2010-01-01 13:05:27.272727272+00:00',
'2010-01-01 14:10:54.545454545+00:00',
'2010-01-01 15:16:21.818181818+00:00',
'2010-01-01 16:21:49.090909090+00:00',
'2010-01-01 17:27:16.363636363+00:00',
'2010-01-01 18:32:43.636363636+00:00',
'2010-01-01 19:38:10.909090909+00:00',
'2010-01-01 20:43:38.181818181+00:00',
'2010-01-01 21:49:05.454545454+00:00',
'2010-01-01 22:54:32.727272727+00:00',
'2010-01-02 00:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq=None), array([ 0, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, -1, -1, -1, -1, 23]))
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.4.final.0
python-bits : 64
OS : Linux
OS-release : 4.12.14-lp151.28.36-default
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_GB.UTF-8
LOCALE : en_GB.UTF-8
pandas : 1.0.2
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 40.8.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.1
IPython : 7.13.0
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : 3.2.0
numexpr : None
odfpy : None
openpyxl : 3.0.3
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : 1.3.15
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None
</details>
| @jbrockmendel I see you're on a `.values` cleanup spree. If you change these calls in `pandas/core/indexes/base.py`, that'd fix this issue
```
(pandas-dev) matthewroeschke:pandas-mroeschke matthewroeschke$ git diff
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 4501dd1dd..480330943 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -3065,7 +3065,7 @@ class Index(IndexOpsMixin, PandasObject):
def _filter_indexer_tolerance(
self, target: "Index", indexer: np.ndarray, tolerance
) -> np.ndarray:
- distance = abs(self.values[indexer] - target)
+ distance = abs(self._values[indexer] - target)
indexer = np.where(distance <= tolerance, indexer, -1)
return indexer
```
thanks!
The suggested edit has already been made on master (for unrelated reasons). Is the issue fixed? If so, want to make a PR with a regression test?
Yup looks fixed on master. Could use a test
I tested in the current pandas dev version, the issue is apparently fixed
![image](https://user-images.githubusercontent.com/24482664/78322094-bcd7c900-753b-11ea-95da-ddb2f1175d14.png)
| 2020-04-03T17:48:39Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/andrew/project/ch2/choochoo-1/py/env/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 3144, in reindex
target, method=method, limit=limit, tolerance=tolerance
File "/home/andrew/project/ch2/choochoo-1/py/env/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 2740, in get_indexer
indexer = self._get_nearest_indexer(target, limit, tolerance)
File "/home/andrew/project/ch2/choochoo-1/py/env/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 2830, in _get_nearest_indexer
indexer = self._filter_indexer_tolerance(target, indexer, tolerance)
File "/home/andrew/project/ch2/choochoo-1/py/env/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 2834, in _filter_indexer_tolerance
distance = abs(self.values[indexer] - target)
File "/home/andrew/project/ch2/choochoo-1/py/env/lib/python3.7/site-packages/pandas/core/indexes/extension.py", line 147, in method
result = meth(_maybe_unwrap_index(other))
File "/home/andrew/project/ch2/choochoo-1/py/env/lib/python3.7/site-packages/pandas/core/arrays/datetimelike.py", line 1458, in __rsub__
return -(self - other)
File "/home/andrew/project/ch2/choochoo-1/py/env/lib/python3.7/site-packages/pandas/core/ops/common.py", line 64, in new_method
return method(self, other)
File "/home/andrew/project/ch2/choochoo-1/py/env/lib/python3.7/site-packages/pandas/core/arrays/datetimelike.py", line 1406, in __sub__
result = self._sub_datetime_arraylike(other)
File "/home/andrew/project/ch2/choochoo-1/py/env/lib/python3.7/site-packages/pandas/core/arrays/datetimes.py", line 667, in _sub_datetime_arraylike
f"{type(self).__name__} subtraction must have the same "
TypeError: DatetimeArray subtraction must have the same timezones or no timezones
| 13,627 |
||||
pandas-dev/pandas | pandas-dev__pandas-33362 | 06f4c9028f97b7ef7f4c05fff8c1aeaf011ea19e | DataFrame.iat indexing with duplicate columns
```
# error
In [27]: pd.DataFrame([[1, 1]], columns=['x','x']).iat[0,0]
TypeError: len() of unsized object
# ok
In [26]: pd.DataFrame([[1, 1]], columns=['x','y']).iat[0,0]
Out[26]: 1
```
I have some weird issue in a DataFrame I'm creating from a row-based array.
Using python3 and pandas 0.17.1 (from debian unstable), I get:
```
df = pandas.DataFrame(data=data[1:], columns=data[0])
df.iat[0, 0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/pandas/core/indexing.py", line 1555, in __getitem__
return self.obj.get_value(*key, takeable=self._takeable)
File "/usr/lib/python3/dist-packages/pandas/core/frame.py", line 1808, in get_value
series = self._iget_item_cache(col)
File "/usr/lib/python3/dist-packages/pandas/core/generic.py", line 1116, in _iget_item_cache
lower = self.take(item, axis=self._info_axis_number, convert=True)
File "/usr/lib/python3/dist-packages/pandas/core/generic.py", line 1371, in take
convert=True, verify=True)
File "/usr/lib/python3/dist-packages/pandas/core/internals.py", line 3628, in take
axis=axis, allow_dups=True)
File "/usr/lib/python3/dist-packages/pandas/core/internals.py", line 3510, in reindex_indexer
indexer, fill_tuple=(fill_value,))
File "/usr/lib/python3/dist-packages/pandas/core/internals.py", line 3536, in _slice_take_blocks_ax0
slice_or_indexer, self.shape[0], allow_fill=allow_fill)
File "/usr/lib/python3/dist-packages/pandas/core/internals.py", line 4865, in _preprocess_slice_or_indexer
return 'fancy', indexer, len(indexer)
TypeError: len() of unsized object
```
Interestingly, I can otherwise manage the dataframe just fine.
The same code, running under python2.7 shows no issue.
What could be the cause of such an error?
| `data` isn't defined. Can you post a copy-pastable example?
The dataset was too big, and trying to reduce the example proved to be harder than I though (the dataset is merged over and over in several loops).
Any hint in what could cause this knowing the traceback? Or what to look for in the resulting df so that I can come up with an example?
pls post `pd.show_versions()`
show `data.info()` right before the call to `.iat` as well as `data.head()`
```
INSTALLED VERSIONS
------------------
commit: None
python: 3.4.3.final.0
python-bits: 64
OS: Linux
OS-release: 4.2.0-1-amd64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
pandas: 0.17.1
nose: 1.3.6
pip: None
setuptools: 18.7
Cython: 0.23.2
numpy: 1.9.2
scipy: 0.16.1
statsmodels: None
IPython: 2.3.0
sphinx: None
patsy: 0.4.1
dateutil: 2.4.2
pytz: 2012c
blosc: None
bottleneck: None
tables: 3.2.2
numexpr: 2.4.3
matplotlib: 1.5.0rc2
openpyxl: 2.3.0
xlrd: 0.9.4
xlwt: None
xlsxwriter: None
lxml: 3.4.4
bs4: None
html5lib: 0.999
httplib2: None
apiclient: None
sqlalchemy: None
pymysql: None
psycopg2: None
Jinja2: None
```
The dataset has too many columns (>2k) for data.head() to be of any use here.
I tried subsetting it while it still triggers the issue, and I now noticed this in data.info():
```
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1337 entries, 0 to 1336
Data columns (total 265 columns):
TYPE 1337 non-null object
1337 non-null object
object
dtype: object
dtype: object
CODE 1337 non-null object
SAMPLE-TYP 1337 non-null object
C8:0 1337 non-null object
C14:0 1337 non-null object
C16:0 1337 non-null object
C17:0 1337 non-null object
C18:1 1337 non-null object
C18:0 1337 non-null object
```
Notice the first/second column types are pasted here literally (looks like some memory corruption). Indeed, I also noticed now that if I break out of the loop with an exit(), python segfaults...
it looks like you have an actual pandas object (DataFrame maybe) _inside_ each cell in the TYPE column. not surprised it doesn't work then. This theory is ok, except that indexing does not work very well with this.
show `df.iloc[0]` and `df.iloc[0,0]` (if they don't trigger errors)
They work:
```
>>> data.iloc[0]
CODE XS05301
SAMPLE-TYP Analyte
PLPE_PE_28:0% 2.29483
PLPE_16:0/16:1% 0.00195465
PLPE_16:0/16:0% 0.00357268
PLPE_16:0/18:3% 0.00302107
....
>>> data.iloc[0,0]
'XS05301'
```
show the `TYPE` columns as that the one with the issue (and do a `type(data.iloc[....])`
It's just a constant string:
```
>>> data.TYPE.describe()
count 1337
unique 1
top Tirol
freq 1337
Name: TYPE, dtype: object
```
I actually drop it later. To me it looks like it's the second column that could be potentially doubtful.
But I'll have to dig into this more closely now that I noticed the segfault; it's quite reproducible, and there are a few modules (such as xlrd) from my test program that I can remove by going through a few more hoops.
Turns out, `data[0]` (the header list) contains at one step several names which are identical.
Indeed, that seems to be the only issue I had. By ensuring the names are unique, I also no longer get segmentation faults.
Would it make sense to check the value provided directly in the constructor, or is it too expensive?
so can you post a short repro?
```
data = pd.DataFrame([[1, 1]], columns=['x','x'])
data.iat[0,0]
```
@wavexx thanks!
```
# error
In [27]: pd.DataFrame([[1, 1]], columns=['x','x']).iat[0,0]
TypeError: len() of unsized object
# ok
In [26]: pd.DataFrame([[1, 1]], columns=['x','y']).iat[0,0]
Out[26]: 1
```
pull-requests are welcome!
What should be the desired outcome here? Handling duplicate columns in indexing throughout, or refusing to handle duplicate columns?
duplicate indexing is handled pretty well, see how `.iloc` does it. this has prob just not been updated.
Ok. But there's probably some other bug which I still didn't figure out.
In my earlier example I showed:
```
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1337 entries, 0 to 1336
Data columns (total 265 columns):
TYPE 1337 non-null object
1337 non-null object
object
```
That second column turned out to be named as a single space (`` ``), which was fetched from a hidden column from xlrd (yeah.. I know.) and was duplicated. During a merge, that duplicated column resulted somehow in a nested dataframe as you suggested.
I couldn't reproduce it succinctly yet, but I think pd.merge might also have some other edge cases with duplicated columns.
@wavexx maybe, but this is a clear easy-to-repro bug. if you can find the source of the other then pls open a new report.
Since there's intention to handle duplicated indexing, I opened a couple of issues for some cases I think should be improved.
Related for `at`:
```
In [11]: pd.DataFrame([[1, 1]], columns=['x','x']).at[0,'x']
AttributeError: 'BlockManager' object has no attribute 'T'
```
Has anybody had a shot at fixing this? I still get bitten by this from time to time. I wouldn't say duplicate column names are "well" handled when plain ordinal indexing doesn't even work :(
This is fixed on master
```
>>> import pandas as pd
>>>
>>> pd.__version__
'1.1.0.dev0+1029.gbdf969cd6'
>>>
>>> pd.DataFrame([[1, 1]], columns=["x", "x"]).iat[0, 0]
1
```
> This is fixed on master
fixed in #32089 which is not yet released, so could also add a whatsnew to 1.1 for this issue.
dafec63f2e138d0451dae5b37edea2e83f9adc8a is the first new commit
commit dafec63f2e138d0451dae5b37edea2e83f9adc8a
Author: jbrockmendel <jbrockmendel@gmail.com>
Date: Sat Feb 22 07:56:03 2020 -0800
BUG: DataFrame.iat incorrectly wrapping datetime objects (#32089)
| 2020-04-07T10:34:43Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/pandas/core/indexing.py", line 1555, in __getitem__
return self.obj.get_value(*key, takeable=self._takeable)
File "/usr/lib/python3/dist-packages/pandas/core/frame.py", line 1808, in get_value
series = self._iget_item_cache(col)
File "/usr/lib/python3/dist-packages/pandas/core/generic.py", line 1116, in _iget_item_cache
lower = self.take(item, axis=self._info_axis_number, convert=True)
File "/usr/lib/python3/dist-packages/pandas/core/generic.py", line 1371, in take
convert=True, verify=True)
File "/usr/lib/python3/dist-packages/pandas/core/internals.py", line 3628, in take
axis=axis, allow_dups=True)
File "/usr/lib/python3/dist-packages/pandas/core/internals.py", line 3510, in reindex_indexer
indexer, fill_tuple=(fill_value,))
File "/usr/lib/python3/dist-packages/pandas/core/internals.py", line 3536, in _slice_take_blocks_ax0
slice_or_indexer, self.shape[0], allow_fill=allow_fill)
File "/usr/lib/python3/dist-packages/pandas/core/internals.py", line 4865, in _preprocess_slice_or_indexer
return 'fancy', indexer, len(indexer)
TypeError: len() of unsized object
| 13,642 |
||||
pandas-dev/pandas | pandas-dev__pandas-33446 | b7e786e1ba4e621a785aa446f6ea9f146dcd3187 | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -517,6 +517,7 @@ Groupby/resample/rolling
- Bug in :meth:`SeriesGroupBy.first`, :meth:`SeriesGroupBy.last`, :meth:`SeriesGroupBy.min`, and :meth:`SeriesGroupBy.max` returning floats when applied to nullable Booleans (:issue:`33071`)
- Bug in :meth:`DataFrameGroupBy.agg` with dictionary input losing ``ExtensionArray`` dtypes (:issue:`32194`)
- Bug in :meth:`DataFrame.resample` where an ``AmbiguousTimeError`` would be raised when the resulting timezone aware :class:`DatetimeIndex` had a DST transition at midnight (:issue:`25758`)
+- Bug in :meth:`DataFrame.groupby` where a ``ValueError`` would be raised when grouping by a categorical column with read-only categories and ``sort=False`` (:issue:`33410`)
Reshaping
^^^^^^^^^
diff --git a/pandas/_libs/hashtable_func_helper.pxi.in b/pandas/_libs/hashtable_func_helper.pxi.in
--- a/pandas/_libs/hashtable_func_helper.pxi.in
+++ b/pandas/_libs/hashtable_func_helper.pxi.in
@@ -206,7 +206,7 @@ def duplicated_{{dtype}}({{c_type}}[:] values, object keep='first'):
{{if dtype == 'object'}}
def ismember_{{dtype}}(ndarray[{{c_type}}] arr, ndarray[{{c_type}}] values):
{{else}}
-def ismember_{{dtype}}({{c_type}}[:] arr, {{c_type}}[:] values):
+def ismember_{{dtype}}(const {{c_type}}[:] arr, {{c_type}}[:] values):
{{endif}}
"""
Return boolean of values in arr on an
| BUG: ValueError: buffer source array is read-only during groupby
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [x] (optional) I have confirmed this bug exists on the master branch of pandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
import pandas as pd
df = pd.DataFrame(data={'x': [1], 'y': [2]})
df.to_parquet('pq_df', partition_cols='x')
df = pd.read_parquet('pq_df')
df.groupby('x', sort=False)
```
#### Problem description
The above code raises an exception:
```
Traceback (most recent call last):
File "mve.py", line 5, in <module>
df.groupby('x', sort=False)
File "/Users/ehasse/.local/lib/python3.8/site-packages/pandas/core/frame.py", line 5798, in groupby
return groupby_generic.DataFrameGroupBy(
File "/Users/ehasse/.local/lib/python3.8/site-packages/pandas/core/groupby/groupby.py", line 402, in __init__
grouper, exclusions, obj = get_grouper(
File "/Users/ehasse/.local/lib/python3.8/site-packages/pandas/core/groupby/grouper.py", line 615, in get_grouper
Grouping(
File "/Users/ehasse/.local/lib/python3.8/site-packages/pandas/core/groupby/grouper.py", line 312, in __init__
self.grouper, self.all_grouper = recode_for_groupby(
File "/Users/ehasse/.local/lib/python3.8/site-packages/pandas/core/groupby/categorical.py", line 72, in recode_for_groupby
cat = cat.add_categories(c.categories[~c.categories.isin(cat.categories)])
File "/Users/ehasse/.local/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 4667, in isin
return algos.isin(self, values)
File "/Users/ehasse/.local/lib/python3.8/site-packages/pandas/core/algorithms.py", line 447, in isin
return f(comps, values)
File "pandas/_libs/hashtable_func_helper.pxi", line 555, in pandas._libs.hashtable.ismember_int64
File "stringsource", line 658, in View.MemoryView.memoryview_cwrapper
File "stringsource", line 349, in View.MemoryView.memoryview.__cinit__
ValueError: buffer source array is read-only
```
This specifically requires the following:
* The dataframe is loaded from a parquet file.
* The column being grouped by was used to partition the file.
* sort=False is passed.
In addition, passing observed=True stops the error from occurring.
I believe this is related to #31710, but they were unable to provide an example for groupby, and the issue remains on 1.0.3.
#### Expected Output
A DataFrameGroupBy object.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.8.1.final.0
python-bits : 64
OS : Darwin
OS-release : 18.7.0
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.0.3
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 46.0.0.post20200309
Cython : None
pytest : 5.4.1
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.1
IPython : 7.13.0
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : 3.2.1
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 0.16.0
pytables : None
pytest : 5.4.1
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : 0.47.0
</details>
| Here's a reproducer without parquet.
```pytb
In [29]: cats = np.array([1])
In [30]: cats.flags.writeable = False
In [31]: df = pd.DataFrame({"a": [1], "b": pd.Categorical([1], categories=pd.Index(cats))})
In [32]: df.groupby("b", sort=False)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-32-882468ddac04> in <module>
----> 1 df.groupby("b", sort=False)
~/sandbox/pandas/pandas/core/frame.py in groupby(self, by, axis, level, as_index, sort, group_keys, squeeze, observed)
5825 group_keys=group_keys,
5826 squeeze=squeeze,
-> 5827 observed=observed,
5828 )
5829
~/sandbox/pandas/pandas/core/groupby/groupby.py in __init__(self, obj, keys, axis, level, grouper, exclusions, selection, as_index, sort, group_keys, squeeze, observed, mutated)
408 sort=sort,
409 observed=observed,
--> 410 mutated=self.mutated,
411 )
412
~/sandbox/pandas/pandas/core/groupby/grouper.py in get_grouper(obj, key, axis, level, sort, observed, mutated, validate)
623 in_axis=in_axis,
624 )
--> 625 if not isinstance(gpr, Grouping)
626 else gpr
627 )
~/sandbox/pandas/pandas/core/groupby/grouper.py in __init__(self, index, grouper, obj, name, level, sort, observed, in_axis)
310
311 self.grouper, self.all_grouper = recode_for_groupby(
--> 312 self.grouper, self.sort, observed
313 )
314 categories = self.grouper.categories
~/sandbox/pandas/pandas/core/groupby/categorical.py in recode_for_groupby(c, sort, observed)
69 # including those missing from the data (GH-13179), which .unique()
70 # above dropped
---> 71 cat = cat.add_categories(c.categories[~c.categories.isin(cat.categories)])
72
73 return c.reorder_categories(cat.categories), None
~/sandbox/pandas/pandas/core/indexes/base.py in isin(self, values, level)
4872 if level is not None:
4873 self._validate_index_level(level)
-> 4874 return algos.isin(self, values)
4875
4876 def _get_string_slice(self, key: str_t, use_lhs: bool = True, use_rhs: bool = True):
~/sandbox/pandas/pandas/core/algorithms.py in isin(comps, values)
452 comps = comps.astype(object)
453
--> 454 return f(comps, values)
455
456
~/sandbox/pandas/pandas/_libs/hashtable_func_helper.pxi in pandas._libs.hashtable.ismember_int64()
553 @cython.wraparound(False)
554 @cython.boundscheck(False)
--> 555 def ismember_int64(int64_t[:] arr, int64_t[:] values):
556 """
557 Return boolean of values in arr on an
~/sandbox/pandas/pandas/_libs/hashtable.cpython-37m-darwin.so in View.MemoryView.memoryview_cwrapper()
~/sandbox/pandas/pandas/_libs/hashtable.cpython-37m-darwin.so in View.MemoryView.memoryview.__cinit__()
ValueError: buffer source array is read-only
```
There's a standard way to get cython to accept readonly arrays (adding `const`?) but I don't recall exactly.
in hashtable_func_helper.pxi.in L 209 `{{c_type}}[:] arr` needs to be changed to `const {{c_type}}[:] arr`
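The reason `const` matters here: a plain typed memoryview rejects any buffer whose writeable flag is off, and that flag is ordinary numpy behavior, independent of parquet. A minimal sketch of how such a read-only array behaves (numpy only):

```python
import numpy as np

# mimic an array backed by a read-only buffer, as handed back by the parquet reader
cats = np.array([1, 2, 3], dtype=np.int64)
cats.flags.writeable = False

# reads are fine...
total = int(cats.sum())

# ...but any write raises ValueError, which is what a non-const
# Cython memoryview guards against at construction time
try:
    cats[0] = 9
    mutated = True
except ValueError:
    mutated = False
```

A `const int64_t[:]` memoryview accepts such a buffer because it promises never to write through it.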
Thanks. @erik-hasse are you interested in making a PR with that change and tests?
Sure. I haven't contributed to Pandas before, anything I should read before making the change and writing the test?
Our contributing doc page is apparently broken right now; doc/source/development/contributing.rst should have all the information you need.
> Our contributing doc page is apparently broken right now; doc/source/development/contributing.rst should have all the information you need.
The dev docs are still (or again since a few days) working fine: https://pandas.pydata.org/docs/dev/development/contributing.html | 2020-04-10T02:12:31Z | [] | [] |
Traceback (most recent call last):
File "mve.py", line 5, in <module>
df.groupby('x', sort=False)
File "/Users/ehasse/.local/lib/python3.8/site-packages/pandas/core/frame.py", line 5798, in groupby
return groupby_generic.DataFrameGroupBy(
File "/Users/ehasse/.local/lib/python3.8/site-packages/pandas/core/groupby/groupby.py", line 402, in __init__
grouper, exclusions, obj = get_grouper(
File "/Users/ehasse/.local/lib/python3.8/site-packages/pandas/core/groupby/grouper.py", line 615, in get_grouper
Grouping(
File "/Users/ehasse/.local/lib/python3.8/site-packages/pandas/core/groupby/grouper.py", line 312, in __init__
self.grouper, self.all_grouper = recode_for_groupby(
File "/Users/ehasse/.local/lib/python3.8/site-packages/pandas/core/groupby/categorical.py", line 72, in recode_for_groupby
cat = cat.add_categories(c.categories[~c.categories.isin(cat.categories)])
File "/Users/ehasse/.local/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 4667, in isin
return algos.isin(self, values)
File "/Users/ehasse/.local/lib/python3.8/site-packages/pandas/core/algorithms.py", line 447, in isin
return f(comps, values)
File "pandas/_libs/hashtable_func_helper.pxi", line 555, in pandas._libs.hashtable.ismember_int64
File "stringsource", line 658, in View.MemoryView.memoryview_cwrapper
File "stringsource", line 349, in View.MemoryView.memoryview.__cinit__
ValueError: buffer source array is read-only
| 13,651 |
|||
pandas-dev/pandas | pandas-dev__pandas-33629 | 22cf0f5dfcfbddd5506fdaf260e485bff1b88ef1 | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -480,6 +480,7 @@ Categorical
- :meth:`Categorical.fillna` now accepts :class:`Categorical` ``other`` argument (:issue:`32420`)
- Bug where :meth:`Categorical.replace` would replace with ``NaN`` whenever the new value and replacement value were equal (:issue:`33288`)
- Bug where an ordered :class:`Categorical` containing only ``NaN`` values would raise rather than returning ``NaN`` when taking the minimum or maximum (:issue:`33450`)
+- Bug where :meth:`Series.isna` and :meth:`DataFrame.isna` would raise for categorical dtype when ``pandas.options.mode.use_inf_as_na`` was set to ``True`` (:issue:`33594`)
Datetimelike
^^^^^^^^^^^^
diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py
--- a/pandas/core/dtypes/missing.py
+++ b/pandas/core/dtypes/missing.py
@@ -134,13 +134,13 @@ def _isna_new(obj):
elif isinstance(obj, type):
return False
elif isinstance(obj, (ABCSeries, np.ndarray, ABCIndexClass, ABCExtensionArray)):
- return _isna_ndarraylike(obj)
+ return _isna_ndarraylike(obj, old=False)
elif isinstance(obj, ABCDataFrame):
return obj.isna()
elif isinstance(obj, list):
- return _isna_ndarraylike(np.asarray(obj, dtype=object))
+ return _isna_ndarraylike(np.asarray(obj, dtype=object), old=False)
elif hasattr(obj, "__array__"):
- return _isna_ndarraylike(np.asarray(obj))
+ return _isna_ndarraylike(np.asarray(obj), old=False)
else:
return False
@@ -165,13 +165,13 @@ def _isna_old(obj):
elif isinstance(obj, type):
return False
elif isinstance(obj, (ABCSeries, np.ndarray, ABCIndexClass, ABCExtensionArray)):
- return _isna_ndarraylike_old(obj)
+ return _isna_ndarraylike(obj, old=True)
elif isinstance(obj, ABCDataFrame):
return obj.isna()
elif isinstance(obj, list):
- return _isna_ndarraylike_old(np.asarray(obj, dtype=object))
+ return _isna_ndarraylike(np.asarray(obj, dtype=object), old=True)
elif hasattr(obj, "__array__"):
- return _isna_ndarraylike_old(np.asarray(obj))
+ return _isna_ndarraylike(np.asarray(obj), old=True)
else:
return False
@@ -207,40 +207,40 @@ def _use_inf_as_na(key):
globals()["_isna"] = _isna_new
-def _isna_ndarraylike(obj):
- values = getattr(obj, "_values", obj)
- dtype = values.dtype
-
- if is_extension_array_dtype(dtype):
- result = values.isna()
- elif is_string_dtype(dtype):
- result = _isna_string_dtype(values, dtype, old=False)
-
- elif needs_i8_conversion(dtype):
- # this is the NaT pattern
- result = values.view("i8") == iNaT
- else:
- result = np.isnan(values)
-
- # box
- if isinstance(obj, ABCSeries):
- result = obj._constructor(result, index=obj.index, name=obj.name, copy=False)
-
- return result
+def _isna_ndarraylike(obj, old: bool = False):
+ """
+ Return an array indicating which values of the input array are NaN / NA.
+ Parameters
+ ----------
+ obj: array-like
+ The input array whose elements are to be checked.
+ old: bool
+ Whether or not to treat infinite values as NA.
-def _isna_ndarraylike_old(obj):
+ Returns
+ -------
+ array-like
+ Array of boolean values denoting the NA status of each element.
+ """
values = getattr(obj, "_values", obj)
dtype = values.dtype
- if is_string_dtype(dtype):
- result = _isna_string_dtype(values, dtype, old=True)
-
+ if is_extension_array_dtype(dtype):
+ if old:
+ result = values.isna() | (values == -np.inf) | (values == np.inf)
+ else:
+ result = values.isna()
+ elif is_string_dtype(dtype):
+ result = _isna_string_dtype(values, dtype, old=old)
elif needs_i8_conversion(dtype):
# this is the NaT pattern
result = values.view("i8") == iNaT
else:
- result = ~np.isfinite(values)
+ if old:
+ result = ~np.isfinite(values)
+ else:
+ result = np.isnan(values)
# box
if isinstance(obj, ABCSeries):
| BUG: pandas 1.0 dropna error with categorical data if pd.options.mode.use_inf_as_na = True
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
import pandas as pd
import numpy as np
from pandas.api.types import CategoricalDtype
# with categorical column and use_inf_as_na = True -> ERROR
pd.options.mode.use_inf_as_na = True
df1 = pd.DataFrame([['a1', 'good'], ['b1', 'good'], ['c1', 'good'], ['d1', 'bad']], columns=['C1', 'C2'])
df2 = pd.DataFrame([['a1', 'good'], ['b1', np.inf], ['c1', np.NaN], ['d1', 'bad']], columns=['C1', 'C2'])
categories = CategoricalDtype(categories=['good', 'bad'], ordered=True)
df1.loc[:, 'C2'] = df1['C2'].astype(categories)
df2.loc[:, 'C2'] = df2['C2'].astype(categories)
df1.dropna(axis=0) # ERROR
df2.dropna(axis=0) # ERROR
```
#### Problem description
With the latest version of pandas (1.0.3, installed via pip on python 3.6.8), DataFrame.dropna raises an error if a column is of type **CategoricalDtype** AND **pd.options.mode.use_inf_as_na = True**.
Exception with traceback:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/sbucc/miniconda3/envs/tf114/lib/python3.6/site-packages/pandas/core/frame.py", line 4751, in dropna
count = agg_obj.count(axis=agg_axis)
File "/home/sbucc/miniconda3/envs/tf114/lib/python3.6/site-packages/pandas/core/frame.py", line 7800, in count
result = notna(frame).sum(axis=axis)
File "/home/sbucc/miniconda3/envs/tf114/lib/python3.6/site-packages/pandas/core/dtypes/missing.py", line 376, in notna
res = isna(obj)
File "/home/sbucc/miniconda3/envs/tf114/lib/python3.6/site-packages/pandas/core/dtypes/missing.py", line 126, in isna
return _isna(obj)
File "/home/sbucc/miniconda3/envs/tf114/lib/python3.6/site-packages/pandas/core/dtypes/missing.py", line 185, in _isna_old
return obj._constructor(obj._data.isna(func=_isna_old))
File "/home/sbucc/miniconda3/envs/tf114/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 555, in isna
return self.apply("apply", func=func)
File "/home/sbucc/miniconda3/envs/tf114/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 442, in apply
applied = getattr(b, f)(**kwargs)
File "/home/sbucc/miniconda3/envs/tf114/lib/python3.6/site-packages/pandas/core/internals/blocks.py", line 390, in apply
result = func(self.values, **kwargs)
File "/home/sbucc/miniconda3/envs/tf114/lib/python3.6/site-packages/pandas/core/dtypes/missing.py", line 183, in _isna_old
return _isna_ndarraylike_old(obj)
File "/home/sbucc/miniconda3/envs/tf114/lib/python3.6/site-packages/pandas/core/dtypes/missing.py", line 283, in _isna_ndarraylike_old
vec = libmissing.isnaobj_old(values.ravel())
TypeError: Argument 'arr' has incorrect type (expected numpy.ndarray, got Categorical)
This doesn't happen with pandas 0.24.0 or if pd.options.mode.use_inf_as_na = False (default).
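On affected versions, one workaround is to normalize infinities to NaN by hand and run `dropna` with the option switched off, since the crash happens on the inf-aware `isna` path regardless of whether any `inf` is actually present. A hedged sketch (not the upstream fix):

```python
import numpy as np
import pandas as pd

categories = pd.CategoricalDtype(categories=["good", "bad"], ordered=True)
df = pd.DataFrame(
    [["a1", "good"], ["b1", np.inf], ["c1", np.nan]], columns=["C1", "C2"]
)

# fold +/-inf into NaN up front, then cast; dropna then runs with the
# option off so the categorical code never hits the broken isna branch
df["C2"] = df["C2"].replace([np.inf, -np.inf], np.nan).astype(categories)
with pd.option_context("mode.use_inf_as_na", False):
    cleaned = df.dropna(axis=0)
```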
#### Expected Output
no error
#### Output of ``pd.show_versions()``
<details>
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit : None
python : 3.6.8.final.0
python-bits : 64
OS : Linux
OS-release : 4.15.0-96-generic
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_GB.UTF-8
LOCALE : en_GB.UTF-8
pandas : 1.0.3
numpy : 1.18.2
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 46.1.3.post20200330
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : 3.2.1
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None
</details>
| Can you edit your post to include the full traceback and remove all the unnecessary examples (all the things that work). Just have the DataFrame creation and the code that fails. | 2020-04-18T14:25:58Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/sbucc/miniconda3/envs/tf114/lib/python3.6/site-packages/pandas/core/frame.py", line 4751, in dropna
count = agg_obj.count(axis=agg_axis)
File "/home/sbucc/miniconda3/envs/tf114/lib/python3.6/site-packages/pandas/core/frame.py", line 7800, in count
result = notna(frame).sum(axis=axis)
File "/home/sbucc/miniconda3/envs/tf114/lib/python3.6/site-packages/pandas/core/dtypes/missing.py", line 376, in notna
res = isna(obj)
File "/home/sbucc/miniconda3/envs/tf114/lib/python3.6/site-packages/pandas/core/dtypes/missing.py", line 126, in isna
return _isna(obj)
File "/home/sbucc/miniconda3/envs/tf114/lib/python3.6/site-packages/pandas/core/dtypes/missing.py", line 185, in _isna_old
return obj._constructor(obj._data.isna(func=_isna_old))
File "/home/sbucc/miniconda3/envs/tf114/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 555, in isna
return self.apply("apply", func=func)
File "/home/sbucc/miniconda3/envs/tf114/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 442, in apply
applied = getattr(b, f)(**kwargs)
File "/home/sbucc/miniconda3/envs/tf114/lib/python3.6/site-packages/pandas/core/internals/blocks.py", line 390, in apply
result = func(self.values, **kwargs)
File "/home/sbucc/miniconda3/envs/tf114/lib/python3.6/site-packages/pandas/core/dtypes/missing.py", line 183, in _isna_old
return _isna_ndarraylike_old(obj)
File "/home/sbucc/miniconda3/envs/tf114/lib/python3.6/site-packages/pandas/core/dtypes/missing.py", line 283, in _isna_ndarraylike_old
vec = libmissing.isnaobj_old(values.ravel())
TypeError: Argument 'arr' has incorrect type (expected numpy.ndarray, got Categorical)
| 13,675 |
|||
pandas-dev/pandas | pandas-dev__pandas-33646 | e878fdc4170f6a2aee8d7b42aa39a438fdf6c67f | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -529,6 +529,7 @@ Indexing
- Bug in :meth:`DataFrame.iloc` when slicing a single column-:class:`DataFrame`` with ``ExtensionDtype`` (e.g. ``df.iloc[:, :1]``) returning an invalid result (:issue:`32957`)
- Bug in :meth:`DatetimeIndex.insert` and :meth:`TimedeltaIndex.insert` causing index ``freq`` to be lost when setting an element into an empty :class:`Series` (:issue:33573`)
- Bug in :meth:`Series.__setitem__` with an :class:`IntervalIndex` and a list-like key of integers (:issue:`33473`)
+- Bug in :meth:`Series.__getitem__` allowing missing labels with ``np.ndarray``, :class:`Index`, :class:`Series` indexers but not ``list``, these now all raise ``KeyError`` (:issue:`33646`)
Missing
^^^^^^^
diff --git a/pandas/core/series.py b/pandas/core/series.py
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -949,11 +949,8 @@ def _get_with(self, key):
else:
return self.iloc[key]
- if isinstance(key, list):
- # handle the dup indexing case GH#4246
- return self.loc[key]
-
- return self.reindex(key)
+ # handle the dup indexing case GH#4246
+ return self.loc[key]
def _get_values_tuple(self, key):
# mpl hackaround
| API: Series[index_with_no_matches] vs Series[list_with_no_matches]
We treat list indexers differently from array-like indexers:
```
ser = pd.Series(["A", "B"])
key = pd.Series(["C"])
>>> ser[key]
C NaN
dtype: object
>>> ser[pd.Index(key)]
C NaN
dtype: object
>>> ser[np.array(key)]
C NaN
dtype: object
>>> ser[list(key)]
Traceback (most recent call last):
[...]
File "/Users/bmendel/Desktop/pd/pandas/pandas/core/indexing.py", line 1312, in _validate_read_indexer
raise KeyError(f"None of [{key}] are in the [{axis_name}]")
KeyError: "None of [Index(['C'], dtype='object')] are in the [index]"
```
Also inconsistent because `ser.loc[key]` raises for all 4 cases.
Is there a compelling reason for this? I tried making all of these behave like the list case and only one test broke (that test being the example above). The test was added in #5880.
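For comparison, `.loc` already treats all four indexer types the same way; a quick check (pandas 1.x behavior — all four raise `KeyError` when no labels match):

```python
import numpy as np
import pandas as pd

ser = pd.Series(["A", "B"])
key = ["C"]  # a label that is not in the index

results = []
for indexer in (key, np.array(key), pd.Index(key), pd.Series(key)):
    try:
        ser.loc[indexer]
        results.append("ok")
    except KeyError:
        results.append("KeyError")
```

Routing `ser[...]` through the same `self.loc[key]` path makes `__getitem__` raise just as consistently.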
| 2020-04-19T02:18:36Z | [] | [] |
Traceback (most recent call last):
[...]
File "/Users/bmendel/Desktop/pd/pandas/pandas/core/indexing.py", line 1312, in _validate_read_indexer
raise KeyError(f"None of [{key}] are in the [{axis_name}]")
KeyError: "None of [Index(['C'], dtype='object')] are in the [index]"
| 13,680 |
||||
pandas-dev/pandas | pandas-dev__pandas-33751 | 428791c5e01453ff6979b43d37c39c7315c0aaa2 | diff --git a/pandas/conftest.py b/pandas/conftest.py
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -256,7 +256,9 @@ def nselect_method(request):
# ----------------------------------------------------------------
# Missing values & co.
# ----------------------------------------------------------------
-@pytest.fixture(params=[None, np.nan, pd.NaT, float("nan"), np.float("NaN"), pd.NA])
+@pytest.fixture(
+ params=[None, np.nan, pd.NaT, float("nan"), np.float("NaN"), pd.NA], ids=str
+)
def nulls_fixture(request):
"""
Fixture for each null type in pandas.
| pd.NA TypeError in drop_duplicates with object dtype
#### Code Sample, a copy-pastable example if possible
```python-traceback
>>> pd.DataFrame([[1, pd.NA], [2, "a"]]).drop_duplicates()
Traceback (most recent call last):
...
File "/Users/williamayd/miniconda3/envs/sitka/lib/python3.8/site-packages/pandas/core/frame.py", line 4859, in f
labels, shape = algorithms.factorize(
File "/Users/williamayd/miniconda3/envs/sitka/lib/python3.8/site-packages/pandas/core/algorithms.py", line 629, in factorize
codes, uniques = _factorize_array(
File "/Users/williamayd/miniconda3/envs/sitka/lib/python3.8/site-packages/pandas/core/algorithms.py", line 478, in _factorize_array
uniques, codes = table.factorize(values, na_sentinel=na_sentinel, na_value=na_value)
File "pandas/_libs/hashtable_class_helper.pxi", line 1806, in pandas._libs.hashtable.PyObjectHashTable.factorize
File "pandas/_libs/hashtable_class_helper.pxi", line 1728, in pandas._libs.hashtable.PyObjectHashTable._unique
File "pandas/_libs/missing.pyx", line 360, in pandas._libs.missing.NAType.__bool__
TypeError: boolean value of NA is ambiguous
```
This same failure isn't present when using an extension type:
```python
>>> df = pd.DataFrame([[1, pd.NA], [2, "a"]], columns=list("ab"))
>>> df["b"] = df["b"].astype("string")
>>> df.drop_duplicates()
a b
0 1 <NA>
1 2 a
```
| I cannot reproduce the error. Has it been fixed already?
Thanks @AnnaDaglis and well spotted. @jbrockmendel any idea where this might have been fixed?
> any idea where this might have been fixed?
no idea off the top of my head.
I can not reproduce it either, but it could be related to https://github.com/pandas-dev/pandas/issues/15752
> Thanks @AnnaDaglis and well spotted. @jbrockmendel any idea where this might have been fixed?
fixed in #31939 (i.e. 1.0.2)
41bc226841eb59ccdfa279734dac98f7debc6249 is the first new commit
commit 41bc226841eb59ccdfa279734dac98f7debc6249
Author: Daniel Saxton <2658661+dsaxton@users.noreply.github.com>
Date: Sun Feb 23 08:57:07 2020 -0600
BUG: Fix construction of Categorical from pd.NA (#31939)
| 2020-04-23T19:49:07Z | [] | [] |
Traceback (most recent call last):
...
File "/Users/williamayd/miniconda3/envs/sitka/lib/python3.8/site-packages/pandas/core/frame.py", line 4859, in f
labels, shape = algorithms.factorize(
File "/Users/williamayd/miniconda3/envs/sitka/lib/python3.8/site-packages/pandas/core/algorithms.py", line 629, in factorize
codes, uniques = _factorize_array(
File "/Users/williamayd/miniconda3/envs/sitka/lib/python3.8/site-packages/pandas/core/algorithms.py", line 478, in _factorize_array
uniques, codes = table.factorize(values, na_sentinel=na_sentinel, na_value=na_value)
File "pandas/_libs/hashtable_class_helper.pxi", line 1806, in pandas._libs.hashtable.PyObjectHashTable.factorize
File "pandas/_libs/hashtable_class_helper.pxi", line 1728, in pandas._libs.hashtable.PyObjectHashTable._unique
File "pandas/_libs/missing.pyx", line 360, in pandas._libs.missing.NAType.__bool__
TypeError: boolean value of NA is ambiguous
| 13,690 |
|||
pandas-dev/pandas | pandas-dev__pandas-33798 | 0db2286c9c5ba1dbc1026b190feacea3030e418b | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -586,6 +586,7 @@ I/O
unsupported HDF file (:issue:`9539`)
- Bug in :meth:`~DataFrame.to_parquet` was not raising ``PermissionError`` when writing to a private s3 bucket with invalid creds. (:issue:`27679`)
- Bug in :meth:`~DataFrame.to_csv` was silently failing when writing to an invalid s3 bucket. (:issue:`32486`)
+- Bug in :meth:`~DataFrame.read_feather` was raising an `ArrowIOError` when reading an s3 or http file path (:issue:`29055`)
Plotting
^^^^^^^^
diff --git a/pandas/io/feather_format.py b/pandas/io/feather_format.py
--- a/pandas/io/feather_format.py
+++ b/pandas/io/feather_format.py
@@ -4,7 +4,7 @@
from pandas import DataFrame, Int64Index, RangeIndex
-from pandas.io.common import stringify_path
+from pandas.io.common import get_filepath_or_buffer, stringify_path
def to_feather(df: DataFrame, path, **kwargs):
@@ -98,6 +98,12 @@ def read_feather(path, columns=None, use_threads: bool = True):
import_optional_dependency("pyarrow")
from pyarrow import feather
- path = stringify_path(path)
+ path, _, _, should_close = get_filepath_or_buffer(path)
+
+ df = feather.read_feather(path, columns=columns, use_threads=bool(use_threads))
+
+ # s3fs only validates the credentials when the file is closed.
+ if should_close:
+ path.close()
- return feather.read_feather(path, columns=columns, use_threads=bool(use_threads))
+ return df
| URL treated as local file for read_feather
Not sure if this is a pandas issue or pyarrow, but when I try to read from a URL:
```python
import pandas as pd
pd.read_feather("https://github.com/wesm/feather/raw/master/R/inst/feather/iris.feather")
```
I get the following error:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/ubuntu/miniconda3/envs/pandas/lib/python3.7/site-packages/pandas/util/_decorators.py", line 208, in wrapper
return func(*args, **kwargs)
File "/home/ubuntu/miniconda3/envs/pandas/lib/python3.7/site-packages/pandas/io/feather_format.py", line 119, in read_feather
return feather.read_feather(path, columns=columns, use_threads=bool(use_threads))
File "/home/ubuntu/miniconda3/envs/pandas/lib/python3.7/site-packages/pyarrow/feather.py", line 214, in read_feather
reader = FeatherReader(source)
File "/home/ubuntu/miniconda3/envs/pandas/lib/python3.7/site-packages/pyarrow/feather.py", line 40, in __init__
self.open(source)
File "pyarrow/error.pxi", line 80, in pyarrow.lib.check_status
File "pyarrow/io.pxi", line 1406, in pyarrow.lib.get_reader
File "pyarrow/io.pxi", line 1395, in pyarrow.lib._get_native_file
File "pyarrow/io.pxi", line 788, in pyarrow.lib.memory_map
File "pyarrow/io.pxi", line 751, in pyarrow.lib.MemoryMappedFile._open
File "pyarrow/error.pxi", line 80, in pyarrow.lib.check_status
pyarrow.lib.ArrowIOError: Failed to open local file 'https://github.com/wesm/feather/raw/master/R/inst/feather/iris.feather', error: No such file or directory
```
#### Output of ``pd.show_versions()``
<details>
```
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.4.final.0
python-bits : 64
OS : Linux
OS-release : 4.15.0-64-generic
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 0.25.1
numpy : 1.17.3
pytz : 2019.3
dateutil : 2.8.0
pip : 19.2.3
setuptools : 41.4.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : 0.4.0
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 0.15.0
pytables : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
```
</details>
| Does this work using arrow directly?
A similar call with pyarrow:
```python
import pyarrow
pyarrow.feather.read_feather('https://github.com/wesm/feather/raw/master/R/inst/feather/iris.feather')
```
Fails with the same error. The pandas [docs](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_feather.html) say you can pass a URL, but the arrow help says it only takes a path/file-like object:
```
read_feather(source, columns=None, use_threads=True)
Read a pandas.DataFrame from Feather format
Parameters
----------
source : string file path, or file-like object
columns : sequence, optional
Only read a specific set of columns. If not provided, all columns are
read
use_threads: bool, default True
Whether to parallelize reading using multiple threads
Returns
-------
df : pandas.DataFrame
```
So I was assuming pandas was doing extra work to parse the URL.
@TomAugspurger, is this just an error in the docs and a feature request to pyarrow?
Seems to be an issue with the pandas docs.
Looks like this feature [won't be coming](https://issues.apache.org/jira/browse/ARROW-6998?filter=-6) to pyarrow. So, until this gets added on the pandas side, here is a workaround:
```python
import pandas as pd
import requests
import io
resp = requests.get(
'https://github.com/wesm/feather/raw/master/R/inst/feather/iris.feather',
stream=True
)
resp.raw.decode_content = True
mem_fh = io.BytesIO(resp.raw.read())
pd.read_feather(mem_fh)
```
@mccarthyryanc, but in case we want to fetch data from an AWS S3 bucket, this workaround is of no use... Any idea what needs to be done in that case?
@darshit-doshi if your data is in a public S3 bucket this method will work, you just need the full object URL.
If not in a public bucket I would use something like [s3fs](https://s3fs.readthedocs.io/en/latest/):
```python
import s3fs
fs = s3fs.S3FileSystem()
fh = fs.open('s3://bucketname/filename.feather')
df = pd.read_feather(fh)
```
It's maybe a case of us calling `get_filepath_or_buffer` similarly to other readers (e.g. parquet) [here](https://github.com/pandas-dev/pandas/blob/master/pandas/io/parquet.py#L95); this infers the filesystem to use based on the URL.
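The dispatch that step performs boils down to something like the following stdlib-only sketch (`open_path_or_url` is a hypothetical helper for illustration, not the pandas-internal API):

```python
import io
from urllib.request import urlopen


def open_path_or_url(path):
    """Fetch http(s) URLs into an in-memory buffer; pass local paths through."""
    if isinstance(path, str) and path.startswith(("http://", "https://")):
        with urlopen(path) as resp:  # network fetch for remote sources
            return io.BytesIO(resp.read())
    return path  # local path (or already file-like): let the reader open it
```

A real implementation would also branch on `s3://` (via s3fs), the way the parquet reader does.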
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/ubuntu/miniconda3/envs/pandas/lib/python3.7/site-packages/pandas/util/_decorators.py", line 208, in wrapper
return func(*args, **kwargs)
File "/home/ubuntu/miniconda3/envs/pandas/lib/python3.7/site-packages/pandas/io/feather_format.py", line 119, in read_feather
return feather.read_feather(path, columns=columns, use_threads=bool(use_threads))
File "/home/ubuntu/miniconda3/envs/pandas/lib/python3.7/site-packages/pyarrow/feather.py", line 214, in read_feather
reader = FeatherReader(source)
File "/home/ubuntu/miniconda3/envs/pandas/lib/python3.7/site-packages/pyarrow/feather.py", line 40, in __init__
self.open(source)
File "pyarrow/error.pxi", line 80, in pyarrow.lib.check_status
File "pyarrow/io.pxi", line 1406, in pyarrow.lib.get_reader
File "pyarrow/io.pxi", line 1395, in pyarrow.lib._get_native_file
File "pyarrow/io.pxi", line 788, in pyarrow.lib.memory_map
File "pyarrow/io.pxi", line 751, in pyarrow.lib.MemoryMappedFile._open
File "pyarrow/error.pxi", line 80, in pyarrow.lib.check_status
pyarrow.lib.ArrowIOError: Failed to open local file 'https://github.com/wesm/feather/raw/master/R/inst/feather/iris.feather', error: No such file or directory
| 13,700 |
|||
pandas-dev/pandas | pandas-dev__pandas-33821 | c43652ef8a2342ba3eb065ba7e3e6733096bd4d3 | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -299,7 +299,7 @@ I/O
Plotting
^^^^^^^^
--
+- Bug in :meth:`DataFrame.plot` where a marker letter in the ``style`` keyword sometimes causes a ``ValueError`` (:issue:`21003`)
-
Groupby/resample/rolling
diff --git a/pandas/plotting/_matplotlib/core.py b/pandas/plotting/_matplotlib/core.py
--- a/pandas/plotting/_matplotlib/core.py
+++ b/pandas/plotting/_matplotlib/core.py
@@ -1,4 +1,3 @@
-import re
from typing import TYPE_CHECKING, List, Optional, Tuple
import warnings
@@ -55,6 +54,15 @@
from matplotlib.axis import Axis
+def _color_in_style(style: str) -> bool:
+ """
+ Check if there is a color letter in the style string.
+ """
+ from matplotlib.colors import BASE_COLORS
+
+ return not set(BASE_COLORS).isdisjoint(style)
+
+
class MPLPlot:
"""
Base class for assembling a pandas plot using matplotlib
@@ -200,8 +208,6 @@ def __init__(
self._validate_color_args()
def _validate_color_args(self):
- import matplotlib.colors
-
if (
"color" in self.kwds
and self.nseries == 1
@@ -233,13 +239,12 @@ def _validate_color_args(self):
styles = [self.style]
# need only a single match
for s in styles:
- for char in s:
- if char in matplotlib.colors.BASE_COLORS:
- raise ValueError(
- "Cannot pass 'style' string with a color symbol and "
- "'color' keyword argument. Please use one or the other or "
- "pass 'style' without a color symbol"
- )
+ if _color_in_style(s):
+ raise ValueError(
+ "Cannot pass 'style' string with a color symbol and "
+ "'color' keyword argument. Please use one or the "
+ "other or pass 'style' without a color symbol"
+ )
def _iter_data(self, data=None, keep_index=False, fillna=None):
if data is None:
@@ -739,7 +744,7 @@ def _apply_style_colors(self, colors, kwds, col_num, label):
style = self.style
has_color = "color" in kwds or self.colormap is not None
- nocolor_style = style is None or re.match("[a-z]+", style) is None
+ nocolor_style = style is None or not _color_in_style(style)
if (has_color or self.subplots) and nocolor_style:
if isinstance(colors, dict):
kwds["color"] = colors[label]
| pandas plotting raises ValueError on style strings that should be valid according to spec.
#### Code Sample
```python
import pandas as pd
from numpy.random import random
import matplotlib.pyplot as plt
data = random((7,4))
fmt = 'd'
plt.plot(data, fmt, color='green') # works as expected (no error)
df = pd.DataFrame(data)
df.plot(color='green', style=fmt)
# The previous line raises ValueError:
# Cannot pass 'style' string with a color symbol and 'color' keyword argument.
# Please use one or the other or pass 'style' without a color symbol
```
#### Full Stack Trace
```python
Traceback (most recent call last):
File "<ipython-input-178-b2322b55aff5>", line 1, in <module>
runfile('G:/irreplacable stuff/Scripts/playground/playground.py', wdir='G:/irreplacable stuff/Scripts/playground')
File "C:\ProgramData\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile
execfile(filename, namespace)
File "C:\ProgramData\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 87, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "G:/irreplacable stuff/Scripts/playground/playground.py", line 30, in <module>
df.plot(color='green', style='d')
File "C:\ProgramData\Anaconda2\lib\site-packages\pandas\plotting\_core.py", line 2677, in __call__
sort_columns=sort_columns, **kwds)
File "C:\ProgramData\Anaconda2\lib\site-packages\pandas\plotting\_core.py", line 1902, in plot_frame
**kwds)
File "C:\ProgramData\Anaconda2\lib\site-packages\pandas\plotting\_core.py", line 1727, in _plot
plot_obj = klass(data, subplots=subplots, ax=ax, kind=kind, **kwds)
File "C:\ProgramData\Anaconda2\lib\site-packages\pandas\plotting\_core.py", line 931, in __init__
MPLPlot.__init__(self, data, **kwargs)
File "C:\ProgramData\Anaconda2\lib\site-packages\pandas\plotting\_core.py", line 182, in __init__
self._validate_color_args()
File "C:\ProgramData\Anaconda2\lib\site-packages\pandas\plotting\_core.py", line 215, in _validate_color_args
"Cannot pass 'style' string with a color "
ValueError: Cannot pass 'style' string with a color symbol and 'color' keyword argument. Please use one or the other or pass 'style' without a color symbol
```
#### Problem description
`df.plot` should just pass the `style` kwarg as the `fmt` arg to `matplotlib.axes.plot`, but it does some extra (sloppy) validation where it thinks that valid marker style symbols are color symbols and raises an error if the color is already defined elsewhere. The problem is clearly in `_core.py`, line 213.
This problem affects the following standard marker styles (key corresponds to `m` in the example code):
```python
{u'd': u'thin_diamond',
u'h': u'hexagon1',
u'o': u'circle',
u'p': u'pentagon',
u's': u'square',
u'v': u'triangle_down',
u'x': u'x'}
```
#### Expected Output
`df.plot(color='green', style=fmt)` should simply plot the same way as `plt.plot(data, fmt, color='green')` without raising errors. All legal values for the `fmt` arg in `pyplot.plot` should be legal for the `style` kwarg in `df.plot`.
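The distinction the validation needs to make is only over matplotlib's eight single-letter base colors; a minimal standalone sketch of such a check (hard-coding the letters here rather than importing `matplotlib.colors.BASE_COLORS`):

```python
BASE_COLOR_LETTERS = set("bgrcmykw")  # matplotlib's single-letter base colors

def color_in_style(style: str) -> bool:
    """Return True only if the style string contains a color letter."""
    return not BASE_COLOR_LETTERS.isdisjoint(style)
```

Marker letters such as `'d'`, `'o'` or `'x'` are then no longer mistaken for colors, so `style='d'` can coexist with an explicit `color=` argument.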
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.14.final.0
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 45 Stepping 7, GenuineIntel
byteorder: little
LC_ALL: None
LANG: en
LOCALE: None.None
pandas: 0.22.0
pytest: 3.5.0
pip: 9.0.3
setuptools: 39.0.1
Cython: 0.28.2
numpy: 1.14.2
scipy: 1.0.1
pyarrow: None
xarray: None
IPython: 5.6.0
sphinx: 1.7.2
patsy: 0.5.0
dateutil: 2.7.2
pytz: 2018.4
blosc: None
bottleneck: 1.2.1
tables: 3.4.2
numexpr: 2.6.4
feather: None
matplotlib: 2.2.2
openpyxl: 2.5.2
xlrd: 1.1.0
xlwt: 1.3.0
xlsxwriter: 1.0.4
lxml: 4.2.1
bs4: 4.6.0
html5lib: 1.0.1
sqlalchemy: 1.2.6
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| I think this can be solved by changing the regex at `_core.py` line 213 and line 670 to `'^[bgrcmykw]'` or perhaps pulling the valid color letters directly from `matplotlib.colors.BASE_COLORS`.
By Pandas 1.0.2 the issue has been partly fixed and the code sample runs fine now. However the same problem still persists when passing a list of colors as illustrated in the code sample below.
```
import pandas as pd
from numpy.random import random
import matplotlib.pyplot as plt
data = random((7,4))
fmt = 'd'
plt.plot(data, fmt, color='green') # works as expected (no error)
df = pd.DataFrame(data)
color = ['yellow', 'red', 'green', 'blue']
df.plot(color=color, style=fmt)
```
This can be fixed by changing pandas/plotting/matplotlib/core.py in the `_apply_style_colors` method line 727 from
`nocolor_style = style is None or re.match("[a-z]+", style) is None`
to
`nocolor_style = style is None or re.match("[bgrcmykw]+", style) is None`.
However, a real solution should pull the color letters from `matplotlib.colors.BASE_COLORS`.
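The difference between the two regexes is easy to see in isolation: `[a-z]+` matches any lowercase letter, so marker codes like `'d'` are wrongly flagged as colors, while `[bgrcmykw]+` only matches actual color letters:

```python
import re

old = re.match("[a-z]+", "d")          # matches: 'd' wrongly treated as a color
new = re.match("[bgrcmykw]+", "d")     # no match: 'd' is only a marker
green = re.match("[bgrcmykw]+", "g-")  # matches: 'g' really is a color letter
```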
By the way, I'm the same guy as Austrianguy. E-mail inboxes sometimes disappear. And passwords are forgotten.
I'm working on the bug fix. However I ran into the following issue when writing the test cases. The code above only fails during rendering. I included my code from 2018-05-10 and 2020-04-23 as tests in `pandas/tests/plotting/test_frame.py`. As expected the 2018 test passes. But unexpectedly the 2020 test passes too. I'm quite sure that is because when running the test suite the plot never renders so there's no opportunity for the error to raise. In fact the test suite doesn't even import matplotlib. Two options for the test suite: A) import matplotlib in the test. B) check that the output line colors are correct. I think I'll go for B.
Here's the traceback from running my 2020 code (copied from above) directly in Spyder:
```
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\backends\backend_qt5.py", line 508, in _draw_idle
self.draw()
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\backends\backend_agg.py", line 388, in draw
self.figure.draw(self.renderer)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\artist.py", line 38, in draw_wrapper
return draw(artist, renderer, *args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\figure.py", line 1709, in draw
renderer, self, artists, self.suppressComposite)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\image.py", line 135, in _draw_list_compositing_images
a.draw(renderer)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\artist.py", line 38, in draw_wrapper
return draw(artist, renderer, *args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\axes\_base.py", line 2647, in draw
mimage._draw_list_compositing_images(renderer, self, artists)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\image.py", line 135, in _draw_list_compositing_images
a.draw(renderer)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\artist.py", line 38, in draw_wrapper
return draw(artist, renderer, *args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\lines.py", line 812, in draw
self.get_markeredgecolor(), self._alpha)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\colors.py", line 177, in to_rgba
rgba = _to_rgba_no_colorcycle(c, alpha)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\colors.py", line 240, in _to_rgba_no_colorcycle
raise ValueError("Invalid RGBA argument: {!r}".format(orig_c))
ValueError: Invalid RGBA argument: ['yellow', 'red', 'green', 'blue']
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\backends\backend_qt5.py", line 508, in _draw_idle
self.draw()
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\backends\backend_agg.py", line 388, in draw
self.figure.draw(self.renderer)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\artist.py", line 38, in draw_wrapper
return draw(artist, renderer, *args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\figure.py", line 1709, in draw
renderer, self, artists, self.suppressComposite)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\image.py", line 135, in _draw_list_compositing_images
a.draw(renderer)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\artist.py", line 38, in draw_wrapper
return draw(artist, renderer, *args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\axes\_base.py", line 2647, in draw
mimage._draw_list_compositing_images(renderer, self, artists)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\image.py", line 135, in _draw_list_compositing_images
a.draw(renderer)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\artist.py", line 38, in draw_wrapper
return draw(artist, renderer, *args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\lines.py", line 812, in draw
self.get_markeredgecolor(), self._alpha)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\colors.py", line 177, in to_rgba
rgba = _to_rgba_no_colorcycle(c, alpha)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\colors.py", line 240, in _to_rgba_no_colorcycle
raise ValueError("Invalid RGBA argument: {!r}".format(orig_c))
ValueError: Invalid RGBA argument: ['yellow', 'red', 'green', 'blue']
``` | 2020-04-27T14:16:13Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-178-b2322b55aff5>", line 1, in <module>
runfile('G:/irreplacable stuff/Scripts/playground/playground.py', wdir='G:/irreplacable stuff/Scripts/playground')
File "C:\ProgramData\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile
execfile(filename, namespace)
File "C:\ProgramData\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 87, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "G:/irreplacable stuff/Scripts/playground/playground.py", line 30, in <module>
df.plot(color='green', style='d')
File "C:\ProgramData\Anaconda2\lib\site-packages\pandas\plotting\_core.py", line 2677, in __call__
sort_columns=sort_columns, **kwds)
File "C:\ProgramData\Anaconda2\lib\site-packages\pandas\plotting\_core.py", line 1902, in plot_frame
**kwds)
File "C:\ProgramData\Anaconda2\lib\site-packages\pandas\plotting\_core.py", line 1727, in _plot
plot_obj = klass(data, subplots=subplots, ax=ax, kind=kind, **kwds)
File "C:\ProgramData\Anaconda2\lib\site-packages\pandas\plotting\_core.py", line 931, in __init__
MPLPlot.__init__(self, data, **kwargs)
File "C:\ProgramData\Anaconda2\lib\site-packages\pandas\plotting\_core.py", line 182, in __init__
self._validate_color_args()
File "C:\ProgramData\Anaconda2\lib\site-packages\pandas\plotting\_core.py", line 215, in _validate_color_args
"Cannot pass 'style' string with a color "
ValueError: Cannot pass 'style' string with a color symbol and 'color' keyword argument. Please use one or the other or pass 'style' without a color symbol
| 13,703 |
|||
pandas-dev/pandas | pandas-dev__pandas-33911 | 3ed7dff48bb4e8c7c0129283ff51eccea3a0f861 | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -560,6 +560,7 @@ Datetimelike
- Bug in :meth:`DatetimeIndex.intersection` losing ``freq`` and timezone in some cases (:issue:`33604`)
- Bug in :class:`DatetimeIndex` addition and subtraction with some types of :class:`DateOffset` objects incorrectly retaining an invalid ``freq`` attribute (:issue:`33779`)
- Bug in :class:`DatetimeIndex` where setting the ``freq`` attribute on an index could silently change the ``freq`` attribute on another index viewing the same data (:issue:`33552`)
+- :meth:`DataFrame.min`/:meth:`DataFrame.max` not returning consistent result with :meth:`Series.min`/:meth:`Series.max` when called on objects initialized with empty :func:`pd.to_datetime`
- Bug in :meth:`DatetimeIndex.intersection` and :meth:`TimedeltaIndex.intersection` with results not having the correct ``name`` attribute (:issue:`33904`)
- Bug in :meth:`DatetimeArray.__setitem__`, :meth:`TimedeltaArray.__setitem__`, :meth:`PeriodArray.__setitem__` incorrectly allowing values with ``int64`` dtype to be silently cast (:issue:`33717`)
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -384,8 +384,12 @@ def _na_for_min_count(
else:
assert axis is not None # assertion to make mypy happy
result_shape = values.shape[:axis] + values.shape[axis + 1 :]
- result = np.empty(result_shape, dtype=values.dtype)
- result.fill(fill_value)
+ # calling np.full with dtype parameter throws an ValueError when called
+ # with dtype=np.datetime64 and and fill_value=pd.NaT
+ try:
+ result = np.full(result_shape, fill_value, dtype=values.dtype)
+ except ValueError:
+ result = np.full(result_shape, fill_value)
return result
| BUG: min/max of empty datetime dataframe raises
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
#### Code Sample, a copy-pastable example
```python
import pandas as pd
df = pd.DataFrame(dict(x=pd.to_datetime([])))
df.max()
```
```python-traceback
Traceback (most recent call last):
File "<ipython-input-17-be9940feb663>", line 1, in <module>
df.max()
File "pandas/core/generic.py", line 11215, in stat_func
f, name, axis=axis, skipna=skipna, numeric_only=numeric_only
File "pandas/core/frame.py", line 7907, in _reduce
result = f(values)
File "pandas/core/frame.py", line 7865, in f
return op(x, axis=axis, skipna=skipna, **kwds)
File "pandas/core/nanops.py", line 109, in f
return _na_for_min_count(values, axis)
File "pandas/core/nanops.py", line 392, in _na_for_min_count
result.fill(fill_value)
ValueError: cannot convert float NaN to integer
```
#### Problem description
When taking the min/max of an empty datetime dataframe, a ValueError is raised. This is surprising, and inconsistent with the case of an empty datetime series, where min/max return NaT.
#### Expected Output
```python
x NaT
dtype: datetime64[ns]
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.7.final.0
python-bits : 64
OS : Linux
OS-release : 4.20.11-100.fc28.x86_64
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_GB.UTF-8
LOCALE : en_GB.UTF-8
pandas : 1.0.3
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 46.1.3.post20200330
Cython : 0.29.15
pytest : 5.4.1
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.5.0
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.1
IPython : 7.13.0
pandas_datareader: None
bs4 : 4.9.0
bottleneck : 1.3.2
fastparquet : None
gcsfs : None
lxml.etree : 4.5.0
matplotlib : 3.1.3
numexpr : 2.7.1
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 0.15.1
pytables : None
pytest : 5.4.1
pyxlsb : None
s3fs : None
scipy : 1.2.1
sqlalchemy : 1.3.16
tables : 3.6.1
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None
</details>
| Note the difference here:
```python
In [1]: import pandas as pd
# initialize a series from pd.to_datetime
In [2]: srs = pd.Series(pd.to_datetime([]))
In [3]: srs
Out[3]: Series([], dtype: datetime64[ns])
In [4]: srs.max()
Out[4]: NaT
In [5]: df1 = pd.DataFrame(dict(x=pd.to_datetime([])))
In [6]: df1
Out[6]:
Empty DataFrame
Columns: [x]
Index: []
...
In [8]: df1.max()
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-8-ffa8d8284b6e> in <module>
----> 1 df1.max()
~/programming/python/pandas_tests/pandas/pandas/core/generic.py in stat_func(self, axis, skipna, level, numeric_only, **kwargs)
11172 return self._agg_by_level(name, axis=axis, level=level, skipna=skipna)
11173 #import pdb; pdb.set_trace()()
> 11174 return self._reduce(
11175 func, name=name, axis=axis, skipna=skipna, numeric_only=numeric_only
11176 )
~/programming/python/pandas_tests/pandas/pandas/core/frame.py in _reduce(self, op, name, axis, skipna, numeric_only, filter_type, **kwds)
8377
8378 try:
-> 8379 result = f(values)
8380
8381 except TypeError:
~/programming/python/pandas_tests/pandas/pandas/core/frame.py in f(x)
8296
8297 def f(x):
-> 8298 return op(x, axis=axis, skipna=skipna, **kwds)
8299
8300 def _get_data(axis_matters):
~/programming/python/pandas_tests/pandas/pandas/core/nanops.py in f(values, axis, skipna, **kwds)
111 # correctly handle empty inputs and remove this check.
112 # It *may* just be `var`
--> 113 return _na_for_min_count(values, axis)
114
115 if _USE_BOTTLENECK and skipna and _bn_ok_dtype(values.dtype, bn_name):
~/programming/python/pandas_tests/pandas/pandas/core/nanops.py in _na_for_min_count(values, axis)
386 result_shape = values.shape[:axis] + values.shape[axis + 1 :]
387 result = np.empty(result_shape, dtype=values.dtype)
--> 388 result.fill(fill_value)
389 return result
390
ValueError: cannot convert float NaN to integer
# initializing dataframe with an empty list of pandas datetimes
In [9]: df2 = pd.DataFrame([pd.to_datetime([])])
In [10]: df2
Out[10]:
Empty DataFrame
Columns: []
Index: [0]
In [11]: df2.max()
Out[11]: Series([], dtype: float64)
# when only column x is taken
In [13]: df1.x.max()
Out[13]: NaT
```
This totally seems unexpected, especially `[13]`.
| 2020-05-01T08:41:32Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-17-be9940feb663>", line 1, in <module>
df.max()
File "pandas/core/generic.py", line 11215, in stat_func
f, name, axis=axis, skipna=skipna, numeric_only=numeric_only
File "pandas/core/frame.py", line 7907, in _reduce
result = f(values)
File "pandas/core/frame.py", line 7865, in f
return op(x, axis=axis, skipna=skipna, **kwds)
File "pandas/core/nanops.py", line 109, in f
return _na_for_min_count(values, axis)
File "pandas/core/nanops.py", line 392, in _na_for_min_count
result.fill(fill_value)
ValueError: cannot convert float NaN to integer
| 13,713 |
|||
pandas-dev/pandas | pandas-dev__pandas-33984 | de8ca78921fa6aa003a047a14ffaa03a5f3b86ac | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -758,6 +758,7 @@ ExtensionArray
- Fixed bug where :meth:`Series.value_counts` would raise on empty input of ``Int64`` dtype (:issue:`33317`)
- Fixed bug in :class:`Series` construction with EA dtype and index but no data or scalar data fails (:issue:`26469`)
- Fixed bug that caused :meth:`Series.__repr__()` to crash for extension types whose elements are multidimensional arrays (:issue:`33770`).
+- Fixed bug where :meth:`Series.update` would raise a ``ValueError`` for ``ExtensionArray`` dtypes with missing values (:issue:`33980`)
- Fixed bug where :meth:`StringArray.memory_usage` was not implemented (:issue:`33963`)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1599,7 +1599,7 @@ def putmask(
new_values = self.values if inplace else self.values.copy()
- if isinstance(new, np.ndarray) and len(new) == len(mask):
+ if isinstance(new, (np.ndarray, ExtensionArray)) and len(new) == len(mask):
new = new[mask]
mask = _safe_reshape(mask, new_values.shape)
| BUG: Series.update() raises ValueError if dtype="string"
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
```python
import pandas as pd
a = pd.Series(["a", None, "c"], dtype="string")
b = pd.Series([None, "b", None], dtype="string")
a.update(b)
```
results in:
```python-traceback
Traceback (most recent call last):
File "<ipython-input-15-b9da8f25067a>", line 1, in <module>
a.update(b)
File "C:\tools\anaconda3\envs\Simple\lib\site-packages\pandas\core\series.py", line 2810, in update
self._data = self._data.putmask(mask=mask, new=other, inplace=True)
File "C:\tools\anaconda3\envs\Simple\lib\site-packages\pandas\core\internals\managers.py", line 564, in putmask
return self.apply("putmask", **kwargs)
File "C:\tools\anaconda3\envs\Simple\lib\site-packages\pandas\core\internals\managers.py", line 442, in apply
applied = getattr(b, f)(**kwargs)
File "C:\tools\anaconda3\envs\Simple\lib\site-packages\pandas\core\internals\blocks.py", line 1676, in putmask
new_values[mask] = new
File "C:\tools\anaconda3\envs\Simple\lib\site-packages\pandas\core\arrays\string_.py", line 248, in __setitem__
super().__setitem__(key, value)
File "C:\tools\anaconda3\envs\Simple\lib\site-packages\pandas\core\arrays\numpy_.py", line 252, in __setitem__
self._ndarray[key] = value
ValueError: NumPy boolean array indexing assignment cannot assign 3 input values to the 1 output values where the mask is true
```
#### Problem description
The example works if I leave off the `dtype="string"` (resulting in the implicit dtype `object`).
IMO update should work for all dtypes, not only the "old" ones.
`a = pd.Series([1, None, 3], dtype="Int16")` etc. also raises ValueError, while the same with `dtype="float64"` works.
It looks as if update doesn't work with the new nullable dtypes (the ones with `pd.NA`).
#### Expected Output
The expected result is that `a.update(b)` updates `a` without raising an exception, not only for `object` and `float64`, but also for `string` and `Int16` etc..
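The one-line fix for this bug boils down to filtering the replacement values through the mask before assignment, so that the number of values matches the number of `True` positions; with plain NumPy object arrays (standing in for the extension array) the pattern looks like:

```python
import numpy as np

values = np.array(["a", None, "c"], dtype=object)
new = np.array([None, "b", None], dtype=object)
mask = np.array([x is not None for x in new])  # where `new` has real values

values[mask] = new[mask]  # filter `new` first, so the lengths line up
```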
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.7.final.0
python-bits : 64
OS : Windows
OS-release : 10
machine : AMD64
processor : Intel64 Family 6 ..., GenuineIntel
...
pandas : 1.0.3
numpy : 1.18.1
...
</details>
| 2020-05-05T00:07:40Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-15-b9da8f25067a>", line 1, in <module>
a.update(b)
File "C:\tools\anaconda3\envs\Simple\lib\site-packages\pandas\core\series.py", line 2810, in update
self._data = self._data.putmask(mask=mask, new=other, inplace=True)
File "C:\tools\anaconda3\envs\Simple\lib\site-packages\pandas\core\internals\managers.py", line 564, in putmask
return self.apply("putmask", **kwargs)
File "C:\tools\anaconda3\envs\Simple\lib\site-packages\pandas\core\internals\managers.py", line 442, in apply
applied = getattr(b, f)(**kwargs)
File "C:\tools\anaconda3\envs\Simple\lib\site-packages\pandas\core\internals\blocks.py", line 1676, in putmask
new_values[mask] = new
File "C:\tools\anaconda3\envs\Simple\lib\site-packages\pandas\core\arrays\string_.py", line 248, in __setitem__
super().__setitem__(key, value)
File "C:\tools\anaconda3\envs\Simple\lib\site-packages\pandas\core\arrays\numpy_.py", line 252, in __setitem__
self._ndarray[key] = value
ValueError: NumPy boolean array indexing assignment cannot assign 3 input values to the 1 output values where the mask is true
| 13,720 |
||||
pandas-dev/pandas | pandas-dev__pandas-34175 | 507cb1548d36bbf48c3084a78d59af2fed78a9d1 | BUG: PeriodIndex comparisons break on listlike
```
idx = pd.period_range('2016', periods=3, freq='M')
>>> idx == idx.values
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/pandas/core/indexes/period.py", line 107, in wrapper
other = Period(other, freq=self.freq)
File "pandas/_libs/tslibs/period.pyx", line 1777, in pandas._libs.tslibs.period.Period.__new__
TypeError: unhashable type: 'numpy.ndarray'
```
| This looks to work on master now. Could use a test
```
In [56]: pd.__version__
Out[56]: '1.1.0.dev0+1536.ga7fb88fd5'
In [57]: idx == idx.values
Out[57]: array([ True, True, True])
``` | 2020-05-14T14:43:32Z | [] | [] |
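A regression test along the lines suggested could be as small as (assuming a current pandas where the comparison is fixed):

```python
import numpy as np
import pandas as pd

idx = pd.period_range("2016", periods=3, freq="M")
result = idx == idx.values  # elementwise comparison against the Period ndarray
```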
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/pandas/core/indexes/period.py", line 107, in wrapper
other = Period(other, freq=self.freq)
File "pandas/_libs/tslibs/period.pyx", line 1777, in pandas._libs.tslibs.period.Period.__new__
TypeError: unhashable type: 'numpy.ndarray'
| 13,742 |
||||
pandas-dev/pandas | pandas-dev__pandas-34355 | cb35d8a938c9222d903482d2f66c62fece5a7aae | Concatenating Single-element dense series with SparseArray Series Raises Error
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
import numpy as np
a = pd.Series(pd.SparseArray([1, None]), dtype=np.float)
b = pd.Series([1], dtype=np.float)
pd.concat([a, b])
```
With interpreter output
```python
>>>import pandas as pd
>>>import numpy as np
>>>
>>>
>>> a = pd.Series(pd.SparseArray([1, None]), dtype=np.float)
>>> b = pd.Series([1], dtype=np.float)
>>> pd.concat([a, b], axis=0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/henighan/Documents/henighan-pandas/pandas/core/reshape/concat.py", line 246, in concat
return op.get_result()
File "/Users/henighan/Documents/henighan-pandas/pandas/core/reshape/concat.py", line 426, in get_result
[x._data for x in self.objs], self.new_axes
File "/Users/henighan/Documents/henighan-pandas/pandas/core/internals/managers.py", line 1629, in concat
values = concat_compat(values)
File "/Users/henighan/Documents/henighan-pandas/pandas/core/dtypes/concat.py", line 117, in concat_compat
return _concat_sparse(to_concat, axis=axis, typs=typs)
File "/Users/henighan/Documents/henighan-pandas/pandas/core/dtypes/concat.py", line 478, in _concat_sparse
for x in to_concat
File "/Users/henighan/Documents/henighan-pandas/pandas/core/dtypes/concat.py", line 478, in <listcomp>
for x in to_concat
File "/Users/henighan/Documents/henighan-pandas/pandas/core/arrays/sparse/array.py", line 361, in __init__
data, kind=kind, fill_value=fill_value, dtype=dtype
File "/Users/henighan/Documents/henighan-pandas/pandas/core/arrays/sparse/array.py", line 1504, in make_sparse
if arr.ndim > 1:
AttributeError: 'float' object has no attribute 'ndim'
```
#### Problem description
When concatenating a series of a sparse type (e.g. `Sparse[float64, nan]` above) with a series of a dense type (e.g. `float64` above) that has only a single element, the above exception is raised.
I believe this may be the result of the `squeeze` happening here:
https://github.com/henighan/pandas/blob/45d8d77f27cf0dbc8cefe932f8fb64f6982b9527/pandas/core/dtypes/concat.py#L477
If I remove this `squeeze`, it resolves the issue for me (yielding the output below), and all tests still pass on my mac when running `./test_fast.sh`. If this seems right to others, I'd be happy to open a pull request.
#### Expected Output
```python
>>> import pandas as pd
>>> import numpy as np
>>>
>>>
>>> a = pd.Series(pd.SparseArray([1, None]), dtype=np.float)
>>> b = pd.Series([1], dtype=np.float)
>>> pd.concat([a, b])
0 1.0
1 NaN
0 1.0
dtype: Sparse[float64, nan]
```
#### Output of ``pd.show_versions()``
<details>
[paste the output of ``pd.show_versions()`` here below this line]
>>> pd.show_versions()
/Users/henighan/Documents/henighan-pandas/pandas/core/index.py:29: FutureWarning: pandas.core.index is deprecated and will be removed in a future version. The public classes are available in the top-level namespace.
FutureWarning,
INSTALLED VERSIONS
------------------
commit : 45d8d77f27cf0dbc8cefe932f8fb64f6982b9527
python : 3.7.6.final.0
python-bits : 64
OS : Darwin
OS-release : 18.7.0
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 0.26.0.dev0+1586.g45d8d77f2
numpy : 1.17.3
pytz : 2019.3
dateutil : 2.8.1
pip : 19.3.1
setuptools : 44.0.0.post20200102
Cython : 0.29.14
pytest : 5.3.2
hypothesis : 5.1.0
sphinx : 2.3.1
blosc : None
feather : None
xlsxwriter : 1.2.7
lxml.etree : 4.4.2
html5lib : 1.0.1
pymysql : None
psycopg2 : None
jinja2 : 2.10.3
IPython : 7.10.1
pandas_datareader: None
bs4 : 4.8.2
bottleneck : 1.3.1
fastparquet : 0.3.2
gcsfs : None
lxml.etree : 4.4.2
matplotlib : 3.1.2
numexpr : 2.7.0
odfpy : None
openpyxl : 3.0.1
pandas_gbq : None
pyarrow : 0.15.1
pytables : None
pytest : 5.3.2
s3fs : 0.4.0
scipy : 1.4.1
sqlalchemy : 1.3.12
tables : 3.6.1
tabulate : 0.8.6
xarray : 0.14.1
xlrd : 1.2.0
xlwt : 1.3.0
xlsxwriter : 1.2.7
numba : 0.46.0
</details>
| Thanks @henighan for the report.
This looks to be fixed on master.
```
>>> import numpy as np
>>> import pandas as pd
>>>
>>> pd.__version__
'1.1.0.dev0+1631.g42a5c1c1aa'
>>>
>>> a = pd.Series(pd.arrays.SparseArray([1, None]), dtype=np.float)
>>> a
0 1.0
1 NaN
dtype: Sparse[float64, nan]
>>>
>>> b = pd.Series([1], dtype=np.float)
>>> b
0 1.0
dtype: float64
>>>
>>> pd.concat([a, b])
0 1.0
1 NaN
0 1.0
dtype: Sparse[float64, nan]
>>>
>>> pd.concat([a, b], axis=0)
0 1.0
1 NaN
0 1.0
dtype: Sparse[float64, nan]
>>>
```
just need a test to prevent regression and close this issue.
PRs or further investigation welcome.
take | 2020-05-24T16:49:09Z | [] | [] |
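A minimal regression test for this case might look like (using the public `pd.arrays.SparseArray`, as in the snippet above):

```python
import pandas as pd

a = pd.Series(pd.arrays.SparseArray([1.0, None]))
b = pd.Series([1.0])
result = pd.concat([a, b])  # should keep the sparse dtype and not raise
```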
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/henighan/Documents/henighan-pandas/pandas/core/reshape/concat.py", line 246, in concat
return op.get_result()
File "/Users/henighan/Documents/henighan-pandas/pandas/core/reshape/concat.py", line 426, in get_result
[x._data for x in self.objs], self.new_axes
File "/Users/henighan/Documents/henighan-pandas/pandas/core/internals/managers.py", line 1629, in concat
values = concat_compat(values)
File "/Users/henighan/Documents/henighan-pandas/pandas/core/dtypes/concat.py", line 117, in concat_compat
return _concat_sparse(to_concat, axis=axis, typs=typs)
File "/Users/henighan/Documents/henighan-pandas/pandas/core/dtypes/concat.py", line 478, in _concat_sparse
for x in to_concat
File "/Users/henighan/Documents/henighan-pandas/pandas/core/dtypes/concat.py", line 478, in <listcomp>
for x in to_concat
File "/Users/henighan/Documents/henighan-pandas/pandas/core/arrays/sparse/array.py", line 361, in __init__
data, kind=kind, fill_value=fill_value, dtype=dtype
File "/Users/henighan/Documents/henighan-pandas/pandas/core/arrays/sparse/array.py", line 1504, in make_sparse
if arr.ndim > 1:
AttributeError: 'float' object has no attribute 'ndim'
| 13,769 |
||||
pandas-dev/pandas | pandas-dev__pandas-34363 | 470dfc6dda94adb43071b52ee506073c2d313f51 | BUG: DataFrame.isin fails when other is a categorical series
- [ ] I have checked that this issue has not already been reported.
- [ ] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
```python
import pandas as pd
print(pd.__version__)
x = pd.DataFrame.from_dict({'a':[1,2,3], 'b':[4,5,6]})
y = pd.DataFrame({'a':[1,2,3]}, dtype='category')
print(x.isin(y))
y = pd.Series([1,2,3]).astype('category')
print(x.isin(y))
```

Output (pandas 1.0.3):

```
1.0.3
a b
0 True False
1 True False
2 True False
Traceback (most recent call last):
File "/home/brmiller/repro.py", line 9, in <module>
print(x.isin(y))
File "/home/brmiller/anaconda3/envs/pandas1/lib/python3.7/site-packages/pandas/core/frame.py", line 8423, in isin
return self.eq(values.reindex_like(self), axis="index")
File "/home/brmiller/anaconda3/envs/pandas1/lib/python3.7/site-packages/pandas/core/ops/__init__.py", line 814, in f
self, other, op, fill_value=None, axis=axis, level=level
File "/home/brmiller/anaconda3/envs/pandas1/lib/python3.7/site-packages/pandas/core/ops/__init__.py", line 618, in _combine_series_frame
new_data = left._combine_match_index(right, func)
File "/home/brmiller/anaconda3/envs/pandas1/lib/python3.7/site-packages/pandas/core/frame.py", line 5317, in _combine_match_index
new_data = func(self.values.T, other.values).T
File "/home/brmiller/anaconda3/envs/pandas1/lib/python3.7/site-packages/pandas/core/ops/common.py", line 64, in new_method
return method(self, other)
File "/home/brmiller/anaconda3/envs/pandas1/lib/python3.7/site-packages/pandas/core/arrays/categorical.py", line 72, in func
raise ValueError("Lengths must match.")
ValueError: Lengths must match.
```
#### Problem description
This operation previously worked in pandas 0.25.3 and gave the same result as the case when other is a single column DataFrame.
#### Expected Output
```
a b
0 True False
1 True False
2 True False
a b
0 True False
1 True False
2 True False
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.6.final.0
python-bits : 64
OS : Linux
OS-release : 4.15.0-76-generic
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.0.3
numpy : 1.18.4
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1
setuptools : 46.4.0.post20200518
Cython : 0.29.17
pytest : 5.4.2
hypothesis : 5.14.0
sphinx : 3.0.3
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.14.0
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 0.15.0
pytables : None
pytest : 5.4.2
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : 0.49.1
</details>
| Seems to be fixed on master. Not sure if we have a test for it though.
Can I work on this?
@saiajay5674 PRs are welcome, see the [contributing guide](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html) for how to get started
Perhaps check first to see if there's already a test for this issue, and if there's not, you could add one
@MarcoGorelli We can create Pytest test cases, right?
> @MarcoGorelli We can create Pytest test cases, right?
Yes, that's the testing framework pandas uses. If you'd like to work on this, please comment 'take' so the issue is assigned to you
Take | 2020-05-25T08:42:07Z | [] | [] |
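Following the discussion above, a minimal regression test could look like this (sketch; the test name is hypothetical and only public API is used). Note that when `values` is a `Series`, `DataFrame.isin` aligns on the index, which is why column `b` comes out all `False`:

```python
import pandas as pd

def test_isin_category_series():
    df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
    ser = pd.Series([1, 2, 3], dtype="category")
    result = df.isin(ser)
    expected = pd.DataFrame(
        {"a": [True, True, True], "b": [False, False, False]}
    )
    assert result.equals(expected)

test_isin_category_series()
```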
Traceback (most recent call last):
File "/home/brmiller/repro.py", line 9, in <module>
print(x.isin(y))
File "/home/brmiller/anaconda3/envs/pandas1/lib/python3.7/site-packages/pandas/core/frame.py", line 8423, in isin
return self.eq(values.reindex_like(self), axis="index")
File "/home/brmiller/anaconda3/envs/pandas1/lib/python3.7/site-packages/pandas/core/ops/__init__.py", line 814, in f
self, other, op, fill_value=None, axis=axis, level=level
File "/home/brmiller/anaconda3/envs/pandas1/lib/python3.7/site-packages/pandas/core/ops/__init__.py", line 618, in _combine_series_frame
new_data = left._combine_match_index(right, func)
File "/home/brmiller/anaconda3/envs/pandas1/lib/python3.7/site-packages/pandas/core/frame.py", line 5317, in _combine_match_index
new_data = func(self.values.T, other.values).T
File "/home/brmiller/anaconda3/envs/pandas1/lib/python3.7/site-packages/pandas/core/ops/common.py", line 64, in new_method
return method(self, other)
File "/home/brmiller/anaconda3/envs/pandas1/lib/python3.7/site-packages/pandas/core/arrays/categorical.py", line 72, in func
raise ValueError("Lengths must match.")
ValueError: Lengths must match.
| 13,770 |
||||
pandas-dev/pandas | pandas-dev__pandas-34414 | 89b3d6b201b5d429a202b5239054d5a70c8b5071 | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -293,6 +293,7 @@ Groupby/resample/rolling
Reshaping
^^^^^^^^^
+- Bug in :func:`merge` raising error when performing an inner join with partial index and ``right_index`` when no overlap between indices (:issue:`33814`)
- Bug in :meth:`DataFrame.unstack` with missing levels led to incorrect index names (:issue:`37510`)
- Bug in :func:`concat` incorrectly casting to ``object`` dtype in some cases when one or more of the operands is empty (:issue:`38843`)
-
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -864,9 +864,9 @@ def _maybe_add_join_keys(self, result, left_indexer, right_indexer):
mask_left = left_indexer == -1
mask_right = right_indexer == -1
if mask_left.all():
- key_col = rvals
+ key_col = Index(rvals)
elif right_indexer is not None and mask_right.all():
- key_col = lvals
+ key_col = Index(lvals)
else:
key_col = Index(lvals).where(~mask_left, rvals)
| BUG: Merge between partial index and index fails if result is empty
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
import pandas
pandas.merge(
pandas.DataFrame({'a': [1], 'i': [2]}).set_index(['a', 'i']),
pandas.DataFrame({'i': [1]}).set_index(['i']),
left_on=['i'],
right_index=True,
)
```
#### Problem description
This merge fails with the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.8/site-packages/pandas/core/reshape/merge.py", line 88, in merge
return op.get_result()
File "/usr/lib/python3.8/site-packages/pandas/core/reshape/merge.py", line 668, in get_result
self._maybe_add_join_keys(result, left_indexer, right_indexer)
File "/usr/lib/python3.8/site-packages/pandas/core/reshape/merge.py", line 824, in _maybe_add_join_keys
key_col.name = name
AttributeError: 'numpy.ndarray' object has no attribute 'name'
```
Thing is, it works fine if I change `i` in the left DataFrame from `[2]` to `[1]`. It only fails if there's no overlap between the join keys.
This behavior is problematic because the failure depends on the outcome of the merge, so it's difficult to avoid.
#### Expected Output
I expect this to not fail and just return an empty result.
#### Output of ``pd.show_versions()``
<details>
```
>>> pandas.show_versions()
INSTALLED VERSIONS
------------------
commit : None
python : 3.8.2.final.0
python-bits : 64
OS : Linux
OS-release : 5.6.2-arch1-2
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_FYL.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.0.3
numpy : 1.18.2
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 46.1.3
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.5.0
html5lib : 1.0.1
pymysql : None
psycopg2 : None
jinja2 : 2.11.1
IPython : 7.13.0
pandas_datareader: None
bs4 : 4.8.2
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.5.0
matplotlib : 3.2.1
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : 1.3.15
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None
```
</details>
| 2020-05-27T18:35:11Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.8/site-packages/pandas/core/reshape/merge.py", line 88, in merge
return op.get_result()
File "/usr/lib/python3.8/site-packages/pandas/core/reshape/merge.py", line 668, in get_result
self._maybe_add_join_keys(result, left_indexer, right_indexer)
File "/usr/lib/python3.8/site-packages/pandas/core/reshape/merge.py", line 824, in _maybe_add_join_keys
key_col.name = name
AttributeError: 'numpy.ndarray' object has no attribute 'name'
| 13,778 |
||||
pandas-dev/pandas | pandas-dev__pandas-34473 | 6a6faf596bc1e3bf4078d4837f654d0f2f754820 | diff --git a/asv_bench/benchmarks/io/json.py b/asv_bench/benchmarks/io/json.py
--- a/asv_bench/benchmarks/io/json.py
+++ b/asv_bench/benchmarks/io/json.py
@@ -1,3 +1,5 @@
+import sys
+
import numpy as np
from pandas import DataFrame, concat, date_range, read_json, timedelta_range
@@ -82,6 +84,7 @@ def setup(self, orient, frame):
timedeltas = timedelta_range(start=1, periods=N, freq="s")
datetimes = date_range(start=1, periods=N, freq="s")
ints = np.random.randint(100000000, size=N)
+ longints = sys.maxsize * np.random.randint(100000000, size=N)
floats = np.random.randn(N)
strings = tm.makeStringIndex(N)
self.df = DataFrame(np.random.randn(N, ncols), index=np.arange(N))
@@ -120,6 +123,18 @@ def setup(self, orient, frame):
index=index,
)
+ self.df_longint_float_str = DataFrame(
+ {
+ "longint_1": longints,
+ "longint_2": longints,
+ "float_1": floats,
+ "float_2": floats,
+ "str_1": strings,
+ "str_2": strings,
+ },
+ index=index,
+ )
+
def time_to_json(self, orient, frame):
getattr(self, frame).to_json(self.fname, orient=orient)
@@ -172,6 +187,7 @@ def setup(self):
timedeltas = timedelta_range(start=1, periods=N, freq="s")
datetimes = date_range(start=1, periods=N, freq="s")
ints = np.random.randint(100000000, size=N)
+ longints = sys.maxsize * np.random.randint(100000000, size=N)
floats = np.random.randn(N)
strings = tm.makeStringIndex(N)
self.df = DataFrame(np.random.randn(N, ncols), index=np.arange(N))
@@ -209,6 +225,17 @@ def setup(self):
},
index=index,
)
+ self.df_longint_float_str = DataFrame(
+ {
+ "longint_1": longints,
+ "longint_2": longints,
+ "float_1": floats,
+ "float_2": floats,
+ "str_1": strings,
+ "str_2": strings,
+ },
+ index=index,
+ )
def time_floats_with_int_idex_lines(self):
self.df.to_json(self.fname, orient="records", lines=True)
@@ -225,6 +252,9 @@ def time_float_int_lines(self):
def time_float_int_str_lines(self):
self.df_int_float_str.to_json(self.fname, orient="records", lines=True)
+ def time_float_longint_str_lines(self):
+ self.df_longint_float_str.to_json(self.fname, orient="records", lines=True)
+
class ToJSONMem:
def setup_cache(self):
diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -1020,6 +1020,7 @@ I/O
- Bug in :meth:`~pandas.io.stata.StataReader` which resulted in categorical variables with difference dtypes when reading data using an iterator. (:issue:`31544`)
- :meth:`HDFStore.keys` has now an optional `include` parameter that allows the retrieval of all native HDF5 table names (:issue:`29916`)
- Bug in :meth:`read_excel` for ODS files removes 0.0 values (:issue:`27222`)
+- Bug in :meth:`ujson.encode` was raising an `OverflowError` with numbers larger than sys.maxsize (:issue: `34395`)
Plotting
^^^^^^^^
diff --git a/pandas/_libs/src/ujson/lib/ultrajson.h b/pandas/_libs/src/ujson/lib/ultrajson.h
--- a/pandas/_libs/src/ujson/lib/ultrajson.h
+++ b/pandas/_libs/src/ujson/lib/ultrajson.h
@@ -150,6 +150,7 @@ enum JSTYPES {
JT_INT, // (JSINT32 (signed 32-bit))
JT_LONG, // (JSINT64 (signed 64-bit))
JT_DOUBLE, // (double)
+ JT_BIGNUM, // integer larger than sys.maxsize
JT_UTF8, // (char 8-bit)
JT_ARRAY, // Array structure
JT_OBJECT, // Key/Value structure
@@ -187,6 +188,8 @@ typedef struct __JSONObjectEncoder {
JSINT64 (*getLongValue)(JSOBJ obj, JSONTypeContext *tc);
JSINT32 (*getIntValue)(JSOBJ obj, JSONTypeContext *tc);
double (*getDoubleValue)(JSOBJ obj, JSONTypeContext *tc);
+ const char *(*getBigNumStringValue)(JSOBJ obj, JSONTypeContext *tc,
+ size_t *_outLen);
/*
Begin iteration of an iteratable object (JS_ARRAY or JS_OBJECT)
diff --git a/pandas/_libs/src/ujson/lib/ultrajsonenc.c b/pandas/_libs/src/ujson/lib/ultrajsonenc.c
--- a/pandas/_libs/src/ujson/lib/ultrajsonenc.c
+++ b/pandas/_libs/src/ujson/lib/ultrajsonenc.c
@@ -1107,6 +1107,35 @@ void encode(JSOBJ obj, JSONObjectEncoder *enc, const char *name,
Buffer_AppendCharUnchecked(enc, '\"');
break;
}
+
+ case JT_BIGNUM: {
+ value = enc->getBigNumStringValue(obj, &tc, &szlen);
+
+ Buffer_Reserve(enc, RESERVE_STRING(szlen));
+ if (enc->errorMsg) {
+ enc->endTypeContext(obj, &tc);
+ return;
+ }
+
+ if (enc->forceASCII) {
+ if (!Buffer_EscapeStringValidated(obj, enc, value,
+ value + szlen)) {
+ enc->endTypeContext(obj, &tc);
+ enc->level--;
+ return;
+ }
+ } else {
+ if (!Buffer_EscapeStringUnvalidated(enc, value,
+ value + szlen)) {
+ enc->endTypeContext(obj, &tc);
+ enc->level--;
+ return;
+ }
+ }
+
+ break;
+
+ }
}
enc->endTypeContext(obj, &tc);
diff --git a/pandas/_libs/src/ujson/python/objToJSON.c b/pandas/_libs/src/ujson/python/objToJSON.c
--- a/pandas/_libs/src/ujson/python/objToJSON.c
+++ b/pandas/_libs/src/ujson/python/objToJSON.c
@@ -1629,15 +1629,20 @@ void Object_beginTypeContext(JSOBJ _obj, JSONTypeContext *tc) {
if (PyLong_Check(obj)) {
PRINTMARK();
tc->type = JT_LONG;
- GET_TC(tc)->longValue = PyLong_AsLongLong(obj);
+ int overflow = 0;
+ GET_TC(tc)->longValue = PyLong_AsLongLongAndOverflow(obj, &overflow);
+ int err;
+ err = (GET_TC(tc)->longValue == -1) && PyErr_Occurred();
- exc = PyErr_Occurred();
-
- if (exc && PyErr_ExceptionMatches(PyExc_OverflowError)) {
+ if (overflow){
+ PRINTMARK();
+ tc->type = JT_BIGNUM;
+ }
+ else if (err) {
PRINTMARK();
goto INVALID;
}
-
+
return;
} else if (PyFloat_Check(obj)) {
PRINTMARK();
@@ -2105,7 +2110,6 @@ void Object_endTypeContext(JSOBJ Py_UNUSED(obj), JSONTypeContext *tc) {
NpyArr_freeLabels(GET_TC(tc)->columnLabels,
GET_TC(tc)->columnLabelsLen);
GET_TC(tc)->columnLabels = NULL;
-
PyObject_Free(GET_TC(tc)->cStr);
GET_TC(tc)->cStr = NULL;
PyObject_Free(tc->prv);
@@ -2126,6 +2130,19 @@ double Object_getDoubleValue(JSOBJ Py_UNUSED(obj), JSONTypeContext *tc) {
return GET_TC(tc)->doubleValue;
}
+const char *Object_getBigNumStringValue(JSOBJ obj, JSONTypeContext *tc,
+ size_t *_outLen) {
+ PyObject* repr = PyObject_Str(obj);
+ const char *str = PyUnicode_AsUTF8AndSize(repr, (Py_ssize_t *) _outLen);
+ char* bytes = PyObject_Malloc(*_outLen + 1);
+ memcpy(bytes, str, *_outLen + 1);
+ GET_TC(tc)->cStr = bytes;
+
+ Py_DECREF(repr);
+
+ return GET_TC(tc)->cStr;
+}
+
static void Object_releaseObject(JSOBJ _obj) { Py_DECREF((PyObject *)_obj); }
void Object_iterBegin(JSOBJ obj, JSONTypeContext *tc) {
@@ -2181,6 +2198,7 @@ PyObject *objToJSON(PyObject *Py_UNUSED(self), PyObject *args,
Object_getLongValue,
NULL, // getIntValue is unused
Object_getDoubleValue,
+ Object_getBigNumStringValue,
Object_iterBegin,
Object_iterNext,
Object_iterEnd,
@@ -2294,7 +2312,6 @@ PyObject *objToJSON(PyObject *Py_UNUSED(self), PyObject *args,
if (ret != buffer) {
encoder->free(ret);
}
-
PyErr_Format(PyExc_OverflowError, "%s", encoder->errorMsg);
return NULL;
}
| BUG: OverflowError on to_json with numbers larger than sys.maxsize
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
```python
import sys
from pandas.io.json import dumps
dumps(sys.maxsize)
dumps(sys.maxsize + 1)
```
#### Problem description
The Pandas JSON dumper doesn't seem to handle number values larger than `sys.maxsize` (a word). I have a dataframe that I'm trying to write to_json, but it's failing with `OverflowError: int too big to convert`. There are some numbers larger than `9223372036854775807` in it.
Passing a `default_handler` doesn't help. It doesn't get called for the error.
```python
>>> dumps(sys.maxsize)
'9223372036854775807'
>>> dumps(sys.maxsize + 1, default_handler=str)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: int too big to convert
```
#### Expected Output
Python's built-in json module handles large numbers without issues.
```python
>>> import json
>>> json.dumps(sys.maxsize)
'9223372036854775807'
>>> json.dumps(sys.maxsize+1)
'9223372036854775808'
```
I expect Pandas to be able to output large numbers to JSON. An option to use the built-in `json` module instead of `ujson` would be fine.
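With the fix in place, a sanity check through the public `to_json` path might look like this (sketch; the assertion only checks that the full integer text survives serialization, without pinning down the exact JSON layout):

```python
import sys
import pandas as pd

big = sys.maxsize + 1          # does not fit in a signed 64-bit integer
df = pd.DataFrame({"v": [big]})
out = df.to_json(orient="records")
assert str(big) in out         # no OverflowError, full value serialized
```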
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.8.3.final.0
python-bits : 64
OS : Linux
OS-release : 4.19.76-linuxkit
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.0.3
numpy : 1.18.2
pytz : 2019.3
dateutil : 2.8.1
pip : 20.1.1
setuptools : 46.4.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 0.16.0
pytables : None
pytest : None
pyxlsb : None
s3fs : 0.4.2
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None
</details>
| I checked that this bug exists in the master version.
<details><summary><b>Output of pd.show_versions()</b></summary>
<p>
INSTALLED VERSIONS
------------------
commit : 62c7dd3e771d5dc2921212cb363239b8f1447058
python : 3.8.2.final.0
python-bits : 64
OS : Linux
OS-release : 4.15.0-101-generic
Version : #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020
machine : x86_64
processor :
byteorder : little
LC_ALL : C.UTF-8
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.1.0.dev0+1681.g62c7dd3e7
numpy : 1.17.5
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 46.4.0.post20200518
Cython : 0.29.19
pytest : 5.4.2
hypothesis : 5.15.1
sphinx : 3.0.4
blosc : None
feather : None
xlsxwriter : 1.2.8
lxml.etree : 4.5.1
html5lib : 1.0.1
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.14.0
pandas_datareader: None
bs4 : 4.9.1
bottleneck : 1.3.2
fastparquet : 0.4.0
gcsfs : None
matplotlib : 3.2.1
numexpr : 2.7.1
odfpy : None
openpyxl : 3.0.3
pandas_gbq : None
pyarrow : 0.17.1
pytables : None
pyxlsb : None
s3fs : 0.4.2
scipy : 1.4.1
sqlalchemy : 1.3.17
tables : 3.6.1
tabulate : 0.8.7
xarray : 0.15.1
xlrd : 1.2.0
xlwt : 1.3.0
numba : 0.49.1
</p>
</details>
I dug a little and tracked the problem down to the version of dumps specified in `pandas._libs`. The following reproduces the same bug as above:
```python
import sys
import pandas as pd
pd._libs.json.dumps(sys.maxsize)
pd._libs.json.dumps(sys.maxsize + 1)
```
I'm stuck on finding the actual code for `dumps` inside `_libs`. I'm happy to keep going with this, though, if somebody can give me a prod in the right direction!
I think the implementation comes from the embedded version of ultrajson in [pandas/_libs/src/ujson](https://github.com/pandas-dev/pandas/tree/master/pandas/_libs/src/ujson). I'm not sure how it's vendored or gets linked up, though.
Thanks!
It looks to me like the code which does the encoding is in `pandas/_libs/src/ujson/lib/ultrajsonenc.c` and it gets linked up in `pandas/_libs/src/ujson/python/objToJSON.c`.
I guess that there is no way to fix the problem without messing with the ultrajson source code?
In `pandas/io/json/_json.py` `dumps` is defined via a direct call to ultrajson's `dumps` method, so I think to resolve the current bug one has to make changes to ultrajson.
```
import pandas._libs.json as json # line 10
dumps = json.dumps # line 28
```
Related to #20599 this isn’t really feasible to do in the ujson source so would probably have to catch and coerce to a serializable type
@WillAyd Thanks for this!
Reading through that thread it seems like a solution to this issue would be to wrap ultrajson's `dumps` and catch the `OverflowError` inside `pandas/io/json/_json.py`.
So, instead of:
``` python
dumps = json.dumps # line 28
```
we would do something like this:
``` python
def dumps(obj, default_handler=str, **kwargs):
try:
return json.dumps(obj, **kwargs)
except OverflowError:
return json.dumps(default_handler(obj), **kwargs)
```
This fixes the original error. I checked that with this change the code still passes the unit test in `pandas/tests/io/json/test_ujson.py` - so the rewrite doesn't seem to break anything.
I'm happy to keep working on fixing this is this solution isn't quite right!
Once we've settled on the fix, would the next steps be these?
- [ ] add the testcase to `pandas/tests/io/json/test_ujson.py`
- [ ] submit a pull request
Yea if you want to add a test case and submit a pull request we can go from there. Will also want to check the performance benchmarks for JSON which you’ll find more info on here
https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#running-the-performance-test-suite | 2020-05-30T00:40:35Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: int too big to convert
| 13,790 |
|||
pandas-dev/pandas | pandas-dev__pandas-34595 | c71bfc36211b5e2d860a06d8fbef902b757bd6e4 | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -1003,6 +1003,7 @@ Reshaping
- Bug in :func:`cut` raised an error when non-unique labels (:issue:`33141`)
- Ensure only named functions can be used in :func:`eval()` (:issue:`32460`)
- Fixed bug in :func:`melt` where melting MultiIndex columns with ``col_level`` > 0 would raise a ``KeyError`` on ``id_vars`` (:issue:`34129`)
+- Bug in :meth:`Series.where` with an empty Series and empty ``cond`` having non-bool dtype (:issue:`34592`)
Sparse
^^^^^^
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -8826,16 +8826,17 @@ def _where(
msg = "Boolean array expected for the condition, not {dtype}"
- if not isinstance(cond, ABCDataFrame):
- # This is a single-dimensional object.
- if not is_bool_dtype(cond):
- raise ValueError(msg.format(dtype=cond.dtype))
- elif not cond.empty:
- for dt in cond.dtypes:
- if not is_bool_dtype(dt):
- raise ValueError(msg.format(dtype=dt))
+ if not cond.empty:
+ if not isinstance(cond, ABCDataFrame):
+ # This is a single-dimensional object.
+ if not is_bool_dtype(cond):
+ raise ValueError(msg.format(dtype=cond.dtype))
+ else:
+ for dt in cond.dtypes:
+ if not is_bool_dtype(dt):
+ raise ValueError(msg.format(dtype=dt))
else:
- # GH#21947 we have an empty DataFrame, could be object-dtype
+ # GH#21947 we have an empty DataFrame/Series, could be object-dtype
cond = cond.astype(bool)
cond = -cond if inplace else cond
| BUG: Series.where doesn't work with empty lists
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas. (Checked 1.0.3)
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
```python
>>> pd.Series([], dtype=float).where([])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/nix/store/vqd2pmsja84h36wz6c878bgbpz46cy2h-python3.7-pandas-1.0.3/lib/python3.7/site-packages/pandas/core/generic.py", line 8919, in where
cond, other, inplace, axis, level, errors=errors, try_cast=try_cast
File "/nix/store/vqd2pmsja84h36wz6c878bgbpz46cy2h-python3.7-pandas-1.0.3/lib/python3.7/site-packages/pandas/core/generic.py", line 8673, in _where
raise ValueError(msg.format(dtype=cond.dtype))
ValueError: Boolean array expected for the condition, not float64
```
For comparison, if the list isn't empty, it works:
```python
>>> pd.Series([42], dtype=float).where([True])
0    42.0
dtype: float64
```
#### Problem description
If `where` accepts lists as its mask argument, it should work for empty lists too. Either don't support lists at all or support empty lists too.
#### Expected Output
```
>>> pd.Series([], dtype=float).where([])
Series([], dtype: float64)
```
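After the fix, the empty case goes through cleanly. A quick check (sketch):

```python
import pandas as pd

# an empty list-like cond no longer raises on an empty Series
result = pd.Series([], dtype=float).where([])
assert result.empty
assert result.dtype == "float64"
```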
#### Output of ``pd.show_versions()``
<details>
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.7.final.0
python-bits : 64
OS : Linux
OS-release : 5.4.41
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.0.3
numpy : 1.18.3
pytz : 2019.3
dateutil : 2.8.1
pip : None
setuptools : None
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.5.0
html5lib : 1.0.1
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : 4.8.2
bottleneck : 1.3.1
fastparquet : None
gcsfs : None
lxml.etree : 4.5.0
matplotlib : None
numexpr : 2.7.1
odfpy : None
openpyxl : 3.0.2
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : 1.3.13
tables : None
tabulate : None
xarray : None
xlrd : 1.2.0
xlwt : 1.3.0
xlsxwriter : None
numba : None
</details>
| 2020-06-05T10:13:34Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/nix/store/vqd2pmsja84h36wz6c878bgbpz46cy2h-python3.7-pandas-1.0.3/lib/python3.7/site-packages/pandas/core/generic.py", line 8919, in where
cond, other, inplace, axis, level, errors=errors, try_cast=try_cast
File "/nix/store/vqd2pmsja84h36wz6c878bgbpz46cy2h-python3.7-pandas-1.0.3/lib/python3.7/site-packages/pandas/core/generic.py", line 8673, in _where
raise ValueError(msg.format(dtype=cond.dtype))
ValueError: Boolean array expected for the condition, not float64
| 13,806 |
||||
pandas-dev/pandas | pandas-dev__pandas-34709 | f30aeefec8e6f0819b9e7c2f8375f8f0f7b21086 | diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py
--- a/pandas/core/reshape/reshape.py
+++ b/pandas/core/reshape/reshape.py
@@ -41,8 +41,7 @@ class _Unstacker:
Parameters
----------
- index : object
- Pandas ``Index``
+ index : MultiIndex
level : int or str, default last level
Level to "unstack". Accepts a name for the level.
fill_value : scalar, optional
@@ -83,7 +82,7 @@ class _Unstacker:
"""
def __init__(
- self, index, level=-1, constructor=None,
+ self, index: MultiIndex, level=-1, constructor=None,
):
if constructor is None:
@@ -415,7 +414,7 @@ def unstack(obj, level, fill_value=None):
level = obj.index._get_level_number(level)
if isinstance(obj, DataFrame):
- if isinstance(obj.index, MultiIndex) or not obj._can_fast_transpose:
+ if isinstance(obj.index, MultiIndex):
return _unstack_frame(obj, level, fill_value=fill_value)
else:
return obj.T.stack(dropna=False)
| BUG: DataFrame.unstack on non-consolidated frame
```
df = pd.DataFrame({"x": [1, 2, np.NaN], "y": [3.0, 4, np.NaN]})
df2 = df[["x"]]
df2["y"] = df["y"]
assert len(df2._mgr.blocks) == 2
>>> df2.unstack()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/core/frame.py", line 7031, in unstack
return unstack(self, level, fill_value)
File "pandas/core/reshape/reshape.py", line 419, in unstack
return _unstack_frame(obj, level, fill_value=fill_value)
File "pandas/core/reshape/reshape.py", line 435, in _unstack_frame
unstacker = _Unstacker(obj.index, level=level)
File "pandas/core/reshape/reshape.py", line 93, in __init__
self.index = index.remove_unused_levels()
AttributeError: 'RangeIndex' object has no attribute 'remove_unused_levels'
```
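A regression test for this could exercise the non-consolidated frame directly (sketch; `.copy()` is added relative to the report so the column assignment happens on an independent frame, while still leaving two separate blocks in the manager):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": [1, 2, np.nan], "y": [3.0, 4, np.nan]})
df2 = df[["x"]].copy()
df2["y"] = df["y"]          # adds a second block: non-consolidated frame

result = df2.unstack()      # previously raised AttributeError
assert len(result) == 6     # 2 columns x 3 rows stacked into one Series
```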
| 2020-06-11T00:32:04Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/core/frame.py", line 7031, in unstack
return unstack(self, level, fill_value)
File "pandas/core/reshape/reshape.py", line 419, in unstack
return _unstack_frame(obj, level, fill_value=fill_value)
File "pandas/core/reshape/reshape.py", line 435, in _unstack_frame
unstacker = _Unstacker(obj.index, level=level)
File "pandas/core/reshape/reshape.py", line 93, in __init__
self.index = index.remove_unused_levels()
AttributeError: 'RangeIndex' object has no attribute 'remove_unused_levels'
| 13,822 |
||||
pandas-dev/pandas | pandas-dev__pandas-34920 | 7d0ee96f9aedbbcd50781e1b8536f1914ecae984 | BUG: Cannot create empty dataframe having columns with dtype string
- [x] I have checked that this issue has not already been reported.
I have checked, but this requirement is grossly unreasonable since there are currently 3,426 other open issues. It's impossible for anyone to be sure. This requirement should be reworded.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
```python-traceback
>>> import pandas as pd
>>> pd.DataFrame(columns=['c1'], dtype='string')
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/homeuser/PycharmProjects/.venv/myenv/lib/python3.8/site-packages/pandas/core/frame.py", line 435, in __init__
mgr = init_dict(data, index, columns, dtype=dtype)
File "/home/homeuser/PycharmProjects/.venv/myenv/lib/python3.8/site-packages/pandas/core/internals/construction.py", line 234, in init_dict
if dtype is None or np.issubdtype(dtype, np.flexible):
File "/home/homeuser/PycharmProjects/.venv/myenv/lib/python3.8/site-packages/numpy/core/numerictypes.py", line 388, in issubdtype
arg1 = dtype(arg1).type
TypeError: Cannot interpret 'StringDtype' as a data type
```
#### Problem description
These three alternatives work, so there is no reason for this bug to continue to not be fixed:
```python-traceback
>>> pd.DataFrame(columns=['c1'], dtype='str')
Empty DataFrame
Columns: [c1]
Index: []
>>> pd.DataFrame([{}], columns=['c1'], dtype='string')
c1
0 <NA>
>>> pd.DataFrame([{}], columns=['c1'], dtype='string').drop(labels=0)
Empty DataFrame
Columns: [c1]
Index: []
```
#### Expected Output
```python-traceback
Empty DataFrame
Columns: [c1]
Index: []
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.8.3.final.0
python-bits : 64
OS : Linux
OS-release : 4.15.0-1087-oem
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.0.5
numpy : 1.19.0
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 40.8.0
Cython : None
pytest : 5.4.3
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.5.1
html5lib : 1.0.1
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : 4.9.1
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.5.1
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : 5.4.3
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None
</details>
| Appears fixed on master though would welcome a test.
```python
In [1]: import pandas as pd
In [2]: pd.__version__
Out[2]: '1.1.0.dev0+1915.g1e0147774.dirty'
In [3]: pd.DataFrame(columns=['c1'], dtype='string')
Out[3]:
Empty DataFrame
Columns: [c1]
Index: []
```
will add tests | 2020-06-21T11:12:13Z | [] | [] |
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/homeuser/PycharmProjects/.venv/myenv/lib/python3.8/site-packages/pandas/core/frame.py", line 435, in __init__
mgr = init_dict(data, index, columns, dtype=dtype)
File "/home/homeuser/PycharmProjects/.venv/myenv/lib/python3.8/site-packages/pandas/core/internals/construction.py", line 234, in init_dict
if dtype is None or np.issubdtype(dtype, np.flexible):
File "/home/homeuser/PycharmProjects/.venv/myenv/lib/python3.8/site-packages/numpy/core/numerictypes.py", line 388, in issubdtype
arg1 = dtype(arg1).type
TypeError: Cannot interpret 'StringDtype' as a data type
| 13,853 |
||||
pandas-dev/pandas | pandas-dev__pandas-34939 | d85b93df71de72409c3cf409379ff68c52c3c022 | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -1027,6 +1027,7 @@ I/O
- :meth:`HDFStore.keys` has now an optional `include` parameter that allows the retrieval of all native HDF5 table names (:issue:`29916`)
- Bug in :meth:`read_excel` for ODS files removes 0.0 values (:issue:`27222`)
- Bug in :meth:`ujson.encode` was raising an `OverflowError` with numbers larger than sys.maxsize (:issue: `34395`)
+- Bug in :meth:`HDFStore.append_to_multiple` was raising a ``ValueError`` when the min_itemsize parameter is set (:issue:`11238`)
Plotting
^^^^^^^^
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -1303,6 +1303,8 @@ def append_to_multiple(
valid_index = valid_index.intersection(index)
value = value.loc[valid_index]
+ min_itemsize = kwargs.pop("min_itemsize", None)
+
# append
for k, v in d.items():
dc = data_columns if k == selector else None
@@ -1310,7 +1312,12 @@ def append_to_multiple(
# compute the val
val = value.reindex(v, axis=axis)
- self.append(k, val, data_columns=dc, **kwargs)
+ filtered = (
+ {key: value for (key, value) in min_itemsize.items() if key in v}
+ if min_itemsize is not None
+ else None
+ )
+ self.append(k, val, data_columns=dc, min_itemsize=filtered, **kwargs)
def create_table_index(
self,
| column-specific min_itemsize doesn't work with append_to_multiple
The `HDFStore.append_to_multiple` method passes its entire `min_itemsize` argument on to every sub-append. Because not every column appears in every sub-table, it fails when it tries to set a `min_itemsize` for a column while appending to a table that doesn't contain that column.
Simple example:
```
>>> store.append_to_multiple({
... 'index': ["IX"],
... 'nums': ["Num", "BigNum", "RandNum"],
... "strs": ["Str", "LongStr"]
... }, d.iloc[[0]], 'index', min_itemsize={"Str": 10, "LongStr": 100})
Traceback (most recent call last):
File "<pyshell#52>", line 5, in <module>
}, d.iloc[[0]], 'index', min_itemsize={"Str": 10, "LongStr": 100})
File "c:\users\brenbarn\documents\python\extensions\pandas\pandas\io\pytables.py", line 1002, in append_to_multiple
self.append(k, val, data_columns=dc, **kwargs)
File "c:\users\brenbarn\documents\python\extensions\pandas\pandas\io\pytables.py", line 920, in append
**kwargs)
File "c:\users\brenbarn\documents\python\extensions\pandas\pandas\io\pytables.py", line 1265, in _write_to_group
s.write(obj=value, append=append, complib=complib, **kwargs)
File "c:\users\brenbarn\documents\python\extensions\pandas\pandas\io\pytables.py", line 3773, in write
**kwargs)
File "c:\users\brenbarn\documents\python\extensions\pandas\pandas\io\pytables.py", line 3460, in create_axes
self.validate_min_itemsize(min_itemsize)
File "c:\users\brenbarn\documents\python\extensions\pandas\pandas\io\pytables.py", line 3101, in validate_min_itemsize
"data_column" % k)
ValueError: min_itemsize has the key [LongStr] which is not an axis or data_column
```
This apparently means that you can't use `min_itemsize` without manually creating and appending to each separate table beforehand, which is the kind of thing `append_to_multiple` is supposed to shield you from.
I think `append_to_multiple` should special-case `min_itemsize` and split it into separate dicts for each sub-append. I don't know if there are other potential kwargs that need to be "allocated" separately to sub-appends, but if there are it might be good to split them too.
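A hedged, pure-Python sketch of that idea (the table/column names below are just the ones from the example above, not a pandas API): split the single `min_itemsize` dict into one dict per sub-table, keyed by the columns that table actually receives.

```python
# Hedged sketch (not the pandas implementation): split a single
# min_itemsize dict into per-table dicts, keeping only the entries
# whose column actually lives in that sub-table.
tables = {
    "index": ["IX"],
    "nums": ["Num", "BigNum", "RandNum"],
    "strs": ["Str", "LongStr"],
}
min_itemsize = {"Str": 10, "LongStr": 100}

per_table = {
    name: {col: size for col, size in min_itemsize.items() if col in cols}
    for name, cols in tables.items()
}
print(per_table)
# prints {'index': {}, 'nums': {}, 'strs': {'Str': 10, 'LongStr': 100}}
```

Each sub-append would then receive only its own (possibly empty) slice of `min_itemsize`, so no table is asked about a column it doesn't have.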
you can pass a dict for min_itemsize mapping columns to sizes
Yes, that's what I did in my example. The problem is that `append_to_multiple` passes that entire dict to every `append`, so it tries to specify a `min_itemsize` for columns that don't exist in some of the multiple tables being appended to.
I see this may be related to another thing I was just going to raise an issue about (namely that you can't specify data_columns for tables other than the "selector" with `append_to_multiple`).
ahh ok
as an aside, there are a couple of issues for doing a real column store (this is a hybrid, but the API is really not good here)
- interested?
I saw some of those issues but I don't really know anything about those other backends that are being discussed for column stores. I might be able to work something up to make the existing `append_to_multiple` approach a bit more flexible though. See #11239 which I just created about a related limitation.
@BrenBarn can you show a complete copy-pastable example (IOW, show the creation of `d` as well). pls `pd.show_versions()` for the record as well.
`d` is just like this:
```
import pandas
import numpy as np
import string
d = pandas.DataFrame({
"IX": np.arange(1, 21),
"Num": np.arange(1, 21),
"BigNum": np.arange(1, 21)*88,
"RandNum": np.random.randn(20),
"Str": [chr(a) for a in np.random.randint(65, 81, 20)],
"LongStr": [''.join(np.random.choice(list(string.lowercase), 5)) for a in xrange(20)]
})
```
Here is `show_versions()`, although I don't really think most of it is relevant for this issue (I removed a bunch of "None" ones):
```
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.8.final.0
python-bits: 64
OS: Windows
OS-release: 7
machine: AMD64
processor: AMD64 Family 16 Model 4 Stepping 3, AuthenticAMD
byteorder: little
LC_ALL: None
LANG: None
pandas: 0+unknown
nose: 1.3.7
pip: 7.1.2
setuptools: 5.7
Cython: 0.21.1
numpy: 1.9.2
scipy: 0.15.1
statsmodels: 0.6.1
IPython: 2.2.0
sphinx: None
patsy: 0.3.0
dateutil: 2.2
pytz: 2013b
blosc: None
bottleneck: None
tables: 3.1.1
numexpr: 2.4
matplotlib: 1.5.0rc2
```
The pandas version is built from the github repo as of a couple days ago. I think it is revision 50aceeaa652c but I'm not sure because I pull it via hg-git and due to the issue I mentioned [here](https://groups.google.com/forum/#!searchin/pydata/__version__/pydata/4wpRuf-Vtbo/gPe5TMczECEJ) `pandas.__version__` does not appear to be updated correctly if git is not available when building.
Any chance this will be implemented soon?
a community pull request would make this happen | 2020-06-22T21:00:12Z | [] | [] |
Traceback (most recent call last):
File "<pyshell#52>", line 5, in <module>
}, d.iloc[[0]], 'index', min_itemsize={"Str": 10, "LongStr": 100})
File "c:\users\brenbarn\documents\python\extensions\pandas\pandas\io\pytables.py", line 1002, in append_to_multiple
self.append(k, val, data_columns=dc, **kwargs)
File "c:\users\brenbarn\documents\python\extensions\pandas\pandas\io\pytables.py", line 920, in append
**kwargs)
File "c:\users\brenbarn\documents\python\extensions\pandas\pandas\io\pytables.py", line 1265, in _write_to_group
s.write(obj=value, append=append, complib=complib, **kwargs)
File "c:\users\brenbarn\documents\python\extensions\pandas\pandas\io\pytables.py", line 3773, in write
**kwargs)
File "c:\users\brenbarn\documents\python\extensions\pandas\pandas\io\pytables.py", line 3460, in create_axes
self.validate_min_itemsize(min_itemsize)
File "c:\users\brenbarn\documents\python\extensions\pandas\pandas\io\pytables.py", line 3101, in validate_min_itemsize
"data_column" % k)
ValueError: min_itemsize has the key [LongStr] which is not an axis or data_column
| 13,856 |
|||
pandas-dev/pandas | pandas-dev__pandas-34954 | 74f77a1f010e99379e58df5f7612743b4b8616d5 | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -1051,6 +1051,7 @@ I/O
- Bug in :meth:`~HDFStore.create_table` now raises an error when `column` argument was not specified in `data_columns` on input (:issue:`28156`)
- :meth:`read_json` now could read line-delimited json file from a file url while `lines` and `chunksize` are set.
- Bug in :meth:`DataFrame.to_sql` when reading DataFrames with ``-np.inf`` entries with MySQL now has a more explicit ``ValueError`` (:issue:`34431`)
+- Bug in "meth"`read_excel` where datetime values are used in the header in a `MultiIndex` (:issue:`34748`)
Plotting
^^^^^^^^
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -1614,7 +1614,7 @@ def extract(r):
# Clean the column names (if we have an index_col).
if len(ic):
col_names = [
- r[0] if (len(r[0]) and r[0] not in self.unnamed_cols) else None
+ r[0] if ((r[0] is not None) and r[0] not in self.unnamed_cols) else None
for r in header
]
else:
| BUG: read_excel issue - 2 (when "column name" is a date/time)
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
```python
import pandas as pd
pd.read_excel('test.xlsx', header=[0, 1], index_col=0, engine='openpyxl')
```
(`engine` could be `xlrd` or `openpyxl`, doesn't matter.)
[test.xlsx](https://github.com/pandas-dev/pandas/files/4774723/test.xlsx)
#### Problem description
Exception:
```
Traceback (most recent call last):
File "test.py", line 3, in <module>
pd.read_excel('test.xlsx', header=[0, 1], index_col=0, engine='xlrd')
File "/home/sasha/miniconda3/lib/python3.7/site-packages/pandas/io/excel/_base.py", line 334, in read_excel
**kwds,
File "/home/sasha/miniconda3/lib/python3.7/site-packages/pandas/io/excel/_base.py", line 888, in parse
**kwds,
File "/home/sasha/miniconda3/lib/python3.7/site-packages/pandas/io/excel/_base.py", line 512, in parse
**kwds,
File "/home/sasha/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 2201, in TextParser
return TextFileReader(*args, **kwds)
File "/home/sasha/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 880, in __init__
self._make_engine(self.engine)
File "/home/sasha/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 1126, in _make_engine
self._engine = klass(self.f, **self.options)
File "/home/sasha/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 2298, in __init__
self.columns, self.index_names, self.col_names
File "/home/sasha/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 1508, in _extract_multi_indexer_columns
for r in header
File "/home/sasha/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 1508, in <listcomp>
for r in header
TypeError: object of type 'datetime.datetime' has no len()
```
#### Expected Output
No exceptions.
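For reference, the failing check in `_extract_multi_indexer_columns` calls `len(r[0])`, which raises for header labels such as `datetime.datetime` that have no length. A small sketch of the `None`-based check used in the fix above (the `unnamed_cols` and `header` values here are made-up stand-ins):

```python
# Hedged sketch: comparing against None instead of calling len() tolerates
# header labels (like datetime.datetime) that do not support len().
import datetime

unnamed_cols = {"Unnamed: 0"}
header = [(datetime.datetime(2014, 1, 1), "x"), (None, "y")]

col_names = [
    r[0] if (r[0] is not None and r[0] not in unnamed_cols) else None
    for r in header
]
```

The datetime label survives intact, while empty/unnamed labels still map to `None`.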
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.7.final.0
python-bits : 64
OS : Linux
OS-release : 5.4.42-calculate
machine : x86_64
processor : Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz
byteorder : little
LC_ALL : None
LANG : ru_RU.utf8
LOCALE : ru_RU.UTF-8
pandas : 1.0.3
numpy : 1.18.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.0.2
setuptools : 46.4.0.post20200518
Cython : 0.29.17
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : 1.2.8
lxml.etree : 4.5.0
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.13.0
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.5.0
matplotlib : 3.1.3
numexpr : None
odfpy : None
openpyxl : 3.0.3
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : 1.2.0
xlwt : None
xlsxwriter : 1.2.8
numba : None
</details>
| take
| 2020-06-23T16:04:14Z | [] | [] |
Traceback (most recent call last):
File "test.py", line 3, in <module>
pd.read_excel('test.xlsx', header=[0, 1], index_col=0, engine='xlrd')
File "/home/sasha/miniconda3/lib/python3.7/site-packages/pandas/io/excel/_base.py", line 334, in read_excel
**kwds,
File "/home/sasha/miniconda3/lib/python3.7/site-packages/pandas/io/excel/_base.py", line 888, in parse
**kwds,
File "/home/sasha/miniconda3/lib/python3.7/site-packages/pandas/io/excel/_base.py", line 512, in parse
**kwds,
File "/home/sasha/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 2201, in TextParser
return TextFileReader(*args, **kwds)
File "/home/sasha/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 880, in __init__
self._make_engine(self.engine)
File "/home/sasha/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 1126, in _make_engine
self._engine = klass(self.f, **self.options)
File "/home/sasha/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 2298, in __init__
self.columns, self.index_names, self.col_names
File "/home/sasha/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 1508, in _extract_multi_indexer_columns
for r in header
File "/home/sasha/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 1508, in <listcomp>
for r in header
TypeError: object of type 'datetime.datetime' has no len()
| 13,862 |
|||
pandas-dev/pandas | pandas-dev__pandas-35411 | ce03883911ec35c3eea89ed828765ec4be415263 | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -134,7 +134,7 @@ Missing
MultiIndex
^^^^^^^^^^
--
+- Bug in :meth:`DataFrame.xs` when used with :class:`IndexSlice` raises ``TypeError`` with message `Expected label or tuple of labels` (:issue:`35301`)
-
I/O
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3492,7 +3492,10 @@ class animal locomotion
index = self.index
if isinstance(index, MultiIndex):
- loc, new_index = self.index.get_loc_level(key, drop_level=drop_level)
+ try:
+ loc, new_index = self.index.get_loc_level(key, drop_level=drop_level)
+ except TypeError as e:
+ raise TypeError(f"Expected label or tuple of labels, got {key}") from e
else:
loc = self.index.get_loc(key)
| BUG: xs not working with slice
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
#### Code Sample, a copy-pastable example
```python
data = """
C1 C2 V
A1 0 10
A1 1 20
A2 0 2
A2 1 3
B1 0 2
B2 1 3
"""
import pandas as pd
from io import StringIO
df = pd.read_csv(StringIO(data), sep=' +').set_index(['C1', 'C2'])
df.xs(pd.IndexSlice['A1', :])
```
#### Problem description
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/***/lib/python3.7/site-packages/pandas/core/generic.py", line 3535, in xs
loc, new_index = self.index.get_loc_level(key, drop_level=drop_level)
File "/home/***/lib/python3.7/site-packages/pandas/core/indexes/multi.py", line 2835, in get_loc_level
raise TypeError(key)
TypeError: ('A1', slice(None, None, None))
```
Also, similar code produces the same problem (`df.xs(('A1', slice(None)))`). Strangely, this works:
```python
df = pd.DataFrame({'a': [1, 2, 3, 1], 'b': ['a', 'b', 'c', 'd'], 'v': [2, 3, 4, 5]}).set_index(['a', 'b'])
df.xs(pd.IndexSlice[1, :])
```
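The patch in this record wraps the low-level `get_loc_level` call and re-raises with a clearer message. A hedged, pandas-free sketch of that pattern (the function names here are placeholders, not pandas internals):

```python
# Hedged sketch of the fix's pattern: re-raise the low-level TypeError
# with a readable message instead of leaking the raw key tuple.
def lookup(get_loc_level, key):
    try:
        return get_loc_level(key)
    except TypeError as e:
        raise TypeError(f"Expected label or tuple of labels, got {key}") from e
```

As a workaround until then, label-based selection such as `df.loc[pd.IndexSlice['A1', :], :]` typically performs the same slice.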
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.7.final.0
python-bits : 64
OS : Linux
OS-release : 5.7.7-100.fc31.x86_64
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : it_IT.UTF-8
LOCALE : it_IT.UTF-8
pandas : 1.0.5
numpy : 1.19.0
pytz : 2019.2
dateutil : 2.7.5
pip : 20.1.1
setuptools : 41.6.0
Cython : 0.29.15
pytest : 4.0.0
hypothesis : None
sphinx : 3.1.1
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.4.0
html5lib : 1.0.1
pymysql : None
psycopg2 : None
jinja2 : 2.10
IPython : 7.16.1
pandas_datareader: 0.8.0
bs4 : 4.7.1
bottleneck : 1.2.1
fastparquet : None
gcsfs : None
lxml.etree : 4.4.0
matplotlib : 3.2.2
numexpr : 2.7.1
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : 4.0.0
pyxlsb : None
s3fs : 0.4.2
scipy : 1.5.1
sqlalchemy : None
tables : 3.5.2
tabulate : 0.8.5
xarray : 0.12.1
xlrd : 1.2.0
xlwt : 1.1.2
xlsxwriter : None
numba : 0.48.0
</details>
| 2020-07-25T14:20:48Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/***/lib/python3.7/site-packages/pandas/core/generic.py", line 3535, in xs
loc, new_index = self.index.get_loc_level(key, drop_level=drop_level)
File "/home/***/lib/python3.7/site-packages/pandas/core/indexes/multi.py", line 2835, in get_loc_level
raise TypeError(key)
TypeError: ('A1', slice(None, None, None))
| 13,930 |
||||
pandas-dev/pandas | pandas-dev__pandas-35510 | a0c8425a5f2b74e8a716defd799c4a3716f66eff | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -15,7 +15,7 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
--
+- Fixed regression where :func:`read_csv` would raise a ``ValueError`` when ``pandas.options.mode.use_inf_as_na`` was set to ``True`` (:issue:`35493`).
-
-
diff --git a/pandas/_libs/missing.pyx b/pandas/_libs/missing.pyx
--- a/pandas/_libs/missing.pyx
+++ b/pandas/_libs/missing.pyx
@@ -155,7 +155,10 @@ def isnaobj_old(arr: ndarray) -> ndarray:
result = np.zeros(n, dtype=np.uint8)
for i in range(n):
val = arr[i]
- result[i] = checknull(val) or val == INF or val == NEGINF
+ result[i] = (
+ checknull(val)
+ or util.is_float_object(val) and (val == INF or val == NEGINF)
+ )
return result.view(np.bool_)
| BUG: use_inf_as_na options raises value error for read_csv
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample to Reproduce Error
```python
import pandas as pd
import numpy as np
pd.set_option('use_inf_as_na', True)
pd.DataFrame({'test_data':[1,3,4,np.nan]}).to_csv('test_data.csv', na_rep='NaN')
pd.read_csv('test_data.csv',sep=',' ,na_values='NaN')
```
Causes `ValueError`:
```python-traceback
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/io/parsers.py", line 686, in read_csv
return _read(filepath_or_buffer, kwds)
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/io/parsers.py", line 458, in _read
data = parser.read(nrows)
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/io/parsers.py", line 1201, in read
df = DataFrame(col_dict, columns=columns, index=index)
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/core/frame.py", line 467, in __init__
mgr = init_dict(data, index, columns, dtype=dtype)
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/core/internals/construction.py", line 250, in init_dict
missing = arrays.isna()
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/core/series.py", line 4795, in isna
return super().isna()
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/core/generic.py", line 7109, in isna
return isna(self).__finalize__(self, method="isna")
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/core/dtypes/missing.py", line 124, in isna
return _isna(obj)
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/core/dtypes/missing.py", line 157, in _isna
return _isna_ndarraylike(obj, inf_as_na=inf_as_na)
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/core/dtypes/missing.py", line 218, in _isna_ndarraylike
result = _isna_string_dtype(values, dtype, inf_as_na=inf_as_na)
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/core/dtypes/missing.py", line 246, in _isna_string_dtype
vec = libmissing.isnaobj_old(values.ravel())
File "pandas/_libs/missing.pyx", line 160, in pandas._libs.missing.isnaobj_old
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
#### Problem description
NaNs are no longer recognized as expected if the pandas option `use_inf_as_na` is set to True. This first occurred after upgrading to pandas 1.1.0.
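The traceback comes from comparing an object-dtype element (which may itself be array-like) against `inf` with `==`, producing an elementwise result that is ambiguous in a boolean context. The patch above guards the comparison with a float check; a hedged pure-Python analogue of that guarded check:

```python
import math

INF = float("inf")
NEGINF = float("-inf")

def isna_old(val):
    # Hedged pure-Python analogue of the fixed check: only compare
    # against +/-inf when the value is actually a float, so non-scalar
    # values never hit an ambiguous elementwise comparison.
    if val is None:
        return True
    if isinstance(val, float):
        return math.isnan(val) or val == INF or val == NEGINF
    return False
```

With the guard in place, list- or array-valued entries simply return `False` instead of raising.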
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : d9fff2792bf16178d4e450fe7384244e50635733
python : 3.8.5.final.0
python-bits : 64
OS : Linux
OS-release : 5.4.0-42-generic
Version : #46-Ubuntu SMP Fri Jul 10 00:24:02 UTC 2020
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.1.0
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2
setuptools : 49.2.0.post20200712
Cython : None
pytest : 6.0.1
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : 1.1
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : 3.3.0
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pyxlsb : None
s3fs : None
scipy : 1.5.2
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
</details>
| Thanks @mhaselsteiner for the report.
> Occurred first after upgrading to pandas 1.1.0
can confirm ok in 1.0.5, so marking as regression.
```
>>> pd.__version__
'1.0.5'
>>>
>>> pd.set_option('use_inf_as_na', True)
>>> df = pd.DataFrame({'test_data':[1,3,4,np.nan]})
>>> data = df.to_csv(na_rep='NaN')
>>> print(data)
,test_data
0,1.0
1,3.0
2,4.0
3,NaN
>>>
>>> from io import StringIO
>>> pd.read_csv(StringIO(data),sep=',' ,na_values='NaN')
Unnamed: 0 test_data
0 0 1.0
1 1 3.0
2 2 4.0
3 3 NaN
>>>
```
this issue starts to occur with #33656 cc @dsaxton
678a9ac7c198513367f6f1180c5fd2bf6bc6949b is the first bad commit
commit 678a9ac7c198513367f6f1180c5fd2bf6bc6949b
Author: Daniel Saxton <2658661+dsaxton@users.noreply.github.com>
Date: Sun May 10 12:12:45 2020 -0500
BUG: Fix StringArray use_inf_as_na bug (#33656)
| 2020-08-01T21:55:55Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/io/parsers.py", line 686, in read_csv
return _read(filepath_or_buffer, kwds)
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/io/parsers.py", line 458, in _read
data = parser.read(nrows)
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/io/parsers.py", line 1201, in read
df = DataFrame(col_dict, columns=columns, index=index)
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/core/frame.py", line 467, in __init__
mgr = init_dict(data, index, columns, dtype=dtype)
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/core/internals/construction.py", line 250, in init_dict
missing = arrays.isna()
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/core/series.py", line 4795, in isna
return super().isna()
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/core/generic.py", line 7109, in isna
return isna(self).__finalize__(self, method="isna")
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/core/dtypes/missing.py", line 124, in isna
return _isna(obj)
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/core/dtypes/missing.py", line 157, in _isna
return _isna_ndarraylike(obj, inf_as_na=inf_as_na)
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/core/dtypes/missing.py", line 218, in _isna_ndarraylike
result = _isna_string_dtype(values, dtype, inf_as_na=inf_as_na)
File "/home/lena/anaconda3/envs/pandas_pip2/lib/python3.8/site-packages/pandas/core/dtypes/missing.py", line 246, in _isna_string_dtype
vec = libmissing.isnaobj_old(values.ravel())
File "pandas/_libs/missing.pyx", line 160, in pandas._libs.missing.isnaobj_old
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
| 13,944 |
|||
pandas-dev/pandas | pandas-dev__pandas-35532 | 3701a9b1bfc3ad3890c2fb1fe1974a4768f6d5f8 | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -59,7 +59,7 @@ Categorical
Datetimelike
^^^^^^^^^^^^
--
+- Bug in :attr:`DatetimeArray.date` where a ``ValueError`` would be raised with a read-only backing array (:issue:`33530`)
-
Timedelta
diff --git a/pandas/_libs/tslibs/tzconversion.pyx b/pandas/_libs/tslibs/tzconversion.pyx
--- a/pandas/_libs/tslibs/tzconversion.pyx
+++ b/pandas/_libs/tslibs/tzconversion.pyx
@@ -410,7 +410,7 @@ cpdef int64_t tz_convert_from_utc_single(int64_t val, tzinfo tz):
return val + deltas[pos]
-def tz_convert_from_utc(int64_t[:] vals, tzinfo tz):
+def tz_convert_from_utc(const int64_t[:] vals, tzinfo tz):
"""
Convert the values (in i8) from UTC to tz
@@ -435,7 +435,7 @@ def tz_convert_from_utc(int64_t[:] vals, tzinfo tz):
@cython.boundscheck(False)
@cython.wraparound(False)
-cdef int64_t[:] _tz_convert_from_utc(int64_t[:] vals, tzinfo tz):
+cdef int64_t[:] _tz_convert_from_utc(const int64_t[:] vals, tzinfo tz):
"""
Convert the given values (in i8) either to UTC or from UTC.
@@ -457,7 +457,7 @@ cdef int64_t[:] _tz_convert_from_utc(int64_t[:] vals, tzinfo tz):
str typ
if is_utc(tz):
- converted = vals
+ converted = vals.copy()
elif is_tzlocal(tz):
converted = np.empty(n, dtype=np.int64)
for i in range(n):
| BUG: "buffer source array is read-only" with tz_convert_from_utc/DatetimeArray.date
- [X] I have checked that this issue has not already been reported.
There are similar issues with the same symptom
- [X] I have confirmed this bug exists on the latest version of pandas.
Tested with Pandas 1.1.0
- [X] (optional) I have confirmed this bug exists on the master branch of pandas.
Tested with bdcc5bffaadb7488474b65554c4e8e96a00aa4af
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
import pyarrow as pa
import pandas as pd
import pytz
print("PyArrow:", pa.__version__)
print("Pandas:", pd.__version__)
table = pa.table([
pa.array([
pd.Timestamp('2014-01-01'),
], type=pa.timestamp("ns"))
], names=["time"])
df = table.to_pandas(self_destruct=True, split_blocks=True)
df.set_index("time", inplace=True)
df.index = df.index.tz_localize(pytz.utc)
# These are OK
print(df.index.date)
print(df.index.tz_convert("America/New_York"))
# But not this
print(df.index.tz_convert("America/New_York").date)
```
#### Problem description
The reproduction results in this exception:
```python
Traceback (most recent call last):
File "repro.py", line 18, in <module>
print(df.index.tz_convert("America/New_York").date)
File "/home/lidavidm/Code/twosigma/pandas/temp/venv/lib/python3.8/site-packages/pandas/core/indexes/extension.py", line 54, in fget
result = getattr(self._data, name)
File "/home/lidavidm/Code/twosigma/pandas/temp/venv/lib/python3.8/site-packages/pandas/core/arrays/datetimes.py", line 1246, in date
timestamps = self._local_timestamps()
File "/home/lidavidm/Code/twosigma/pandas/temp/venv/lib/python3.8/site-packages/pandas/core/arrays/datetimes.py", line 731, in _local_timestamps
return tzconversion.tz_convert_from_utc(self.asi8, self.tz)
File "pandas/_libs/tslibs/tzconversion.pyx", line 407, in pandas._libs.tslibs.tzconversion.tz_convert_from_utc
File "stringsource", line 658, in View.MemoryView.memoryview_cwrapper
File "stringsource", line 349, in View.MemoryView.memoryview.__cinit__
ValueError: buffer source array is read-only
```
The data is not being modified in place, so it should work with an immutable source array.
This is because `tz_convert_from_utc` and `_tz_convert_from_utc` are missing some `const` specifiers in Cython.
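Conceptually, the fix (adding `const` and copying in the UTC fast path) lets the function accept read-only buffers and never alias the caller's data. A hedged pure-Python analogue of that behavior, with plain integers standing in for the i8 timestamps:

```python
# Hedged pure-Python analogue: accept an immutable input and return a
# fresh copy in the no-op (UTC) path instead of handing back the input.
def convert_from_utc(vals, offset_ns):
    if offset_ns == 0:
        return list(vals)          # copy; never alias the caller's buffer
    return [v + offset_ns for v in vals]
```

Because the input is only read, an immutable sequence (here a tuple, standing in for a read-only array) works fine.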
#### Expected Output
The `.date` accessor should work as expected:
```python
[datetime.date(2013, 12, 31)]
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : d9fff2792bf16178d4e450fe7384244e50635733
python : 3.8.3.final.0
python-bits : 64
OS : Linux
OS-release : 5.7.9-arch1-1
Version : #1 SMP PREEMPT Thu, 16 Jul 2020 19:34:49 +0000
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.1.0
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 47.1.1
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 1.0.0
pytables : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
</details>
| @lidavidm thanks for the report!
A reproducer without pyarrow:
```
import pandas as pd
import pytz
arr = pd.array([pd.Timestamp('2014-01-01')]).to_numpy()
arr.flags.writeable = False
df = pd.DataFrame(index=pd.DatetimeIndex(arr, name="time"))
df.index = df.index.tz_localize(pytz.utc)
df.index.tz_convert("America/New_York").date
``` | 2020-08-03T22:49:18Z | [] | [] |
Traceback (most recent call last):
File "repro.py", line 18, in <module>
print(df.index.tz_convert("America/New_York").date)
File "/home/lidavidm/Code/twosigma/pandas/temp/venv/lib/python3.8/site-packages/pandas/core/indexes/extension.py", line 54, in fget
result = getattr(self._data, name)
File "/home/lidavidm/Code/twosigma/pandas/temp/venv/lib/python3.8/site-packages/pandas/core/arrays/datetimes.py", line 1246, in date
timestamps = self._local_timestamps()
File "/home/lidavidm/Code/twosigma/pandas/temp/venv/lib/python3.8/site-packages/pandas/core/arrays/datetimes.py", line 731, in _local_timestamps
return tzconversion.tz_convert_from_utc(self.asi8, self.tz)
File "pandas/_libs/tslibs/tzconversion.pyx", line 407, in pandas._libs.tslibs.tzconversion.tz_convert_from_utc
File "stringsource", line 658, in View.MemoryView.memoryview_cwrapper
File "stringsource", line 349, in View.MemoryView.memoryview.__cinit__
ValueError: buffer source array is read-only
| 13,948 |
|||
pandas-dev/pandas | pandas-dev__pandas-35583 | 3701a9b1bfc3ad3890c2fb1fe1974a4768f6d5f8 | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -64,7 +64,7 @@ Datetimelike
Timedelta
^^^^^^^^^
-
+- Bug in :class:`TimedeltaIndex`, :class:`Series`, and :class:`DataFrame` floor-division with ``timedelta64`` dtypes and ``NaT`` in the denominator (:issue:`35529`)
-
-
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -628,7 +628,7 @@ def __floordiv__(self, other):
result = self.asi8 // other.asi8
mask = self._isnan | other._isnan
if mask.any():
- result = result.astype(np.int64)
+ result = result.astype(np.float64)
result[mask] = np.nan
return result
| BUG: ValueError during floordiv of a series with timedelta type
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
```python
>>> import pandas as pd
>>> sr = pd.Series([10, 20, 30], dtype='timedelta64[ns]')
>>> sr
0 00:00:00.000000
1 00:00:00.000000
2 00:00:00.000000
dtype: timedelta64[ns]
>>> sr = pd.Series([1000, 20, 30], dtype='timedelta64[ns]')
>>> sr
0 00:00:00.000001
1 00:00:00.000000
2 00:00:00.000000
dtype: timedelta64[ns]
>>> sr = pd.Series([1000, 222330, 30], dtype='timedelta64[ns]')
>>> sr
0 00:00:00.000001
1 00:00:00.000222
2 00:00:00.000000
dtype: timedelta64[ns]
>>> sr1 = pd.Series([1000, 222330, None], dtype='timedelta64[ns]')
>>> sr1
0 00:00:00.000001
1 00:00:00.000222
2 NaT
dtype: timedelta64[ns]
>>> sr / sr1
0 1.0
1 1.0
2 NaN
dtype: float64
>>> sr // sr1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/nvme/0/pgali/envs/cudfdev1/lib/python3.7/site-packages/pandas/core/ops/common.py", line 64, in new_method
return method(self, other)
File "/nvme/0/pgali/envs/cudfdev1/lib/python3.7/site-packages/pandas/core/ops/__init__.py", line 503, in wrapper
result = arithmetic_op(lvalues, rvalues, op, str_rep)
File "/nvme/0/pgali/envs/cudfdev1/lib/python3.7/site-packages/pandas/core/ops/array_ops.py", line 193, in arithmetic_op
res_values = dispatch_to_extension_op(op, lvalues, rvalues)
File "/nvme/0/pgali/envs/cudfdev1/lib/python3.7/site-packages/pandas/core/ops/dispatch.py", line 125, in dispatch_to_extension_op
res_values = op(left, right)
File "/nvme/0/pgali/envs/cudfdev1/lib/python3.7/site-packages/pandas/core/arrays/timedeltas.py", line 637, in __floordiv__
result[mask] = np.nan
ValueError: cannot convert float NaN to integer
```
#### Problem description
When there is a `NaT` in either of the series, the result should ideally be of nullable integer type (`Int64`) to avoid this kind of `ValueError`.
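A pure-Python sketch of the masked floor-division behaviour (a hypothetical helper, not the pandas internals): as soon as either operand is missing, the result has to be floating-point, since an integer array cannot hold `NaN` — which is why the fix above casts the result to `float64` instead of keeping it integer.

```python
import math

def masked_floordiv(left, right):
    # Hypothetical sketch: None stands in for NaT. Any missing operand
    # forces a float result, because an integer result cannot hold NaN.
    out = []
    for a, b in zip(left, right):
        if a is None or b is None:
            out.append(float("nan"))
        else:
            out.append(float(a // b))
    return out

result = masked_floordiv([1000, 222330, 30], [1000, 222330, None])
assert result[:2] == [1.0, 1.0]
assert math.isnan(result[2])
```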
#### Expected Output
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.6.final.0
python-bits : 64
OS : Linux
OS-release : 4.15.0-76-generic
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.0.5
numpy : 1.18.5
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 49.1.0.post20200704
Cython : 0.29.21
pytest : 5.4.3
hypothesis : 5.19.0
sphinx : 3.1.2
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.16.1
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 0.17.1
pytables : None
pytest : 5.4.3
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : 0.50.1
</details>
| 2020-08-06T01:42:45Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/nvme/0/pgali/envs/cudfdev1/lib/python3.7/site-packages/pandas/core/ops/common.py", line 64, in new_method
return method(self, other)
File "/nvme/0/pgali/envs/cudfdev1/lib/python3.7/site-packages/pandas/core/ops/__init__.py", line 503, in wrapper
result = arithmetic_op(lvalues, rvalues, op, str_rep)
File "/nvme/0/pgali/envs/cudfdev1/lib/python3.7/site-packages/pandas/core/ops/array_ops.py", line 193, in arithmetic_op
res_values = dispatch_to_extension_op(op, lvalues, rvalues)
File "/nvme/0/pgali/envs/cudfdev1/lib/python3.7/site-packages/pandas/core/ops/dispatch.py", line 125, in dispatch_to_extension_op
res_values = op(left, right)
File "/nvme/0/pgali/envs/cudfdev1/lib/python3.7/site-packages/pandas/core/arrays/timedeltas.py", line 637, in __floordiv__
result[mask] = np.nan
ValueError: cannot convert float NaN to integer
| 13,959 |
||||
pandas-dev/pandas | pandas-dev__pandas-35604 | 309018c7ce7e9e31708f58af971ed92823921493 | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -267,8 +267,9 @@ Interval
Indexing
^^^^^^^^
+
- Bug in :meth:`PeriodIndex.get_loc` incorrectly raising ``ValueError`` on non-datelike strings instead of ``KeyError``, causing similar errors in :meth:`Series.__getitem__`, :meth:`Series.__contains__`, and :meth:`Series.loc.__getitem__` (:issue:`34240`)
--
+- Bug in :meth:`Index.sort_values` where, when empty values were passed, the method would break by trying to compare missing values instead of pushing them to the end of the sort order. (:issue:`35584`)
-
Missing
diff --git a/pandas/conftest.py b/pandas/conftest.py
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -437,6 +437,29 @@ def index(request):
index_fixture2 = index
+@pytest.fixture(params=indices_dict.keys())
+def index_with_missing(request):
+ """
+ Fixture for indices with missing values
+ """
+ if request.param in ["int", "uint", "range", "empty", "repeats"]:
+ pytest.xfail("missing values not supported")
+ # GH 35538. Use deep copy to avoid illusive bug on np-dev
+ # Azure pipeline that writes into indices_dict despite copy
+ ind = indices_dict[request.param].copy(deep=True)
+ vals = ind.values
+ if request.param in ["tuples", "mi-with-dt64tz-level", "multi"]:
+ # For setting missing values in the top level of MultiIndex
+ vals = ind.tolist()
+ vals[0] = tuple([None]) + vals[0][1:]
+ vals[-1] = tuple([None]) + vals[-1][1:]
+ return MultiIndex.from_tuples(vals)
+ else:
+ vals[0] = None
+ vals[-1] = None
+ return type(ind)(vals)
+
+
# ----------------------------------------------------------------
# Series'
# ----------------------------------------------------------------
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -88,7 +88,7 @@
import pandas.core.missing as missing
from pandas.core.ops import get_op_result_name
from pandas.core.ops.invalid import make_invalid_op
-from pandas.core.sorting import ensure_key_mapped
+from pandas.core.sorting import ensure_key_mapped, nargsort
from pandas.core.strings import StringMethods
from pandas.io.formats.printing import (
@@ -4443,7 +4443,11 @@ def asof_locs(self, where, mask):
return result
def sort_values(
- self, return_indexer=False, ascending=True, key: Optional[Callable] = None
+ self,
+ return_indexer=False,
+ ascending=True,
+ na_position: str_t = "last",
+ key: Optional[Callable] = None,
):
"""
Return a sorted copy of the index.
@@ -4457,6 +4461,12 @@ def sort_values(
Should the indices that would sort the index be returned.
ascending : bool, default True
Should the index values be sorted in an ascending order.
+ na_position : {'first' or 'last'}, default 'last'
+ Argument 'first' puts NaNs at the beginning, 'last' puts NaNs at
+ the end.
+
+ .. versionadded:: 1.2.0
+
key : callable, optional
If not None, apply the key function to the index values
before sorting. This is similar to the `key` argument in the
@@ -4497,9 +4507,16 @@ def sort_values(
"""
idx = ensure_key_mapped(self, key)
- _as = idx.argsort()
- if not ascending:
- _as = _as[::-1]
+ # GH 35584. Sort missing values according to na_position kwarg
+    # ignore na_position for MultiIndex
+ if not isinstance(self, ABCMultiIndex):
+ _as = nargsort(
+ items=idx, ascending=ascending, na_position=na_position, key=key
+ )
+ else:
+ _as = idx.argsort()
+ if not ascending:
+ _as = _as[::-1]
sorted_index = self.take(_as)
| BUG: `Index.sort_values` fails with TypeError
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
```python
>>> import pandas as pd
>>> pd.__version__
'1.1.0'
>>> sr = pd.Series(['a',None,'c',None,'e'])
>>> sr.sort_values()
0 a
2 c
4 e
1 None
3 None
dtype: object
>>> idx = pd.Index(['a',None,'c',None,'e'])
>>> idx
Index(['a', None, 'c', None, 'e'], dtype='object')
>>> idx.sort_values()
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Users/pgali/PycharmProjects/del/venv1/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 4448, in sort_values
_as = idx.argsort()
File "/Users/pgali/PycharmProjects/del/venv1/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 4563, in argsort
return result.argsort(*args, **kwargs)
TypeError: '<' not supported between instances of 'NoneType' and 'str'
```
#### Problem description
`Index.sort_values()` fails when the index contains `None` values, whereas `Series.sort_values` sorts as expected in the same scenario.
#### Expected Output
We should be able to sort the values of an index containing `None` values just as we can for a series.
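A minimal pure-Python sketch (hypothetical helper, not the actual `nargsort` code path) of the `na_position="last"` behaviour the fix applies: sort the non-missing values, then push the missing ones to the end instead of comparing them.

```python
def sort_values_na_last(values, ascending=True):
    # Hypothetical sketch of na_position="last": sort only the present
    # values, then append the missing ones, never comparing None to str.
    present = sorted(v for v in values if v is not None)
    if not ascending:
        present.reverse()
    missing = [v for v in values if v is None]
    return present + missing

assert sort_values_na_last(["a", None, "c", None, "e"]) == ["a", "c", "e", None, None]
```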
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : d9fff2792bf16178d4e450fe7384244e50635733
python : 3.7.3.final.0
python-bits : 64
OS : Darwin
OS-release : 19.6.0
Version : Darwin Kernel Version 19.6.0: Sun Jul 5 00:43:10 PDT 2020; root:xnu-6153.141.1~9/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : None
LOCALE : en_US.UTF-8
pandas : 1.1.0
numpy : 1.19.0
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 49.1.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
</details>
| Checked the issue on the last commit in master. It persists. | 2020-08-07T12:15:38Z | [] | [] |
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Users/pgali/PycharmProjects/del/venv1/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 4448, in sort_values
_as = idx.argsort()
File "/Users/pgali/PycharmProjects/del/venv1/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 4563, in argsort
return result.argsort(*args, **kwargs)
TypeError: '<' not supported between instances of 'NoneType' and 'str'
| 13,965 |
|||
pandas-dev/pandas | pandas-dev__pandas-35673 | 40795053aaf82f8f55abe56001b1276b1ef0a916 | diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -22,6 +22,7 @@ Fixed regressions
- Fixed regression in :meth:`DataFrame.shift` with ``axis=1`` and heterogeneous dtypes (:issue:`35488`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a segfault would occur with ``center=True`` and an odd number of values (:issue:`35552`)
- Fixed regression in :meth:`DataFrame.apply` where functions that altered the input in-place only operated on a single row (:issue:`35462`)
+- Fixed regression in :meth:`DataFrame.reset_index` would raise a ``ValueError`` on empty :class:`DataFrame` with a :class:`MultiIndex` with a ``datetime64`` dtype level (:issue:`35606`, :issue:`35657`)
- Fixed regression where :meth:`DataFrame.merge_asof` would raise a ``UnboundLocalError`` when ``left_index`` , ``right_index`` and ``tolerance`` were set (:issue:`35558`)
- Fixed regression in ``.groupby(..).rolling(..)`` where a custom ``BaseIndexer`` would be ignored (:issue:`35557`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4816,7 +4816,7 @@ def _maybe_casted_values(index, labels=None):
# we can have situations where the whole mask is -1,
# meaning there is nothing found in labels, so make all nan's
- if mask.all():
+ if mask.size > 0 and mask.all():
dtype = index.dtype
fill_value = na_value_for_dtype(dtype)
values = construct_1d_arraylike_from_scalar(
| BUG: ValueError: cannot convert float NaN to integer - on dataframe.reset_index()
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample
```python
import pandas as pd
df = pd.DataFrame(
dict(c1=[10.], c2=['a'], c3=pd.to_datetime('2020-01-01')))
# Triggering conditions: multiindex with date, empty dataframe
# Multiindex without date works
df.set_index(['c1', 'c2']).head(0).reset_index()
# Regular index with date also works
df.set_index(['c3']).head(0).reset_index()
# Multiindex with date crashes...
df.set_index(['c2', 'c3']).head(0).reset_index()
# >> ValueError: cannot convert float NaN to integer
# This used to work on pandas 1.0.3, but breaks on pandas 1.1.0
# Though the error doesn't trigger if the dataframe is empty before
# calling set_index()
df.head(0).set_index(['c2', 'c3']).reset_index()
# I originally observed the bug in a groupby call
df.head(0).groupby(['c2', 'c3'])[['c1']].sum().reset_index()
# >> ValueError: cannot convert float NaN to integer
# This used to work on pandas 1.0.3, but breaks on pandas 1.1.0
```
#### Problem description
On pandas 1.1.0, I'm getting a ValueError exception when calling dataframe.reset_index() under the following conditions:
- Input dataframe is empty
- Multiindex from multiple columns, at least one of which is a datetime
The exception message is `ValueError: cannot convert float NaN to integer`.
Error trace:
```
Error
Traceback (most recent call last):
df_out.reset_index()
File "/Users/pec21/PycharmProjects/anp_voice_report/virtual/lib/python3.6/site-packages/pandas/core/frame.py", line 4848, in reset_index
level_values = _maybe_casted_values(lev, lab)
File "/Users/pec21/PycharmProjects/anp_voice_report/virtual/lib/python3.6/site-packages/pandas/core/frame.py", line 4782, in _maybe_casted_values
fill_value, len(mask), dtype
File "/Users/pec21/PycharmProjects/anp_voice_report/virtual/lib/python3.6/site-packages/pandas/core/dtypes/cast.py", line 1554, in construct_1d_arraylike_from_scalar
subarr.fill(value)
ValueError: cannot convert float NaN to integer
```
This error didn't happen on pandas 1.0.3 and earlier. I haven't tested any intermediate releases, nor the master branch.
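The trigger is a vacuous truth: `.all()` over an empty mask is `True`, so an empty frame fell into the "nothing found in labels, fill everything with NaN" branch and tried to write `NaN` into an integer-backed `datetime64` level. A pure-Python analogue of the guard the fix adds (`mask.size > 0 and mask.all()`):

```python
# all() over an empty sequence is vacuously True, which sent empty
# frames down the "everything is missing, fill with NaN" branch.
assert all([]) is True

mask = []                                 # empty frame: no codes at all
old_branch = all(mask)                    # True: filled NaN into datetime64
new_branch = len(mask) > 0 and all(mask)  # False: empty frames skip the fill
assert old_branch is True
assert new_branch is False
```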
#### Expected Output
No exception is raised, returns an empty dataframe.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : d9fff2792bf16178d4e450fe7384244e50635733
python : 3.6.6.final.0
python-bits : 64
OS : Darwin
OS-release : 18.6.0
Version : Darwin Kernel Version 18.6.0: Thu Apr 25 23:16:27 PDT 2019; root:xnu-4903.261.4~2/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : None
LOCALE : en_GB.UTF-8
pandas : 1.1.0
numpy : 1.17.4
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 49.1.0
Cython : None
pytest : 5.3.4
hypothesis : None
sphinx : 2.3.1
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.4.2
html5lib : None
pymysql : None
psycopg2 : 2.8.4 (dt dec pq3 ext lo64)
jinja2 : 2.10.3
IPython : None
pandas_datareader: None
bs4 : 4.8.1
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : 3.0.2
pandas_gbq : None
pyarrow : 0.15.1
pytables : None
pyxlsb : None
s3fs : None
scipy : 1.3.3
sqlalchemy : 1.3.11
tables : None
tabulate : None
xarray : None
xlrd : 1.2.0
xlwt : None
numba : None
</details>
| Thanks @capelastegui for the report. This is a duplicate of #35606
Sorry about the duplicate! I had looked at that issue, and thought that it was a similar error, but not quite the same.
> Sorry about the duplicate! I had looked at that issue, and thought that it was a similar error, but not quite the same.
the second failing case maybe different, does not involve datetime so will reopen for now.
```
>>> # Multiindex with date crashes...
>>> res = df.set_index(['c2', 'c3']).head(0)
>>> res
Empty DataFrame
Columns: [c1]
Index: []
>>>
>>> res.index
MultiIndex([], names=['c2', 'c3'])
>>>
>>> res.index.levels, res.index.codes
(FrozenList([['a'], [2020-01-01 00:00:00]]), FrozenList([[], []]))
>>>
>>> res.reset_index()
Traceback (most recent call last):
...
ValueError: cannot convert float NaN to integer
>>>
>>> # I originally observed the bug in a groupby call
>>> res = df.head(0).groupby(['c2', 'c3'])[['c1']].sum()
>>> res
Empty DataFrame
Columns: [c1]
Index: []
>>> # >> ValueError: cannot convert float NaN to integer
>>> # This used to work on pandas 1.0.3, but breaks on pandas 1.1.0
>>>
>>> res.index
MultiIndex([], names=['c2', 'c3'])
>>>
>>> res.index.levels, res.index.codes
(FrozenList([[], []]), FrozenList([[], []]))
>>>
>>> res.reset_index()
Traceback (most recent call last):
...
ValueError: cannot convert float NaN to integer
>>>
>>>
>>>
```
> the second failing case may be different; it does not involve datetime, so will reopen for now.
depends on the dtype, which is datetime64, so constructing a MultiIndex with empty levels and codes does not recreate the issue.
PR with fix shortly | 2020-08-11T16:24:25Z | [] | [] |
Traceback (most recent call last):
df_out.reset_index()
File "/Users/pec21/PycharmProjects/anp_voice_report/virtual/lib/python3.6/site-packages/pandas/core/frame.py", line 4848, in reset_index
level_values = _maybe_casted_values(lev, lab)
File "/Users/pec21/PycharmProjects/anp_voice_report/virtual/lib/python3.6/site-packages/pandas/core/frame.py", line 4782, in _maybe_casted_values
fill_value, len(mask), dtype
File "/Users/pec21/PycharmProjects/anp_voice_report/virtual/lib/python3.6/site-packages/pandas/core/dtypes/cast.py", line 1554, in construct_1d_arraylike_from_scalar
subarr.fill(value)
ValueError: cannot convert float NaN to integer
| 13,979 |
|||
pandas-dev/pandas | pandas-dev__pandas-35736 | c43652ef8a2342ba3eb065ba7e3e6733096bd4d3 | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -295,6 +295,7 @@ I/O
- :meth:`to_csv` passes compression arguments for `'gzip'` always to `gzip.GzipFile` (:issue:`28103`)
- :meth:`to_csv` did not support zip compression for binary file object not having a filename (:issue: `35058`)
- :meth:`to_csv` and :meth:`read_csv` did not honor `compression` and `encoding` for path-like objects that are internally converted to file-like objects (:issue:`35677`, :issue:`26124`, and :issue:`32392`)
+- :meth:`to_pickle` and :meth:`read_pickle` did not support compression for file-objects (:issue:`26237`, :issue:`29054`, and :issue:`29570`)
Plotting
^^^^^^^^
diff --git a/pandas/_typing.py b/pandas/_typing.py
--- a/pandas/_typing.py
+++ b/pandas/_typing.py
@@ -116,7 +116,7 @@
# compression keywords and compression
-CompressionDict = Mapping[str, Optional[Union[str, int, bool]]]
+CompressionDict = Dict[str, Any]
CompressionOptions = Optional[Union[str, CompressionDict]]
@@ -138,6 +138,6 @@ class IOargs(Generic[ModeVar, EncodingVar]):
filepath_or_buffer: FileOrBuffer
encoding: EncodingVar
- compression: CompressionOptions
+ compression: CompressionDict
should_close: bool
mode: Union[ModeVar, str]
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -27,7 +27,6 @@
Iterable,
Iterator,
List,
- Mapping,
Optional,
Sequence,
Set,
@@ -49,6 +48,7 @@
ArrayLike,
Axes,
Axis,
+ CompressionOptions,
Dtype,
FilePathOrBuffer,
FrameOrSeriesUnion,
@@ -2062,7 +2062,7 @@ def to_stata(
variable_labels: Optional[Dict[Label, str]] = None,
version: Optional[int] = 114,
convert_strl: Optional[Sequence[Label]] = None,
- compression: Union[str, Mapping[str, str], None] = "infer",
+ compression: CompressionOptions = "infer",
storage_options: StorageOptions = None,
) -> None:
"""
diff --git a/pandas/io/common.py b/pandas/io/common.py
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -205,11 +205,13 @@ def get_filepath_or_buffer(
"""
filepath_or_buffer = stringify_path(filepath_or_buffer)
+ # handle compression dict
+ compression_method, compression = get_compression_method(compression)
+ compression_method = infer_compression(filepath_or_buffer, compression_method)
+ compression = dict(compression, method=compression_method)
+
# bz2 and xz do not write the byte order mark for utf-16 and utf-32
# print a warning when writing such files
- compression_method = infer_compression(
- filepath_or_buffer, get_compression_method(compression)[0]
- )
if (
mode
and "w" in mode
@@ -238,7 +240,7 @@ def get_filepath_or_buffer(
content_encoding = req.headers.get("Content-Encoding", None)
if content_encoding == "gzip":
# Override compression based on Content-Encoding header
- compression = "gzip"
+ compression = {"method": "gzip"}
reader = BytesIO(req.read())
req.close()
return IOargs(
@@ -374,11 +376,7 @@ def get_compression_method(
if isinstance(compression, Mapping):
compression_args = dict(compression)
try:
- # error: Incompatible types in assignment (expression has type
- # "Union[str, int, None]", variable has type "Optional[str]")
- compression_method = compression_args.pop( # type: ignore[assignment]
- "method"
- )
+ compression_method = compression_args.pop("method")
except KeyError as err:
raise ValueError("If mapping, compression must have key 'method'") from err
else:
@@ -652,12 +650,8 @@ def __init__(
super().__init__(file, mode, **kwargs_zip) # type: ignore[arg-type]
def write(self, data):
- archive_name = self.filename
- if self.archive_name is not None:
- archive_name = self.archive_name
- if archive_name is None:
- # ZipFile needs a non-empty string
- archive_name = "zip"
+ # ZipFile needs a non-empty string
+ archive_name = self.archive_name or self.filename or "zip"
super().writestr(archive_name, data)
@property
diff --git a/pandas/io/formats/csvs.py b/pandas/io/formats/csvs.py
--- a/pandas/io/formats/csvs.py
+++ b/pandas/io/formats/csvs.py
@@ -21,12 +21,7 @@
)
from pandas.core.dtypes.missing import notna
-from pandas.io.common import (
- get_compression_method,
- get_filepath_or_buffer,
- get_handle,
- infer_compression,
-)
+from pandas.io.common import get_filepath_or_buffer, get_handle
class CSVFormatter:
@@ -60,17 +55,15 @@ def __init__(
if path_or_buf is None:
path_or_buf = StringIO()
- # Extract compression mode as given, if dict
- compression, self.compression_args = get_compression_method(compression)
- self.compression = infer_compression(path_or_buf, compression)
-
ioargs = get_filepath_or_buffer(
path_or_buf,
encoding=encoding,
- compression=self.compression,
+ compression=compression,
mode=mode,
storage_options=storage_options,
)
+ self.compression = ioargs.compression.pop("method")
+ self.compression_args = ioargs.compression
self.path_or_buf = ioargs.filepath_or_buffer
self.should_close = ioargs.should_close
self.mode = ioargs.mode
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -19,12 +19,7 @@
from pandas.core.construction import create_series_with_explicit_dtype
from pandas.core.reshape.concat import concat
-from pandas.io.common import (
- get_compression_method,
- get_filepath_or_buffer,
- get_handle,
- infer_compression,
-)
+from pandas.io.common import get_compression_method, get_filepath_or_buffer, get_handle
from pandas.io.json._normalize import convert_to_line_delimits
from pandas.io.json._table_schema import build_table_schema, parse_table_schema
from pandas.io.parsers import _validate_integer
@@ -66,6 +61,7 @@ def to_json(
)
path_or_buf = ioargs.filepath_or_buffer
should_close = ioargs.should_close
+ compression = ioargs.compression
if lines and orient != "records":
raise ValueError("'lines' keyword only valid when 'orient' is records")
@@ -616,9 +612,6 @@ def read_json(
if encoding is None:
encoding = "utf-8"
- compression_method, compression = get_compression_method(compression)
- compression_method = infer_compression(path_or_buf, compression_method)
- compression = dict(compression, method=compression_method)
ioargs = get_filepath_or_buffer(
path_or_buf,
encoding=encoding,
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -63,12 +63,7 @@
from pandas.core.series import Series
from pandas.core.tools import datetimes as tools
-from pandas.io.common import (
- get_filepath_or_buffer,
- get_handle,
- infer_compression,
- validate_header_arg,
-)
+from pandas.io.common import get_filepath_or_buffer, get_handle, validate_header_arg
from pandas.io.date_converters import generic_parser
# BOM character (byte order mark)
@@ -424,9 +419,7 @@ def _read(filepath_or_buffer: FilePathOrBuffer, kwds):
if encoding is not None:
encoding = re.sub("_", "-", encoding).lower()
kwds["encoding"] = encoding
-
compression = kwds.get("compression", "infer")
- compression = infer_compression(filepath_or_buffer, compression)
# TODO: get_filepath_or_buffer could return
# Union[FilePathOrBuffer, s3fs.S3File, gcsfs.GCSFile]
@@ -1976,6 +1969,10 @@ def __init__(self, src, **kwds):
encoding = kwds.get("encoding")
+ # parsers.TextReader doesn't support compression dicts
+ if isinstance(kwds.get("compression"), dict):
+ kwds["compression"] = kwds["compression"]["method"]
+
if kwds.get("compression") is None and encoding:
if isinstance(src, str):
src = open(src, "rb")
diff --git a/pandas/io/pickle.py b/pandas/io/pickle.py
--- a/pandas/io/pickle.py
+++ b/pandas/io/pickle.py
@@ -92,11 +92,8 @@ def to_pickle(
mode="wb",
storage_options=storage_options,
)
- compression = ioargs.compression
- if not isinstance(ioargs.filepath_or_buffer, str) and compression == "infer":
- compression = None
f, fh = get_handle(
- ioargs.filepath_or_buffer, "wb", compression=compression, is_text=False
+ ioargs.filepath_or_buffer, "wb", compression=ioargs.compression, is_text=False
)
if protocol < 0:
protocol = pickle.HIGHEST_PROTOCOL
@@ -196,11 +193,8 @@ def read_pickle(
ioargs = get_filepath_or_buffer(
filepath_or_buffer, compression=compression, storage_options=storage_options
)
- compression = ioargs.compression
- if not isinstance(ioargs.filepath_or_buffer, str) and compression == "infer":
- compression = None
f, fh = get_handle(
- ioargs.filepath_or_buffer, "rb", compression=compression, is_text=False
+ ioargs.filepath_or_buffer, "rb", compression=ioargs.compression, is_text=False
)
# 1) try standard library Pickle
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -16,18 +16,7 @@
from pathlib import Path
import struct
import sys
-from typing import (
- Any,
- AnyStr,
- BinaryIO,
- Dict,
- List,
- Mapping,
- Optional,
- Sequence,
- Tuple,
- Union,
-)
+from typing import Any, AnyStr, BinaryIO, Dict, List, Optional, Sequence, Tuple, Union
import warnings
from dateutil.relativedelta import relativedelta
@@ -58,13 +47,7 @@
from pandas.core.indexes.base import Index
from pandas.core.series import Series
-from pandas.io.common import (
- get_compression_method,
- get_filepath_or_buffer,
- get_handle,
- infer_compression,
- stringify_path,
-)
+from pandas.io.common import get_filepath_or_buffer, get_handle, stringify_path
_version_error = (
"Version of given Stata file is {version}. pandas supports importing "
@@ -1976,9 +1959,6 @@ def _open_file_binary_write(
return fname, False, None # type: ignore[return-value]
elif isinstance(fname, (str, Path)):
# Extract compression mode as given, if dict
- compression_typ, compression_args = get_compression_method(compression)
- compression_typ = infer_compression(fname, compression_typ)
- compression = dict(compression_args, method=compression_typ)
ioargs = get_filepath_or_buffer(
fname, mode="wb", compression=compression, storage_options=storage_options
)
@@ -2235,7 +2215,7 @@ def __init__(
time_stamp: Optional[datetime.datetime] = None,
data_label: Optional[str] = None,
variable_labels: Optional[Dict[Label, str]] = None,
- compression: Union[str, Mapping[str, str], None] = "infer",
+ compression: CompressionOptions = "infer",
storage_options: StorageOptions = None,
):
super().__init__()
@@ -3118,7 +3098,7 @@ def __init__(
data_label: Optional[str] = None,
variable_labels: Optional[Dict[Label, str]] = None,
convert_strl: Optional[Sequence[Label]] = None,
- compression: Union[str, Mapping[str, str], None] = "infer",
+ compression: CompressionOptions = "infer",
storage_options: StorageOptions = None,
):
# Copy to new list since convert_strl might be modified later
@@ -3523,7 +3503,7 @@ def __init__(
variable_labels: Optional[Dict[Label, str]] = None,
convert_strl: Optional[Sequence[Label]] = None,
version: Optional[int] = None,
- compression: Union[str, Mapping[str, str], None] = "infer",
+ compression: CompressionOptions = "infer",
storage_options: StorageOptions = None,
):
if version is None:
| In-memory to_pickle leads to I/O error
#### Code Sample
```
# import libraries
import pandas as pd
import io
# show version
print(pd.__version__)
# 0.25.2
# create example dataframe
df = pd.DataFrame({"A": [1, 2, 3, 4], "B": [5, 6, 7, 8]})
# create io-stream to act as surrogate file
stream = io.BytesIO()
# since the compression cannot be inferred from the filename, it has to be set explicitly.
df.to_pickle(stream, compression=None)
# stream.getvalue() can be used as binary load in an api call to save the dataframe in the cloud
print(stream.getvalue())
'''
correct output pandas version 0.24.1:
b'\x80\x04\x95\xe2\x02\x00\x00\x00\x00\x00\x00\x8c\x11pandas.core.frame\x94\x8c\tDataFrame\x94\x93\x94)\x81\x94}\x94(\x8c\x05_data\x94\x8c\x1epandas.core.internals.managers\x94\x8c\x0cBlockManager\x94\x93\x94)\x81\x94(]\x94(\x8c\x18pandas.core.indexes.base\x94\x8c\n_new_Index\x94\x93\x94h\x0b\x8c\x05Index\x94\x93\x94}\x94(\x8c\x04data\x94\x8c\x15numpy.core.multiarray\x94\x8c\x0c_reconstruct\x94\x93\x94\x8c\x05numpy\x94\x8c\x07ndarray\x94\x93\x94K\x00\x85\x94C\x01b\x94\x87\x94R\x94(K\x01K\x02\x85\x94h\x15\x8c\x05dtype\x94\x93\x94\x8c\x02O8\x94K\x00K\x01\x87\x94R\x94(K\x03\x8c\x01|\x94NNNJ\xff\xff\xff\xffJ\xff\xff\xff\xffK?t\x94b\x89]\x94(\x8c\x01A\x94\x8c\x01B\x94et\x94b\x8c\x04name\x94Nu\x86\x94R\x94h\r\x8c\x19pandas.core.indexes.range\x94\x8c\nRangeIndex\x94\x93\x94}\x94(h(N\x8c\x05start\x94K\x00\x8c\x04stop\x94K\x04\x8c\x04step\x94K\x01u\x86\x94R\x94e]\x94h\x14h\x17K\x00\x85\x94h\x19\x87\x94R\x94(K\x01K\x02K\x04\x86\x94h\x1e\x8c\x02i8\x94K\x00K\x01\x87\x94R\x94(K\x03\x8c\x01<\x94NNNJ\xff\xff\xff\xffJ\xff\xff\xff\xffK\x00t\x94b\x89C@\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x07\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x94t\x94ba]\x94h\rh\x0f}\x94(h\x11h\x14h\x17K\x00\x85\x94h\x19\x87\x94R\x94(K\x01K\x02\x85\x94h!\x89]\x94(h%h&et\x94bh(Nu\x86\x94R\x94a}\x94\x8c\x060.14.1\x94}\x94(\x8c\x04axes\x94h\n\x8c\x06blocks\x94]\x94}\x94(\x8c\x06values\x94h7\x8c\x08mgr_locs\x94\x8c\x08builtins\x94\x8c\x05slice\x94\x93\x94K\x00K\x02K\x01\x87\x94R\x94uaust\x94b\x8c\x04_typ\x94\x8c\tdataframe\x94\x8c\t_metadata\x94]\x94ub.'
erroneous output pandas version 0.25.2:
Traceback (most recent call last):
File "C:/Projects/bug_report/report_bug.py", line 20, in <module>
print(stream.getvalue())
ValueError: I/O operation on closed file.
'''
```
#### Problem description
Occasionally I would like to save pandas DataFrames in the cloud. This can be done through API calls in which the DataFrame is uploaded as binary content. The binary content can be created by providing an io.BytesIO stream to the pandas.to_pickle method; it can then be obtained via io.BytesIO's getvalue method. This works perfectly in pandas version 0.24.1. However, after updating to pandas version 0.25.2, it ceases to work: the io.BytesIO stream is now closed inside the pandas.to_pickle method and can no longer be accessed.
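A self-contained sketch of the round trip described above (the full original reproduction is truncated in this excerpt, so the column names here are illustrative):

```python
import io

import pandas as pd

# Serialize a DataFrame into an in-memory buffer rather than a file,
# then grab the raw bytes for upload via an API call.
df = pd.DataFrame({"A": ["a", "b"], "B": [1, 2]})
stream = io.BytesIO()
# compression must be given explicitly; it cannot be inferred from a stream
df.to_pickle(stream, compression=None)
# On 0.24.1 this returns the pickled bytes; on affected 0.25.x versions
# the stream has been closed by to_pickle and getvalue() raises ValueError.
payload = stream.getvalue()
```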
#### Expected Output
A binary string as produced in pandas version 0.24.1, see code example above
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.5.final.0
python-bits : 64
OS : Windows
OS-release : 10
machine : AMD64
processor : Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.None
pandas : 0.25.2
numpy : 1.17.3
pytz : 2019.3
dateutil : 2.8.1
pip : 19.3.1
setuptools : 41.6.0.post20191030
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
None
</details>
| I got the same problem.
I got the same problem in version 1.0.3
@SebastianB12 I also have the same problem in 1.0.3 version :(
Anyone interested in working on this?
By the way, to those interested in a workaround in the meantime: you can use this
```python
class ResilientBytesIO(BytesIO):
def close(self):
pass # Refuse to close to avoid pandas bug
def really_close(self):
super().close()
``` | 2020-08-15T15:24:31Z | [] | [] |
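A sketch of how the workaround above might be used, restating the class so the snippet is self-contained:

```python
from io import BytesIO

import pandas as pd


class ResilientBytesIO(BytesIO):
    """Buffer that ignores close() so pandas cannot close it underneath us."""

    def close(self):
        pass  # refuse to close to avoid the pandas bug

    def really_close(self):
        super().close()


df = pd.DataFrame({"A": [1, 2]})
stream = ResilientBytesIO()
df.to_pickle(stream, compression=None)
payload = stream.getvalue()  # still readable even if to_pickle tried to close
stream.really_close()  # close for real once we are done
```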
Traceback (most recent call last):
File "C:/Projects/bug_report/report_bug.py", line 20, in <module>
print(stream.getvalue())
ValueError: I/O operation on closed file.
| 13,990 |
|||
pandas-dev/pandas | pandas-dev__pandas-35794 | db6414fbea66aa59dfc0fcd0e19648fc532f7502 | diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -24,7 +24,7 @@ Fixed regressions
Bug fixes
~~~~~~~~~
-
+- Bug in :meth:`DataFrame.eval` with ``object`` dtype column binary operations (:issue:`35794`)
-
-
diff --git a/pandas/core/computation/ops.py b/pandas/core/computation/ops.py
--- a/pandas/core/computation/ops.py
+++ b/pandas/core/computation/ops.py
@@ -481,13 +481,21 @@ def stringify(value):
self.lhs.update(v)
def _disallow_scalar_only_bool_ops(self):
+ rhs = self.rhs
+ lhs = self.lhs
+
+ # GH#24883 unwrap dtype if necessary to ensure we have a type object
+ rhs_rt = rhs.return_type
+ rhs_rt = getattr(rhs_rt, "type", rhs_rt)
+ lhs_rt = lhs.return_type
+ lhs_rt = getattr(lhs_rt, "type", lhs_rt)
if (
- (self.lhs.is_scalar or self.rhs.is_scalar)
+ (lhs.is_scalar or rhs.is_scalar)
and self.op in _bool_ops_dict
and (
not (
- issubclass(self.rhs.return_type, (bool, np.bool_))
- and issubclass(self.lhs.return_type, (bool, np.bool_))
+ issubclass(rhs_rt, (bool, np.bool_))
+ and issubclass(lhs_rt, (bool, np.bool_))
)
)
):
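A small sketch of what the `getattr(rhs_rt, "type", rhs_rt)` unwrapping in the patch accomplishes, using the bool dtype for illustration:

```python
import numpy as np

# A np.dtype instance is not a class, so the pre-fix code path hit
# "TypeError: issubclass() arg 1 must be a class" for object-dtype columns.
dt = np.dtype(bool)
try:
    issubclass(dt, (bool, np.bool_))
    raised = False
except TypeError:
    raised = True

# The patch unwraps the dtype to its scalar type object first:
unwrapped = getattr(dt, "type", dt)
is_bool_like = issubclass(unwrapped, (bool, np.bool_))
```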
| Pandas dataframe eval
#### Code Sample, a copy-pastable example if possible
```python
# Scenario 1:
df2 = pd.DataFrame({'a1': [10, 20]})
df2.eval("c=((a1 > 10) & True)")
# Out[8]:
#    a1      c
# 0  10  False
# 1  20   True

# Scenario 2:
df2 = pd.DataFrame({'a1': ['Y', 'N']})
df2.eval("c=((a1 == 'Y') & True)")
# Traceback (most recent call last):
# TypeError: issubclass() arg 1 must be a class
#### Problem description
Changing the dtype of a column from int to str causes pd.eval to raise "TypeError: issubclass() arg 1 must be a class".
#### Expected Output
In Scenario 2:
  a1      c
0  Y   True
1  N  False
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.13.final.0
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 78 Stepping 3, GenuineIntel
byteorder: little
LC_ALL: None
LANG: en
LOCALE: None.None
pandas: 0.21.1
pytest: 3.0.7
pip: 9.0.1
setuptools: 27.2.0
Cython: 0.25.2
numpy: 1.14.2
scipy: 0.19.0
pyarrow: None
xarray: None
IPython: 5.3.0
sphinx: 1.5.6
patsy: 0.4.1
dateutil: 2.6.0
pytz: 2017.2
blosc: None
bottleneck: 1.2.1
tables: 3.2.2
numexpr: 2.6.2
feather: None
matplotlib: 2.0.2
openpyxl: 2.4.7
xlrd: 1.0.0
xlwt: 1.2.0
xlsxwriter: 0.9.6
lxml: 3.7.3
bs4: 4.6.0
html5lib: 0.999
sqlalchemy: 1.1.9
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| this is a duplicate issue
can u @jreback provide the solution to handle this? | 2020-08-18T23:00:56Z | [] | [] |
Traceback (most recent call last):
TypeError: issubclass() arg 1 must be a class
| 14,004 |
|||
pandas-dev/pandas | pandas-dev__pandas-3591 | 56da2b2049a77b3ce5dca28074ba84514f0c20ae | diff --git a/RELEASE.rst b/RELEASE.rst
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -103,6 +103,7 @@ pandas 0.11.1
- Fix ``.diff`` on datelike and timedelta operations (GH3100_)
- ``combine_first`` not returning the same dtype in cases where it can (GH3552_)
- Fixed bug with ``Panel.transpose`` argument aliases (GH3556_)
+ - Fixed platform bug in ``PeriodIndex.take`` (GH3579_)
- Fixed bug in reset_index with ``NaN`` in a multi-index (GH3586_)
.. _GH3164: https://github.com/pydata/pandas/issues/3164
@@ -143,6 +144,7 @@ pandas 0.11.1
.. _GH3562: https://github.com/pydata/pandas/issues/3562
.. _GH3586: https://github.com/pydata/pandas/issues/3586
.. _GH3493: https://github.com/pydata/pandas/issues/3493
+.. _GH3579: https://github.com/pydata/pandas/issues/3579
.. _GH3556: https://github.com/pydata/pandas/issues/3556
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -1125,6 +1125,7 @@ def take(self, indices, axis=None):
"""
Analogous to ndarray.take
"""
+ indices = com._ensure_platform_int(indices)
taken = self.values.take(indices, axis=axis)
taken = taken.view(PeriodIndex)
taken.freq = self.freq
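A sketch of why the indexer needs the platform-int conversion the patch adds (the cast below is roughly what pandas' `com._ensure_platform_int` does; on 32-bit builds `np.intp` is 32-bit, and the default safe-casting rule refuses a lossy int64 to int32 conversion inside `take`):

```python
import numpy as np

indices = np.array([1, 0, 2], dtype=np.int64)
values = np.array([10, 20, 30])

# Convert to the platform integer before handing the indexer to take()
platform_indices = indices.astype(np.intp, copy=False)
taken = values.take(platform_indices)
```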
| Iterating over groupby fails with period-indexed dataframe
I have found this while working with period-indexed data frames on 32-bit Windows XP:
```
import numpy as np
import pandas as pd
index = pd.period_range(start='1999-01', periods=5, freq='M')
s1 = pd.Series(np.random.rand(len(index)), index=index)
s2 = pd.Series(np.random.rand(len(index)), index=index)
series = [('s1', s1), ('s2',s2)]
df = pd.DataFrame.from_items(series)
grouped = df.groupby(df.index.month)
list(grouped)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\Python27\lib\site-packages\pandas\core\groupby.py", line 595, in get_iterator
for key, (i, group) in izip(keys, splitter):
File "D:\Python27\lib\site-packages\pandas\core\groupby.py", line 2214, in __iter__
sdata = self._get_sorted_data()
File "D:\Python27\lib\site-packages\pandas\core\groupby.py", line 2231, in _get_sorted_data
return self.data.take(self.sort_idx, axis=self.axis)
File "D:\Python27\lib\site-packages\pandas\core\frame.py", line 2891, in take
new_index = self.index.take(indices)
File "D:\Python27\lib\site-packages\pandas\tseries\period.py", line 1110, in take
taken = self.values.take(indices, axis=axis)
TypeError: Cannot cast array data from dtype('int64') to dtype('int32') according to the rule 'safe'
```
This happens with the latest stable version, 0.11.0. However, iterating over grouped.s1 or grouped.s2 works fine.
See [the docs for `numpy.ndarray.astype`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.astype.html) for why this is happening. Let `d1` and `d2` be two dtypes. `array([1,2], d1).astype(d2, casting='safe')` will _not_ work if `d1.itemsize > d2.itemsize` because `'safe'` implies that you are not okay with losing information when casting from `d1` to `d2`. You are on a 32-bit system and it sounds like there is no 64-bit integer emulation on your OS.
@cpcloud this might be a bug I have to look
period index is backed by int64, but there are conversions necessary at times to platform int
(which should be transparent)
will take a look
@jreback I was just about to ask if this is a numpy or pandas issue :)
| 2013-05-13T18:02:38Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\Python27\lib\site-packages\pandas\core\groupby.py", line 595, in get_iterator
for key, (i, group) in izip(keys, splitter):
File "D:\Python27\lib\site-packages\pandas\core\groupby.py", line 2214, in __iter__
sdata = self._get_sorted_data()
File "D:\Python27\lib\site-packages\pandas\core\groupby.py", line 2231, in _get_sorted_data
return self.data.take(self.sort_idx, axis=self.axis)
File "D:\Python27\lib\site-packages\pandas\core\frame.py", line 2891, in take
new_index = self.index.take(indices)
File "D:\Python27\lib\site-packages\pandas\tseries\period.py", line 1110, in take
taken = self.values.take(indices, axis=axis)
TypeError: Cannot cast array data from dtype('int64') to dtype('int32') according to the rule 'safe'
| 14,022 |
|||
pandas-dev/pandas | pandas-dev__pandas-36061 | 73c1d3269830d787c8990de8f02bf4279d2720ab | diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -18,7 +18,7 @@ Fixed regressions
- Fix regression in updating a column inplace (e.g. using ``df['col'].fillna(.., inplace=True)``) (:issue:`35731`)
- Performance regression for :meth:`RangeIndex.format` (:issue:`35712`)
- Regression in :meth:`DataFrame.replace` where a ``TypeError`` would be raised when attempting to replace elements of type :class:`Interval` (:issue:`35931`)
--
+- Fixed regression in :meth:`DataFrameGroupBy.agg` where a ``ValueError: buffer source array is read-only`` would be raised when the underlying array is read-only (:issue:`36014`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/_libs/groupby.pyx b/pandas/_libs/groupby.pyx
--- a/pandas/_libs/groupby.pyx
+++ b/pandas/_libs/groupby.pyx
@@ -229,7 +229,7 @@ def group_cumprod_float64(float64_t[:, :] out,
@cython.boundscheck(False)
@cython.wraparound(False)
def group_cumsum(numeric[:, :] out,
- numeric[:, :] values,
+ ndarray[numeric, ndim=2] values,
const int64_t[:] labels,
int ngroups,
is_datetimelike,
@@ -472,7 +472,7 @@ ctypedef fused complexfloating_t:
@cython.boundscheck(False)
def _group_add(complexfloating_t[:, :] out,
int64_t[:] counts,
- complexfloating_t[:, :] values,
+ ndarray[complexfloating_t, ndim=2] values,
const int64_t[:] labels,
Py_ssize_t min_count=0):
"""
@@ -483,8 +483,9 @@ def _group_add(complexfloating_t[:, :] out,
complexfloating_t val, count
complexfloating_t[:, :] sumx
int64_t[:, :] nobs
+ Py_ssize_t len_values = len(values), len_labels = len(labels)
- if len(values) != len(labels):
+ if len_values != len_labels:
raise ValueError("len(index) != len(labels)")
nobs = np.zeros((<object>out).shape, dtype=np.int64)
@@ -530,7 +531,7 @@ group_add_complex128 = _group_add['double complex']
@cython.boundscheck(False)
def _group_prod(floating[:, :] out,
int64_t[:] counts,
- floating[:, :] values,
+ ndarray[floating, ndim=2] values,
const int64_t[:] labels,
Py_ssize_t min_count=0):
"""
@@ -541,8 +542,9 @@ def _group_prod(floating[:, :] out,
floating val, count
floating[:, :] prodx
int64_t[:, :] nobs
+ Py_ssize_t len_values = len(values), len_labels = len(labels)
- if not len(values) == len(labels):
+ if len_values != len_labels:
raise ValueError("len(index) != len(labels)")
nobs = np.zeros((<object>out).shape, dtype=np.int64)
@@ -582,7 +584,7 @@ group_prod_float64 = _group_prod['double']
@cython.cdivision(True)
def _group_var(floating[:, :] out,
int64_t[:] counts,
- floating[:, :] values,
+ ndarray[floating, ndim=2] values,
const int64_t[:] labels,
Py_ssize_t min_count=-1,
int64_t ddof=1):
@@ -591,10 +593,11 @@ def _group_var(floating[:, :] out,
floating val, ct, oldmean
floating[:, :] mean
int64_t[:, :] nobs
+ Py_ssize_t len_values = len(values), len_labels = len(labels)
assert min_count == -1, "'min_count' only used in add and prod"
- if not len(values) == len(labels):
+ if len_values != len_labels:
raise ValueError("len(index) != len(labels)")
nobs = np.zeros((<object>out).shape, dtype=np.int64)
@@ -639,7 +642,7 @@ group_var_float64 = _group_var['double']
@cython.boundscheck(False)
def _group_mean(floating[:, :] out,
int64_t[:] counts,
- floating[:, :] values,
+ ndarray[floating, ndim=2] values,
const int64_t[:] labels,
Py_ssize_t min_count=-1):
cdef:
@@ -647,10 +650,11 @@ def _group_mean(floating[:, :] out,
floating val, count
floating[:, :] sumx
int64_t[:, :] nobs
+ Py_ssize_t len_values = len(values), len_labels = len(labels)
assert min_count == -1, "'min_count' only used in add and prod"
- if not len(values) == len(labels):
+ if len_values != len_labels:
raise ValueError("len(index) != len(labels)")
nobs = np.zeros((<object>out).shape, dtype=np.int64)
@@ -689,7 +693,7 @@ group_mean_float64 = _group_mean['double']
@cython.boundscheck(False)
def _group_ohlc(floating[:, :] out,
int64_t[:] counts,
- floating[:, :] values,
+ ndarray[floating, ndim=2] values,
const int64_t[:] labels,
Py_ssize_t min_count=-1):
"""
@@ -740,7 +744,7 @@ group_ohlc_float64 = _group_ohlc['double']
@cython.boundscheck(False)
@cython.wraparound(False)
def group_quantile(ndarray[float64_t] out,
- numeric[:] values,
+ ndarray[numeric, ndim=1] values,
ndarray[int64_t] labels,
ndarray[uint8_t] mask,
float64_t q,
@@ -1072,7 +1076,7 @@ def group_nth(rank_t[:, :] out,
@cython.boundscheck(False)
@cython.wraparound(False)
def group_rank(float64_t[:, :] out,
- rank_t[:, :] values,
+ ndarray[rank_t, ndim=2] values,
const int64_t[:] labels,
int ngroups,
bint is_datetimelike, object ties_method="average",
@@ -1424,7 +1428,7 @@ def group_min(groupby_t[:, :] out,
@cython.boundscheck(False)
@cython.wraparound(False)
def group_cummin(groupby_t[:, :] out,
- groupby_t[:, :] values,
+ ndarray[groupby_t, ndim=2] values,
const int64_t[:] labels,
int ngroups,
bint is_datetimelike):
@@ -1484,7 +1488,7 @@ def group_cummin(groupby_t[:, :] out,
@cython.boundscheck(False)
@cython.wraparound(False)
def group_cummax(groupby_t[:, :] out,
- groupby_t[:, :] values,
+ ndarray[groupby_t, ndim=2] values,
const int64_t[:] labels,
int ngroups,
bint is_datetimelike):
| BUG: groupby and agg on read-only array gives ValueError: buffer source array is read-only
- [x] I have checked that this issue has not already been reported.
Two variants of this bug have been reported - #35436 and #34857
EDIT: I read into those two issues a bit more. They don't seem similar. But I'll keep it there.
- [x] I have confirmed this bug exists on the latest version of pandas.
Bug exists in pandas 1.1.1
---
#### Code Sample, a copy-pastable example
```python
import pandas as pd
import pyarrow as pa
df = pd.DataFrame(
{
"sepal_length": [5.1, 4.9, 4.7, 4.6, 5.0],
"species": ["setosa", "setosa", "setosa", "setosa", "setosa"],
}
)
context = pa.default_serialization_context()
data = context.serialize(df).to_buffer().to_pybytes()
df_new = context.deserialize(data)
# this fails
df_new.groupby(["species"]).agg({"sepal_length": "sum"})
# this works
# df_new.copy().groupby(["species"]).agg({"sepal_length": "sum"})
```
#### Problem description
This is the traceback.
```python-traceback
Traceback (most recent call last):
File "demo.py", line 16, in <module>
df_new.groupby(["species"]).agg({"sepal_length": "sum"})
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/groupby/generic.py", line 949, in aggregate
result, how = self._aggregate(func, *args, **kwargs)
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/base.py", line 416, in _aggregate
result = _agg(arg, _agg_1dim)
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/base.py", line 383, in _agg
result[fname] = func(fname, agg_how)
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/base.py", line 367, in _agg_1dim
return colg.aggregate(how)
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/groupby/generic.py", line 240, in aggregate
return getattr(self, func)(*args, **kwargs)
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/groupby/groupby.py", line 1539, in sum
return self._agg_general(
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/groupby/groupby.py", line 999, in _agg_general
return self._cython_agg_general(
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/groupby/groupby.py", line 1033, in _cython_agg_general
result, agg_names = self.grouper.aggregate(
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/groupby/ops.py", line 584, in aggregate
return self._cython_operation(
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/groupby/ops.py", line 537, in _cython_operation
result = self._aggregate(result, counts, values, codes, func, min_count)
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/groupby/ops.py", line 599, in _aggregate
agg_func(result, counts, values, comp_ids, min_count)
File "pandas/_libs/groupby.pyx", line 475, in pandas._libs.groupby._group_add
File "stringsource", line 658, in View.MemoryView.memoryview_cwrapper
File "stringsource", line 349, in View.MemoryView.memoryview.__cinit__
ValueError: buffer source array is read-only
```
In the `.agg` line that fails, if you do a min, max, median, or count aggregation, then it's going to work.
But if you do a sum or mean, then it fails.
#### Expected Output
I expected the aggregation to succeed without any error.
#### Output of ``pd.show_versions()``
<details>
```
INSTALLED VERSIONS
------------------
commit : f2ca0a2665b2d169c97de87b8e778dbed86aea07
python : 3.8.5.final.0
python-bits : 64
OS : Linux
OS-release : 5.4.0-7642-generic
Version : #46~1597422484~20.04~e78f762-Ubuntu SMP Wed Aug 19 14:35:06 UTC
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.1.1
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.2
setuptools : 49.6.0.post20200814
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : 2.8.5 (dt dec pq3 ext lo64)
jinja2 : 2.11.2
IPython : 7.17.0
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 1.0.1
pytables : None
pyxlsb : None
s3fs : None
scipy : 1.5.2
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
```
</details>
| @jeet-parekh Can you create a copy / pastable example that doesn't use external links?
https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
A couple of more things.
This fails
```python
df_new.groupby(["species"])["sepal_length"].sum()
```
This works
```python
df_new.groupby(["species"])[["sepal_length"]].sum()
```
@dsaxton, fixed it. I missed that fact that it isn't copy-pastable. Will edit in the main issue post as well.
Thanks @jeet-parekh. Fails on master as well and looks like a bug to me.
Another temporary workaround is to make a copy:
`df_new = context.deserialize(data).copy()`
Then it seems to me that all groupby ops work, whether as a Series or a DataFrame.
A reproducer without the use of pyarrow:
```
df = pd.DataFrame(
{
"sepal_length": [5.1, 4.9, 4.7, 4.6, 5.0],
"species": ["setosa", "setosa", "setosa", "setosa", "setosa"],
}
)
df._mgr.blocks[0].values.flags.writeable = False
df.groupby(["species"]).agg({"sepal_length": "sum"})
```
It's already failing in 1.0, but not in 0.25. So not a very recent regression, but still a regression compared to 0.25.
I see the same behaviour with @jorisvandenbossche's code. It succeeds for min, max, count, and median aggregations. But fails for sum and mean. Not sure if that's relevant.
The direct fix would be to add a `const` to the `values` keyword declaration at
https://github.com/pandas-dev/pandas/blob/b528be68fdfe5400cf53e4b6f2ecade6b01208f6/pandas/_libs/groupby.pyx#L473-L477
however, using `const` with fused types will only be available for cython 3. So a workaround for now would be to use ndarray interface (`ndarray[complexfloating_t, ndim=2]`) instead of memoryview, I suppose. | 2020-09-02T07:00:58Z | [] | [] |
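A pure-Python sketch of the underlying condition, a read-only NumPy buffer: the reported ValueError is raised by the Cython typed-memoryview constructor and is not reproducible without compiled code, so this only illustrates what such an array looks like and why write access is refused.

```python
import numpy as np

arr = np.arange(4.0).reshape(2, 2)
arr.flags.writeable = False  # what deserialized Arrow-backed data looks like

view = memoryview(arr)  # read access via the buffer protocol still works
is_readonly = view.readonly

try:
    arr[0, 0] = 99.0  # any write path is refused on a read-only buffer
    write_failed = False
except ValueError:
    write_failed = True
```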
Traceback (most recent call last):
File "demo.py", line 16, in <module>
df_new.groupby(["species"]).agg({"sepal_length": "sum"})
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/groupby/generic.py", line 949, in aggregate
result, how = self._aggregate(func, *args, **kwargs)
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/base.py", line 416, in _aggregate
result = _agg(arg, _agg_1dim)
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/base.py", line 383, in _agg
result[fname] = func(fname, agg_how)
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/base.py", line 367, in _agg_1dim
return colg.aggregate(how)
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/groupby/generic.py", line 240, in aggregate
return getattr(self, func)(*args, **kwargs)
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/groupby/groupby.py", line 1539, in sum
return self._agg_general(
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/groupby/groupby.py", line 999, in _agg_general
return self._cython_agg_general(
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/groupby/groupby.py", line 1033, in _cython_agg_general
result, agg_names = self.grouper.aggregate(
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/groupby/ops.py", line 584, in aggregate
return self._cython_operation(
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/groupby/ops.py", line 537, in _cython_operation
result = self._aggregate(result, counts, values, codes, func, min_count)
File "/home/jeet/miniconda3/envs/rnd/lib/python3.8/site-packages/pandas/core/groupby/ops.py", line 599, in _aggregate
agg_func(result, counts, values, comp_ids, min_count)
File "pandas/_libs/groupby.pyx", line 475, in pandas._libs.groupby._group_add
File "stringsource", line 658, in View.MemoryView.memoryview_cwrapper
File "stringsource", line 349, in View.MemoryView.memoryview.__cinit__
ValueError: buffer source array is read-only
| 14,049 |
|||
pandas-dev/pandas | pandas-dev__pandas-36093 | 497ede8065e81065636908fecd31432a91c1ff64 | diff --git a/doc/source/user_guide/missing_data.rst b/doc/source/user_guide/missing_data.rst
--- a/doc/source/user_guide/missing_data.rst
+++ b/doc/source/user_guide/missing_data.rst
@@ -689,32 +689,6 @@ You can also operate on the DataFrame in place:
df.replace(1.5, np.nan, inplace=True)
-.. warning::
-
- When replacing multiple ``bool`` or ``datetime64`` objects, the first
- argument to ``replace`` (``to_replace``) must match the type of the value
- being replaced. For example,
-
- .. code-block:: python
-
- >>> s = pd.Series([True, False, True])
- >>> s.replace({'a string': 'new value', True: False}) # raises
- TypeError: Cannot compare types 'ndarray(dtype=bool)' and 'str'
-
- will raise a ``TypeError`` because one of the ``dict`` keys is not of the
- correct type for replacement.
-
- However, when replacing a *single* object such as,
-
- .. ipython:: python
-
- s = pd.Series([True, False, True])
- s.replace('a string', 'another string')
-
- the original ``NDFrame`` object will be returned untouched. We're working on
- unifying this API, but for backwards compatibility reasons we cannot break
- the latter behavior. See :issue:`6354` for more details.
-
Missing data casting rules and indexing
---------------------------------------
diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -337,6 +337,7 @@ ExtensionArray
Other
^^^^^
- Bug in :meth:`DataFrame.replace` and :meth:`Series.replace` incorrectly raising ``AssertionError`` instead of ``ValueError`` when invalid parameter combinations are passed (:issue:`36045`)
+- Bug in :meth:`DataFrame.replace` and :meth:`Series.replace` with numeric values and string ``to_replace`` (:issue:`34789`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/array_algos/replace.py b/pandas/core/array_algos/replace.py
new file mode 100644
--- /dev/null
+++ b/pandas/core/array_algos/replace.py
@@ -0,0 +1,95 @@
+"""
+Methods used by Block.replace and related methods.
+"""
+import operator
+import re
+from typing import Optional, Pattern, Union
+
+import numpy as np
+
+from pandas._typing import ArrayLike, Scalar
+
+from pandas.core.dtypes.common import (
+ is_datetimelike_v_numeric,
+ is_numeric_v_string_like,
+ is_scalar,
+)
+from pandas.core.dtypes.missing import isna
+
+
+def compare_or_regex_search(
+ a: ArrayLike,
+ b: Union[Scalar, Pattern],
+ regex: bool = False,
+ mask: Optional[ArrayLike] = None,
+) -> Union[ArrayLike, bool]:
+ """
+ Compare two array_like inputs of the same shape or two scalar values
+
+ Calls operator.eq or re.search, depending on regex argument. If regex is
+ True, perform an element-wise regex matching.
+
+ Parameters
+ ----------
+ a : array_like
+ b : scalar or regex pattern
+ regex : bool, default False
+ mask : array_like or None (default)
+
+ Returns
+ -------
+ mask : array_like of bool
+ """
+
+ def _check_comparison_types(
+ result: Union[ArrayLike, bool], a: ArrayLike, b: Union[Scalar, Pattern]
+ ):
+ """
+ Raises an error if the two arrays (a,b) cannot be compared.
+ Otherwise, returns the comparison result as expected.
+ """
+ if is_scalar(result) and isinstance(a, np.ndarray):
+ type_names = [type(a).__name__, type(b).__name__]
+
+ if isinstance(a, np.ndarray):
+ type_names[0] = f"ndarray(dtype={a.dtype})"
+
+ raise TypeError(
+ f"Cannot compare types {repr(type_names[0])} and {repr(type_names[1])}"
+ )
+
+ if not regex:
+ op = lambda x: operator.eq(x, b)
+ else:
+ op = np.vectorize(
+ lambda x: bool(re.search(b, x))
+ if isinstance(x, str) and isinstance(b, (str, Pattern))
+ else False
+ )
+
+ # GH#32621 use mask to avoid comparing to NAs
+ if mask is None and isinstance(a, np.ndarray) and not isinstance(b, np.ndarray):
+ mask = np.reshape(~(isna(a)), a.shape)
+ if isinstance(a, np.ndarray):
+ a = a[mask]
+
+ if is_numeric_v_string_like(a, b):
+ # GH#29553 avoid deprecation warnings from numpy
+ return np.zeros(a.shape, dtype=bool)
+
+ elif is_datetimelike_v_numeric(a, b):
+ # GH#29553 avoid deprecation warnings from numpy
+ _check_comparison_types(False, a, b)
+ return False
+
+ result = op(a)
+
+ if isinstance(result, np.ndarray) and mask is not None:
+ # The shape of the mask can differ to that of the result
+ # since we may compare only a subset of a's or b's elements
+ tmp = np.zeros(mask.shape, dtype=np.bool_)
+ tmp[mask] = result
+ result = tmp
+
+ _check_comparison_types(result, a, b)
+ return result
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6559,20 +6559,6 @@ def replace(
1 new new
2 bait xyz
- Note that when replacing multiple ``bool`` or ``datetime64`` objects,
- the data types in the `to_replace` parameter must match the data
- type of the value being replaced:
-
- >>> df = pd.DataFrame({{'A': [True, False, True],
- ... 'B': [False, True, False]}})
- >>> df.replace({{'a string': 'new value', True: False}}) # raises
- Traceback (most recent call last):
- ...
- TypeError: Cannot compare types 'ndarray(dtype=bool)' and 'str'
-
- This raises a ``TypeError`` because one of the ``dict`` keys is not of
- the correct type for replacement.
-
Compare the behavior of ``s.replace({{'a': None}})`` and
``s.replace('a', None)`` to understand the peculiarities
of the `to_replace` parameter:
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -11,7 +11,7 @@
from pandas._libs.internals import BlockPlacement
from pandas._libs.tslibs import conversion
from pandas._libs.tslibs.timezones import tz_compare
-from pandas._typing import ArrayLike
+from pandas._typing import ArrayLike, Scalar
from pandas.util._validators import validate_bool_kwarg
from pandas.core.dtypes.cast import (
@@ -59,6 +59,7 @@
from pandas.core.dtypes.missing import _isna_compat, is_valid_nat_for_dtype, isna
import pandas.core.algorithms as algos
+from pandas.core.array_algos.replace import compare_or_regex_search
from pandas.core.array_algos.transforms import shift
from pandas.core.arrays import (
Categorical,
@@ -792,7 +793,6 @@ def _replace_list(
self,
src_list: List[Any],
dest_list: List[Any],
- masks: List[np.ndarray],
inplace: bool = False,
regex: bool = False,
) -> List["Block"]:
@@ -801,11 +801,28 @@ def _replace_list(
"""
src_len = len(src_list) - 1
+ def comp(s: Scalar, mask: np.ndarray, regex: bool = False) -> np.ndarray:
+ """
+ Generate a bool array by perform an equality check, or perform
+ an element-wise regular expression matching
+ """
+ if isna(s):
+ return ~mask
+
+ s = com.maybe_box_datetimelike(s)
+ return compare_or_regex_search(self.values, s, regex, mask)
+
+ # Calculate the mask once, prior to the call of comp
+ # in order to avoid repeating the same computations
+ mask = ~isna(self.values)
+
+ masks = [comp(s, mask, regex) for s in src_list]
+
rb = [self if inplace else self.copy()]
for i, (src, dest) in enumerate(zip(src_list, dest_list)):
new_rb: List["Block"] = []
for blk in rb:
- m = masks[i][blk.mgr_locs.indexer]
+ m = masks[i]
convert = i == src_len # only convert once at the end
result = blk._replace_coerce(
mask=m,
@@ -2908,7 +2925,9 @@ def _extract_bool_array(mask: ArrayLike) -> np.ndarray:
"""
if isinstance(mask, ExtensionArray):
# We could have BooleanArray, Sparse[bool], ...
- mask = np.asarray(mask, dtype=np.bool_)
+ # Except for BooleanArray, this is equivalent to just
+ # np.asarray(mask, dtype=bool)
+ mask = mask.to_numpy(dtype=bool, na_value=False)
assert isinstance(mask, np.ndarray), type(mask)
assert mask.dtype == bool, mask.dtype
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -1,14 +1,11 @@
from collections import defaultdict
import itertools
-import operator
-import re
from typing import (
Any,
DefaultDict,
Dict,
List,
Optional,
- Pattern,
Sequence,
Tuple,
TypeVar,
@@ -19,7 +16,7 @@
import numpy as np
from pandas._libs import internals as libinternals, lib
-from pandas._typing import ArrayLike, DtypeObj, Label, Scalar
+from pandas._typing import ArrayLike, DtypeObj, Label
from pandas.util._validators import validate_bool_kwarg
from pandas.core.dtypes.cast import (
@@ -29,12 +26,9 @@
)
from pandas.core.dtypes.common import (
DT64NS_DTYPE,
- is_datetimelike_v_numeric,
is_dtype_equal,
is_extension_array_dtype,
is_list_like,
- is_numeric_v_string_like,
- is_scalar,
)
from pandas.core.dtypes.concat import concat_compat
from pandas.core.dtypes.dtypes import ExtensionDtype
@@ -44,7 +38,6 @@
import pandas.core.algorithms as algos
from pandas.core.arrays.sparse import SparseDtype
from pandas.core.base import PandasObject
-import pandas.core.common as com
from pandas.core.construction import extract_array
from pandas.core.indexers import maybe_convert_indices
from pandas.core.indexes.api import Index, ensure_index
@@ -628,31 +621,10 @@ def replace_list(
""" do a list replace """
inplace = validate_bool_kwarg(inplace, "inplace")
- # figure out our mask apriori to avoid repeated replacements
- values = self.as_array()
-
- def comp(s: Scalar, mask: np.ndarray, regex: bool = False):
- """
- Generate a bool array by perform an equality check, or perform
- an element-wise regular expression matching
- """
- if isna(s):
- return ~mask
-
- s = com.maybe_box_datetimelike(s)
- return _compare_or_regex_search(values, s, regex, mask)
-
- # Calculate the mask once, prior to the call of comp
- # in order to avoid repeating the same computations
- mask = ~isna(values)
-
- masks = [comp(s, mask, regex) for s in src_list]
-
bm = self.apply(
"_replace_list",
src_list=src_list,
dest_list=dest_list,
- masks=masks,
inplace=inplace,
regex=regex,
)
@@ -1900,80 +1872,6 @@ def _merge_blocks(
return blocks
-def _compare_or_regex_search(
- a: ArrayLike,
- b: Union[Scalar, Pattern],
- regex: bool = False,
- mask: Optional[ArrayLike] = None,
-) -> Union[ArrayLike, bool]:
- """
- Compare two array_like inputs of the same shape or two scalar values
-
- Calls operator.eq or re.search, depending on regex argument. If regex is
- True, perform an element-wise regex matching.
-
- Parameters
- ----------
- a : array_like
- b : scalar or regex pattern
- regex : bool, default False
- mask : array_like or None (default)
-
- Returns
- -------
- mask : array_like of bool
- """
-
- def _check_comparison_types(
- result: Union[ArrayLike, bool], a: ArrayLike, b: Union[Scalar, Pattern]
- ):
- """
- Raises an error if the two arrays (a,b) cannot be compared.
- Otherwise, returns the comparison result as expected.
- """
- if is_scalar(result) and isinstance(a, np.ndarray):
- type_names = [type(a).__name__, type(b).__name__]
-
- if isinstance(a, np.ndarray):
- type_names[0] = f"ndarray(dtype={a.dtype})"
-
- raise TypeError(
- f"Cannot compare types {repr(type_names[0])} and {repr(type_names[1])}"
- )
-
- if not regex:
- op = lambda x: operator.eq(x, b)
- else:
- op = np.vectorize(
- lambda x: bool(re.search(b, x))
- if isinstance(x, str) and isinstance(b, (str, Pattern))
- else False
- )
-
- # GH#32621 use mask to avoid comparing to NAs
- if mask is None and isinstance(a, np.ndarray) and not isinstance(b, np.ndarray):
- mask = np.reshape(~(isna(a)), a.shape)
- if isinstance(a, np.ndarray):
- a = a[mask]
-
- if is_datetimelike_v_numeric(a, b) or is_numeric_v_string_like(a, b):
- # GH#29553 avoid deprecation warnings from numpy
- _check_comparison_types(False, a, b)
- return False
-
- result = op(a)
-
- if isinstance(result, np.ndarray) and mask is not None:
- # The shape of the mask can differ to that of the result
- # since we may compare only a subset of a's or b's elements
- tmp = np.zeros(mask.shape, dtype=np.bool_)
- tmp[mask] = result
- result = tmp
-
- _check_comparison_types(result, a, b)
- return result
-
-
def _fast_count_smallints(arr: np.ndarray) -> np.ndarray:
"""Faster version of set(arr) for sequences of small numbers."""
counts = np.bincount(arr.astype(np.int_))
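The `_extract_bool_array` hunk above swaps `np.asarray` for `to_numpy(dtype=bool, na_value=False)` so that missing entries in a nullable boolean mask resolve to False before the mask indexes a plain ndarray. A hedged stdlib sketch of that semantic — the function name and the use of `None` as a stand-in for `pd.NA` are illustrative, not pandas API:

```python
# Hedged sketch (not pandas internals): a nullable boolean mask must
# resolve its missing entries to False before it can index an ndarray.
def extract_bool_mask(values):
    """values: list of True/False/None, with None standing in for pd.NA."""
    return [False if v is None else bool(v) for v in values]

print(extract_bool_mask([True, None, False]))  # [True, False, False]
```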
| BUG: Replace raises TypeError if to_replace is Dict with numeric DataFrame and key of Dict is String
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [x] (optional) I have confirmed this bug exists on the master branch of pandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
import pandas as pd
df = pd.DataFrame({"Y0": [1, 2], "Y1": [3, 4]})
df = df.replace({"replace_string": "test"})
```
#### Problem description
This raises a TypeError with the following Traceback:
```python
Traceback (most recent call last):
File "/home/developer/.config/JetBrains/PyCharmCE2020.1/scratches/scratch_1.py", line 7, in <module>
df = df.replace({"replace_string": "test"})
File "/home/developer/PycharmProjects/pandas/pandas/core/frame.py", line 4277, in replace
return super().replace(
File "/home/developer/PycharmProjects/pandas/pandas/core/generic.py", line 6598, in replace
return self.replace(
File "/home/developer/PycharmProjects/pandas/pandas/core/frame.py", line 4277, in replace
return super().replace(
File "/home/developer/PycharmProjects/pandas/pandas/core/generic.py", line 6641, in replace
new_data = self._mgr.replace_list(
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/managers.py", line 616, in replace_list
masks = [comp(s, regex) for s in src_list]
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/managers.py", line 616, in <listcomp>
masks = [comp(s, regex) for s in src_list]
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/managers.py", line 614, in comp
return _compare_or_regex_search(values, s, regex)
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/managers.py", line 1946, in _compare_or_regex_search
_check_comparison_types(False, a, b)
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/managers.py", line 1925, in _check_comparison_types
raise TypeError(
TypeError: Cannot compare types 'ndarray(dtype=int64)' and 'str'
Process finished with exit code 1
```
#### Expected Output
The original DataFrame.
If at least one column is of dtype object, the replace works; for example, the following works:
```python
import pandas as pd
df = pd.DataFrame({"Y0": [1, "2"], "Y1": [3, 4]})
df = df.replace({"replace_string": "test"})
```
Unexpectedly, the following code works:
```python
import pandas as pd
df = pd.DataFrame({"Y0": [1, 2], "Y1": [3, 4]})
df = df.replace(to_replace="replace_string", value="test")
```
This is probably related to #16784
Tested on master.
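To make the intended post-fix behaviour concrete: matching a replacement key against a column in which it cannot occur should yield an all-False match mask (so nothing is replaced) rather than raising. A hedged, stdlib-only sketch of that semantic — `match_mask` is an illustrative name, not pandas internals:

```python
# Hedged sketch (not pandas internals): degrade incomparable
# comparisons to "no match" instead of letting a TypeError bubble up.
def match_mask(values, target):
    mask = []
    for v in values:
        try:
            mask.append(v == target)
        except TypeError:
            mask.append(False)  # incomparable types -> no match
    return mask

print(match_mask([1, 2], "replace_string"))  # [False, False]
```

With an all-False mask, a replace over an int64 column simply leaves the column unchanged, which is the expected output in the report above.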
| 2020-09-03T15:57:21Z | [] | [] |
Traceback (most recent call last):
File "/home/developer/.config/JetBrains/PyCharmCE2020.1/scratches/scratch_1.py", line 7, in <module>
df = df.replace({"replace_string": "test"})
File "/home/developer/PycharmProjects/pandas/pandas/core/frame.py", line 4277, in replace
return super().replace(
File "/home/developer/PycharmProjects/pandas/pandas/core/generic.py", line 6598, in replace
return self.replace(
File "/home/developer/PycharmProjects/pandas/pandas/core/frame.py", line 4277, in replace
return super().replace(
File "/home/developer/PycharmProjects/pandas/pandas/core/generic.py", line 6641, in replace
new_data = self._mgr.replace_list(
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/managers.py", line 616, in replace_list
masks = [comp(s, regex) for s in src_list]
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/managers.py", line 616, in <listcomp>
masks = [comp(s, regex) for s in src_list]
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/managers.py", line 614, in comp
return _compare_or_regex_search(values, s, regex)
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/managers.py", line 1946, in _compare_or_regex_search
_check_comparison_types(False, a, b)
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/managers.py", line 1925, in _check_comparison_types
raise TypeError(
TypeError: Cannot compare types 'ndarray(dtype=int64)' and 'str'
| 14,052 |
||||
pandas-dev/pandas | pandas-dev__pandas-36175 | 7d1622443f026729786616f8d5dda5a5a97be90a | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -296,6 +296,7 @@ I/O
- :meth:`to_csv` did not support zip compression for binary file object not having a filename (:issue: `35058`)
- :meth:`to_csv` and :meth:`read_csv` did not honor `compression` and `encoding` for path-like objects that are internally converted to file-like objects (:issue:`35677`, :issue:`26124`, and :issue:`32392`)
- :meth:`to_picke` and :meth:`read_pickle` did not support compression for file-objects (:issue:`26237`, :issue:`29054`, and :issue:`29570`)
+- Bug in :meth:`read_excel` with `engine="odf"` caused ``UnboundLocalError`` in some cases where cells had nested child nodes (:issue:`36122`, and :issue:`35802`)
Plotting
^^^^^^^^
diff --git a/pandas/io/excel/_odfreader.py b/pandas/io/excel/_odfreader.py
--- a/pandas/io/excel/_odfreader.py
+++ b/pandas/io/excel/_odfreader.py
@@ -197,22 +197,24 @@ def _get_cell_string_value(self, cell) -> str:
Find and decode OpenDocument text:s tags that represent
a run length encoded sequence of space characters.
"""
- from odf.element import Element, Text
+ from odf.element import Element
from odf.namespaces import TEXTNS
- from odf.text import P, S
+ from odf.text import S
- text_p = P().qname
text_s = S().qname
- p = cell.childNodes[0]
-
value = []
- if p.qname == text_p:
- for k, fragment in enumerate(p.childNodes):
- if isinstance(fragment, Text):
- value.append(fragment.data)
- elif isinstance(fragment, Element):
- if fragment.qname == text_s:
- spaces = int(fragment.attributes.get((TEXTNS, "c"), 1))
+
+ for fragment in cell.childNodes:
+ if isinstance(fragment, Element):
+ if fragment.qname == text_s:
+ spaces = int(fragment.attributes.get((TEXTNS, "c"), 1))
value.append(" " * spaces)
+ else:
+ # recursive impl needed in case of nested fragments
+ # with multiple spaces
+ # https://github.com/pandas-dev/pandas/pull/36175#discussion_r484639704
+ value.append(self._get_cell_string_value(fragment))
+ else:
+ value.append(str(fragment))
return "".join(value)
| BUG: Was trying to read an ods file and ran into UnboundLocalError in odfreader.py
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
import os, pathlib, pandas
for file in os.listdir('data'): pandas.read_excel(pathlib.Path('data', file), engine='odf')
```
Sorry I don't have a minimal data example at this time.
#### Problem description
Was trying to test pandas reading a collection of ods files and ran into this error.
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/michael/.local/share/virtualenvs/merge-csv-NFbvYFrS/lib/python3.8/site-packages/pandas/util/_decorators.py", line 296, in wrapper
return func(*args, **kwargs)
File "/home/michael/.local/share/virtualenvs/merge-csv-NFbvYFrS/lib/python3.8/site-packages/pandas/io/excel/_base.py", line 311, in read_excel
return io.parse(
File "/home/michael/.local/share/virtualenvs/merge-csv-NFbvYFrS/lib/python3.8/site-packages/pandas/io/excel/_base.py", line 906, in parse
return self._reader.parse(
File "/home/michael/.local/share/virtualenvs/merge-csv-NFbvYFrS/lib/python3.8/site-packages/pandas/io/excel/_base.py", line 443, in parse
data = self.get_sheet_data(sheet, convert_float)
File "/home/michael/.local/share/virtualenvs/merge-csv-NFbvYFrS/lib/python3.8/site-packages/pandas/io/excel/_odfreader.py", line 91, in get_sheet_data
value = self._get_cell_value(sheet_cell, convert_float)
File "/home/michael/.local/share/virtualenvs/merge-csv-NFbvYFrS/lib/python3.8/site-packages/pandas/io/excel/_odfreader.py", line 175, in _get_cell_value
return self._get_cell_string_value(cell)
File "/home/michael/.local/share/virtualenvs/merge-csv-NFbvYFrS/lib/python3.8/site-packages/pandas/io/excel/_odfreader.py", line 211, in _get_cell_string_value
value.append(" " * spaces)
UnboundLocalError: local variable 'spaces' referenced before assignment
```
I took a look at the [code in question](https://github.com/pandas-dev/pandas/blob/ddf4f2430dd0a6fd51d7409ad3f524aeb5cbace2/pandas/io/excel/_odfreader.py#L211) and it seems like the line may be on the wrong indent level?
#### Expected Output
The usual dataframes :+1:
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : d9fff2792bf16178d4e450fe7384244e50635733
python : 3.8.2.final.0
python-bits : 64
OS : Linux
OS-release : 5.4.0-42-generic
Version : #46-Ubuntu SMP Fri Jul 10 00:24:02 UTC 2020
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.1.0
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.0.2
setuptools : 44.0.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : 1.0.1
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
</details>
| Your proposed fix seems reasonable - want to push a PR with a test case?
I believe I can do that for you. Assuming I can sleuth out what exactly the file is crashing on would a test case similar to [this test](https://github.com/pandas-dev/pandas/blob/df1d440c4a0c583b06d17a885d4589b9edaa840b/pandas/tests/io/excel/test_readers.py#L504) work?
> I believe I can do that for you. Assuming I can sleuth out what exactly the file is crashing on would a test case similar to [this test](https://github.com/pandas-dev/pandas/blob/df1d440c4a0c583b06d17a885d4589b9edaa840b/pandas/tests/io/excel/test_readers.py#L504) work?
Hard to say what the best test would be without a reproducible failing code sample. It may be that a simple roundtrip test could suffice.
Find a [minimal .ods file here](https://openweb.cc/bugs/bug_pandas_odfreader_l211_spaces.ods) that has one column with a header and one data cell with content `Test (1)` that causes the failure. Put the .ods in the same directory where you run the following python3 snippet:
```
python3
import pandas as pd
print(pd.show_versions())
df = pd.read_excel("bug_odfreader_l211_spaces.ods", sheet_name="Test123")
```
I think the failure is related to the type of the data cell, where `fragment.qname` is reported as `('urn:oasis:names:tc:opendocument:xmlns:text:1.0', 'span')`. As that case is not handled by the if-clause, `spaces` is never set and `value.append(" " * spaces)` throws the error. Fixing the indent would solve the problem, but would leave the `span` case unimplemented.
I could not reproduce creating a cell of type `span` in LibreOffice.
Sorry for linking to the .ods file but I was not able to upload it here as .ods is not a supported file type
I was working with some bad csv data and using libreoffice to deal with it so that could be how a span, or other element cause I think it might be a line-break, got in there. I got around the issue by returning the data to a csv file and continuing as normal. I'll be taking a look into implementing this test in the coming days now that my work is done.
Judging by [the spec](https://docs.oasis-open.org/office/v1.2/os/OpenDocument-v1.2-os-part1.html#__RefHeading__1415196_253892949) there are quite a few elements that aren't being checked for. I'm not all that sure what sort of guarantees pandas makes for reading data, so could I get some feedback on whether we should handle all these cases or just set a default for `spaces` so that it doesn't throw?
| 2020-09-06T22:58:37Z | [] | [] |
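The recursive extraction the patch adds to `_get_cell_string_value` can be sketched without odfpy at all. Below is a hedged stdlib sketch: `Text`, `Element`, `TEXT_S`, and `cell_text` are minimal stand-ins for odfpy's classes and the pandas method, chosen only to show how nested `text:span` fragments (the crashing case) and run-length-encoded spaces are flattened:

```python
# Hedged stdlib sketch; classes mimic just enough of odfpy's node API.
class Text:
    def __init__(self, data):
        self.data = data

class Element:
    def __init__(self, qname, children=(), spaces=None):
        self.qname = qname
        self.childNodes = list(children)
        self.spaces = spaces

TEXT_S = "text:s"  # tag for a run-length-encoded sequence of spaces

def cell_text(node):
    parts = []
    for frag in node.childNodes:
        if isinstance(frag, Element):
            if frag.qname == TEXT_S:
                parts.append(" " * (frag.spaces or 1))
            else:
                # recurse so nested text:span / text:p fragments
                # (the crashing case) are flattened too
                parts.append(cell_text(frag))
        else:
            parts.append(str(frag.data))
    return "".join(parts)

cell = Element("table:cell", [
    Element("text:p", [
        Text("Test"),
        Element(TEXT_S, spaces=2),
        Element("text:span", [Text("(1)")]),
    ]),
])
print(cell_text(cell))  # Test  (1)
```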
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/michael/.local/share/virtualenvs/merge-csv-NFbvYFrS/lib/python3.8/site-packages/pandas/util/_decorators.py", line 296, in wrapper
return func(*args, **kwargs)
File "/home/michael/.local/share/virtualenvs/merge-csv-NFbvYFrS/lib/python3.8/site-packages/pandas/io/excel/_base.py", line 311, in read_excel
return io.parse(
File "/home/michael/.local/share/virtualenvs/merge-csv-NFbvYFrS/lib/python3.8/site-packages/pandas/io/excel/_base.py", line 906, in parse
return self._reader.parse(
File "/home/michael/.local/share/virtualenvs/merge-csv-NFbvYFrS/lib/python3.8/site-packages/pandas/io/excel/_base.py", line 443, in parse
data = self.get_sheet_data(sheet, convert_float)
File "/home/michael/.local/share/virtualenvs/merge-csv-NFbvYFrS/lib/python3.8/site-packages/pandas/io/excel/_odfreader.py", line 91, in get_sheet_data
value = self._get_cell_value(sheet_cell, convert_float)
File "/home/michael/.local/share/virtualenvs/merge-csv-NFbvYFrS/lib/python3.8/site-packages/pandas/io/excel/_odfreader.py", line 175, in _get_cell_value
return self._get_cell_string_value(cell)
File "/home/michael/.local/share/virtualenvs/merge-csv-NFbvYFrS/lib/python3.8/site-packages/pandas/io/excel/_odfreader.py", line 211, in _get_cell_string_value
value.append(" " * spaces)
UnboundLocalError: local variable 'spaces' referenced before assignment
| 14,073 |
|||
pandas-dev/pandas | pandas-dev__pandas-36208 | 19f0a9fa0293cf291ef1f7d6e39fa07ec21f56ed | diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -24,7 +24,7 @@ Fixed regressions
- Fix regression in pickle roundtrip of the ``closed`` attribute of :class:`IntervalIndex` (:issue:`35658`)
- Fixed regression in :meth:`DataFrameGroupBy.agg` where a ``ValueError: buffer source array is read-only`` would be raised when the underlying array is read-only (:issue:`36014`)
- Fixed regression in :meth:`Series.groupby.rolling` number of levels of :class:`MultiIndex` in input was compressed to one (:issue:`36018`)
--
+- Fixed regression in :class:`DataFrameGroupBy` on an empty :class:`DataFrame` (:issue:`36197`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -2240,10 +2240,12 @@ def _create_blocks(self, obj: FrameOrSeriesUnion):
"""
# Ensure the object we're rolling over is monotonically sorted relative
# to the groups
- groupby_order = np.concatenate(
- list(self._groupby.grouper.indices.values())
- ).astype(np.int64)
- obj = obj.take(groupby_order)
+ # GH 36197
+ if not obj.empty:
+ groupby_order = np.concatenate(
+ list(self._groupby.grouper.indices.values())
+ ).astype(np.int64)
+ obj = obj.take(groupby_order)
return super()._create_blocks(obj)
def _get_cython_func_type(self, func: str) -> Callable:
| BUG: DataFrameGroupBy.rolling on an empty DataFrame throws a ValueError
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [x] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
```python
import pandas as pd
pd.DataFrame({"s1": []}).groupby("s1").rolling(window=1).sum()
```
#### Problem description
Since a pandas update, the groupby above started failing in the edge case of an empty DataFrame. It used to simply return an empty DataFrame before, which seems like a more robust behaviour to me.
#### Expected Output
```
>>> pd.DataFrame({"s1": []}).groupby("s1").rolling(window=1).sum()
Empty DataFrame
Columns: []
Index: []
```
#### Actual Output
```
>>> pd.DataFrame({"s1": []}).groupby("s1").rolling(window=1).sum()
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "site-packages/pandas/core/window/rolling.py", line 2072, in sum
return super().sum(*args, **kwargs)
File "site-packages/pandas/core/window/rolling.py", line 1424, in sum
window_func, center=self.center, floor=0, name="sum", **kwargs
File "site-packages/pandas/core/window/rolling.py", line 2194, in _apply
**kwargs,
File "site-packages/pandas/core/window/rolling.py", line 528, in _apply
blocks, obj = self._create_blocks(self._selected_obj)
File "site-packages/pandas/core/window/rolling.py", line 2230, in _create_blocks
list(self._groupby.grouper.indices.values())
File "<__array_function__ internals>", line 6, in concatenate
ValueError: need at least one array to concatenate
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : f2ca0a2665b2d169c97de87b8e778dbed86aea07
python : 3.7.9.final.0
python-bits : 64
OS : Linux
OS-release : 5.7.17-200.fc32.x86_64
Version : #1 SMP Fri Aug 21 15:23:46 UTC 2020
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.1.1
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.7.3
pip : 19.3.1
setuptools : 42.0.2
Cython : None
pytest : 5.1.1
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.3.3
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.10.1
IPython : 7.3.0
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : 0.8.0
fastparquet : None
gcsfs : None
matplotlib : 3.0.3
numexpr : None
odfpy : None
openpyxl : 3.0.4
pandas_gbq : None
pyarrow : 0.12.1
pytables : None
pyxlsb : None
s3fs : None
scipy : 1.5.2
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : 1.2.0
xlwt : None
numba : None
</details>
The last version where this worked was 1.0.5, i.e. it has been broken since 1.1.0rc0.
| 2020-09-08T02:01:13Z | [] | [] |
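The root cause is visible outside pandas: `np.concatenate` refuses an empty sequence of arrays, which is exactly what zero groups produce. A hedged sketch of the failure and of the guard the patch adds (`group_order` is an illustrative name, not pandas API):

```python
import numpy as np

# Hedged sketch: an empty frame yields zero group-index arrays, and
# concatenating an empty list is what raises the ValueError above.
def group_order(group_indices):
    indices = list(group_indices)
    if not indices:  # GH 36197 guard: nothing to reorder
        return None
    return np.concatenate(indices).astype(np.int64)

print(group_order([]))                                 # None
print(group_order([np.array([0, 2]), np.array([1])]))  # [0 2 1]
```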
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "site-packages/pandas/core/window/rolling.py", line 2072, in sum
return super().sum(*args, **kwargs)
File "site-packages/pandas/core/window/rolling.py", line 1424, in sum
window_func, center=self.center, floor=0, name="sum", **kwargs
File "site-packages/pandas/core/window/rolling.py", line 2194, in _apply
**kwargs,
File "site-packages/pandas/core/window/rolling.py", line 528, in _apply
blocks, obj = self._create_blocks(self._selected_obj)
File "site-packages/pandas/core/window/rolling.py", line 2230, in _create_blocks
list(self._groupby.grouper.indices.values())
File "<__array_function__ internals>", line 6, in concatenate
ValueError: need at least one array to concatenate
| 14,082 |
|||
pandas-dev/pandas | pandas-dev__pandas-36264 | 6aa311db41896f9fcdc07e6610be8dd7117a5f27 | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -307,6 +307,7 @@ Groupby/resample/rolling
- Bug in :meth:`DataFrameGroupBy.count` and :meth:`SeriesGroupBy.sum` returning ``NaN`` for missing categories when grouped on multiple ``Categoricals``. Now returning ``0`` (:issue:`35028`)
- Bug in :meth:`DataFrameGroupBy.apply` that would some times throw an erroneous ``ValueError`` if the grouping axis had duplicate entries (:issue:`16646`)
+- Bug in :meth:`DataFrame.resample(...)` that would throw a ``ValueError`` when resampling from "D" to "24H" over a transition into daylight savings time (DST) (:issue:`35219`)
- Bug when combining methods :meth:`DataFrame.groupby` with :meth:`DataFrame.resample` and :meth:`DataFrame.interpolate` raising an ``TypeError`` (:issue:`35325`)
- Bug in :meth:`DataFrameGroupBy.apply` where a non-nuisance grouping column would be dropped from the output columns if another groupby method was called before ``.apply()`` (:issue:`34656`)
- Bug in :meth:`DataFrameGroupby.apply` would drop a :class:`CategoricalIndex` when grouped on. (:issue:`35792`)
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -1087,7 +1087,11 @@ def _upsample(self, method, limit=None, fill_value=None):
res_index = self._adjust_binner_for_upsample(binner)
# if we have the same frequency as our axis, then we are equal sampling
- if limit is None and to_offset(ax.inferred_freq) == self.freq:
+ if (
+ limit is None
+ and to_offset(ax.inferred_freq) == self.freq
+ and len(obj) == len(res_index)
+ ):
result = obj.copy()
result.index = res_index
else:
| BUG: resampling error when date_range includes a single DST
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [x] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
```python
>>> import pandas as pd
>>> # works as expected (note the daylight savings transitions in the head and tail)
>>> pd.Series(1., pd.date_range('2020-03-28','2020-10-27', freq='D', tz="Europe/Amsterdam")).resample('24H').pad()
2020-03-28 00:00:00+01:00 1.0
2020-03-29 00:00:00+01:00 1.0
2020-03-30 01:00:00+02:00 1.0
2020-03-31 01:00:00+02:00 1.0
2020-04-01 01:00:00+02:00 1.0
...
2020-10-23 01:00:00+02:00 1.0
2020-10-24 01:00:00+02:00 1.0
2020-10-25 01:00:00+02:00 1.0
2020-10-26 00:00:00+01:00 1.0
2020-10-27 00:00:00+01:00 1.0
Freq: 24H, Length: 214, dtype: float64
>>> # fails unexpectedly
>>> pd.Series(1., pd.date_range('2020-03-28','2020-03-31', freq='D', tz="Europe/Amsterdam")).resample('24H').pad()
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/felix/anaconda3/envs/bvp-venv/lib/python3.6/site-packages/pandas/core/resample.py", line 453, in pad
return self._upsample("pad", limit=limit)
File "/home/felix/anaconda3/envs/bvp-venv/lib/python3.6/site-packages/pandas/core/resample.py", line 1092, in _upsample
result.index = res_index
File "/home/felix/anaconda3/envs/bvp-venv/lib/python3.6/site-packages/pandas/core/generic.py", line 5287, in __setattr__
return object.__setattr__(self, name, value)
File "pandas/_libs/properties.pyx", line 67, in pandas._libs.properties.AxisProperty.__set__
File "/home/felix/anaconda3/envs/bvp-venv/lib/python3.6/site-packages/pandas/core/series.py", line 401, in _set_axis
self._data.set_axis(axis, labels)
File "/home/felix/anaconda3/envs/bvp-venv/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 178, in set_axis
f"Length mismatch: Expected axis has {old_len} elements, new "
ValueError: Length mismatch: Expected axis has 4 elements, new values have 3 elements
```
#### Problem description
In my first example, resampling from an offset of 1 day to an offset of 24 hours works as expected, but only when the start and end of the DatetimeIndex share the same UTC offset. In my second example the date range starts and ends with different UTC offsets due to a daylight savings transition on 29 March 2020, for which resampling to 24 hours fails.
Possibly related issue:
- #35248
#### Expected Output
```python
>>> pd.Series(1., pd.date_range('2020-03-28','2020-03-31', freq='D', tz="Europe/Amsterdam")).resample('24H').pad()
2020-03-28 00:00:00+01:00 1.0
2020-03-29 00:00:00+01:00 1.0
2020-03-30 01:00:00+02:00 1.0
Freq: 24H, Length: 3, dtype: float64
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.7.final.0
python-bits : 64
OS : Linux
OS-release : 4.9.0-11-amd64
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.0.5
numpy : 1.19.0
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 47.3.1.post20200622
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None
</details>
| The issue is possibly around here
```python
1090 # if we have the same frequency as our axis, then we are equal sampling
1091 if limit is None and to_offset(ax.inferred_freq) == self.freq:
1092 result = obj.copy()
-> 1093 result.index = res_index
1094 else:
1095 result = obj.reindex(
1096 res_index, method=method, limit=limit, fill_value=fill_value
1097 )
```
Do we also need a length check in that if condition? Can you investigate @Flix6x?
And given the discussion in https://github.com/pandas-dev/pandas/issues/35248, is this a duplicate or distinct?
Adding the length check ` and len(obj) == len(res_index)` to line 1091 indeed resolves the problem.
I would have actually expected the preceding offset comparison to fail:
```
>>> pd.offsets.Day(1) == pd.offsets.Hour(24)
True
```
To me, adding the length check remedies a symptom of maintaining this equality. Both `date_range()` and `resample()` treat `pd.offsets.Day(1)` as a calendar day, in which case this equality shouldn't hold.
This issue is distinct from #35248, and both are symptoms of #22864. Your suggested fix would resolve this issue (which deals with offsets only), but not #35248 (which deals with pandas offsets versus datetime timedeltas).
Would you like me to make a pull request (with or without test)?
That'd be great (including a test).
| 2020-09-10T07:48:04Z | [] | [] |
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/felix/anaconda3/envs/bvp-venv/lib/python3.6/site-packages/pandas/core/resample.py", line 453, in pad
return self._upsample("pad", limit=limit)
File "/home/felix/anaconda3/envs/bvp-venv/lib/python3.6/site-packages/pandas/core/resample.py", line 1092, in _upsample
result.index = res_index
File "/home/felix/anaconda3/envs/bvp-venv/lib/python3.6/site-packages/pandas/core/generic.py", line 5287, in __setattr__
return object.__setattr__(self, name, value)
File "pandas/_libs/properties.pyx", line 67, in pandas._libs.properties.AxisProperty.__set__
File "/home/felix/anaconda3/envs/bvp-venv/lib/python3.6/site-packages/pandas/core/series.py", line 401, in _set_axis
self._data.set_axis(axis, labels)
File "/home/felix/anaconda3/envs/bvp-venv/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 178, in set_axis
f"Length mismatch: Expected axis has {old_len} elements, new "
ValueError: Length mismatch: Expected axis has 4 elements, new values have 3 elements
| 14,093 |
|||
pandas-dev/pandas | pandas-dev__pandas-36303 | 822dc6f901fafd646257de2fc5ea918bbec82f93 | diff --git a/doc/source/whatsnew/v1.1.3.rst b/doc/source/whatsnew/v1.1.3.rst
--- a/doc/source/whatsnew/v1.1.3.rst
+++ b/doc/source/whatsnew/v1.1.3.rst
@@ -14,6 +14,7 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
+- Fixed regression in :class:`IntegerArray` unary plus and minus operations raising a ``TypeError`` (:issue:`36063`)
- Fixed regression in :meth:`Series.__getitem__` incorrectly raising when the input was a tuple (:issue:`35534`)
- Fixed regression in :meth:`Series.__getitem__` incorrectly raising when the input was a frozenset (:issue:`35747`)
-
diff --git a/pandas/conftest.py b/pandas/conftest.py
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -1055,6 +1055,19 @@ def any_nullable_int_dtype(request):
return request.param
+@pytest.fixture(params=tm.SIGNED_EA_INT_DTYPES)
+def any_signed_nullable_int_dtype(request):
+ """
+ Parameterized fixture for any signed nullable integer dtype.
+
+ * 'Int8'
+ * 'Int16'
+ * 'Int32'
+ * 'Int64'
+ """
+ return request.param
+
+
@pytest.fixture(params=tm.ALL_REAL_DTYPES)
def any_real_dtype(request):
"""
diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py
--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -364,6 +364,15 @@ def __init__(self, values: np.ndarray, mask: np.ndarray, copy: bool = False):
)
super().__init__(values, mask, copy=copy)
+ def __neg__(self):
+ return type(self)(-self._data, self._mask)
+
+ def __pos__(self):
+ return self
+
+ def __abs__(self):
+ return type(self)(np.abs(self._data), self._mask)
+
@classmethod
def _from_sequence(cls, scalars, dtype=None, copy: bool = False) -> "IntegerArray":
return integer_array(scalars, dtype=dtype, copy=copy)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1417,7 +1417,10 @@ def __pos__(self):
):
arr = operator.pos(values)
else:
- raise TypeError(f"Unary plus expects numeric dtype, not {values.dtype}")
+ raise TypeError(
+ "Unary plus expects bool, numeric, timedelta, "
+ f"or object dtype, not {values.dtype}"
+ )
return self.__array_wrap__(arr)
def __invert__(self):
| BUG: Unary minus raises for series with Int64Dtype
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [x] (optional) I have confirmed this bug exists on the master branch of pandas (commit 73c1d32)
---
#### Code Sample, a copy-pastable example
```python
>>> import pandas as pd
>>> pd.__version__
'1.1.1'
>>> s = pd.Series([1, 2, 3], dtype="Int64")
>>> -s
Traceback (most recent call last):
File "...", line 1, in <module>
-s
File ".../lib/python3.8/site-packages/pandas/core/generic.py", line 1297, in __neg__
arr = operator.neg(values)
TypeError: bad operand type for unary -: 'IntegerArray'
```
#### Problem description
I cannot negate a series with Int64Dtype. This was possible in previous versions (but it returned object dtype).
#### Expected Output
```
>>> import pandas as pd
>>> pd.__version__
'1.0.5'
>>> s = pd.Series([1, 2, 3], dtype="Int64")
>>> -s
0 -1
1 -2
2 -3
dtype: object
```
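The patch above gives `IntegerArray` a real `__neg__` by negating the data buffer and carrying the validity mask through unchanged (`__abs__` works the same way with `np.abs`). A minimal standalone sketch of that masked-negation idea, using plain NumPy arrays and a hypothetical helper in place of the real class:

```python
import numpy as np

def masked_neg(data, mask):
    # Hypothetical helper mirroring IntegerArray.__neg__:
    # negate the values, keep the mask (True marks a missing slot) as-is.
    return -data, mask.copy()

data = np.array([1, 2, 3], dtype=np.int64)
mask = np.array([False, True, False])  # position 1 is <NA>

neg_data, neg_mask = masked_neg(data, mask)
print(neg_data.tolist())  # [-1, -2, -3]
print(neg_mask.tolist())  # [False, True, False]
```

Whatever value sits in a masked slot gets negated too, but that is harmless because the mask still hides it, which is why the actual patch never needs to touch `self._mask`.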
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : f2ca0a2665b2d169c97de87b8e778dbed86aea07
python : 3.8.3.final.0
python-bits : 64
OS : Darwin
OS-release : 19.6.0
Version : Darwin Kernel Version 19.6.0: Thu Jun 18 20:49:00 PDT 2020; root:xnu-6153.141.1~1/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.1.1
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 47.3.1
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 7.15.0
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
</details>
| Bisection identified 67aae8028773eae231662e47ec2c46124ad4229e as the first bad commit. (PR https://github.com/pandas-dev/pandas/pull/32422)
bisect log
<details>
```
# bad: [d9fff2792bf16178d4e450fe7384244e50635733] RLS: 1.1.0
git bisect bad d9fff2792bf16178d4e450fe7384244e50635733
# good: [b687cd4d9e520666a956a60849568a98dd00c672] RLS: 1.0.5
git bisect good b687cd4d9e520666a956a60849568a98dd00c672
# good: [d3f08566a80239a18a813ebda9a2ebb0368b1dc5] RLS: 1.0.0rc0
git bisect good d3f08566a80239a18a813ebda9a2ebb0368b1dc5
# bad: [63957e46c789d5f891344cf335eaf8eaccabe5ba] Requested ASV (#33197)
git bisect bad 63957e46c789d5f891344cf335eaf8eaccabe5ba
# good: [c0066f32b4667744d7b86d680e7e3e5d9a08e33d] REG: dont call func on empty input (#32121)
git bisect good c0066f32b4667744d7b86d680e7e3e5d9a08e33d
# bad: [3b66021ecb74da2c35e16958121bd224d5de5264] TST: make tests stricter (#32527)
git bisect bad 3b66021ecb74da2c35e16958121bd224d5de5264
# good: [821aa25c9039e72da9a7b236cf2f9e7d549cbb7b] BUG: Fix __ne__ comparison for Categorical (#32304)
git bisect good 821aa25c9039e72da9a7b236cf2f9e7d549cbb7b
# bad: [09a46a4e465b19fa688f1d58124d266c0bbbebf0] DOC: Add extended summary, update parameter types desc, update return types desc, and add whitespaces after commas in list declarations to DataFrame.first in core/generic.py (#32018)
git bisect bad 09a46a4e465b19fa688f1d58124d266c0bbbebf0
# bad: [777c0f90c6067c636fcd76ce003a8fbfcc311d7b] CLN: remove unreachable _internal_get_values in blocks (#32472)
git bisect bad 777c0f90c6067c636fcd76ce003a8fbfcc311d7b
# bad: [86ed2b654abfe82adeba2bae7c4424377d428d91] PERF: lazify blknos and blklocs (#32261)
git bisect bad 86ed2b654abfe82adeba2bae7c4424377d428d91
# good: [d33b0025db0b2d79abe51343054d4c8bbed104c6] CLN: remove unreachable branch (#32405)
git bisect good d33b0025db0b2d79abe51343054d4c8bbed104c6
# bad: [67aae8028773eae231662e47ec2c46124ad4229e] CLN: avoid values_from_object in NDFrame (#32422)
git bisect bad 67aae8028773eae231662e47ec2c46124ad4229e
# good: [c5f0ebf8d92322445a4413c1bf9bd08e226583f0] TYP/cln: generic._make_*_function (#32363)
git bisect good c5f0ebf8d92322445a4413c1bf9bd08e226583f0
# good: [0d04683baaf65f6023fa83ab0d985726fb3c98b0] TST: Split and simplify test_value_counts_unique_nunique (#32281)
git bisect good 0d04683baaf65f6023fa83ab0d985726fb3c98b0
# first bad commit: [67aae8028773eae231662e47ec2c46124ad4229e] CLN: avoid values_from_object in NDFrame (#32422)
```
</details>
setup to reproduce
<details>
Ran bisection with this shell script; the first good/bad commits are the 1.0.5 and 1.1.0 releases respectively:
```
set -eu
python3 setup.py build_ext --inplace --force
python3 un.py
```
where un.py is
```python
import pandas
s = pandas.Series([1, 2, 3], dtype="Int64")
print(-s)
```
</details>
moving off 1.1.2 milestone, xref https://github.com/pandas-dev/pandas/pull/36081#issuecomment-688290598
> Bisection identified [67aae80](https://github.com/pandas-dev/pandas/commit/67aae8028773eae231662e47ec2c46124ad4229e) as the first bad commit. (PR #32422)
Thanks @kokes cc @jbrockmendel
Yikes, returning object dtype in the old version is not great. This is just waiting for someone to implement `IntegerArray.__neg__` | 2020-09-12T04:06:22Z | [] | [] |
Traceback (most recent call last):
File "...", line 1, in <module>
-s
File ".../lib/python3.8/site-packages/pandas/core/generic.py", line 1297, in __neg__
arr = operator.neg(values)
TypeError: bad operand type for unary -: 'IntegerArray'
| 14,102 |
|||
pandas-dev/pandas | pandas-dev__pandas-36316 | 822dc6f901fafd646257de2fc5ea918bbec82f93 | diff --git a/doc/source/whatsnew/v1.1.3.rst b/doc/source/whatsnew/v1.1.3.rst
--- a/doc/source/whatsnew/v1.1.3.rst
+++ b/doc/source/whatsnew/v1.1.3.rst
@@ -25,6 +25,7 @@ Fixed regressions
Bug fixes
~~~~~~~~~
- Bug in :meth:`Series.str.startswith` and :meth:`Series.str.endswith` with ``category`` dtype not propagating ``na`` parameter (:issue:`36241`)
+- Bug in :class:`Series` constructor where integer overflow would occur for sufficiently large scalar inputs when an index was provided (:issue:`36291`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -697,6 +697,11 @@ def infer_dtype_from_scalar(val, pandas_dtype: bool = False) -> Tuple[DtypeObj,
else:
dtype = np.dtype(np.int64)
+ try:
+ np.array(val, dtype=dtype)
+ except OverflowError:
+ dtype = np.array(val).dtype
+
elif is_float(val):
if isinstance(val, np.floating):
dtype = np.dtype(type(val))
| BUG: pandas series creation fails with OverflowError when given large integers
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
```python
>>> import pandas as pd
>>> pd.Series(1000000000000000000000)
0 1000000000000000000000
dtype: object
>>> pd.Series(1000000000000000000000, index = pd.date_range(pd.Timestamp.now().floor("1D"), pd.Timestamp.now(), freq='T'))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/matt/opt/anaconda3/lib/python3.7/site-packages/pandas/core/series.py", line 327, in __init__
data = sanitize_array(data, index, dtype, copy, raise_cast_failure=True)
File "/Users/matt/opt/anaconda3/lib/python3.7/site-packages/pandas/core/construction.py", line 475, in sanitize_array
subarr = construct_1d_arraylike_from_scalar(value, len(index), dtype)
File "/Users/matt/opt/anaconda3/lib/python3.7/site-packages/pandas/core/dtypes/cast.py", line 1555, in construct_1d_arraylike_from_scalar
subarr.fill(value)
OverflowError: int too big to convert
>>> pd.Series(1000000000000000000000.0, index = pd.date_range(pd.Timestamp.now().floor("1D"), pd.Timestamp.now(), freq='T'))
2020-09-11 00:00:00 1.000000e+21
2020-09-11 00:01:00 1.000000e+21
2020-09-11 00:02:00 1.000000e+21
2020-09-11 00:03:00 1.000000e+21
2020-09-11 00:04:00 1.000000e+21
...
2020-09-11 11:24:00 1.000000e+21
2020-09-11 11:25:00 1.000000e+21
2020-09-11 11:26:00 1.000000e+21
2020-09-11 11:27:00 1.000000e+21
2020-09-11 11:28:00 1.000000e+21
Freq: T, Length: 689, dtype: float64
```
#### Problem description
Hi pandas, when creating a new series with very large integers, series creation fails. This is not the case if you pass in a float, or if you pass in a single value with no index. The traceback points to pandas code, so I'm submitting a bug here.
#### Expected Output
I would expect this to fail more gracefully, or else output a series of object or float type. When initializing an array in NumPy, the value is transparently converted to object dtype:
```python
>>> np.array([1000000000000000000000]*1000).dtype
dtype('O')
```
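The patch above takes exactly this route: it probes whether the scalar fits the inferred `int64` dtype and, on `OverflowError`, falls back to whatever dtype NumPy infers on its own. A minimal sketch of that probe (the helper name is hypothetical; only NumPy is assumed):

```python
import numpy as np

def infer_int_dtype(val):
    # Hypothetical helper mirroring the fix in infer_dtype_from_scalar.
    dtype = np.dtype(np.int64)
    try:
        # Raises OverflowError when val does not fit in a C int64.
        np.array(val, dtype=dtype)
    except OverflowError:
        # NumPy's own inference falls back to object dtype here.
        dtype = np.array(val).dtype
    return dtype

print(infer_int_dtype(1000))    # int64
print(infer_int_dtype(10**21))  # object
```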
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : f2ca0a2665b2d169c97de87b8e778dbed86aea07
python : 3.7.6.final.0
python-bits : 64
OS : Darwin
OS-release : 19.6.0
Version : Darwin Kernel Version 19.6.0: Thu Jun 18 20:49:00 PDT 2020; root:xnu-6153.141.1~1/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.1.1
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.2
setuptools : 49.6.0.post20200814
Cython : 0.29.21
pytest : 6.0.1
hypothesis : None
sphinx : 3.2.1
blosc : 1.7.0
feather : None
xlsxwriter : 1.3.3
lxml.etree : 4.5.2
html5lib : 1.1
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.18.1
pandas_datareader: None
bs4 : 4.9.1
bottleneck : 1.3.2
fsspec : 0.8.0
fastparquet : None
gcsfs : None
matplotlib : 3.3.1
numexpr : 2.7.1
odfpy : None
openpyxl : 3.0.5
pandas_gbq : None
pyarrow : 1.0.1
pytables : None
pyxlsb : None
s3fs : None
scipy : 1.5.2
sqlalchemy : 1.3.19
tables : 3.6.1
tabulate : None
xarray : None
xlrd : 1.2.0
xlwt : 1.3.0
numba : 0.50.1
</details>
| Confirming that this happens on 1.2 master
<details><summary><b>Output of pd.show_versions()</b></summary>
INSTALLED VERSIONS
------------------
commit : 03c704087ec02ce2f889ac0037b5082924df7fca
python : 3.8.3.final.0
python-bits : 64
OS : Linux
OS-release : 5.4.0-47-generic
Version : #51-Ubuntu SMP Fri Sep 4 19:50:52 UTC 2020
machine : x86_64
processor :
byteorder : little
LC_ALL : C.UTF-8
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.2.0.dev0+318.g03c704087
numpy : 1.18.5
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 49.1.0.post20200704
Cython : 0.29.21
pytest : 5.4.3
hypothesis : 5.19.0
sphinx : 3.1.1
blosc : None
feather : None
xlsxwriter : 1.2.9
lxml.etree : 4.5.2
html5lib : 1.1
pymysql : None
psycopg2 : 2.8.5 (dt dec pq3 ext lo64)
jinja2 : 2.11.2
IPython : 7.16.1
pandas_datareader: None
bs4 : 4.9.1
bottleneck : 1.3.2
fsspec : 0.7.4
fastparquet : 0.4.0
gcsfs : 0.6.2
matplotlib : 3.2.2
numexpr : 2.7.1
odfpy : None
openpyxl : 3.0.4
pandas_gbq : None
pyarrow : 0.17.1
pytables : None
pyxlsb : None
s3fs : 0.4.2
scipy : 1.5.0
sqlalchemy : 1.3.18
tables : 3.6.1
tabulate : 0.8.7
xarray : 0.15.1
xlrd : 1.2.0
xlwt : 1.3.0
numba : 0.50.1
</details> | 2020-09-12T22:58:41Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/matt/opt/anaconda3/lib/python3.7/site-packages/pandas/core/series.py", line 327, in __init__
data = sanitize_array(data, index, dtype, copy, raise_cast_failure=True)
File "/Users/matt/opt/anaconda3/lib/python3.7/site-packages/pandas/core/construction.py", line 475, in sanitize_array
subarr = construct_1d_arraylike_from_scalar(value, len(index), dtype)
File "/Users/matt/opt/anaconda3/lib/python3.7/site-packages/pandas/core/dtypes/cast.py", line 1555, in construct_1d_arraylike_from_scalar
subarr.fill(value)
OverflowError: int too big to convert
| 14,105 |
|||
pandas-dev/pandas | pandas-dev__pandas-36606 | 027f365418d8b9fd6afddf2f8028b2467e289e0b | diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -392,7 +392,7 @@ class NaTType(_NaT):
Returns
-------
- string
+ str
""",
)
day_name = _make_nan_func(
@@ -407,7 +407,7 @@ class NaTType(_NaT):
Returns
-------
- string
+ str
""",
)
# _nat_methods
diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -219,7 +219,9 @@ cdef _wrap_timedelta_result(result):
cdef _get_calendar(weekmask, holidays, calendar):
- """Generate busdaycalendar"""
+ """
+ Generate busdaycalendar
+ """
if isinstance(calendar, np.busdaycalendar):
if not holidays:
holidays = tuple(calendar.holidays)
@@ -659,14 +661,18 @@ cdef class BaseOffset:
return nint
def __setstate__(self, state):
- """Reconstruct an instance from a pickled state"""
+ """
+ Reconstruct an instance from a pickled state
+ """
self.n = state.pop("n")
self.normalize = state.pop("normalize")
self._cache = state.pop("_cache", {})
# At this point we expect state to be empty
def __getstate__(self):
- """Return a pickleable state"""
+ """
+ Return a pickleable state
+ """
state = {}
state["n"] = self.n
state["normalize"] = self.normalize
@@ -971,7 +977,9 @@ cdef class RelativeDeltaOffset(BaseOffset):
object.__setattr__(self, key, val)
def __getstate__(self):
- """Return a pickleable state"""
+ """
+ Return a pickleable state
+ """
# RelativeDeltaOffset (technically DateOffset) is the only non-cdef
# class, so the only one with __dict__
state = self.__dict__.copy()
@@ -980,7 +988,9 @@ cdef class RelativeDeltaOffset(BaseOffset):
return state
def __setstate__(self, state):
- """Reconstruct an instance from a pickled state"""
+ """
+ Reconstruct an instance from a pickled state
+ """
if "offset" in state:
# Older (<0.22.0) versions have offset attribute instead of _offset
@@ -3604,7 +3614,9 @@ def shift_day(other: datetime, days: int) -> datetime:
cdef inline int year_add_months(npy_datetimestruct dts, int months) nogil:
- """new year number after shifting npy_datetimestruct number of months"""
+ """
+ New year number after shifting npy_datetimestruct number of months.
+ """
return dts.year + (dts.month + months - 1) // 12
@@ -3702,7 +3714,9 @@ cdef inline void _shift_months(const int64_t[:] dtindex,
Py_ssize_t count,
int months,
str day_opt) nogil:
- """See shift_months.__doc__"""
+ """
+ See shift_months.__doc__
+ """
cdef:
Py_ssize_t i
int months_to_roll
@@ -3734,7 +3748,9 @@ cdef inline void _shift_quarters(const int64_t[:] dtindex,
int q1start_month,
str day_opt,
int modby) nogil:
- """See shift_quarters.__doc__"""
+ """
+ See shift_quarters.__doc__
+ """
cdef:
Py_ssize_t i
int months_since, n
@@ -3990,7 +4006,9 @@ cdef inline int _roll_qtrday(npy_datetimestruct* dts,
int n,
int months_since,
str day_opt) nogil except? -1:
- """See roll_qtrday.__doc__"""
+ """
+ See roll_qtrday.__doc__
+ """
if n > 0:
if months_since < 0 or (months_since == 0 and
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -503,7 +503,7 @@ cdef class _Timestamp(ABCTimestamp):
Returns
-------
- string
+ str
"""
return self._get_date_name_field("day_name", locale)
@@ -518,7 +518,7 @@ cdef class _Timestamp(ABCTimestamp):
Returns
-------
- string
+ str
"""
return self._get_date_name_field("month_name", locale)
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -53,10 +53,12 @@ class RangeIndex(Int64Index):
If int and "stop" is not given, interpreted as "stop" instead.
stop : int (default: 0)
step : int (default: 1)
- name : object, optional
- Name to be stored in the index.
+ dtype : np.int64
+ Unused, accepted for homogeneity with other index types.
copy : bool, default False
Unused, accepted for homogeneity with other index types.
+ name : object, optional
+ Name to be stored in the index.
Attributes
----------
diff --git a/scripts/validate_rst_title_capitalization.py b/scripts/validate_rst_title_capitalization.py
--- a/scripts/validate_rst_title_capitalization.py
+++ b/scripts/validate_rst_title_capitalization.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
"""
Validate that the titles in the rst files follow the proper capitalization convention.
| version 0.23: RangeIndex argument order changed, docs stale
```python
>>> import pandas as pd
>>> pd.__version__
'0.23.4'
>>> pd.RangeIndex(0, 3, 1, 'name')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/mpfluege/.local/miniconda3/envs/pitchwalk/lib/python3.7/site-packages/pandas/core/indexes/range.py", line 74, in __new__
cls._validate_dtype(dtype)
File "/home/mpfluege/.local/miniconda3/envs/pitchwalk/lib/python3.7/site-packages/pandas/core/indexes/range.py", line 161, in _validate_dtype
if not (dtype is None or is_int64_dtype(dtype)):
File "/home/mpfluege/.local/miniconda3/envs/pitchwalk/lib/python3.7/site-packages/pandas/core/dtypes/common.py", line 991, in is_int64_dtype
tipo = _get_dtype_type(arr_or_dtype)
File "/home/mpfluege/.local/miniconda3/envs/pitchwalk/lib/python3.7/site-packages/pandas/core/dtypes/common.py", line 1872, in _get_dtype_type
return _get_dtype_type(np.dtype(arr_or_dtype))
TypeError: data type "name" not understood
```
#### Problem description
[Documentation of RangeIndex](https://pandas-docs.github.io/pandas-docs-travis/generated/pandas.RangeIndex.html) states that it has the arguments `start`, `stop`, `step`, `name` and `copy`. However, the actual signature is `RangeIndex(start=None, stop=None, step=None, dtype=None, copy=False, name=None, fastpath=False)`, which has `dtype` as fourth argument. This breaks scripts that relied on the fourth argument being the name, which used to be the case:
```python
>>> import pandas as pd
>>> pd.__version__
'0.22.0'
>>> pd.RangeIndex(0, 3, 1, 'name')
RangeIndex(start=0, stop=3, step=1, name='name')
```
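Since the 0.23 signature slots `dtype` into the fourth position, a positional fourth argument now lands there instead of in `name`; passing `name` by keyword is immune to the reordering. A tiny stand-in (not the real RangeIndex) that mirrors the new signature makes the binding difference visible:

```python
def range_index(start=None, stop=None, step=None, dtype=None, copy=False, name=None):
    # Stand-in mirroring the 0.23+ signature of pd.RangeIndex.
    return {"start": start, "stop": stop, "step": step, "dtype": dtype, "name": name}

# Positional: the fourth argument is swallowed by ``dtype`` -- the source
# of the TypeError in the report above.
assert range_index(0, 3, 1, "name")["dtype"] == "name"

# Keyword: robust against signature reshuffles, so affected scripts can
# switch to the equivalent of ``pd.RangeIndex(0, 3, 1, name="name")``.
assert range_index(0, 3, 1, name="name")["name"] == "name"
```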
#### Expected Output
Either revert to the old argument order or at least update the documentation.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.0.final.0
python-bits: 64
OS: Linux
OS-release: 4.4.140-62-default
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: de_DE.UTF-8
LOCALE: de_DE.UTF-8
pandas: 0.23.4
pytest: None
pip: 10.0.1
setuptools: 40.0.0
Cython: None
numpy: 1.15.0
scipy: 1.1.0
pyarrow: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.7.3
pytz: 2018.5
blosc: None
bottleneck: None
tables: None
numexpr: 2.6.7
feather: None
matplotlib: 2.2.3
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| @mikapfl : Good catch! Feel free to update the documentation.
Marking for `0.23.5` because this looks like a broken doc from `0.22.0` to `0.23.0`.
@gfyoung: First time contributing to pandas, I hope my changes look good. I split it up into many patches, only the first two are needed to fix this specific bug if you want a minimal changeset for `0.23.5`. | 2020-09-24T16:14:11Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/mpfluege/.local/miniconda3/envs/pitchwalk/lib/python3.7/site-packages/pandas/core/indexes/range.py", line 74, in __new__
cls._validate_dtype(dtype)
File "/home/mpfluege/.local/miniconda3/envs/pitchwalk/lib/python3.7/site-packages/pandas/core/indexes/range.py", line 161, in _validate_dtype
if not (dtype is None or is_int64_dtype(dtype)):
File "/home/mpfluege/.local/miniconda3/envs/pitchwalk/lib/python3.7/site-packages/pandas/core/dtypes/common.py", line 991, in is_int64_dtype
tipo = _get_dtype_type(arr_or_dtype)
File "/home/mpfluege/.local/miniconda3/envs/pitchwalk/lib/python3.7/site-packages/pandas/core/dtypes/common.py", line 1872, in _get_dtype_type
return _get_dtype_type(np.dtype(arr_or_dtype))
TypeError: data type "name" not understood
| 14,162 |
|||
pandas-dev/pandas | pandas-dev__pandas-36613 | 40d3b5f33b35ccbd822c927cfab48e15e1154986 | diff --git a/doc/source/whatsnew/v1.1.3.rst b/doc/source/whatsnew/v1.1.3.rst
--- a/doc/source/whatsnew/v1.1.3.rst
+++ b/doc/source/whatsnew/v1.1.3.rst
@@ -53,6 +53,7 @@ Bug fixes
- Bug in :meth:`DataFrame.stack` raising a ``ValueError`` when stacking :class:`MultiIndex` columns based on position when the levels had duplicate names (:issue:`36353`)
- Bug in :meth:`Series.astype` showing too much precision when casting from ``np.float32`` to string dtype (:issue:`36451`)
- Bug in :meth:`Series.isin` and :meth:`DataFrame.isin` when using ``NaN`` and a row length above 1,000,000 (:issue:`22205`)
+- Bug in :func:`cut` raising a ``ValueError`` when passed a :class:`Series` of labels with ``ordered=False`` (:issue:`36603`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/reshape/tile.py b/pandas/core/reshape/tile.py
--- a/pandas/core/reshape/tile.py
+++ b/pandas/core/reshape/tile.py
@@ -379,7 +379,7 @@ def _bins_to_cuts(
duplicates: str = "raise",
ordered: bool = True,
):
- if not ordered and not labels:
+ if not ordered and labels is None:
raise ValueError("'labels' must be provided if 'ordered = False'")
if duplicates not in ["raise", "drop"]:
| BUG: pd.cut fails when ordered is set to False and labels is set to a series
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
```python
import pandas as pd
test = pd.DataFrame([['a', 5], ['b', 2], ['c', 6], ['d', 3], ['e', 8]],
columns=['row_name', 'row_value'])
cuts = pd.DataFrame([[1, 'odd'], [2, 'even'], [3, 'odd'], [4, 'even'], [5, 'odd'],
[6, 'even'], [7, 'odd'], [8, 'even'], [9, 'odd'], [10, '']],
columns=['cut_value', 'cut_label'])
print(pd.cut(test.row_value, cuts.cut_value, labels=cuts.cut_label[:-1], ordered=False))
```
#### Problem description
When a user wants to call pd.cut with a set of labels that includes duplicate values, they must set ordered to False and set labels to the series of strings to be used.
But running the sample code above, with ordered set to False and labels set to a series, gives this output:
```
Traceback (most recent call last):
File "/Users/mark/PycharmProjects/temp/bug1/cut_test.py", line 8, in <module>
print(pd.cut(test.row_value, cuts.cut_value, labels=cuts.cut_label[:-1], ordered=False))
File "/Users/mark/PycharmProjects/temp/bug1/venv/lib/python3.7/site-packages/pandas/core/reshape/tile.py", line 284, in cut
ordered=ordered,
File "/Users/mark/PycharmProjects/temp/bug1/venv/lib/python3.7/site-packages/pandas/core/reshape/tile.py", line 384, in _bins_to_cuts
if not ordered and not labels:
File "/Users/mark/PycharmProjects/temp/bug1/venv/lib/python3.7/site-packages/pandas/core/generic.py", line 1327, in __nonzero__
f"The truth value of a {type(self).__name__} is ambiguous. "
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
Process finished with exit code 1
```
#### Expected Output
The expected output is a series of strings taken from the labels list, as documented.
You can see from the traceback that the error occurs at line 384 of tile.py, which is
```
if not ordered and not labels:
```
If that line is changed to
```
if not ordered and labels is None:
```
then running the sample code again gives the correct output:
```
0 even
1 odd
2 odd
3 even
4 odd
Name: row_value, dtype: category
Categories (2, object): ['even', 'odd']
Process finished with exit code 0
```
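The underlying mechanics: `not labels` calls `bool()` on the Series, which pandas deliberately makes raise, while `labels is None` is an identity check that never looks at the contents. A self-contained sketch using a tiny stand-in class (so no pandas is needed) that mimics Series truthiness:

```python
class AmbiguousSeries:
    # Stand-in mimicking pandas.Series.__bool__ (see generic.py:1327
    # in the traceback above).
    def __bool__(self):
        raise ValueError("The truth value of a Series is ambiguous.")

labels = AmbiguousSeries()

# ``not labels`` invokes __bool__ and blows up, as in the report.
try:
    if not labels:
        pass
except ValueError as err:
    print(type(err).__name__)  # ValueError

# ``labels is None`` bypasses __bool__ entirely -- the one-line fix.
print(labels is None)  # False
```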
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : 2a7d3326dee660824a8433ffd01065f8ac37f7d6
python : 3.7.3.final.0
python-bits : 64
OS : Darwin
OS-release : 19.5.0
Version : Darwin Kernel Version 19.5.0: Tue May 26 20:41:44 PDT 2020; root:xnu-6153.121.2~2/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : None
LOCALE : en_CA.UTF-8
pandas : 1.1.2
numpy : 1.19.2
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.3
setuptools : 50.3.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
</details>
| Thanks for the report! Confirming that this happens on 1.2 master
<details><summary><b>Output of pd.show_versions()</b></summary>
INSTALLED VERSIONS
------------------
commit : ae57e062bf895bf12c23027364265252eb7d6fcc
python : 3.8.3.final.0
python-bits : 64
OS : Linux
OS-release : 5.4.0-48-generic
Version : #52-Ubuntu SMP Thu Sep 10 10:58:49 UTC 2020
machine : x86_64
processor :
byteorder : little
LC_ALL : C.UTF-8
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.2.0.dev0+487.gae57e062b
numpy : 1.18.5
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 49.1.0.post20200704
Cython : 0.29.21
pytest : 5.4.3
hypothesis : 5.19.0
sphinx : 3.1.1
blosc : None
feather : None
xlsxwriter : 1.2.9
lxml.etree : 4.5.2
html5lib : 1.1
pymysql : None
psycopg2 : 2.8.5 (dt dec pq3 ext lo64)
jinja2 : 2.11.2
IPython : 7.16.1
pandas_datareader: None
bs4 : 4.9.1
bottleneck : 1.3.2
fsspec : 0.7.4
fastparquet : 0.4.0
gcsfs : 0.6.2
matplotlib : 3.2.2
numexpr : 2.7.1
odfpy : None
openpyxl : 3.0.4
pandas_gbq : None
pyarrow : 1.0.1
pytables : None
pyxlsb : None
s3fs : 0.4.2
scipy : 1.5.0
sqlalchemy : 1.3.18
tables : 3.6.1
tabulate : 0.8.7
xarray : 0.15.1
xlrd : 1.2.0
xlwt : 1.3.0
numba : 0.50.1
</details>
@Mark-BC would you be interested in opening a PR with the fix? | 2020-09-24T21:21:09Z | [] | [] |
Traceback (most recent call last):
File "/Users/mark/PycharmProjects/temp/bug1/cut_test.py", line 8, in <module>
print(pd.cut(test.row_value, cuts.cut_value, labels=cuts.cut_label[:-1], ordered=False))
File "/Users/mark/PycharmProjects/temp/bug1/venv/lib/python3.7/site-packages/pandas/core/reshape/tile.py", line 284, in cut
ordered=ordered,
File "/Users/mark/PycharmProjects/temp/bug1/venv/lib/python3.7/site-packages/pandas/core/reshape/tile.py", line 384, in _bins_to_cuts
if not ordered and not labels:
File "/Users/mark/PycharmProjects/temp/bug1/venv/lib/python3.7/site-packages/pandas/core/generic.py", line 1327, in __nonzero__
f"The truth value of a {type(self).__name__} is ambiguous. "
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
| 14,165 |
|||
pandas-dev/pandas | pandas-dev__pandas-36671 | 8e10daa8ea363ea7a9543ef6c846592a22605a68 | groupby apply failed on dataframe with DatetimeIndex
#### Code Sample, a copy-pastable example if possible
```python
# -*- coding: utf-8 -*-
import pandas as pd
def _do_calc_(_df):
df2 = _df.copy()
df2["vol_ma20"] = df2["volume"].rolling(20, min_periods=1).mean()
return df2
dbars = pd.read_csv("2019.dbar_ftridx.csv.gz",
index_col=False,
encoding="utf-8-sig",
parse_dates=["trade_day"])
dbars = dbars.set_index("trade_day", drop=False) # everything works fine if this line is commented
df = dbars.groupby("exchange").apply(_do_calc_)
print(len(df))
```
#### Problem description
Here is the input data file:
[2019.dbar_ftridx.csv.gz](https://github.com/pandas-dev/pandas/files/3102450/2019.dbar_ftridx.csv.gz)
This piece of code runs well with pandas 0.23; when upgraded to 0.24.2, it reports an error:
```
Traceback (most recent call last):
File "D:/test/groupby_bug.py", line 16, in <module>
df = dbars.groupby("exchange").apply(_do_calc_)
File "C:\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 701, in apply
return self._python_apply_general(f)
File "C:\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 712, in _python_apply_general
not_indexed_same=mutated or self.mutated)
File "C:\Anaconda3\lib\site-packages\pandas\core\groupby\generic.py", line 318, in _wrap_applied_output
not_indexed_same=not_indexed_same)
File "C:\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 918, in _concat_objects
sort=False)
File "C:\Anaconda3\lib\site-packages\pandas\core\reshape\concat.py", line 228, in concat
copy=copy, sort=sort)
File "C:\Anaconda3\lib\site-packages\pandas\core\reshape\concat.py", line 292, in __init__
obj._consolidate(inplace=True)
File "C:\Anaconda3\lib\site-packages\pandas\core\generic.py", line 5156, in _consolidate
self._consolidate_inplace()
File "C:\Anaconda3\lib\site-packages\pandas\core\generic.py", line 5138, in _consolidate_inplace
self._protect_consolidate(f)
File "C:\Anaconda3\lib\site-packages\pandas\core\generic.py", line 5127, in _protect_consolidate
result = f()
File "C:\Anaconda3\lib\site-packages\pandas\core\generic.py", line 5136, in f
self._data = self._data.consolidate()
File "C:\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 922, in consolidate
bm = self.__class__(self.blocks, self.axes)
File "C:\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 114, in __init__
self._verify_integrity()
File "C:\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 311, in _verify_integrity
construction_error(tot_items, block.shape[1:], self.axes)
File "C:\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 1691, in construction_error
passed, implied))
ValueError: Shape of passed values is (432, 27), indices imply (1080, 27)
```
If I do not call set_index() on the dataframe, it works fine. It seems there is something wrong with the DatetimeIndex.
I don't know whether this error can be reproduced on your machine; I can reproduce the same error on all my machines.
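As an aside, for this particular moving-average computation the failing apply/concat path can be sidestepped with a per-group `transform`, which assigns results back by position and so tolerates the duplicated DatetimeIndex. A sketch on a hypothetical four-row miniature of the data (column names taken from the example above):

```python
import pandas as pd

dbars = pd.DataFrame(
    {
        "trade_day": pd.to_datetime(
            ["2019-01-02", "2019-01-02", "2019-01-03", "2019-01-03"]
        ),
        "exchange": ["DCE", "SHFE", "DCE", "SHFE"],
        "volume": [10.0, 20.0, 30.0, 40.0],
    }
).set_index("trade_day", drop=False)

# transform realigns per-group results to the original index positions,
# so no cross-group concat of same-index frames is needed.
dbars["vol_ma20"] = dbars.groupby("exchange")["volume"].transform(
    lambda s: s.rolling(20, min_periods=1).mean()
)
print(dbars["vol_ma20"].tolist())  # [10.0, 20.0, 20.0, 30.0]
```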
#### Expected Output
4104
#### Output of ``pd.show_versions()``
<details>
[paste the output of ``pd.show_versions()`` here below this line]
C:\Anaconda3\python.exe D:/test/groupby_bug.py
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.3.final.0
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 79 Stepping 1, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
pandas: 0.24.2
pytest: 4.4.0
pip: 19.0.3
setuptools: 41.0.0
Cython: 0.29.7
numpy: 1.16.2
scipy: 1.2.1
pyarrow: None
xarray: None
IPython: 7.4.0
sphinx: 2.0.1
patsy: 0.5.1
dateutil: 2.8.0
pytz: 2019.1
blosc: None
bottleneck: 1.2.1
tables: 3.5.1
numexpr: 2.6.9
feather: None
matplotlib: 3.0.3
openpyxl: 2.6.2
xlrd: 1.2.0
xlwt: 1.3.0
xlsxwriter: 1.1.6
lxml.etree: 4.3.3
bs4: 4.7.1
html5lib: 1.0.1
sqlalchemy: 1.3.3
pymysql: None
psycopg2: None
jinja2: 2.10.1
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
gcsfs: None
None
</details>
| Can you provide a code sample that is self-contained, i.e. doesn't require the external file to reproduce the issue?
I picked up several rows from the csv file, here is the self-contained version:
```python
# -*- coding: utf-8 -*-
import pandas as pd
from io import StringIO
def _do_calc_(_df):
df2 = _df.copy()
df2["ma20"] = df2["close"].rolling(20, min_periods=1).mean()
return df2
data = """trade_day,exchange,code,close
2019-01-02,CFFEX,ic,4074
2019-01-02,DCE,a,3408
2019-01-02,DCE,b,2970
2019-01-02,SHFE,ag,3711
2019-01-02,SHFE,al,13446
2019-01-02,SHFE,au,288
2019-01-02,CZCE,ap,10678
2019-01-02,CZCE,cf,14870
2019-01-02,CZCE,cy,23811
2019-01-02,CZCE,fg,1284
2019-01-02,INE,sc,371
2019-01-03,CFFEX,ic,4062
2019-01-03,CFFEX,if,2962
2019-01-03,CFFEX,ih,2270
2019-01-03,CFFEX,t,98
2019-01-03,CFFEX,tf,99
2019-01-03,CFFEX,ts,100
2019-01-03,DCE,a,3439
2019-01-03,DCE,b,2969
2019-01-03,DCE,bb,134
2019-01-03,DCE,c,1874
2019-01-03,DCE,cs,2340
2019-01-03,DCE,eg,5119
2019-01-03,DCE,fb,84
2019-01-03,DCE,i,501
2019-01-03,DCE,j,1934
2019-01-03,DCE,jd,3488
2019-01-03,DCE,jm,1191
2019-01-03,DCE,l,8459
2019-01-03,DCE,m,2676
2019-01-03,DCE,p,4627
2019-01-03,DCE,pp,8499
2019-01-03,DCE,v,6313
2019-01-03,DCE,y,5506
2019-01-03,SHFE,ag,3763
2019-01-03,SHFE,al,13348
2019-01-03,SHFE,au,290
2019-01-03,SHFE,bu,2628
2019-01-03,SHFE,cu,47326
2019-01-03,SHFE,fu,2396
2019-01-03,SHFE,hc,3337
2019-01-03,SHFE,ni,88385
2019-01-03,SHFE,pb,17804
2019-01-03,SHFE,rb,3429
2019-01-03,SHFE,ru,11485
2019-01-03,SHFE,sn,143901
2019-01-03,SHFE,sp,5067
2019-01-03,SHFE,wr,3560
2019-01-03,SHFE,zn,20071
2019-01-03,CZCE,ap,10476
2019-01-03,CZCE,cf,14846
2019-01-03,CZCE,cy,23679
2019-01-03,CZCE,fg,1302
2019-01-03,CZCE,jr,2853
2019-01-03,CZCE,lr,2634
2019-01-03,CZCE,ma,2439
2019-01-03,CZCE,oi,6526
2019-01-03,CZCE,pm,2268
2019-01-03,CZCE,ri,2430
2019-01-03,CZCE,rm,2142
2019-01-03,CZCE,rs,5405
2019-01-03,CZCE,sf,5735
2019-01-03,CZCE,sm,7408
2019-01-03,CZCE,sr,4678
2019-01-03,CZCE,ta,5677
2019-01-03,CZCE,wh,2411
2019-01-03,CZCE,zc,563
2019-01-03,INE,sc,385"""
dbars = pd.read_csv(StringIO(data), index_col=False, parse_dates=["trade_day"])
dbars = dbars.set_index("trade_day", drop=False) # everything works fine if this line is commented
df = dbars.groupby("exchange").apply(_do_calc_)
print(len(df))
```
Any update? This seems not to be fixed in v0.25.1.
seems fixed in 1.0.0 👍
We would take a validation test with the above example (or, if you can pinpoint an existing test which replicates it).
> seems fixed in 1.0.0 👍
This was fixed in #28662
7c9042a73aea312806c10b6805a4ceccd6341bbd is the first new commit
commit 7c9042a73aea312806c10b6805a4ceccd6341bbd
Author: Daniel Saxton <2658661+dsaxton@users.noreply.github.com>
Date: Wed Jan 1 10:21:27 2020 -0600
BUG: Fix groupby.apply (#28662)
The code sample above gives the following traceback with 0.25.3 (expand details):
<details>
```
>>> import pandas as pd
>>> from io import StringIO
>>>
>>>
>>> def _do_calc_(_df):
... df2 = _df.copy()
... df2["ma20"] = df2["close"].rolling(20, min_periods=1).mean()
... return df2
...
>>>
>>> data = """trade_day,exchange,code,close
... 2019-01-02,CFFEX,ic,4074
... 2019-01-02,DCE,a,3408
... 2019-01-02,DCE,b,2970
... 2019-01-02,SHFE,ag,3711
... 2019-01-02,SHFE,al,13446
... 2019-01-02,SHFE,au,288
... 2019-01-02,CZCE,ap,10678
... 2019-01-02,CZCE,cf,14870
... 2019-01-02,CZCE,cy,23811
... 2019-01-02,CZCE,fg,1284
... 2019-01-02,INE,sc,371
... 2019-01-03,CFFEX,ic,4062
... 2019-01-03,CFFEX,if,2962
... 2019-01-03,CFFEX,ih,2270
... 2019-01-03,CFFEX,t,98
... 2019-01-03,CFFEX,tf,99
... 2019-01-03,CFFEX,ts,100
... 2019-01-03,DCE,a,3439
... 2019-01-03,DCE,b,2969
... 2019-01-03,DCE,bb,134
... 2019-01-03,DCE,c,1874
... 2019-01-03,DCE,cs,2340
... 2019-01-03,DCE,eg,5119
... 2019-01-03,DCE,fb,84
... 2019-01-03,DCE,i,501
... 2019-01-03,DCE,j,1934
... 2019-01-03,DCE,jd,3488
... 2019-01-03,DCE,jm,1191
... 2019-01-03,DCE,l,8459
... 2019-01-03,DCE,m,2676
... 2019-01-03,DCE,p,4627
... 2019-01-03,DCE,pp,8499
... 2019-01-03,DCE,v,6313
... 2019-01-03,DCE,y,5506
... 2019-01-03,SHFE,ag,3763
... 2019-01-03,SHFE,al,13348
... 2019-01-03,SHFE,au,290
... 2019-01-03,SHFE,bu,2628
... 2019-01-03,SHFE,cu,47326
... 2019-01-03,SHFE,fu,2396
... 2019-01-03,SHFE,hc,3337
... 2019-01-03,SHFE,ni,88385
... 2019-01-03,SHFE,pb,17804
... 2019-01-03,SHFE,rb,3429
... 2019-01-03,SHFE,ru,11485
... 2019-01-03,SHFE,sn,143901
... 2019-01-03,SHFE,sp,5067
... 2019-01-03,SHFE,wr,3560
... 2019-01-03,SHFE,zn,20071
... 2019-01-03,CZCE,ap,10476
... 2019-01-03,CZCE,cf,14846
... 2019-01-03,CZCE,cy,23679
... 2019-01-03,CZCE,fg,1302
... 2019-01-03,CZCE,jr,2853
... 2019-01-03,CZCE,lr,2634
... 2019-01-03,CZCE,ma,2439
... 2019-01-03,CZCE,oi,6526
... 2019-01-03,CZCE,pm,2268
... 2019-01-03,CZCE,ri,2430
... 2019-01-03,CZCE,rm,2142
... 2019-01-03,CZCE,rs,5405
... 2019-01-03,CZCE,sf,5735
... 2019-01-03,CZCE,sm,7408
... 2019-01-03,CZCE,sr,4678
... 2019-01-03,CZCE,ta,5677
... 2019-01-03,CZCE,wh,2411
... 2019-01-03,CZCE,zc,563
... 2019-01-03,INE,sc,385"""
>>>
>>> dbars = pd.read_csv(StringIO(data), index_col=False, parse_dates=["trade_day"])
>>> dbars = dbars.set_index(
... "trade_day", drop=False
... ) # everything works fine if this line is commented
>>> df = dbars.groupby("exchange").apply(_do_calc_)
Traceback (most recent call last):
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 725, in apply
result = self._python_apply_general(f)
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 745, in _python_apply_general
keys, values, not_indexed_same=mutated or self.mutated
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\groupby\generic.py", line 372, in _wrap_applied_output
return self._concat_objects(keys, values, not_indexed_same=not_indexed_same)
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 973, in _concat_objects
sort=False,
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\reshape\concat.py", line 258, in concat
return op.get_result()
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\reshape\concat.py", line 473, in get_result
mgrs_indexers, self.new_axes, concat_axis=self.axis, copy=self.copy
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 2059, in concatenate_block_managers
return BlockManager(blocks, axes)
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 143, in __init__
self._verify_integrity()
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 345, in _verify_integrity
construction_error(tot_items, block.shape[1:], self.axes)
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 1719, in construction_error
"Shape of passed values is {0}, indices imply {1}".format(passed, implied)
ValueError: Shape of passed values is (68, 5), indices imply (90, 5)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 737, in apply
return self._python_apply_general(f)
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 745, in _python_apply_general
keys, values, not_indexed_same=mutated or self.mutated
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\groupby\generic.py", line 372, in _wrap_applied_output
return self._concat_objects(keys, values, not_indexed_same=not_indexed_same)
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 973, in _concat_objects
sort=False,
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\reshape\concat.py", line 258, in concat
return op.get_result()
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\reshape\concat.py", line 473, in get_result
mgrs_indexers, self.new_axes, concat_axis=self.axis, copy=self.copy
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 2059, in concatenate_block_managers
return BlockManager(blocks, axes)
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 143, in __init__
self._verify_integrity()
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 345, in _verify_integrity
construction_error(tot_items, block.shape[1:], self.axes)
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 1719, in construction_error
"Shape of passed values is {0}, indices imply {1}".format(passed, implied)
ValueError: Shape of passed values is (68, 4), indices imply (90, 4)
>>>
```
</details>
The following snippet follows the same path
```
>>> df = pd.DataFrame(
... {"foo": list("ababb"), "bar": range(5)},
... index=pd.DatetimeIndex(
... ["1/1/2000", "1/1/2000", "2/1/2000", "2/1/2000", "2/1/2000"]
... ),
... )
>>> df.groupby("foo").apply(lambda df: df.copy())
Traceback (most recent call last):
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 725, in apply
result = self._python_apply_general(f)
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 745, in _python_apply_general
keys, values, not_indexed_same=mutated or self.mutated
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\groupby\generic.py", line 372, in _wrap_applied_output
return self._concat_objects(keys, values, not_indexed_same=not_indexed_same)
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 973, in _concat_objects
sort=False,
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\reshape\concat.py", line 258, in concat
return op.get_result()
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\reshape\concat.py", line 473, in get_result
mgrs_indexers, self.new_axes, concat_axis=self.axis, copy=self.copy
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 2059, in concatenate_block_managers
return BlockManager(blocks, axes)
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 143, in __init__
self._verify_integrity()
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 345, in _verify_integrity
construction_error(tot_items, block.shape[1:], self.axes)
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 1719, in construction_error
"Shape of passed values is {0}, indices imply {1}".format(passed, implied)
ValueError: Shape of passed values is (5, 2), indices imply (6, 2)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 737, in apply
return self._python_apply_general(f)
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 745, in _python_apply_general
keys, values, not_indexed_same=mutated or self.mutated
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\groupby\generic.py", line 372, in _wrap_applied_output
return self._concat_objects(keys, values, not_indexed_same=not_indexed_same)
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 973, in _concat_objects
sort=False,
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\reshape\concat.py", line 258, in concat
return op.get_result()
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\reshape\concat.py", line 473, in get_result
mgrs_indexers, self.new_axes, concat_axis=self.axis, copy=self.copy
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 2059, in concatenate_block_managers
return BlockManager(blocks, axes)
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 143, in __init__
self._verify_integrity()
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 345, in _verify_integrity
construction_error(tot_items, block.shape[1:], self.axes)
File "C:\Users\simon\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 1719, in construction_error
"Shape of passed values is {0}, indices imply {1}".format(passed, implied)
ValueError: Shape of passed values is (5, 1), indices imply (6, 1)
>>>
```
and on master ...
```
>>> import pandas as pd
>>> pd.__version__
'1.1.0.dev0+1108.gcad602e16'
>>> df = pd.DataFrame(
... {"foo": list("ababb"), "bar": range(5)},
... index=pd.DatetimeIndex(
... ["1/1/2000", "1/1/2000", "2/1/2000", "2/1/2000", "2/1/2000"]
... ),
... )
>>> df.groupby("foo").apply(lambda df: df.copy())
foo bar
foo
a 2000-01-01 a 0
2000-02-01 a 2
b 2000-01-01 b 1
2000-02-01 b 3
2000-02-01 b 4
>>>
```
This looks fixed in 1.1.2
```
Python 3.7.3 (default, Apr 24 2019, 15:29:51) [MSC v.1915 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> pd.__version__
'1.1.2'
>>> df = pd.DataFrame({"foo": list("ababb"), "bar": range(5)},index=pd.DatetimeIndex(["1/1/2000", "1/1/2000", "2/1/2000", "2/1/2000", "2/1/2000"]),)
>>> df.groupby("foo").apply(lambda df: df.copy())
foo bar
foo
a 2000-01-01 a 0
2000-02-01 a 2
b 2000-01-01 b 1
2000-02-01 b 3
2000-02-01 b 4
>>> exit()
Python 3.8.5 | packaged by conda-forge | (default, Aug 29 2020, 00:43:28) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> pd.__version__
'1.2.0.dev0+328.g2067d7e30'
>>> df = pd.DataFrame({"foo": list("ababb"), "bar": range(5)},index=pd.DatetimeIndex(["1/1/2000", "1/1/2000", "2/1/2000", "2/1/2000", "2/1/2000"]),)
>>> df.groupby("foo").apply(lambda df: df.copy())
foo bar
foo
a 2000-01-01 a 0
2000-02-01 a 2
b 2000-01-01 b 1
2000-02-01 b 3
2000-02-01 b 4
>>>
```
Thanks @amy12xx, would you like to submit a PR with a test? See https://github.com/pandas-dev/pandas/issues/26182#issuecomment-581137695
You can use the code sample in https://github.com/pandas-dev/pandas/issues/26182#issuecomment-609093622 as a validation test.
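For reference, a validation test along those lines might look like the sketch below (an illustration only, not necessarily the test that was merged for the fix):

```python
import pandas as pd

# The index is deliberately non-unique: a duplicated DatetimeIndex is what
# triggered the shape mismatch in the concat step of groupby.apply.
df = pd.DataFrame(
    {"foo": list("ababb"), "bar": range(5)},
    index=pd.DatetimeIndex(
        ["1/1/2000", "1/1/2000", "2/1/2000", "2/1/2000", "2/1/2000"]
    ),
)
result = df.groupby("foo").apply(lambda g: g.copy())
# On fixed versions, all five input rows come back out instead of raising.
assert len(result) == 5
```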
Sure, I can do that. | 2020-09-26T16:31:18Z | [] | [] |
Traceback (most recent call last):
File "D:/test/groupby_bug.py", line 16, in <module>
df = dbars.groupby("exchange").apply(_do_calc_)
File "C:\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 701, in apply
return self._python_apply_general(f)
File "C:\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 712, in _python_apply_general
not_indexed_same=mutated or self.mutated)
File "C:\Anaconda3\lib\site-packages\pandas\core\groupby\generic.py", line 318, in _wrap_applied_output
not_indexed_same=not_indexed_same)
File "C:\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 918, in _concat_objects
sort=False)
File "C:\Anaconda3\lib\site-packages\pandas\core\reshape\concat.py", line 228, in concat
copy=copy, sort=sort)
File "C:\Anaconda3\lib\site-packages\pandas\core\reshape\concat.py", line 292, in __init__
obj._consolidate(inplace=True)
File "C:\Anaconda3\lib\site-packages\pandas\core\generic.py", line 5156, in _consolidate
self._consolidate_inplace()
File "C:\Anaconda3\lib\site-packages\pandas\core\generic.py", line 5138, in _consolidate_inplace
self._protect_consolidate(f)
File "C:\Anaconda3\lib\site-packages\pandas\core\generic.py", line 5127, in _protect_consolidate
result = f()
File "C:\Anaconda3\lib\site-packages\pandas\core\generic.py", line 5136, in f
self._data = self._data.consolidate()
File "C:\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 922, in consolidate
bm = self.__class__(self.blocks, self.axes)
File "C:\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 114, in __init__
self._verify_integrity()
File "C:\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 311, in _verify_integrity
construction_error(tot_items, block.shape[1:], self.axes)
File "C:\Anaconda3\lib\site-packages\pandas\core\internals\managers.py", line 1691, in construction_error
passed, implied))
ValueError: Shape of passed values is (432, 27), indices imply (1080, 27)
| 14,176 |
||||
pandas-dev/pandas | pandas-dev__pandas-36683 | 4ead5520fd28c220d19530baaef7904541a1f2c9 | diff --git a/ci/deps/azure-37-minimum_versions.yaml b/ci/deps/azure-37-minimum_versions.yaml
--- a/ci/deps/azure-37-minimum_versions.yaml
+++ b/ci/deps/azure-37-minimum_versions.yaml
@@ -20,7 +20,7 @@ dependencies:
- numexpr=2.6.8
- numpy=1.16.5
- openpyxl=2.6.0
- - pytables=3.4.4
+ - pytables=3.5.1
- python-dateutil=2.7.3
- pytz=2017.3
- pyarrow=0.15
diff --git a/ci/deps/travis-37-locale.yaml b/ci/deps/travis-37-locale.yaml
--- a/ci/deps/travis-37-locale.yaml
+++ b/ci/deps/travis-37-locale.yaml
@@ -13,7 +13,7 @@ dependencies:
# pandas dependencies
- beautifulsoup4
- - blosc=1.14.3
+ - blosc=1.15.0
- python-blosc
- fastparquet=0.3.2
- html5lib
@@ -30,7 +30,7 @@ dependencies:
- pyarrow>=0.17
- psycopg2=2.7
- pymysql=0.7.11
- - pytables
+ - pytables>=3.5.1
- python-dateutil
- pytz
- scipy
diff --git a/doc/source/getting_started/install.rst b/doc/source/getting_started/install.rst
--- a/doc/source/getting_started/install.rst
+++ b/doc/source/getting_started/install.rst
@@ -266,7 +266,7 @@ PyTables 3.4.4 HDF5-based reading / writing
SQLAlchemy 1.2.8 SQL support for databases other than sqlite
SciPy 1.12.0 Miscellaneous statistical functions
xlsxwriter 1.0.2 Excel writing
-blosc 1.14.3 Compression for HDF5
+blosc 1.15.0 Compression for HDF5
fsspec 0.7.4 Handling files aside from local and HTTP
fastparquet 0.3.2 Parquet reading / writing
gcsfs 0.6.0 Google Cloud Storage access
@@ -280,7 +280,7 @@ psycopg2 2.7 PostgreSQL engine for sqlalchemy
pyarrow 0.15.0 Parquet, ORC, and feather reading / writing
pymysql 0.7.11 MySQL engine for sqlalchemy
pyreadstat SPSS files (.sav) reading
-pytables 3.4.4 HDF5 reading / writing
+pytables 3.5.1 HDF5 reading / writing
pyxlsb 1.0.6 Reading for xlsb files
qtpy Clipboard I/O
s3fs 0.4.0 Amazon S3 access
diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -181,7 +181,7 @@ Optional libraries below the lowest tested version may still work, but are not c
+-----------------+-----------------+---------+
| pymysql | 0.7.11 | X |
+-----------------+-----------------+---------+
-| pytables | 3.4.4 | X |
+| pytables | 3.5.1 | X |
+-----------------+-----------------+---------+
| s3fs | 0.4.0 | |
+-----------------+-----------------+---------+
@@ -328,6 +328,7 @@ I/O
- Bug in :func:`LongTableBuilder.middle_separator` was duplicating LaTeX longtable entries in the List of Tables of a LaTeX document (:issue:`34360`)
- Bug in :meth:`read_csv` with ``engine='python'`` truncating data if multiple items present in first row and first element started with BOM (:issue:`36343`)
- Removed ``private_key`` and ``verbose`` from :func:`read_gbq` as they are no longer supported in ``pandas-gbq`` (:issue:`34654`, :issue:`30200`)
+- Bumped minimum pytables version to 3.5.1 to avoid a ``ValueError`` in :meth:`read_hdf` (:issue:`24839`)
Plotting
^^^^^^^^
diff --git a/environment.yml b/environment.yml
--- a/environment.yml
+++ b/environment.yml
@@ -100,7 +100,7 @@ dependencies:
- python-snappy # required by pyarrow
- pyqt>=5.9.2 # pandas.read_clipboard
- - pytables>=3.4.4 # pandas.read_hdf, DataFrame.to_hdf
+ - pytables>=3.5.1 # pandas.read_hdf, DataFrame.to_hdf
- s3fs>=0.4.0 # file IO when using 's3://...' path
- fsspec>=0.7.4 # for generic remote file operations
- gcsfs>=0.6.0 # file IO when using 'gcs://...' path
diff --git a/requirements-dev.txt b/requirements-dev.txt
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -67,7 +67,7 @@ fastparquet>=0.3.2
pyarrow>=0.15.0
python-snappy
pyqt5>=5.9.2
-tables>=3.4.4
+tables>=3.5.1
s3fs>=0.4.0
fsspec>=0.7.4
gcsfs>=0.6.0
| ValueError: cannot set WRITEABLE flag to True of this array
We will need to revert the xfail decorator in https://github.com/pandas-dev/pandas/pull/25517 when this is fixed.
#### Code Sample, a copy-pastable example if possible
I'm suddenly getting this error, any idea?
```python
# Your code here
input_df = pd.read_hdf(path_or_buf='x.hdf5',key='/x',mode='r')
```
#### Problem description
Traceback :
````
Traceback (most recent call last):
File "...", line 115, in <module>
input_df = pd.read_hdf(path_or_buf='x.hdf5',key='/x',mode='r')
File "/usr/local/lib/python3.6/dist-packages/pandas/io/pytables.py", line 394, in read_hdf
return store.select(key, auto_close=auto_close, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/pandas/io/pytables.py", line 741, in select
return it.get_result()
File "/usr/local/lib/python3.6/dist-packages/pandas/io/pytables.py", line 1483, in get_result
results = self.func(self.start, self.stop, where)
File "/usr/local/lib/python3.6/dist-packages/pandas/io/pytables.py", line 734, in func
columns=columns)
File "/usr/local/lib/python3.6/dist-packages/pandas/io/pytables.py", line 2937, in read
start=_start, stop=_stop)
File "/usr/local/lib/python3.6/dist-packages/pandas/io/pytables.py", line 2489, in read_array
ret = node[0][start:stop]
File "/usr/local/lib/python3.6/dist-packages/tables/vlarray.py", line 681, in __getitem__
return self.read(start, stop, step)[0]
File "/usr/local/lib/python3.6/dist-packages/tables/vlarray.py", line 821, in read
listarr = self._read_array(start, stop, step)
File "tables/hdf5extension.pyx", line 2155, in tables.hdf5extension.VLArray._read_array
ValueError: cannot set WRITEABLE flag to True of this array
```
| @macd2 : Thanks for reporting this! A couple of things:
* In the issue, could you provide your environment information from `pandas.show_versions` ?
* Assuming you can, do you mind sharing the file that triggers this error?
> Im getting all of a sudden this Error, any idea?
* It sounds like this was working for you on a previous versions of `pandas` . When did this code last work for you? What version are you using now (related to the first question)?
cc @jreback
When using numpy=1.16.0 I get this error; when I downgrade to numpy=1.15.4 the problem is gone.
@gfyoung Sure, here are the versions:
```pandas.show_versions
'dependencies':
{'pandas': '0.23.4', 'pytest': '3.4.0', 'pip': '18.1', 'setuptools': '40.6.3', 'Cython': '0.29.3', 'numpy': '1.16.0', 'scipy': '1.2.0', 'pyarrow': None, 'xarray': None, 'IPython': '6.5.0', 'sphinx': None, 'patsy': '0.5.0', 'dateutil': '2.7.5', 'pytz': '2018.7', 'blosc': None, 'bottleneck': '1.2.1', 'tables': '3.4.4', 'numexpr': '2.6.8', 'feather': None, 'matplotlib': '3.0.2', 'openpyxl': '2.5.12', 'xlrd': '1.1.0', 'xlwt': '1.3.0', 'xlsxwriter': '0.7.3', 'lxml': '4.1.1', 'bs4': '4.4.1', 'html5lib': '1.0b8', 'sqlalchemy': '1.2.15', 'pymysql': '0.9.2', 'psycopg2': '2.7.6.1 (dt dec pq3 ext lo64)', 'jinja2': '2.10', 's3fs': None, 'fastparquet': None, 'pandas_gbq': None, 'pandas_datareader': '0.7.0'}
```
Unfortunately I cannot share the file, but I think the issue indeed comes from numpy, as @vvvlc said; here is another report of it:
https://github.com/nipy/nibabel/issues/697
PS: I just downgraded to numpy=1.15.4 and indeed it resolves the issue.
Is there anything pandas can do in the meantime? Or just wait for the next pytables release?
For me `pip install numpy==1.15.4` also resolves this issue.
@TomAugspurger this issue comes up at the top of Google results (at least it did for me), and the remedy seems simple, so maybe it's enough to just wait for an upstream fix. Really great that you would ask :)
`pip3 install numpy==1.15.4` also solved it for me.
However, when I downgraded numpy I kept getting this error: `ImportError: No module named 'numpy.core._multiarray_umath'`. I finally figured out it was happening because I had stored the .h5 file with numpy 1.16 installed, and it wouldn't reopen with the downgraded numpy.
You can avoid the error
`ValueError: cannot set WRITEABLE flag to True of this array`
by passing `format='table'` to `HDFStore.append` or `HDFStore.put` when you save data with pandas.
This will likely solve your problem; tested with pandas 0.24 and numpy 1.16+.
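A minimal sketch of that workaround (the store path and frame are made up for illustration, and the HDF calls only run when PyTables is installed; note this sidesteps the incompatible read path rather than fixing the underlying numpy/PyTables issue):

```python
import importlib.util
import os
import tempfile

import pandas as pd

df = pd.DataFrame({"A": [1, 2]})
# PyTables is an optional dependency, so only exercise the store if present.
if importlib.util.find_spec("tables") is not None:
    path = os.path.join(tempfile.mkdtemp(), "x.hdf5")
    # format="table" writes a PyTables Table instead of the default "fixed"
    # layout, avoiding the VLArray read path that raises the ValueError.
    df.to_hdf(path, key="x", format="table", mode="w")
    assert pd.read_hdf(path, key="x", mode="r").equals(df)
```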
@dev72
OK, but what about old HDF files that already exist?
I think the best way to read old HDF files is to downgrade your pandas/numpy versions, read all the data, and write it to a new HDF store with `format='table'`.
Then it should work with newer numpy and pandas versions.
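That migration could be sketched roughly as below (hypothetical file names; in practice you would run the reading half under the downgraded stack that can still open the old file, while here a small fixed-format store is created on the fly so the sketch is self-contained, guarded on PyTables being installed):

```python
import importlib.util
import os
import tempfile

import pandas as pd

if importlib.util.find_spec("tables") is not None:
    tmp = tempfile.mkdtemp()
    old_path = os.path.join(tmp, "old.h5")
    new_path = os.path.join(tmp, "new.h5")
    # Stand-in for an existing store written in the default "fixed" format.
    pd.DataFrame({"A": [1, 2]}).to_hdf(old_path, key="x")
    # Copy every key into a fresh store, re-writing it in "table" format.
    with pd.HDFStore(old_path, mode="r") as old, pd.HDFStore(new_path, mode="w") as new:
        for key in old.keys():
            new.put(key, old[key], format="table")
    assert pd.read_hdf(new_path, key="x").equals(pd.DataFrame({"A": [1, 2]}))
```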
@dev72 Yes, right, I'm at the same point, but then I'd rather stick with the downgrade until this is properly fixed.
Yes, same, I came from Google too. Thanks for filing this issue @macd2.
So after chasing around a few issues submitted against `numpy` and `PyTables`, this post by @avalentino suggests the issue is fixed in `PyTables` master but not in a release yet:
https://github.com/PyTables/PyTables/issues/719#issuecomment-455612656
Has anyone tried using `PyTables` master with `numpy >= 1.16`?
I got it to work using this
```
HDF5_DIR={HDF5_PATH} pip install -e git+https://github.com/PyTables/PyTables@492ee2f#egg=tables
pip install numpy==1.16.0
```
Make sure cython and hdf5 are installed.
What versions of pytables and numpy reproduce this? Is it specific to the data?
with
```
pandas: 0.24.1
numpy: 1.16.2
tables: 3.4.4
```
this doesn't raise,
```python
In [8]: df = pd.DataFrame({"A": [1, 2]})
In [9]: df.to_hdf('x.hdf5', key='x')
In [10]: pd.read_hdf('x.hdf5', 'x', mode='r')
Out[10]:
A
0 1
1 2
```
FYI, pytables 3.5.0 and 3.5.1 are on PyPI with the fix from the pytables side.
Upgrading to pytables 3.5.1 fixes the problem for me also with numpy 1.16.2
Dealing with TF 2.0.0 requires numpy (at least) 1.16.0, and downgrading numpy to the previous version won't work with TF 2.0.0.
Traceback (most recent call last):
File "...", line 115, in <module>
input_df = pd.read_hdf(path_or_buf='x.hdf5',key='/x',mode='r')
File "/usr/local/lib/python3.6/dist-packages/pandas/io/pytables.py", line 394, in read_hdf
return store.select(key, auto_close=auto_close, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/pandas/io/pytables.py", line 741, in select
return it.get_result()
File "/usr/local/lib/python3.6/dist-packages/pandas/io/pytables.py", line 1483, in get_result
results = self.func(self.start, self.stop, where)
File "/usr/local/lib/python3.6/dist-packages/pandas/io/pytables.py", line 734, in func
columns=columns)
File "/usr/local/lib/python3.6/dist-packages/pandas/io/pytables.py", line 2937, in read
start=_start, stop=_stop)
File "/usr/local/lib/python3.6/dist-packages/pandas/io/pytables.py", line 2489, in read_array
ret = node[0][start:stop]
File "/usr/local/lib/python3.6/dist-packages/tables/vlarray.py", line 681, in __getitem__
return self.read(start, stop, step)[0]
File "/usr/local/lib/python3.6/dist-packages/tables/vlarray.py", line 821, in read
listarr = self._read_array(start, stop, step)
File "tables/hdf5extension.pyx", line 2155, in tables.hdf5extension.VLArray._read_array
ValueError: cannot set WRITEABLE flag to True of this array
| 14,178 |
|||
pandas-dev/pandas | pandas-dev__pandas-36937 | 41ec93a5a4019462c4e461914007d8b25fb91e48 | diff --git a/doc/source/whatsnew/v1.1.4.rst b/doc/source/whatsnew/v1.1.4.rst
--- a/doc/source/whatsnew/v1.1.4.rst
+++ b/doc/source/whatsnew/v1.1.4.rst
@@ -14,6 +14,7 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
+- Fixed regression in :func:`read_csv` raising a ``ValueError`` when ``names`` was of type ``dict_keys`` (:issue:`36928`)
- Fixed regression where attempting to mutate a :class:`DateOffset` object would no longer raise an ``AttributeError`` (:issue:`36940`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -420,7 +420,9 @@ def _validate_names(names):
if names is not None:
if len(names) != len(set(names)):
raise ValueError("Duplicate names are not allowed.")
- if not is_list_like(names, allow_sets=False):
+ if not (
+ is_list_like(names, allow_sets=False) or isinstance(names, abc.KeysView)
+ ):
raise ValueError("Names should be an ordered collection.")
| BUG: `dict_keys` cannot be used as `pd.read_csv`'s `names` parameter
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [x] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
`test.csv`:
```csv
a,b
1,2
```
```python
import pandas as pd
data = {'a': 10, 'b': 20}
pd.read_csv('test.csv', names=data.keys())
```
Error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.8/dist-packages/pandas/io/parsers.py", line 691, in read_csv
return _read(filepath_or_buffer, kwds)
File "/usr/local/lib/python3.8/dist-packages/pandas/io/parsers.py", line 451, in _read
_validate_names(kwds.get("names", None))
File "/usr/local/lib/python3.8/dist-packages/pandas/io/parsers.py", line 424, in _validate_names
raise ValueError("Names should be an ordered collection.")
ValueError: Names should be an ordered collection.
```
#### Problem description
When a `dict_keys` object is passed, the keys aren't used as names. This is because `dict_keys` isn't list_like (https://github.com/pandas-dev/pandas/blob/4e553464f97a83dd2d827d559b74e17603695ca7/pandas/io/parsers.py#L423) or indexable, so the issue is understandable, but it would be nice to be able to pass `dict_keys` without issue.
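The rejection can be seen directly with the helper pandas uses for this check, which is exposed publicly via `pandas.api.types` (a sketch of the behaviour, plus the obvious workaround on affected versions):

```python
from collections import abc

from pandas.api.types import is_list_like

names = {"a": 10, "b": 20}.keys()
# KeysView is registered as a collections.abc.Set, so the ordered-collection
# check (is_list_like with allow_sets=False) rejects it, even though dict
# order is guaranteed in Python 3.7+.
assert not is_list_like(names, allow_sets=False)
assert isinstance(names, abc.KeysView)
# Workaround on affected versions: materialise the keys first.
assert list(names) == ["a", "b"]
```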
#### Expected Output
Passing the dict keys to names shouldn't fail.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : 4e553464f97a83dd2d827d559b74e17603695ca7
python : 3.8.3.final.0
python-bits : 64
OS : Linux
OS-release : 4.15.0-118-generic
Version : #119-Ubuntu SMP Tue Sep 8 12:30:01 UTC 2020
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : None
LOCALE : en_US.UTF-8
pandas : 1.2.0.dev0+632.g4e553464f
numpy : 1.18.2
pytz : 2019.3
dateutil : 2.8.1
pip : 20.1.1
setuptools : 46.1.3
Cython : None
pytest : None
hypothesis : None
sphinx : 1.6.7
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : 0.999999999
pymysql : None
psycopg2 : None
jinja2 : 2.10
IPython : None
pandas_datareader: None
bs4 : 4.6.0
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : 3.2.1
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
</details>
| Thanks @abmyii, probably we should be making an exception for ordered sets here. PR welcome.
ref https://github.com/pandas-dev/pandas/pull/34956
cc @MJafarMashhadi | 2020-10-07T08:14:21Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.8/dist-packages/pandas/io/parsers.py", line 691, in read_csv
return _read(filepath_or_buffer, kwds)
File "/usr/local/lib/python3.8/dist-packages/pandas/io/parsers.py", line 451, in _read
_validate_names(kwds.get("names", None))
File "/usr/local/lib/python3.8/dist-packages/pandas/io/parsers.py", line 424, in _validate_names
raise ValueError("Names should be an ordered collection.")
ValueError: Names should be an ordered collection.
| 14,233 |
|||
pandas-dev/pandas | pandas-dev__pandas-37129 | 4a08c02b0ecd60d7b2549fc83ebdc2719c19270a | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -221,6 +221,7 @@ Other enhancements
- :meth:`Rolling.var()` and :meth:`Rolling.std()` use Kahan summation and Welfords Method to avoid numerical issues (:issue:`37051`)
- :meth:`DataFrame.plot` now recognizes ``xlabel`` and ``ylabel`` arguments for plots of type ``scatter`` and ``hexbin`` (:issue:`37001`)
- :class:`DataFrame` now supports ``divmod`` operation (:issue:`37165`)
+- :meth:`DataFrame.to_parquet` now returns a ``bytes`` object when no ``path`` argument is passed (:issue:`37105`)
.. _whatsnew_120.api_breaking.python:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2289,14 +2289,14 @@ def to_markdown(
@deprecate_kwarg(old_arg_name="fname", new_arg_name="path")
def to_parquet(
self,
- path: FilePathOrBuffer[AnyStr],
+ path: Optional[FilePathOrBuffer] = None,
engine: str = "auto",
compression: Optional[str] = "snappy",
index: Optional[bool] = None,
partition_cols: Optional[List[str]] = None,
storage_options: StorageOptions = None,
**kwargs,
- ) -> None:
+ ) -> Optional[bytes]:
"""
Write a DataFrame to the binary parquet format.
@@ -2307,14 +2307,15 @@ def to_parquet(
Parameters
----------
- path : str or file-like object
+ path : str or file-like object, default None
If a string, it will be used as Root Directory path
when writing a partitioned dataset. By file-like object,
we refer to objects with a write() method, such as a file handle
(e.g. via builtin open function) or io.BytesIO. The engine
- fastparquet does not accept file-like objects.
+ fastparquet does not accept file-like objects. If path is None,
+ a bytes object is returned.
- .. versionchanged:: 1.0.0
+ .. versionchanged:: 1.2.0
Previously this was "fname"
@@ -2357,6 +2358,10 @@ def to_parquet(
Additional arguments passed to the parquet library. See
:ref:`pandas io <io.parquet>` for more details.
+ Returns
+ -------
+ bytes if no path argument is provided else None
+
See Also
--------
read_parquet : Read a parquet file.
@@ -2392,7 +2397,7 @@ def to_parquet(
"""
from pandas.io.parquet import to_parquet
- to_parquet(
+ return to_parquet(
self,
path,
engine,
diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py
--- a/pandas/io/parquet.py
+++ b/pandas/io/parquet.py
@@ -1,5 +1,6 @@
""" parquet compat """
+import io
from typing import Any, AnyStr, Dict, List, Optional
from warnings import catch_warnings
@@ -238,28 +239,29 @@ def read(
def to_parquet(
df: DataFrame,
- path: FilePathOrBuffer[AnyStr],
+ path: Optional[FilePathOrBuffer] = None,
engine: str = "auto",
compression: Optional[str] = "snappy",
index: Optional[bool] = None,
storage_options: StorageOptions = None,
partition_cols: Optional[List[str]] = None,
**kwargs,
-):
+) -> Optional[bytes]:
"""
Write a DataFrame to the parquet format.
Parameters
----------
df : DataFrame
- path : str or file-like object
+ path : str or file-like object, default None
If a string, it will be used as Root Directory path
when writing a partitioned dataset. By file-like object,
we refer to objects with a write() method, such as a file handle
(e.g. via builtin open function) or io.BytesIO. The engine
- fastparquet does not accept file-like objects.
+ fastparquet does not accept file-like objects. If path is None,
+ a bytes object is returned.
- .. versionchanged:: 0.24.0
+ .. versionchanged:: 1.2.0
engine : {'auto', 'pyarrow', 'fastparquet'}, default 'auto'
Parquet library to use. If 'auto', then the option
@@ -298,13 +300,20 @@ def to_parquet(
kwargs
Additional keyword arguments passed to the engine
+
+ Returns
+ -------
+ bytes if no path argument is provided else None
"""
if isinstance(partition_cols, str):
partition_cols = [partition_cols]
impl = get_engine(engine)
- return impl.write(
+
+ path_or_buf: FilePathOrBuffer = io.BytesIO() if path is None else path
+
+ impl.write(
df,
- path,
+ path_or_buf,
compression=compression,
index=index,
partition_cols=partition_cols,
@@ -312,6 +321,12 @@ def to_parquet(
**kwargs,
)
+ if path is None:
+ assert isinstance(path_or_buf, io.BytesIO)
+ return path_or_buf.getvalue()
+ else:
+ return None
+
def read_parquet(path, engine: str = "auto", columns=None, **kwargs):
"""
| ENH: df.to_parquet() should return bytes
#### Is your feature request related to a problem?
I find it useful to write a parquet to a `bytes` object for some unit tests. The code that I currently use to do this is quite verbose.
To provide some background, `df.to_csv()` (w/o args) just works. It returns a `str` object as is expected. In the same vein, **`df.to_parquet()` (w/o args) should return a `bytes` object.**
More precisely, the current behavior is:
```python
>>> df = pd.DataFrame()
>>> type(df.to_csv()) # This works
<class 'str'>
>>> df.to_parquet() # This should be made to work
Traceback (most recent call last):
File "<input>", line 1, in <module>
TypeError: to_parquet() missing 1 required positional argument: 'path'
```
#### Describe the solution you'd like
The requested behavior is:
```python
>>> df = pd.DataFrame()
>>> type(df.to_parquet())
<class 'bytes'>
```
Other uses of `df.to_parquet` should obviously remain unaffected.
#### API breaking implications
It won't break the documented API.
#### Describe alternatives you've considered
I currently use this verbose code to get what I want:
```python
import io
import pandas as pd
df = pd.DataFrame()
pq_file = io.BytesIO()
df.to_parquet(pq_file)
pq_bytes = pq_file.getvalue()
```
This workaround is too effortful.
| +1
+1 | 2020-10-15T05:35:54Z | [] | [] |
Traceback (most recent call last):
File "<input>", line 1, in <module>
TypeError: to_parquet() missing 1 required positional argument: 'path'
| 14,268 |
|||
pandas-dev/pandas | pandas-dev__pandas-37181 | 9fed16cd4c302e47383480361260d63dc23cbefc | diff --git a/doc/source/whatsnew/v1.1.4.rst b/doc/source/whatsnew/v1.1.4.rst
--- a/doc/source/whatsnew/v1.1.4.rst
+++ b/doc/source/whatsnew/v1.1.4.rst
@@ -28,6 +28,7 @@ Fixed regressions
Bug fixes
~~~~~~~~~
- Bug causing ``groupby(...).sum()`` and similar to not preserve metadata (:issue:`29442`)
+- Bug in :meth:`Series.isin` and :meth:`DataFrame.isin` raising a ``ValueError`` when the target was read-only (:issue:`37174`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/_libs/hashtable_func_helper.pxi.in b/pandas/_libs/hashtable_func_helper.pxi.in
--- a/pandas/_libs/hashtable_func_helper.pxi.in
+++ b/pandas/_libs/hashtable_func_helper.pxi.in
@@ -208,7 +208,7 @@ def duplicated_{{dtype}}(const {{c_type}}[:] values, object keep='first'):
{{if dtype == 'object'}}
def ismember_{{dtype}}(ndarray[{{c_type}}] arr, ndarray[{{c_type}}] values):
{{else}}
-def ismember_{{dtype}}(const {{c_type}}[:] arr, {{c_type}}[:] values):
+def ismember_{{dtype}}(const {{c_type}}[:] arr, const {{c_type}}[:] values):
{{endif}}
"""
Return boolean of values in arr on an
| BUG: ValueError: buffer source array is read-only
- [ ] I have checked that this issue has not already been reported.
- [ X ] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
This seems similar to: https://github.com/pandas-dev/pandas/issues/31710
I am getting it in a dash application, so it's difficult to get an example from there.
However, isin() example from the above post also causes the error:
#### Code Sample, a copy-pastable example
```python
>>> import numpy as np # v1.16.4, then v1.18.1
>>> import pandas as pd # v1.0.1
>>>
>>> arr = np.array([1,2,3], dtype=np.int64) # works fine if I don't set the dtype!
>>>
>>> arr.setflags(write=False) # make it read-only
>>>
>>> df = pd.DataFrame({"col": [2,4,8]})
>>>
>>> test = df.col.isin(arr)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/z0022z7b/anaconda3/envs/mindsynchro/lib/python3.8/site-packages/pandas/core/series.py", line 4685, in isin
result = algorithms.isin(self, values)
File "/home/z0022z7b/anaconda3/envs/mindsynchro/lib/python3.8/site-packages/pandas/core/algorithms.py", line 465, in isin
return f(comps, values)
File "pandas/_libs/hashtable_func_helper.pxi", line 566, in pandas._libs.hashtable.ismember_int64
File "stringsource", line 658, in View.MemoryView.memoryview_cwrapper
File "stringsource", line 349, in View.MemoryView.memoryview.__cinit__
ValueError: buffer source array is read-only
```
#### Problem description
The problem is this error. My application code worked fine before, but I had to set it up on a new machine with new pandas and cython a couple days ago, and ran into this.
(Previously I had
- cython=0.29.14=py37he1b5a44_0
- pandas=0.24.2=py37he6710b0_0
Don't know if the isin() example would have thrown an error with that)
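On releases where this bug is present, passing a writable copy of the array (or a plain list) to ``isin`` avoids the read-only buffer check — a workaround only, not the fix:

```python
import numpy as np
import pandas as pd

arr = np.array([1, 2, 3], dtype=np.int64)
arr.setflags(write=False)  # read-only, as in the report

df = pd.DataFrame({"col": [2, 4, 8]})

# isin on a writable copy works on affected and fixed versions alike
result = df.col.isin(arr.copy())
```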
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : db08276bc116c438d3fdee492026f8223584c477
python : 3.8.6.final.0
python-bits : 64
OS : Linux
OS-release : 4.15.0-118-generic
Version : #119-Ubuntu SMP Tue Sep 8 12:30:01 UTC 2020
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.1.3
numpy : 1.19.2
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.3
setuptools : 49.6.0.post20201009
Cython : 0.29.21
lxml.etree : 4.5.2
jinja2 : 2.11.2
IPython : 7.18.1
fsspec : 0.8.4
fastparquet : 0.4.1
matplotlib : 3.3.2
pyarrow : 0.17.1
scipy : 1.5.2
xarray : 0.16.1
numba : 0.51.2
'''
</details>
| 2020-10-17T00:02:58Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/z0022z7b/anaconda3/envs/mindsynchro/lib/python3.8/site-packages/pandas/core/series.py", line 4685, in isin
result = algorithms.isin(self, values)
File "/home/z0022z7b/anaconda3/envs/mindsynchro/lib/python3.8/site-packages/pandas/core/algorithms.py", line 465, in isin
return f(comps, values)
File "pandas/_libs/hashtable_func_helper.pxi", line 566, in pandas._libs.hashtable.ismember_int64
File "stringsource", line 658, in View.MemoryView.memoryview_cwrapper
File "stringsource", line 349, in View.MemoryView.memoryview.__cinit__
ValueError: buffer source array is read-only
| 14,283 |
||||
pandas-dev/pandas | pandas-dev__pandas-37288 | ac7ca2390d88c256f955f8ab55103a66300ccee6 | diff --git a/doc/source/whatsnew/v1.1.4.rst b/doc/source/whatsnew/v1.1.4.rst
--- a/doc/source/whatsnew/v1.1.4.rst
+++ b/doc/source/whatsnew/v1.1.4.rst
@@ -22,6 +22,7 @@ Fixed regressions
- Fixed regression in :class:`RollingGroupby` causing a segmentation fault with Index of dtype object (:issue:`36727`)
- Fixed regression in :meth:`DataFrame.resample(...).apply(...)` raised ``AttributeError`` when input was a :class:`DataFrame` and only a :class:`Series` was evaluated (:issue:`36951`)
- Fixed regression in :class:`PeriodDtype` comparing both equal and unequal to its string representation (:issue:`37265`)
+- Fixed regression in certain offsets (:meth:`pd.offsets.Day() <pandas.tseries.offsets.Day>` and below) no longer being hashable (:issue:`37267`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -791,6 +791,11 @@ cdef class Tick(SingleConstructorOffset):
def is_anchored(self) -> bool:
return False
+ # This is identical to BaseOffset.__hash__, but has to be redefined here
+ # for Python 3, because we've redefined __eq__.
+ def __hash__(self) -> int:
+ return hash(self._params)
+
# --------------------------------------------------------------------
# Comparison and Arithmetic Methods
| BUG: offsets are now unhashable
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
As discussed on gitter, this was apparently an unindended consequence of https://github.com/pandas-dev/pandas/pull/34227
![image](https://user-images.githubusercontent.com/881019/96518983-56d04a80-12af-11eb-951d-89b02badf9df.png)
https://gitter.im/pydata/pandas?at=5f8d8a106c8d484be294872f
#### Code Sample, a copy-pastable example
```python
>>> import pandas as pd
>>> hash(pd.offsets.Day())
Traceback (most recent call last):
File "C:\Users\dhirschf\envs\dev\lib\site-packages\IPython\core\interactiveshell.py", line 3417, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-46-d78c0d3ee4ad>", line 1, in <module>
hash(pd.offsets.Day())
TypeError: unhashable type: 'pandas._libs.tslibs.offsets.Day'
```
#### Problem description
My code relied on the hashability of offsets, so this broke it and has so far prevented me from upgrading to the latest `pandas`.
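As an interim measure on the affected 1.1.x releases, the offset's ``freqstr`` (a plain string) can stand in for the offset itself wherever a hashable key is needed:

```python
import pandas as pd

off = pd.offsets.Day()

# Keying on the string alias avoids hashing the offset object itself,
# which raises TypeError on the affected releases.
lookup = {off.freqstr: "daily"}
```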
#### Expected Output
#### Output of ``pd.show_versions()``
<details>
```
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit : db08276bc116c438d3fdee492026f8223584c477
python : 3.7.7.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.17763
machine : AMD64
processor : Intel64 Family 6 Model 58 Stepping 0, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.None
pandas : 1.1.3
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.4
setuptools : 50.3.0.post20201006
Cython : 0.29.21
pytest : 6.1.1
hypothesis : 5.37.3
sphinx : 3.2.1
blosc : 1.9.2
feather : None
xlsxwriter : 1.3.7
lxml.etree : 4.6.0
html5lib : 1.1
pymysql : 0.10.1
psycopg2 : 2.8.6 (dt dec pq3 ext lo64)
jinja2 : 2.11.2
IPython : 7.18.1
pandas_datareader: None
bs4 : 4.9.3
bottleneck : 1.3.2
fsspec : 0.8.4
fastparquet : None
gcsfs : None
matplotlib : 3.3.2
numexpr : 2.7.1
odfpy : None
openpyxl : 3.0.5
pandas_gbq : None
pyarrow : 1.0.1
pytables : None
pyxlsb : None
s3fs : None
scipy : 1.5.2
sqlalchemy : 1.3.20
tables : 3.6.1
tabulate : 0.8.7
xarray : 0.16.1
xlrd : 1.2.0
xlwt : 1.3.0
numba : 0.51.2
```
</details>
| 2020-10-20T21:25:29Z | [] | [] |
Traceback (most recent call last):
File "C:\Users\dhirschf\envs\dev\lib\site-packages\IPython\core\interactiveshell.py", line 3417, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-46-d78c0d3ee4ad>", line 1, in <module>
hash(pd.offsets.Day())
TypeError: unhashable type: 'pandas._libs.tslibs.offsets.Day'
| 14,304 |
||||
pandas-dev/pandas | pandas-dev__pandas-37302 | 4aa41b8fee4f4d5095d12bfbf644495df16c53d5 | diff --git a/doc/source/whatsnew/v1.1.4.rst b/doc/source/whatsnew/v1.1.4.rst
--- a/doc/source/whatsnew/v1.1.4.rst
+++ b/doc/source/whatsnew/v1.1.4.rst
@@ -23,6 +23,7 @@ Fixed regressions
- Fixed regression in :meth:`DataFrame.resample(...).apply(...)` raised ``AttributeError`` when input was a :class:`DataFrame` and only a :class:`Series` was evaluated (:issue:`36951`)
- Fixed regression in :class:`PeriodDtype` comparing both equal and unequal to its string representation (:issue:`37265`)
- Fixed regression in certain offsets (:meth:`pd.offsets.Day() <pandas.tseries.offsets.Day>` and below) no longer being hashable (:issue:`37267`)
+- Fixed regression in :class:`StataReader` which required ``chunksize`` to be manually set when using an iterator to read a dataset (:issue:`37280`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -469,7 +469,7 @@ class PossiblePrecisionLoss(Warning):
precision_loss_doc = """
-Column converted from %s to %s, and some data are outside of the lossless
+Column converted from {0} to {1}, and some data are outside of the lossless
conversion range. This may result in a loss of precision in the saved data.
"""
@@ -543,7 +543,7 @@ def _cast_to_stata_types(data: DataFrame) -> DataFrame:
object in a DataFrame.
"""
ws = ""
- # original, if small, if large
+ # original, if small, if large
conversion_data = (
(np.bool_, np.int8, np.int8),
(np.uint8, np.int8, np.int16),
@@ -563,7 +563,7 @@ def _cast_to_stata_types(data: DataFrame) -> DataFrame:
dtype = c_data[1]
else:
dtype = c_data[2]
- if c_data[2] == np.float64: # Warn if necessary
+ if c_data[2] == np.int64: # Warn if necessary
if data[col].max() >= 2 ** 53:
ws = precision_loss_doc.format("uint64", "float64")
@@ -627,12 +627,12 @@ def __init__(self, catarray: Series, encoding: str = "latin-1"):
self.value_labels = list(zip(np.arange(len(categories)), categories))
self.value_labels.sort(key=lambda x: x[0])
self.text_len = 0
- self.off: List[int] = []
- self.val: List[int] = []
self.txt: List[bytes] = []
self.n = 0
# Compute lengths and setup lists of offsets and labels
+ offsets: List[int] = []
+ values: List[int] = []
for vl in self.value_labels:
category = vl[1]
if not isinstance(category, str):
@@ -642,9 +642,9 @@ def __init__(self, catarray: Series, encoding: str = "latin-1"):
ValueLabelTypeMismatch,
)
category = category.encode(encoding)
- self.off.append(self.text_len)
+ offsets.append(self.text_len)
self.text_len += len(category) + 1 # +1 for the padding
- self.val.append(vl[0])
+ values.append(vl[0])
self.txt.append(category)
self.n += 1
@@ -655,8 +655,8 @@ def __init__(self, catarray: Series, encoding: str = "latin-1"):
)
# Ensure int32
- self.off = np.array(self.off, dtype=np.int32)
- self.val = np.array(self.val, dtype=np.int32)
+ self.off = np.array(offsets, dtype=np.int32)
+ self.val = np.array(values, dtype=np.int32)
# Total length
self.len = 4 + 4 + 4 * self.n + 4 * self.n + self.text_len
@@ -868,23 +868,23 @@ def __init__(self):
# with a label, but the underlying variable is -127 to 100
# we're going to drop the label and cast to int
self.DTYPE_MAP = dict(
- list(zip(range(1, 245), ["a" + str(i) for i in range(1, 245)]))
+ list(zip(range(1, 245), [np.dtype("a" + str(i)) for i in range(1, 245)]))
+ [
- (251, np.int8),
- (252, np.int16),
- (253, np.int32),
- (254, np.float32),
- (255, np.float64),
+ (251, np.dtype(np.int8)),
+ (252, np.dtype(np.int16)),
+ (253, np.dtype(np.int32)),
+ (254, np.dtype(np.float32)),
+ (255, np.dtype(np.float64)),
]
)
self.DTYPE_MAP_XML = dict(
[
- (32768, np.uint8), # Keys to GSO
- (65526, np.float64),
- (65527, np.float32),
- (65528, np.int32),
- (65529, np.int16),
- (65530, np.int8),
+ (32768, np.dtype(np.uint8)), # Keys to GSO
+ (65526, np.dtype(np.float64)),
+ (65527, np.dtype(np.float32)),
+ (65528, np.dtype(np.int32)),
+ (65529, np.dtype(np.int16)),
+ (65530, np.dtype(np.int8)),
]
)
# error: Argument 1 to "list" has incompatible type "str";
@@ -1045,9 +1045,10 @@ def __init__(
self._order_categoricals = order_categoricals
self._encoding = ""
self._chunksize = chunksize
- if self._chunksize is not None and (
- not isinstance(chunksize, int) or chunksize <= 0
- ):
+ self._using_iterator = False
+ if self._chunksize is None:
+ self._chunksize = 1
+ elif not isinstance(chunksize, int) or chunksize <= 0:
raise ValueError("chunksize must be a positive integer when set.")
# State variables for the file
@@ -1057,7 +1058,7 @@ def __init__(
self._column_selector_set = False
self._value_labels_read = False
self._data_read = False
- self._dtype = None
+ self._dtype: Optional[np.dtype] = None
self._lines_read = 0
self._native_byteorder = _set_endianness(sys.byteorder)
@@ -1193,7 +1194,7 @@ def _read_new_header(self) -> None:
# Get data type information, works for versions 117-119.
def _get_dtypes(
self, seek_vartypes: int
- ) -> Tuple[List[Union[int, str]], List[Union[int, np.dtype]]]:
+ ) -> Tuple[List[Union[int, str]], List[Union[str, np.dtype]]]:
self.path_or_buf.seek(seek_vartypes)
raw_typlist = [
@@ -1518,11 +1519,8 @@ def _read_strls(self) -> None:
self.GSO[str(v_o)] = decoded_va
def __next__(self) -> DataFrame:
- if self._chunksize is None:
- raise ValueError(
- "chunksize must be set to a positive integer to use as an iterator."
- )
- return self.read(nrows=self._chunksize or 1)
+ self._using_iterator = True
+ return self.read(nrows=self._chunksize)
def get_chunk(self, size: Optional[int] = None) -> DataFrame:
"""
@@ -1690,11 +1688,15 @@ def any_startswith(x: str) -> bool:
convert = False
for col in data:
dtype = data[col].dtype
- if dtype in (np.float16, np.float32):
- dtype = np.float64
+ if dtype in (np.dtype(np.float16), np.dtype(np.float32)):
+ dtype = np.dtype(np.float64)
convert = True
- elif dtype in (np.int8, np.int16, np.int32):
- dtype = np.int64
+ elif dtype in (
+ np.dtype(np.int8),
+ np.dtype(np.int16),
+ np.dtype(np.int32),
+ ):
+ dtype = np.dtype(np.int64)
convert = True
retyped_data.append((col, data[col].astype(dtype)))
if convert:
@@ -1806,14 +1808,14 @@ def _do_convert_categoricals(
keys = np.array(list(vl.keys()))
column = data[col]
key_matches = column.isin(keys)
- if self._chunksize is not None and key_matches.all():
- initial_categories = keys
+ if self._using_iterator and key_matches.all():
+ initial_categories: Optional[np.ndarray] = keys
# If all categories are in the keys and we are iterating,
# use the same keys for all chunks. If some are missing
# value labels, then we will fall back to the categories
# varying across chunks.
else:
- if self._chunksize is not None:
+ if self._using_iterator:
# warn is using an iterator
warnings.warn(
categorical_conversion_warning, CategoricalConversionWarning
@@ -2024,7 +2026,7 @@ def _convert_datetime_to_stata_type(fmt: str) -> np.dtype:
"ty",
"%ty",
]:
- return np.float64 # Stata expects doubles for SIFs
+ return np.dtype(np.float64) # Stata expects doubles for SIFs
else:
raise NotImplementedError(f"Format {fmt} not implemented")
| BUG: regression: pandas.read_stata(filename, iterator=True) raises ValueError
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
```python
>>> import pandas
>>> # for example https://gitlab.com/ViDA-NYU/datamart/datamart/-/blob/master/tests/data/stata118.dta
>>> iterator = pandas.read_stata(stata_file_name, iterator=True)
>>> list(iterator)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "site-packages/pandas/io/stata.py", line 1523, in __next__
raise ValueError(
ValueError: chunksize must be set to a positive integer to use as an iterator.
```
#### Problem description
`read_stata(filename, iterator=True)` no longer works in pandas 1.1.3. **It worked in pandas 1.0.5.**
#### Expected Output
DataFrame is loaded correctly
#### Output of ``pd.show_versions()``
<details>
```
INSTALLED VERSIONS
------------------
commit : db08276bc116c438d3fdee492026f8223584c477
python : 3.8.5.final.0
python-bits : 64
OS : Linux
OS-release : 5.4.0-51-generic
Version : #56-Ubuntu SMP Mon Oct 5 14:28:49 UTC 2020
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.1.3
numpy : 1.19.2
pytz : 2020.1
dateutil : 2.8.1
pip : 20.0.2
setuptools : 44.0.0
Cython : None
pytest : None
hypothesis : None
sphinx : 3.2.1
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.14.0
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : 0.8.4
fastparquet : None
gcsfs : 0.7.1
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pyxlsb : None
s3fs : 0.4.2
scipy : 1.5.3
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : 1.2.0
xlwt : None
numba : None
```
</details>
| (apologies for the forgotten title, that's a first...)
I have checked that it happens on 1.1.0 and master as well. So the problem must have been introduced between 1.0.5 and 1.1.0.
#31072 looked like a prime suspect but b54aaf7b works for me.
Bisecting shows that 035e1fe8 is the culprit (#34128).
Default chunksize needs to be set to 1.
https://github.com/pandas-dev/pandas/blob/2f552830497998243d12ad461ed1e42d4073ca2b/pandas/io/stata.py#L1521
Workaround is to pass a chunksize.
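A minimal round-trip illustrating that workaround (the ``.dta`` file is created here just for the demonstration):

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame({"a": [1.0, 2.0, 3.0], "b": [4.0, 5.0, 6.0]})
path = os.path.join(tempfile.mkdtemp(), "demo.dta")
df.to_stata(path, write_index=False)

# Passing chunksize explicitly restores iteration on affected releases
chunks = list(pd.read_stata(path, iterator=True, chunksize=2))
```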
Thanks for the workaround, I might do this in the interim. | 2020-10-21T08:15:17Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "site-packages/pandas/io/stata.py", line 1523, in __next__
raise ValueError(
ValueError: chunksize must be set to a positive integer to use as an iterator.
| 14,305 |
|||
pandas-dev/pandas | pandas-dev__pandas-37432 | fabf03af9f1e79a6e2ebc2c31974e8ec8da9aec4 | diff --git a/doc/source/whatsnew/v1.1.4.rst b/doc/source/whatsnew/v1.1.4.rst
--- a/doc/source/whatsnew/v1.1.4.rst
+++ b/doc/source/whatsnew/v1.1.4.rst
@@ -26,6 +26,7 @@ Fixed regressions
- Fixed regression where slicing :class:`DatetimeIndex` raised :exc:`AssertionError` on irregular time series with ``pd.NaT`` or on unsorted indices (:issue:`36953` and :issue:`35509`)
- Fixed regression in certain offsets (:meth:`pd.offsets.Day() <pandas.tseries.offsets.Day>` and below) no longer being hashable (:issue:`37267`)
- Fixed regression in :class:`StataReader` which required ``chunksize`` to be manually set when using an iterator to read a dataset (:issue:`37280`)
+- Fixed regression in setitem with :meth:`DataFrame.iloc` which raised error when trying to set a value while filtering with a boolean list (:issue:`36741`)
- Fixed regression in :attr:`MultiIndex.is_monotonic_increasing` returning wrong results with ``NaN`` in at least one of the levels (:issue:`37220`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1670,8 +1670,6 @@ def _setitem_with_indexer(self, indexer, value):
"length than the value"
)
- pi = plane_indexer[0] if lplane_indexer == 1 else plane_indexer
-
# we need an iterable, with a ndim of at least 1
# eg. don't pass through np.array(0)
if is_list_like_indexer(value) and getattr(value, "ndim", 1) > 0:
@@ -1698,7 +1696,7 @@ def _setitem_with_indexer(self, indexer, value):
else:
v = np.nan
- self._setitem_single_column(loc, v, pi)
+ self._setitem_single_column(loc, v, plane_indexer)
elif not unique_cols:
raise ValueError(
@@ -1716,7 +1714,7 @@ def _setitem_with_indexer(self, indexer, value):
else:
v = np.nan
- self._setitem_single_column(loc, v, pi)
+ self._setitem_single_column(loc, v, plane_indexer)
# we have an equal len ndarray/convertible to our labels
# hasattr first, to avoid coercing to ndarray without reason.
@@ -1735,7 +1733,9 @@ def _setitem_with_indexer(self, indexer, value):
for i, loc in enumerate(ilocs):
# setting with a list, re-coerces
- self._setitem_single_column(loc, value[:, i].tolist(), pi)
+ self._setitem_single_column(
+ loc, value[:, i].tolist(), plane_indexer
+ )
elif (
len(labels) == 1
@@ -1744,7 +1744,7 @@ def _setitem_with_indexer(self, indexer, value):
):
# we have an equal len list/ndarray
# We only get here with len(labels) == len(ilocs) == 1
- self._setitem_single_column(ilocs[0], value, pi)
+ self._setitem_single_column(ilocs[0], value, plane_indexer)
elif lplane_indexer == 0 and len(value) == len(self.obj.index):
# We get here in one case via .loc with a all-False mask
@@ -1759,12 +1759,12 @@ def _setitem_with_indexer(self, indexer, value):
)
for loc, v in zip(ilocs, value):
- self._setitem_single_column(loc, v, pi)
+ self._setitem_single_column(loc, v, plane_indexer)
else:
# scalar value
for loc in ilocs:
- self._setitem_single_column(loc, value, pi)
+ self._setitem_single_column(loc, value, plane_indexer)
else:
self._setitem_single_block(indexer, value)
| BUG: Setting values with ILOC and list like indexers raises
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [x] (optional) I have confirmed this bug exists on the master branch of pandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
import pandas as pd
df = pd.DataFrame({"flag": ["x", "y"], "value": [1, 2]})
df.iloc[[True, False], 1] = df.iloc[[True, False], 1] * 2
print(df)
```
#### Problem description
This worked on 1.0.5 and returned
```
flag value
0 x 2
1 y 2
```
On master this raises:
```
Traceback (most recent call last):
File "/home/developer/.config/JetBrains/PyCharm2020.2/scratches/scratch_5.py", line 221, in <module>
df.iloc[[True, False], 1] = x
File "/home/developer/PycharmProjects/pandas/pandas/core/indexing.py", line 681, in __setitem__
iloc._setitem_with_indexer(indexer, value)
File "/home/developer/PycharmProjects/pandas/pandas/core/indexing.py", line 1756, in _setitem_with_indexer
self._setitem_single_column(ilocs[0], value, pi)
File "/home/developer/PycharmProjects/pandas/pandas/core/indexing.py", line 1800, in _setitem_single_column
ser._mgr = ser._mgr.setitem(indexer=pi, value=value)
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/managers.py", line 532, in setitem
return self.apply("setitem", indexer=indexer, value=value)
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/managers.py", line 397, in apply
applied = getattr(b, f)(**kwargs)
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/blocks.py", line 923, in setitem
check_setitem_lengths(indexer, value, values)
File "/home/developer/PycharmProjects/pandas/pandas/core/indexers.py", line 158, in check_setitem_lengths
raise ValueError(
ValueError: cannot set using a list-like indexer with a different length than the value
Process finished with exit code 1
```
Was this change of behavior intended?
#### Expected Output
I would expect this to work and return the results from 1.0.5.
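Until the regression is fixed, converting the boolean mask to positional indices with ``np.flatnonzero`` sidesteps the failing length check:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"flag": ["x", "y"], "value": [1, 2]})
mask = np.array([True, False])

# Integer positions take the setitem code path that still works on 1.1.x
idx = np.flatnonzero(mask)
df.iloc[idx, 1] = df.iloc[idx, 1].to_numpy() * 2
```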
#### Output of ``pd.show_versions()``
<details>
master
</details>
| Thanks @phofl
```
3da053c7d1b1786689a81b143a7afb57eb4762ad is the first bad commit
commit 3da053c7d1b1786689a81b143a7afb57eb4762ad
Author: jbrockmendel <jbrockmendel@gmail.com>
Date:   Mon Feb 17 16:15:45 2020 -0800

    BUG: fix length_of_indexer with boolean mask (#31897)

 pandas/core/indexers.py                |  8 +++++++-
 pandas/core/indexing.py                | 28 +++++++---------------------
 pandas/tests/indexing/test_indexers.py | 11 +++++++++++
 3 files changed, 25 insertions(+), 22 deletions(-)
 create mode 100644 pandas/tests/indexing/test_indexers.py
```
https://github.com/pandas-dev/pandas/pull/31897
cc @jbrockmendel
Not 100% sure, but it looks like before we get to that point in Block.setitem we want to have `value = 2` instead of `value = np.array([2])`. It'll take some more digging to see exactly where that unpacking should take place.
cc @jbrockmendel
I looked a bit into this and found that
https://github.com/pandas-dev/pandas/blob/18b4864b7d8638c3101ec11fd6562d2fbfe872a8/pandas/core/indexing.py#L1673
casts the ``plane_indexer`` to a list when only one element of the indexer is True. This seems to be the reason why
```
df = pd.DataFrame({"flag": ["x", "y", "z"], "value": [1, 3, 4]})
df.iloc[[True, False, True], 1] = df.iloc[[True, False, True], 1] * 2
```
works, while
```
df = pd.DataFrame({"flag": ["x", "y", "z"], "value": [1, 3, 4]})
df.iloc[[True, False, False], 1] = df.iloc[[True, False, False], 1] * 2
```
does not. The check raising the error is only run when ``pi`` is a list. This line does not seem to be necessary. I ran all tests in ``indexing`` and ``indexes`` without failure after deleting it. Do you remember why you introduced it in #31837?
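Until the regression is fixed, the same conditional update can be written without going through the masked-setitem path at all — a workaround sketch using ``np.where`` (not part of the original report):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"flag": ["x", "y"], "value": [1, 2]})
mask = np.array([True, False])

# build the whole column in one step instead of assigning through a
# boolean indexer, so no indexer-length check is ever triggered
df["value"] = np.where(mask, df["value"] * 2, df["value"])
print(df["value"].tolist())  # [2, 2]
```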
Traceback (most recent call last):
File "/home/developer/.config/JetBrains/PyCharm2020.2/scratches/scratch_5.py", line 221, in <module>
df.iloc[[True, False], 1] = x
File "/home/developer/PycharmProjects/pandas/pandas/core/indexing.py", line 681, in __setitem__
iloc._setitem_with_indexer(indexer, value)
File "/home/developer/PycharmProjects/pandas/pandas/core/indexing.py", line 1756, in _setitem_with_indexer
self._setitem_single_column(ilocs[0], value, pi)
File "/home/developer/PycharmProjects/pandas/pandas/core/indexing.py", line 1800, in _setitem_single_column
ser._mgr = ser._mgr.setitem(indexer=pi, value=value)
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/managers.py", line 532, in setitem
return self.apply("setitem", indexer=indexer, value=value)
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/managers.py", line 397, in apply
applied = getattr(b, f)(**kwargs)
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/blocks.py", line 923, in setitem
check_setitem_lengths(indexer, value, values)
File "/home/developer/PycharmProjects/pandas/pandas/core/indexers.py", line 158, in check_setitem_lengths
raise ValueError(
ValueError: cannot set using a list-like indexer with a different length than the value
| 14,331 |
|||
pandas-dev/pandas | pandas-dev__pandas-37499 | 5532ae8bfb5e6a0203172f36c0738e9c345956a6 | diff --git a/doc/source/whatsnew/v1.1.4.rst b/doc/source/whatsnew/v1.1.4.rst
--- a/doc/source/whatsnew/v1.1.4.rst
+++ b/doc/source/whatsnew/v1.1.4.rst
@@ -15,6 +15,7 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
- Fixed regression in :func:`read_csv` raising a ``ValueError`` when ``names`` was of type ``dict_keys`` (:issue:`36928`)
+- Fixed regression in :func:`read_csv` with more than 1M rows and specifying a ``index_col`` argument (:issue:`37094`)
- Fixed regression where attempting to mutate a :class:`DateOffset` object would no longer raise an ``AttributeError`` (:issue:`36940`)
- Fixed regression where :meth:`DataFrame.agg` would fail with :exc:`TypeError` when passed positional arguments to be passed on to the aggregation function (:issue:`36948`).
- Fixed regression in :class:`RollingGroupby` with ``sort=False`` not being respected (:issue:`36889`)
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -441,7 +441,7 @@ def isin(comps: AnyArrayLike, values: AnyArrayLike) -> np.ndarray:
if len(comps) > 1_000_000 and not is_object_dtype(comps):
# If the the values include nan we need to check for nan explicitly
# since np.nan it not equal to np.nan
- if np.isnan(values).any():
+ if isna(values).any():
f = lambda c, v: np.logical_or(np.in1d(c, v), np.isnan(c))
else:
f = np.in1d
| BUG: Pandas 1.1.3 read_csv raises a TypeError when dtype, and index_col are provided, and file has >1M rows
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
```python
import pandas as pd
import numpy as np
ROWS = 1000001 # <--------- with 1000000, it works
with open('out.dat', 'w') as fd:
for i in range(ROWS):
fd.write('%d\n' % i)
df = pd.read_csv('out.dat', names=['a'], dtype={'a': np.float64}, index_col=['a'])
```
#### Problem description
When `ROWS = 1000001`, I get the following traceback:
```
Traceback (most recent call last):
File "try.py", line 10, in <module>
df = pd.read_csv('out.dat', names=['a'], dtype={'a': np.float64}, index_col=['a'])
File "/tmp/new_pandas/lib64/python3.6/site-packages/pandas/io/parsers.py", line 686, in read_csv
return _read(filepath_or_buffer, kwds)
File "/tmp/new_pandas/lib64/python3.6/site-packages/pandas/io/parsers.py", line 458, in _read
data = parser.read(nrows)
File "/tmp/new_pandas/lib64/python3.6/site-packages/pandas/io/parsers.py", line 1196, in read
ret = self._engine.read(nrows)
File "/tmp/new_pandas/lib64/python3.6/site-packages/pandas/io/parsers.py", line 2231, in read
index, names = self._make_index(data, alldata, names)
File "/tmp/new_pandas/lib64/python3.6/site-packages/pandas/io/parsers.py", line 1677, in _make_index
index = self._agg_index(index)
File "/tmp/new_pandas/lib64/python3.6/site-packages/pandas/io/parsers.py", line 1770, in _agg_index
arr, _ = self._infer_types(arr, col_na_values | col_na_fvalues)
File "/tmp/new_pandas/lib64/python3.6/site-packages/pandas/io/parsers.py", line 1871, in _infer_types
mask = algorithms.isin(values, list(na_values))
File "/tmp/new_pandas/lib64/python3.6/site-packages/pandas/core/algorithms.py", line 443, in isin
if np.isnan(values).any():
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
```
#### Expected Output
With pandas 1.1.2, or ROWS = 1000000, it works fine.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : db08276bc116c438d3fdee492026f8223584c477
python : 3.6.3.final.0
python-bits : 64
OS : Linux
OS-release : 3.10.0-957.38.3.el7.x86_64
Version : #1 SMP Mon Nov 11 12:01:33 EST 2019
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.1.3
numpy : 1.19.2
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.3
setuptools : 50.3.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 7.16.1
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
</details>
| Possibly caused by #36266 (having some trouble running bisects on my end)
problem arises because the default `values` (na_values) for read_csv is `array(['', 'NULL', '#N/A', 'N/A', '1.#QNAN', 'nan', '#NA', '-1.#QNAN',
'<NA>', '1.#IND', 'n/a', '-nan', '-1.#IND', '#N/A N/A', 'null',
'-NaN', 'NaN', 'NA'], dtype=object)`
I agree, the code here:
https://github.com/pandas-dev/pandas/pull/36266/files#diff-c8f3ad29eaf121537b999e88e9117f3e3702d0b818a67516da25093fe2890ce8R442
Is suspicious.
Confirmed this is a regression compared to 1.0.x. Thanks for the report!
I can confirm I have the same problem, which arises as soon as I pass the threshold of 1M rows.
I only need to specify `index_col` to get the bug, though. Specifying `dtypes` is not needed.
Pandas 1.1.3
---
And as a temporary workaround, I am reading without `index_col`, and then setting my index. E.g.
```python
df = pd.read_csv(filepath, nrows=1000001)
df = df.set_index(df.columns[0])  # set_index returns a new frame; use the first column as the index
```
I have the same problem. any solution?
> I have the same problem. any solution?
my solution was to downgrade to 1.1.2 and it works.
more minimal example not involving read_csv
```
>>> import numpy as np
>>> import pandas as pd
>>> pd.__version__
'1.1.3'
>>> ser = pd.Series([1, 2, np.nan] * 1_000_000)
>>> ser.isin({"foo", "bar"})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\simon\anaconda3\envs\pandas-1.1.3\lib\site-packages\pandas\core
\series.py", line 4685, in isin
result = algorithms.isin(self, values)
File "C:\Users\simon\anaconda3\envs\pandas-1.1.3\lib\site-packages\pandas\core
\algorithms.py", line 443, in isin
if np.isnan(values).any():
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could
not be safely coerced to any supported types according to the casting rule ''sa
fe''
>>>
```
```
>>> import numpy as np
>>> import pandas as pd
>>> pd.__version__
'1.0.5'
>>> ser = pd.Series([1, 2, np.nan] * 1_000_000)
>>> ser.isin({"foo", "bar"})
C:\Users\simon\anaconda3\envs\pandas-1.0.5\lib\site-packages\numpy\lib\arrayseto
ps.py:580: FutureWarning: elementwise comparison failed; returning scalar instea
d, but in the future will perform elementwise comparison
mask |= (ar1 == a)
0 False
1 False
2 False
3 False
4 False
...
2999995 False
2999996 False
2999997 False
2999998 False
2999999 False
Length: 3000000, dtype: bool
>>>
>>>
```
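The two checks can be contrasted directly on an object array of the kind ``read_csv`` passes in — a minimal sketch (the array contents are an abbreviated stand-in for the default ``na_values``):

```python
import numpy as np
import pandas as pd

values = np.array(["", "NULL", "nan", "NaN"], dtype=object)

# np.isnan only understands numeric input, so it raises on this array
try:
    np.isnan(values)
except TypeError as err:
    print("np.isnan failed:", err)

# pandas.isna performs an element-wise missing-value check on any dtype
print(pd.isna(values))  # none of these *strings* are missing values
```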
| 2020-10-29T21:33:13Z | [] | [] |
Traceback (most recent call last):
File "try.py", line 10, in <module>
df = pd.read_csv('out.dat', names=['a'], dtype={'a': np.float64}, index_col=['a'])
File "/tmp/new_pandas/lib64/python3.6/site-packages/pandas/io/parsers.py", line 686, in read_csv
return _read(filepath_or_buffer, kwds)
File "/tmp/new_pandas/lib64/python3.6/site-packages/pandas/io/parsers.py", line 458, in _read
data = parser.read(nrows)
File "/tmp/new_pandas/lib64/python3.6/site-packages/pandas/io/parsers.py", line 1196, in read
ret = self._engine.read(nrows)
File "/tmp/new_pandas/lib64/python3.6/site-packages/pandas/io/parsers.py", line 2231, in read
index, names = self._make_index(data, alldata, names)
File "/tmp/new_pandas/lib64/python3.6/site-packages/pandas/io/parsers.py", line 1677, in _make_index
index = self._agg_index(index)
File "/tmp/new_pandas/lib64/python3.6/site-packages/pandas/io/parsers.py", line 1770, in _agg_index
arr, _ = self._infer_types(arr, col_na_values | col_na_fvalues)
File "/tmp/new_pandas/lib64/python3.6/site-packages/pandas/io/parsers.py", line 1871, in _infer_types
mask = algorithms.isin(values, list(na_values))
File "/tmp/new_pandas/lib64/python3.6/site-packages/pandas/core/algorithms.py", line 443, in isin
if np.isnan(values).any():
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
| 14,343 |
|||
pandas-dev/pandas | pandas-dev__pandas-37675 | 8b05fe3bd20965ba64477f6c96dbf674d9f155ff | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -464,6 +464,7 @@ Indexing
- Bug in indexing on a :class:`Series` or :class:`DataFrame` with a :class:`MultiIndex` with a level named "0" (:issue:`37194`)
- Bug in :meth:`Series.__getitem__` when using an unsigned integer array as an indexer giving incorrect results or segfaulting instead of raising ``KeyError`` (:issue:`37218`)
- Bug in :meth:`Index.where` incorrectly casting numeric values to strings (:issue:`37591`)
+- Bug in :meth:`Series.loc` and :meth:`DataFrame.loc` raises when numeric label was given for object :class:`Index` although label was in :class:`Index` (:issue:`26491`)
Missing
^^^^^^^
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -5200,13 +5200,8 @@ def _maybe_cast_slice_bound(self, label, side: str_t, kind):
# We are a plain index here (sub-class override this method if they
# wish to have special treatment for floats/ints, e.g. Float64Index and
# datetimelike Indexes
- # reject them
- if is_float(label):
- self._invalid_indexer("slice", label)
-
- # we are trying to find integer bounds on a non-integer based index
- # this is rejected (generally .loc gets you here)
- elif is_integer(label):
+ # reject them, if index does not contain label
+ if (is_float(label) or is_integer(label)) and label not in self.values:
self._invalid_indexer("slice", label)
return label
| Label-based indexing on a Series with an index of dtype=object raises TypeError when using slices with integer bound
When indexing a pandas axis that has an `index` of `dtype=object`, with label-based indexing, passing a slice that contains an integer bound results in a `TypeError`, even if the integer *is indeed a label* in the `index` of the series.
### Details
#### Code Sample
```python
import pandas as pd
series = pd.Series(range(4), index=[1, 'spam', 2, 'eggs'])
series
## 1 0
## spam 1
## 2 2
## eggs 3
## dtype: int64
series.index
## Index([1, 'spam', 2, 'eggs'], dtype='object')
series.loc['spam':'eggs']
## spam 1
## 2 2
## eggs 3
## dtype: int64
series.loc[1:'eggs']
# raises TypeError
```
#### Problem description
When indexing a `pd.Series` (or an equivalent pandas object) that has an `index` of `dtype=object` (upcast, for example, from a list of `int`s and `str`s), with `.loc`'s label-based indexing, passing a slice that contains an integer bound (as in `1:'eggs'`) results in a `TypeError`, even if the integer *is indeed a label* in the `index` of the series:
```text
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Paolo\Code\PycharmProjects\pandas\pandas\core\indexing.py", line 1504, in __getitem__
return self._getitem_axis(maybe_callable, axis=axis)
File "C:\Users\Paolo\Code\PycharmProjects\pandas\pandas\core\indexing.py", line 1871, in _getitem_axis
return self._get_slice_axis(key, axis=axis)
File "C:\Users\Paolo\Code\PycharmProjects\pandas\pandas\core\indexing.py", line 1537, in _get_slice_axis
slice_obj.step, kind=self.name)
File "C:\Users\Paolo\Code\PycharmProjects\pandas\pandas\core\indexes\base.py", line 4784, in slice_indexer
kind=kind)
File "C:\Users\Paolo\Code\PycharmProjects\pandas\pandas\core\indexes\base.py", line 5002, in slice_locs
start_slice = self.get_slice_bound(start, 'left', kind)
File "C:\Users\Paolo\Code\PycharmProjects\pandas\pandas\core\indexes\base.py", line 4914, in get_slice_bound
label = self._maybe_cast_slice_bound(label, side, kind)
File "C:\Users\Paolo\Code\PycharmProjects\pandas\pandas\core\indexes\base.py", line 4861, in _maybe_cast_slice_bound
self._invalid_indexer('slice', label)
File "C:\Users\Paolo\Code\PycharmProjects\pandas\pandas\core\indexes\base.py", line 3154, in _invalid_indexer
kind=type(key)))
TypeError: cannot do slice indexing on <class 'pandas.core.indexes.base.Index'> with these indexers [1] of <class 'int'>
```
Other non-integer (and non-float) slice bounds are accepted, as shown above.
Is this the expected behaviour? Possibly it's a documentation issue. According to the section [Selection by Label](http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#indexing-label) in the indexing doc page, the second warning says:
> .loc is strict when you present slicers that are not compatible (or convertible) with the index type. For example using integers in a DatetimeIndex. These will raise a TypeError.
What exactly is meant by "compatible or convertible"? An `int` is definitely an `object`, so it should be compatible with the index type.
Furthermore, the first paragraph in the section reads (emphasis mine):
> pandas provides a suite of methods in order to have purely label based indexing. This is a strict inclusion based protocol. Every label asked for must be in the index, or a KeyError will be raised. When slicing, both the start bound AND the stop bound are included, if present in the index. **Integers are valid labels**, but they refer to the label and not the position.
And, finally, the subsection [Slicing with labels](http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#slicing-with-labels) says:
> When using .loc with slices, if both the start and the stop labels are present in the index, then elements located between the two (including them) are returned
By reading this, to me it would seem that using a slice such as `1:'eggs'` or `1:2` in the above example with `loc` should be perfectly valid. The indexers are present in the index, and, as stated above, integers are valid labels.
----------------------------
The exception originates at the `_maybe_cast_slice_bound` check, which rejects all integers regardless of whether the `Index` contains any integers:
https://github.com/pandas-dev/pandas/blob/6d2398a58fda68e40f116f199439504558c7774c/pandas/core/indexes/base.py#L4846-L4863
#### Expected Output
I expected `series.loc[1:'eggs']` not to raise `TypeError` because of the integer label. I expected that expression to return the following slice view of the `Series`:
```text
1 0
spam 1
2 2
eggs 3
dtype: int64
```
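In the meantime, the same slice can be taken by translating both labels to positions first, sidestepping the bound check entirely — a workaround sketch, not part of the original report:

```python
import pandas as pd

series = pd.Series(range(4), index=[1, "spam", 2, "eggs"])

# look up the positions of both labels, then slice positionally
start = series.index.get_loc(1)
stop = series.index.get_loc("eggs")
result = series.iloc[start : stop + 1]
print(result.tolist())  # [0, 1, 2, 3]
```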
#### Output of ``pd.show_versions()``
<details>
```text
INSTALLED VERSIONS
------------------
commit: 6d2398a58fda68e40f116f199439504558c7774c
python: 3.7.3.final.0
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
pandas: 0.25.0.dev0+598.g6d2398a58
pytest: 4.5.0
pip: 19.1.1
setuptools: 41.0.1
Cython: 0.29.7
numpy: 1.16.3
scipy: 1.3.0
pyarrow: 0.13.0
xarray: 0.12.1
IPython: 7.5.0
sphinx: 2.0.1
patsy: 0.5.1
dateutil: 2.8.0
pytz: 2019.1
blosc: 1.8.1
bottleneck: 1.2.1
tables: 3.5.1
numexpr: 2.6.9
feather: None
matplotlib: 3.1.0
openpyxl: 2.6.2
xlrd: 1.2.0
xlwt: 1.3.0
xlsxwriter: 1.1.8
lxml.etree: 4.3.3
bs4: 4.7.1
html5lib: 1.0.1
sqlalchemy: 1.3.3
pymysql: None
psycopg2: None
jinja2: 2.10.1
s3fs: 0.2.1
fastparquet: 0.3.1
pandas_gbq: None
pandas_datareader: None
gcsfs: None
```
</details>
| This raising probably isn't intentional.
Maybe it should first check if `self.holds_integer`?
| 2020-11-06T22:15:45Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Paolo\Code\PycharmProjects\pandas\pandas\core\indexing.py", line 1504, in __getitem__
return self._getitem_axis(maybe_callable, axis=axis)
File "C:\Users\Paolo\Code\PycharmProjects\pandas\pandas\core\indexing.py", line 1871, in _getitem_axis
return self._get_slice_axis(key, axis=axis)
File "C:\Users\Paolo\Code\PycharmProjects\pandas\pandas\core\indexing.py", line 1537, in _get_slice_axis
slice_obj.step, kind=self.name)
File "C:\Users\Paolo\Code\PycharmProjects\pandas\pandas\core\indexes\base.py", line 4784, in slice_indexer
kind=kind)
File "C:\Users\Paolo\Code\PycharmProjects\pandas\pandas\core\indexes\base.py", line 5002, in slice_locs
start_slice = self.get_slice_bound(start, 'left', kind)
File "C:\Users\Paolo\Code\PycharmProjects\pandas\pandas\core\indexes\base.py", line 4914, in get_slice_bound
label = self._maybe_cast_slice_bound(label, side, kind)
File "C:\Users\Paolo\Code\PycharmProjects\pandas\pandas\core\indexes\base.py", line 4861, in _maybe_cast_slice_bound
self._invalid_indexer('slice', label)
File "C:\Users\Paolo\Code\PycharmProjects\pandas\pandas\core\indexes\base.py", line 3154, in _invalid_indexer
kind=type(key)))
TypeError: cannot do slice indexing on <class 'pandas.core.indexes.base.Index'> with these indexers [1] of <class 'int'>
| 14,369 |
|||
pandas-dev/pandas | pandas-dev__pandas-37778 | 2eb353063d6bc4f2ed22e9943d26465e284b8393 | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -543,6 +543,8 @@ Groupby/resample/rolling
- Bug in :meth:`df.groupby(..).quantile() <pandas.core.groupby.DataFrameGroupBy.quantile>` and :meth:`df.resample(..).quantile() <pandas.core.resample.Resampler.quantile>` raised ``TypeError`` when values were of type ``Timedelta`` (:issue:`29485`)
- Bug in :meth:`Rolling.median` and :meth:`Rolling.quantile` returned wrong values for :class:`BaseIndexer` subclasses with non-monotonic starting or ending points for windows (:issue:`37153`)
- Bug in :meth:`DataFrame.groupby` dropped ``nan`` groups from result with ``dropna=False`` when grouping over a single column (:issue:`35646`, :issue:`35542`)
+- Bug in :meth:`DataFrameGroupBy.head`, :meth:`DataFrameGroupBy.tail`, :meth:`SeriesGroupBy.head`, and :meth:`SeriesGroupBy.tail` would raise when used with ``axis=1`` (:issue:`9772`)
+
Reshaping
^^^^^^^^^
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -2742,7 +2742,10 @@ def head(self, n=5):
"""
self._reset_group_selection()
mask = self._cumcount_array() < n
- return self._selected_obj[mask]
+ if self.axis == 0:
+ return self._selected_obj[mask]
+ else:
+ return self._selected_obj.iloc[:, mask]
@Substitution(name="groupby")
@Substitution(see_also=_common_see_also)
@@ -2776,7 +2779,10 @@ def tail(self, n=5):
"""
self._reset_group_selection()
mask = self._cumcount_array(ascending=False) < n
- return self._selected_obj[mask]
+ if self.axis == 0:
+ return self._selected_obj[mask]
+ else:
+ return self._selected_obj.iloc[:, mask]
def _reindex_output(
self, output: OutputFrameOrSeries, fill_value: Scalar = np.NaN
| Groupby built by columns : cannot use .head() or .apply()
``` python
import numpy as np
import pandas as pd
df = pd.DataFrame({i:pd.Series(np.random.normal(size=10),
index=range(10)) for i in range(11)})
df_g = df.groupby(['a']*6+['b']*5, axis=1)
```
If I understood correctly, this should build a groupby object that groups the columns, and so make it possible to aggregate them later. And indeed:
``` python
df_g.sum()
```
works well. But
``` python
df_g.head()
```
Throws an error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jonas/Code/pandas/pandas/core/groupby.py", line 986, in head
in_head = self._cumcount_array() < n
File "/home/jonas/Code/pandas/pandas/core/groupby.py", line 1044, in _cumcount_array
cumcounts[indices] = values
IndexError: index 10 is out of bounds for axis 1 with size 10
```
and
``` python
df_g.apply(lambda x : x.sum())
```
from which I expected the same result as the first example, gives this table :
```
a b
0 -0.381070 NaN
1 -1.214075 NaN
2 -1.496252 NaN
3 3.392565 NaN
4 -0.782376 NaN
5 1.306043 NaN
6 NaN -1.772334
7 NaN 4.125280
8 NaN 1.992329
9 NaN 4.283854
10 NaN -4.791092
```
I didn't really get what's happening; I can't rule out a misunderstanding or an error on my part.
```
pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.8.final.0
python-bits: 64
OS: Linux
OS-release: 3.13.0-46-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: fr_FR.UTF-8
pandas: 0.16.0-28-gcb8c130
nose: 1.3.4
Cython: 0.20.2
numpy: 1.9.2
scipy: 0.14.0
statsmodels: None
IPython: 3.0.0-dev
sphinx: 1.2.2
patsy: None
dateutil: 2.4.1
pytz: 2015.2
bottleneck: None
tables: 3.1.1
numexpr: 2.4
matplotlib: 1.4.3
openpyxl: 1.7.0
xlrd: 0.9.2
xlwt: 0.7.5
xlsxwriter: None
lxml: None
bs4: None
html5lib: 0.999
httplib2: None
apiclient: None
sqlalchemy: 0.9.7
pymysql: None
psycopg2: 2.5.3 (dt dec mx pq3 ext)
```
| These do look like bugs to me -- thanks for the report! If you're interested in digging in to figure out what's going on, such efforts would be appreciated :).
this is an error in `cumcount_array`.
In the end, there seem to be two different bugs.
The problem with head() may not be so relevant, because if you group columns, showing the head of each group just shows the head of the whole df. Yet the error is still there and should maybe be addressed with an exception or a warning (because at the moment the bug is silent when you have more rows than columns).
The problem with apply seems to be a bit deeper. The given function is applied without passing along the 'axis' argument given in the initial groupby call. I couldn't find where this 'axis' argument is stored, if it is stored at all. Would you agree with setting a new attribute on the BaseGrouper class, so as to record the orientation of the build? If so, I will propose a correction.
no, it just needs to be passed through
I don't think the issue with `apply` is actually a bug. In the original example, it's actually not possible to pass in the axis argument even if we knew to try:
``` .python
>>> df_g.apply(lambda x : x.sum(), axis=1)
...
TypeError: <lambda>() got an unexpected keyword argument 'axis'
```
If you want to pass an extra argument, you can add an argument to the lambda and pass it to `apply`, or you could just pass it directly:
``` .python
>>> df_g.apply(lambda x, axis : x.sum(axis=axis), axis=1)
>>> df_g.apply(lambda x: x.sum(axis=1))
```
We _could_ actually inspect the arguments of the passed-in function using reflection, and pass through the axis parameter whenever it takes an argument named 'axis', but that seems like it might be overkill:
``` .python
>>> import inspect
>>> inspect.getargspec(lambda x, axis : x.sum(axis=axis))
ArgSpec(args=['x', 'axis'], varargs=None, keywords=None, defaults=None)
```
Yep, I agree it's not a bug. I was thinking about adding a small warning in the docs, just so people like me don't forget the 'axis' argument in the applied function, but I haven't had time to do so yet.
Still, it's surprising that
```
df_g.sum()
```
and
```
df_g.apply(sum)
```
don't produce the same result.
`_cumcount_array` is okay here; the issue is the use of `mask` within `groupby(...).head` when `axis=1`:
```
mask = self._cumcount_array() < n
return self._selected_obj[mask]
```
When `axis=1`, the mask is computed along the columns, but then applied to the index. I think it should instead be applied to the columns.
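For illustration, the per-group counting behind `head` can be sketched in plain Python (a toy sketch, not the pandas internals): when grouping with `axis=1` the mask has one entry per column, so it has to select columns.

```python
def cumcount(labels):
    """0-based running count of each label, in order of appearance."""
    seen = {}
    out = []
    for lab in labels:
        out.append(seen.get(lab, 0))
        seen[lab] = out[-1] + 1
    return out

# Group labels of five *columns* when grouping with axis=1:
col_labels = ["x", "x", "y", "x", "y"]
mask = [c < 2 for c in cumcount(col_labels)]  # head(n=2) per group
# The mask length equals the number of columns, so it must be applied
# along the columns; applying it to the index breaks once there are
# more rows than columns.
kept = [lab for lab, keep in zip(col_labels, mask) if keep]
```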
The issue with `.apply(lambda x: x.sum())` with `axis=1` is trickier. The main issue is that when pandas feeds a group of values into the UDF, they are _not_ transposed. It seems reasonable to me to argue that they should be, but one technical hurdle here is what happens with a frame where the columns are different dtypes. Upon transposing, you now have columns of mixed dtypes, which are coerced to object type. So upon transposing the result back you lose type information. Since the UDF can return anything, there is no way to reliably determine what the resulting dtypes should be.
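The dtype-coercion point can be seen directly with a tiny mixed-dtype frame:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})
# Transposing mixes int64 and object values in each new column, so
# everything is coerced to object; the original dtypes cannot be
# recovered reliably when transposing back.
print(df.T.dtypes)
```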
Of course, an argument against transposing the group when passing it to the UDF is that this would be a rather large change for what seems to me to be of little value. After all, any UDF can be rewritten under the presumption that the values passed in haven't been transposed. In this case, the UDF would be:
lambda x: x.sum(axis=1)
Using this in the OP example then produces the same result as `df_g.sum()`. | 2020-11-12T00:53:11Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jonas/Code/pandas/pandas/core/groupby.py", line 986, in head
in_head = self._cumcount_array() < n
File "/home/jonas/Code/pandas/pandas/core/groupby.py", line 1044, in _cumcount_array
cumcounts[indices] = values
IndexError: index 10 is out of bounds for axis 1 with size 10
| 14,391 |
|||
pandas-dev/pandas | pandas-dev__pandas-37801 | 1f42d45623ba1a9677595e6f55b45930bbc5b24d | diff --git a/doc/source/whatsnew/v1.1.5.rst b/doc/source/whatsnew/v1.1.5.rst
--- a/doc/source/whatsnew/v1.1.5.rst
+++ b/doc/source/whatsnew/v1.1.5.rst
@@ -15,6 +15,7 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
- Regression in addition of a timedelta-like scalar to a :class:`DatetimeIndex` raising incorrectly (:issue:`37295`)
+- Fixed regression in :meth:`Series.groupby` raising when the :class:`Index` of the :class:`Series` had a tuple as its name (:issue:`37755`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/series.py b/pandas/core/series.py
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -900,7 +900,7 @@ def __getitem__(self, key):
return result
- except KeyError:
+ except (KeyError, TypeError):
if isinstance(key, tuple) and isinstance(self.index, MultiIndex):
# We still have the corner case where a tuple is a key
# in the first level of our MultiIndex
@@ -964,7 +964,7 @@ def _get_values_tuple(self, key):
return result
if not isinstance(self.index, MultiIndex):
- raise ValueError("key of type tuple not found and not a MultiIndex")
+ raise KeyError("key of type tuple not found and not a MultiIndex")
# If key is contained, would have returned by now
indexer, new_index = self.index.get_loc_level(key)
@@ -1020,7 +1020,7 @@ def __setitem__(self, key, value):
except TypeError as err:
if isinstance(key, tuple) and not isinstance(self.index, MultiIndex):
- raise ValueError(
+ raise KeyError(
"key of type tuple not found and not a MultiIndex"
) from err
| BUG: Series.groupby() fails in pandas 1.1.4 when index has tuple name.
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
import pandas as pd
print("Test 1")
a = pd.Series([1,2,3,4], index=[1,1,2,2], name=("a", "a"))
a.index.name = ("b", "b")
print(a)
print(a.index)
print(a.groupby(level=0).last())
print("Test 2")
a = pd.Series([1,2,3,4], index=[2,3,4,5], name=("a", "a"))
b = pd.Series([1,1,2,2], index=[2,3,4,5], name=("b", "b"))
a.index = b.reindex(a.index)
print(a)
print(a.index)
print(a.groupby(level=0).last())
```
#### Problem description
In pandas 1.1.2 this works fine, while it crashes in pandas 1.1.4. The problem seems related to the tuple index names. The output is:
```
Test 1
(b, b)
1 1
1 2
2 3
2 4
Name: (a, a), dtype: int64
Int64Index([1, 1, 2, 2], dtype='int64', name=('b', 'b'))
Traceback (most recent call last):
File "testcase.py", line 8, in <module>
print(a.groupby(level=0).last())
File "/home/burk/.local/share/virtualenvs/timeseries-Gj3kjmJv/lib/python3.8/site-packages/pandas/core/series.py", line 1735, in groupby
return SeriesGroupBy(
File "/home/burk/.local/share/virtualenvs/timeseries-Gj3kjmJv/lib/python3.8/site-packages/pandas/core/groupby/groupby.py", line 525, in __init__
grouper, exclusions, obj = get_grouper(
File "/home/burk/.local/share/virtualenvs/timeseries-Gj3kjmJv/lib/python3.8/site-packages/pandas/core/groupby/grouper.py", line 773, in get_grouper
if is_in_obj(gpr): # df.groupby(df['name'])
File "/home/burk/.local/share/virtualenvs/timeseries-Gj3kjmJv/lib/python3.8/site-packages/pandas/core/groupby/grouper.py", line 765, in is_in_obj
return gpr is obj[gpr.name]
File "/home/burk/.local/share/virtualenvs/timeseries-Gj3kjmJv/lib/python3.8/site-packages/pandas/core/series.py", line 888, in __getitem__
result = self._get_value(key)
File "/home/burk/.local/share/virtualenvs/timeseries-Gj3kjmJv/lib/python3.8/site-packages/pandas/core/series.py", line 989, in _get_value
loc = self.index.get_loc(label)
File "/home/burk/.local/share/virtualenvs/timeseries-Gj3kjmJv/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 2895, in get_loc
return self._engine.get_loc(casted_key)
File "pandas/_libs/index.pyx", line 70, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 96, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 118, in pandas._libs.index.IndexEngine._get_loc_duplicates
TypeError: only integer scalar arrays can be converted to a scalar index
```
#### Expected Output
This is the output in 1.1.2:
```
Test 1
(b, b)
1 1
1 2
2 3
2 4
Name: (a, a), dtype: int64
Int64Index([1, 1, 2, 2], dtype='int64', name=('b', 'b'))
(b, b)
1 2
2 4
Name: (a, a), dtype: int64
Test 2
(b, b)
1 1
1 2
2 3
2 4
Name: (a, a), dtype: int64
Int64Index([1, 1, 2, 2], dtype='int64', name=('b', 'b'))
(b, b)
1 2
2 4
Name: (a, a), dtype: int64
```
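With the `KeyError` change in place, the repro runs again; a condensed check using the report's own data:

```python
import pandas as pd

a = pd.Series([1, 2, 3, 4], index=[1, 1, 2, 2], name=("a", "a"))
a.index.name = ("b", "b")
# Grouping on a level whose Index has a tuple name no longer raises.
result = a.groupby(level=0).last()
print(result)
```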
#### Output of ``pd.show_versions()``
<details>
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit : 67a3d4241ab84419856b84fc3ebc9abcbe66c6b3
python : 3.8.6.final.0
python-bits : 64
OS : Linux
OS-release : 5.8.0-26-generic
Version : #27-Ubuntu SMP Wed Oct 21 22:29:16 UTC 2020
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.1.4
numpy : 1.19.4
pytz : 2020.4
dateutil : 2.8.1
pip : 20.1.1
setuptools : 44.0.0
Cython : 0.29.17
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : 0.4.1
xlsxwriter : None
lxml.etree : 4.6.1
html5lib : 1.0.1
pymysql : None
psycopg2 : 2.8.6 (dt dec pq3 ext lo64)
jinja2 : None
IPython : 7.19.0
pandas_datareader: 0.9.0
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : 3.3.2
numexpr : None
odfpy : None
openpyxl : 3.0.5
pandas_gbq : None
pyarrow : 2.0.0
pytables : None
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : 1.3.20
tables : None
tabulate : 0.8.7
xarray : None
xlrd : None
xlwt : None
numba : 0.51.2
</details>
| Thanks @burk for the report.
first bad commit: [20003347f81d76b3cbaa8259b577e19f76603120] Backport PR #36147: REGR: Series access with Index of tuples/frozenset (#36332) cc @rhshadrach
| 2020-11-12T23:52:40Z | [] | [] |
Traceback (most recent call last):
File "testcase.py", line 8, in <module>
print(a.groupby(level=0).last())
File "/home/burk/.local/share/virtualenvs/timeseries-Gj3kjmJv/lib/python3.8/site-packages/pandas/core/series.py", line 1735, in groupby
return SeriesGroupBy(
File "/home/burk/.local/share/virtualenvs/timeseries-Gj3kjmJv/lib/python3.8/site-packages/pandas/core/groupby/groupby.py", line 525, in __init__
grouper, exclusions, obj = get_grouper(
File "/home/burk/.local/share/virtualenvs/timeseries-Gj3kjmJv/lib/python3.8/site-packages/pandas/core/groupby/grouper.py", line 773, in get_grouper
if is_in_obj(gpr): # df.groupby(df['name'])
File "/home/burk/.local/share/virtualenvs/timeseries-Gj3kjmJv/lib/python3.8/site-packages/pandas/core/groupby/grouper.py", line 765, in is_in_obj
return gpr is obj[gpr.name]
File "/home/burk/.local/share/virtualenvs/timeseries-Gj3kjmJv/lib/python3.8/site-packages/pandas/core/series.py", line 888, in __getitem__
result = self._get_value(key)
File "/home/burk/.local/share/virtualenvs/timeseries-Gj3kjmJv/lib/python3.8/site-packages/pandas/core/series.py", line 989, in _get_value
loc = self.index.get_loc(label)
File "/home/burk/.local/share/virtualenvs/timeseries-Gj3kjmJv/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 2895, in get_loc
return self._engine.get_loc(casted_key)
File "pandas/_libs/index.pyx", line 70, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 96, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 118, in pandas._libs.index.IndexEngine._get_loc_duplicates
TypeError: only integer scalar arrays can be converted to a scalar index
| 14,396 |
|||
pandas-dev/pandas | pandas-dev__pandas-37870 | 03b9ad89b41da5c5d606c575b39ff3ed1f718a53 | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -252,6 +252,7 @@ Other enhancements
- :class:`Window` now supports all Scipy window types in ``win_type`` with flexible keyword argument support (:issue:`34556`)
- :meth:`testing.assert_index_equal` now has a ``check_order`` parameter that allows indexes to be checked in an order-insensitive manner (:issue:`37478`)
- :func:`read_csv` supports memory-mapping for compressed files (:issue:`37621`)
+- Add support for ``min_count`` keyword for :meth:`DataFrame.groupby` and :meth:`DataFrame.resample` for functions ``min``, ``max``, ``first`` and ``last`` (:issue:`37821`, :issue:`37768`)
- Improve error reporting for :meth:`DataFrame.merge` when invalid merge column definitions were given (:issue:`16228`)
- Improve numerical stability for :meth:`.Rolling.skew`, :meth:`.Rolling.kurt`, :meth:`Expanding.skew` and :meth:`Expanding.kurt` through implementation of Kahan summation (:issue:`6929`)
- Improved error reporting for subsetting columns of a :class:`.DataFrameGroupBy` with ``axis=1`` (:issue:`37725`)
diff --git a/pandas/_libs/groupby.pyx b/pandas/_libs/groupby.pyx
--- a/pandas/_libs/groupby.pyx
+++ b/pandas/_libs/groupby.pyx
@@ -903,13 +903,12 @@ def group_last(rank_t[:, :] out,
ndarray[int64_t, ndim=2] nobs
bint runtime_error = False
- assert min_count == -1, "'min_count' only used in add and prod"
-
# TODO(cython 3.0):
# Instead of `labels.shape[0]` use `len(labels)`
if not len(values) == labels.shape[0]:
raise AssertionError("len(index) != len(labels)")
+ min_count = max(min_count, 1)
nobs = np.zeros((<object>out).shape, dtype=np.int64)
if rank_t is object:
resx = np.empty((<object>out).shape, dtype=object)
@@ -939,7 +938,7 @@ def group_last(rank_t[:, :] out,
for i in range(ncounts):
for j in range(K):
- if nobs[i, j] == 0:
+ if nobs[i, j] < min_count:
out[i, j] = NAN
else:
out[i, j] = resx[i, j]
@@ -961,7 +960,7 @@ def group_last(rank_t[:, :] out,
for i in range(ncounts):
for j in range(K):
- if nobs[i, j] == 0:
+ if nobs[i, j] < min_count:
if rank_t is int64_t:
out[i, j] = NPY_NAT
elif rank_t is uint64_t:
@@ -986,7 +985,8 @@ def group_last(rank_t[:, :] out,
def group_nth(rank_t[:, :] out,
int64_t[:] counts,
ndarray[rank_t, ndim=2] values,
- const int64_t[:] labels, int64_t rank=1
+ const int64_t[:] labels,
+ int64_t min_count=-1, int64_t rank=1
):
"""
Only aggregates on axis=0
@@ -1003,6 +1003,7 @@ def group_nth(rank_t[:, :] out,
if not len(values) == labels.shape[0]:
raise AssertionError("len(index) != len(labels)")
+ min_count = max(min_count, 1)
nobs = np.zeros((<object>out).shape, dtype=np.int64)
if rank_t is object:
resx = np.empty((<object>out).shape, dtype=object)
@@ -1033,7 +1034,7 @@ def group_nth(rank_t[:, :] out,
for i in range(ncounts):
for j in range(K):
- if nobs[i, j] == 0:
+ if nobs[i, j] < min_count:
out[i, j] = NAN
else:
out[i, j] = resx[i, j]
@@ -1057,7 +1058,7 @@ def group_nth(rank_t[:, :] out,
for i in range(ncounts):
for j in range(K):
- if nobs[i, j] == 0:
+ if nobs[i, j] < min_count:
if rank_t is int64_t:
out[i, j] = NPY_NAT
elif rank_t is uint64_t:
@@ -1294,13 +1295,12 @@ def group_max(groupby_t[:, :] out,
bint runtime_error = False
int64_t[:, :] nobs
- assert min_count == -1, "'min_count' only used in add and prod"
-
# TODO(cython 3.0):
# Instead of `labels.shape[0]` use `len(labels)`
if not len(values) == labels.shape[0]:
raise AssertionError("len(index) != len(labels)")
+ min_count = max(min_count, 1)
nobs = np.zeros((<object>out).shape, dtype=np.int64)
maxx = np.empty_like(out)
@@ -1337,11 +1337,12 @@ def group_max(groupby_t[:, :] out,
for i in range(ncounts):
for j in range(K):
- if nobs[i, j] == 0:
+ if nobs[i, j] < min_count:
if groupby_t is uint64_t:
runtime_error = True
break
else:
+
out[i, j] = nan_val
else:
out[i, j] = maxx[i, j]
@@ -1369,13 +1370,12 @@ def group_min(groupby_t[:, :] out,
bint runtime_error = False
int64_t[:, :] nobs
- assert min_count == -1, "'min_count' only used in add and prod"
-
# TODO(cython 3.0):
# Instead of `labels.shape[0]` use `len(labels)`
if not len(values) == labels.shape[0]:
raise AssertionError("len(index) != len(labels)")
+ min_count = max(min_count, 1)
nobs = np.zeros((<object>out).shape, dtype=np.int64)
minx = np.empty_like(out)
@@ -1411,7 +1411,7 @@ def group_min(groupby_t[:, :] out,
for i in range(ncounts):
for j in range(K):
- if nobs[i, j] == 0:
+ if nobs[i, j] < min_count:
if groupby_t is uint64_t:
runtime_error = True
break
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -603,7 +603,7 @@ def _aggregate(
):
if agg_func is libgroupby.group_nth:
# different signature from the others
- agg_func(result, counts, values, comp_ids, rank=1)
+ agg_func(result, counts, values, comp_ids, min_count, rank=1)
else:
agg_func(result, counts, values, comp_ids, min_count)
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -950,7 +950,7 @@ def quantile(self, q=0.5, **kwargs):
# downsample methods
-for method in ["sum", "prod"]:
+for method in ["sum", "prod", "min", "max", "first", "last"]:
def f(self, _method=method, min_count=0, *args, **kwargs):
nv.validate_resampler_func(_method, args, kwargs)
@@ -961,7 +961,7 @@ def f(self, _method=method, min_count=0, *args, **kwargs):
# downsample methods
-for method in ["min", "max", "first", "last", "mean", "sem", "median", "ohlc"]:
+for method in ["mean", "sem", "median", "ohlc"]:
def g(self, _method=method, *args, **kwargs):
nv.validate_resampler_func(_method, args, kwargs)
| BUG: Groupby.last accepts min_count but implementation raises
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [x] (optional) I have confirmed this bug exists on the master branch of pandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
df = DataFrame({"a": [1, 2, 3, 4, 5, 6]})
df.groupby(level=0).last(min_count=1)
```
#### Problem description
This raises
```
Traceback (most recent call last):
File "/home/developer/.config/JetBrains/PyCharm2020.2/scratches/scratch_4.py", line 97, in <module>
print(df.groupby(level=0).last(min_count=1))
File "/home/developer/PycharmProjects/pandas/pandas/core/groupby/groupby.py", line 1710, in last
return self._agg_general(
File "/home/developer/PycharmProjects/pandas/pandas/core/groupby/groupby.py", line 1032, in _agg_general
result = self._cython_agg_general(
File "/home/developer/PycharmProjects/pandas/pandas/core/groupby/generic.py", line 1018, in _cython_agg_general
agg_mgr = self._cython_agg_blocks(
File "/home/developer/PycharmProjects/pandas/pandas/core/groupby/generic.py", line 1116, in _cython_agg_blocks
new_mgr = data.apply(blk_func, ignore_failures=True)
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/managers.py", line 425, in apply
applied = b.apply(f, **kwargs)
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/blocks.py", line 368, in apply
result = func(self.values, **kwargs)
File "/home/developer/PycharmProjects/pandas/pandas/core/groupby/generic.py", line 1067, in blk_func
result, _ = self.grouper.aggregate(
File "/home/developer/PycharmProjects/pandas/pandas/core/groupby/ops.py", line 594, in aggregate
return self._cython_operation(
File "/home/developer/PycharmProjects/pandas/pandas/core/groupby/ops.py", line 547, in _cython_operation
result = self._aggregate(result, counts, values, codes, func, min_count)
File "/home/developer/PycharmProjects/pandas/pandas/core/groupby/ops.py", line 608, in _aggregate
agg_func(result, counts, values, comp_ids, min_count)
File "pandas/_libs/groupby.pyx", line 906, in pandas._libs.groupby.group_last
AssertionError: 'min_count' only used in add and prod
Process finished with exit code 1
```
#### Expected Output
I would expect that this function does not accept ``min_count`` as input (similar to ``Resample.last``) and raises the same error as described in #37768
Additionally the docs need adjustments, because ``min_count`` is documented. This probably holds true for most of the functions mentioned in https://github.com/pandas-dev/pandas/issues/37768#issuecomment-727009192 too.
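For reference, the intended ``min_count`` semantics — already implemented for ``sum`` and ``prod`` — can be demonstrated with a small example:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan])
g = s.groupby([0, 0])
# The group has one valid observation: min_count=1 keeps it, while
# min_count=2 yields NaN because too few non-NA values were seen.
print(g.sum(min_count=1))
print(g.sum(min_count=2))
```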
#### Output of ``pd.show_versions()``
<details>
master
</details>
| 2020-11-15T19:50:09Z | [] | [] |
Traceback (most recent call last):
File "/home/developer/.config/JetBrains/PyCharm2020.2/scratches/scratch_4.py", line 97, in <module>
print(df.groupby(level=0).last(min_count=1))
File "/home/developer/PycharmProjects/pandas/pandas/core/groupby/groupby.py", line 1710, in last
return self._agg_general(
File "/home/developer/PycharmProjects/pandas/pandas/core/groupby/groupby.py", line 1032, in _agg_general
result = self._cython_agg_general(
File "/home/developer/PycharmProjects/pandas/pandas/core/groupby/generic.py", line 1018, in _cython_agg_general
agg_mgr = self._cython_agg_blocks(
File "/home/developer/PycharmProjects/pandas/pandas/core/groupby/generic.py", line 1116, in _cython_agg_blocks
new_mgr = data.apply(blk_func, ignore_failures=True)
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/managers.py", line 425, in apply
applied = b.apply(f, **kwargs)
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/blocks.py", line 368, in apply
result = func(self.values, **kwargs)
File "/home/developer/PycharmProjects/pandas/pandas/core/groupby/generic.py", line 1067, in blk_func
result, _ = self.grouper.aggregate(
File "/home/developer/PycharmProjects/pandas/pandas/core/groupby/ops.py", line 594, in aggregate
return self._cython_operation(
File "/home/developer/PycharmProjects/pandas/pandas/core/groupby/ops.py", line 547, in _cython_operation
result = self._aggregate(result, counts, values, codes, func, min_count)
File "/home/developer/PycharmProjects/pandas/pandas/core/groupby/ops.py", line 608, in _aggregate
agg_func(result, counts, values, comp_ids, min_count)
File "pandas/_libs/groupby.pyx", line 906, in pandas._libs.groupby.group_last
AssertionError: 'min_count' only used in add and prod
| 14,408 |
||||
pandas-dev/pandas | pandas-dev__pandas-37924 | 4edb52b4c674d5b24334fa8a518c801c51a9450a | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -302,8 +302,9 @@ Sparse
ExtensionArray
^^^^^^^^^^^^^^
+
- Bug in :meth:`DataFrame.where` when ``other`` is a :class:`Series` with ExtensionArray dtype (:issue:`38729`)
--
+- Fixed bug where :meth:`Series.idxmax`, :meth:`Series.idxmin` and ``argmax/min`` fail when the underlying data is :class:`ExtensionArray` (:issue:`32749`, :issue:`33719`, :issue:`36566`)
-
Other
diff --git a/pandas/core/base.py b/pandas/core/base.py
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -715,9 +715,17 @@ def argmax(self, axis=None, skipna: bool = True, *args, **kwargs) -> int:
the minimum cereal calories is the first element,
since series is zero-indexed.
"""
+ delegate = self._values
nv.validate_minmax_axis(axis)
- nv.validate_argmax_with_skipna(skipna, args, kwargs)
- return nanops.nanargmax(self._values, skipna=skipna)
+ skipna = nv.validate_argmax_with_skipna(skipna, args, kwargs)
+
+ if isinstance(delegate, ExtensionArray):
+ if not skipna and delegate.isna().any():
+ return -1
+ else:
+ return delegate.argmax()
+ else:
+ return nanops.nanargmax(delegate, skipna=skipna)
def min(self, axis=None, skipna: bool = True, *args, **kwargs):
"""
@@ -765,9 +773,17 @@ def min(self, axis=None, skipna: bool = True, *args, **kwargs):
@doc(argmax, op="min", oppose="max", value="smallest")
def argmin(self, axis=None, skipna=True, *args, **kwargs) -> int:
+ delegate = self._values
nv.validate_minmax_axis(axis)
- nv.validate_argmax_with_skipna(skipna, args, kwargs)
- return nanops.nanargmin(self._values, skipna=skipna)
+ skipna = nv.validate_argmin_with_skipna(skipna, args, kwargs)
+
+ if isinstance(delegate, ExtensionArray):
+ if not skipna and delegate.isna().any():
+ return -1
+ else:
+ return delegate.argmin()
+ else:
+ return nanops.nanargmin(delegate, skipna=skipna)
def tolist(self):
"""
diff --git a/pandas/core/series.py b/pandas/core/series.py
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2076,8 +2076,7 @@ def idxmin(self, axis=0, skipna=True, *args, **kwargs):
>>> s.idxmin(skipna=False)
nan
"""
- skipna = nv.validate_argmin_with_skipna(skipna, args, kwargs)
- i = nanops.nanargmin(self._values, skipna=skipna)
+ i = self.argmin(None, skipna=skipna)
if i == -1:
return np.nan
return self.index[i]
@@ -2147,8 +2146,7 @@ def idxmax(self, axis=0, skipna=True, *args, **kwargs):
>>> s.idxmax(skipna=False)
nan
"""
- skipna = nv.validate_argmax_with_skipna(skipna, args, kwargs)
- i = nanops.nanargmax(self._values, skipna=skipna)
+ i = self.argmax(None, skipna=skipna)
if i == -1:
return np.nan
return self.index[i]
| BUG: TypeError on Series(dtype='string').value_counts().idxmax()
# Expected output
Obtain the most frequent string in a series:
```pycon
>>> pd.Series('a').value_counts().idxmax()
'a'
```
# Problem
Changing the dtype on the second example from `'object'` to `'string'` breaks this idiom:
```python-traceback
>>> pd.Series('a', dtype='string').value_counts().idxmax()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/core/series.py", line 2168, in idxmax
i = nanops.nanargmax(self._values, skipna=skipna)
File "pandas/core/nanops.py", line 71, in _f
return f(*args, **kwargs)
File "pandas/core/nanops.py", line 924, in nanargmax
result = values.argmax(axis)
TypeError: argmax() takes 1 positional argument but 2 were given
```
The problem is evidently not due to a `'string'` index:
```pycon
>>> pd.Series([1], index=pd.Series(['a'], dtype='string')).idxmax()
'a'
```
# Output of ``pd.show_versions()``
<details>
```
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit : 2a7d3326dee660824a8433ffd01065f8ac37f7d6
python : 3.7.9.final.0
python-bits : 64
OS : Darwin
OS-release : 19.6.0
Version : Darwin Kernel Version 19.6.0: Thu Jun 18 20:49:00 PDT 2020; root:xnu-6153.141.1~1/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.1.2
numpy : 1.19.2
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 47.1.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pyxlsb : None
s3fs : None
scipy : 1.5.2
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
```
</details>
| Confirming this happens on 1.2 master.
<details><summary><b>Output of pd.show_versions()</b></summary>
INSTALLED VERSIONS
------------------
commit : 76eb314a0ef53c907da14f74a3745c5084a1428a
python : 3.8.3.final.0
python-bits : 64
OS : Linux
OS-release : 5.4.0-48-generic
Version : #52-Ubuntu SMP Thu Sep 10 10:58:49 UTC 2020
machine : x86_64
processor :
byteorder : little
LC_ALL : C.UTF-8
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.2.0.dev0+232.g76eb314a0
numpy : 1.18.5
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 49.1.0.post20200704
Cython : 0.29.21
pytest : 5.4.3
hypothesis : 5.19.0
sphinx : 3.1.1
blosc : None
feather : None
xlsxwriter : 1.2.9
lxml.etree : 4.5.2
html5lib : 1.1
pymysql : None
psycopg2 : 2.8.5 (dt dec pq3 ext lo64)
jinja2 : 2.11.2
IPython : 7.16.1
pandas_datareader: None
bs4 : 4.9.1
bottleneck : 1.3.2
fsspec : 0.7.4
fastparquet : 0.4.0
gcsfs : 0.6.2
matplotlib : 3.2.2
numexpr : 2.7.1
odfpy : None
openpyxl : 3.0.4
pandas_gbq : None
pyarrow : 1.0.1
pytables : None
pyxlsb : None
s3fs : 0.4.2
scipy : 1.5.0
sqlalchemy : 1.3.18
tables : 3.6.1
tabulate : 0.8.7
xarray : 0.15.1
xlrd : 1.2.0
xlwt : 1.3.0
numba : 0.50.1
</details>
A similar snippet with `dtype=Int64` also throws:
``` python
In [11]: pd.Series(1, dtype='Int64').value_counts().idxmax()
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-11-0a7f15f3eb23> in <module>
----> 1 pd.Series(1, dtype='Int64').value_counts().idxmax()
/workspaces/pandas-arw2019/pandas/core/series.py in idxmax(self, axis, skipna, *args, **kwargs)
2187 """
2188 skipna = nv.validate_argmax_with_skipna(skipna, args, kwargs)
-> 2189 i = nanops.nanargmax(self._values, skipna=skipna)
2190 if i == -1:
2191 return np.nan
/workspaces/pandas-arw2019/pandas/core/nanops.py in _f(*args, **kwargs)
69 try:
70 with np.errstate(invalid="ignore"):
---> 71 return f(*args, **kwargs)
72 except ValueError as e:
73 # we want to transform an object array
/workspaces/pandas-arw2019/pandas/core/nanops.py in nanargmax(values, axis, skipna, mask)
922 """
923 values, mask, _, _, _ = _get_values(values, True, fill_value_typ="-inf", mask=mask)
--> 924 result = values.argmax(axis)
925 result = _maybe_arg_null_out(result, axis, mask, skipna)
926 return result
TypeError: argmax() takes 1 positional argument but 2 were given
```
but if we switch to `dtype=int64` it works:
``` python
In [12]: pd.Series(1, dtype='int64').value_counts().idxmax()
Out[12]: 1
```
So we recently added `ExtensionArray.argmin/argmax` (https://github.com/pandas-dev/pandas/pull/27801), but this is not yet hooked into Series -> https://github.com/pandas-dev/pandas/issues/35178. The issue is for Series.argmin/argmax, but I think the same is needed for idxmin/idxmax.
Note, you don't need the `value_counts()` to be able to reproduce it:
```python
In [21]: pd.Series(1, dtype='Int64').idxmax()
...
TypeError: argmax() takes 1 positional argument but 2 were given
``` | 2020-11-17T23:44:09Z | [] | [] |
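For context, the underlying `ExtensionArray.argmax` already works when called without the positional `axis`; once hooked up, both layers agree (quick check on a masked-integer example):

```python
import pandas as pd

# The ExtensionArray method itself:
arr = pd.array([1, 3, 2], dtype="Int64")
print(arr.argmax())

# And the Series-level wrapper dispatching to it:
s = pd.Series([1, 3, 2], dtype="Int64")
print(s.idxmax())
```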
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/core/series.py", line 2168, in idxmax
i = nanops.nanargmax(self._values, skipna=skipna)
File "pandas/core/nanops.py", line 71, in _f
return f(*args, **kwargs)
File "pandas/core/nanops.py", line 924, in nanargmax
result = values.argmax(axis)
TypeError: argmax() takes 1 positional argument but 2 were given
| 14,414 |
|||
pandas-dev/pandas | pandas-dev__pandas-37986 | 25a1d9166ff0d131541a65d496e9b37ca7737f25 | diff --git a/doc/source/whatsnew/v1.1.5.rst b/doc/source/whatsnew/v1.1.5.rst
--- a/doc/source/whatsnew/v1.1.5.rst
+++ b/doc/source/whatsnew/v1.1.5.rst
@@ -17,7 +17,7 @@ Fixed regressions
- Regression in addition of a timedelta-like scalar to a :class:`DatetimeIndex` raising incorrectly (:issue:`37295`)
- Fixed regression in :meth:`Series.groupby` raising when the :class:`Index` of the :class:`Series` had a tuple as its name (:issue:`37755`)
- Fixed regression in :meth:`DataFrame.loc` and :meth:`Series.loc` for ``__setitem__`` when one-dimensional tuple was given to select from :class:`MultiIndex` (:issue:`37711`)
--
+- Fixed regression in inplace operations on :class:`Series` with ``ExtensionDtype`` with NumPy dtyped operand (:issue:`37910`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -70,6 +70,7 @@
is_datetime64_any_dtype,
is_datetime64tz_dtype,
is_dict_like,
+ is_dtype_equal,
is_extension_array_dtype,
is_float,
is_list_like,
@@ -11266,7 +11267,11 @@ def _inplace_method(self, other, op):
"""
result = op(self, other)
- if self.ndim == 1 and result._indexed_same(self) and result.dtype == self.dtype:
+ if (
+ self.ndim == 1
+ and result._indexed_same(self)
+ and is_dtype_equal(result.dtype, self.dtype)
+ ):
# GH#36498 this inplace op can _actually_ be inplace.
self._values[:] = result._values
return self
| BUG: EA inplace add (and other ops) with non-EA arg broken
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest version of pandas (`1.0.4`).
- [X] (optional) I have confirmed this bug exists on the master branch of pandas (`87247370c14629c1797fcf3540c6bd93b777a17e`).
---
#### Code Sample, a copy-pastable example
```python
import numpy as np
import pandas as pd
ser1 = pd.Series([1], dtype="Int64")
ser2 = pd.Series([1.0], dtype=np.float64)
ser1 += ser2
```
Note that the issue is NOT specific to the integer EA, but happens with all EAs that don't use NumPy dtypes.
#### Problem description
```
Traceback (most recent call last):
File "bug.py", line 6, in <module>
ser1 += ser2
File ".../pandas/core/generic.py", line 11305, in __iadd__
return self._inplace_method(other, type(self).__add__) # type: ignore[operator]
File ".../pandas/core/generic.py", line 11289, in _inplace_method
if self.ndim == 1 and result._indexed_same(self) and result.dtype == self.dtype:
TypeError: Cannot interpret 'Int64Dtype()' as a data type
```
This is likely a fallout of #37508. See here:
https://github.com/pandas-dev/pandas/blob/87247370c14629c1797fcf3540c6bd93b777a17e/pandas/core/generic.py#L11283-L11292
There, the result dtype (which is a NumPy dtype in this case) is compared with the EA dtype. This raises the TypeError, because NumPy only allows comparing its dtype objects with "real" NumPy dtypes (and EAs don't provide NumPy-compatible dtypes).
#### Expected Output
Pass. I'm pretty sure that worked with `1.0.3` (the mentioned PR in question was backported to the `1.0.x` branch between the `1.0.3` and `1.0.4` release).
| best guess is this can be fixed by changing `result.dtype == self.dtype` to `is_dtype_equal(result.dtype, self.dtype)`
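For illustration, `is_dtype_equal` compares dtypes safely across the NumPy/extension boundary instead of raising:

```python
import numpy as np
import pandas as pd
from pandas.api.types import is_dtype_equal

# A plain `==` between a NumPy dtype and an extension dtype is what
# blew up in `_inplace_method`; the helper just returns False here.
print(is_dtype_equal(pd.Int64Dtype(), np.dtype("float64")))
print(is_dtype_equal(pd.Int64Dtype(), pd.Int64Dtype()))
```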
@marco-neumann-by Thanks for the report!
It seems we don't really have testing for inplace ops for EAs .. | 2020-11-21T03:01:47Z | [] | [] |
Traceback (most recent call last):
File "bug.py", line 6, in <module>
ser1 += ser2
File ".../pandas/core/generic.py", line 11305, in __iadd__
return self._inplace_method(other, type(self).__add__) # type: ignore[operator]
File ".../pandas/core/generic.py", line 11289, in _inplace_method
if self.ndim == 1 and result._indexed_same(self) and result.dtype == self.dtype:
TypeError: Cannot interpret 'Int64Dtype()' as a data type
| 14,422 |
|||
pandas-dev/pandas | pandas-dev__pandas-38089 | 51750befe9c85c57949c14222b4222a849412015 | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -737,6 +737,7 @@ Reshaping
- Bug in :meth:`DataFrame.apply` not setting index of return value when ``func`` return type is ``dict`` (:issue:`37544`)
- Bug in :func:`concat` resulting in a ``ValueError`` when at least one of both inputs had a non-unique index (:issue:`36263`)
- Bug in :meth:`DataFrame.merge` and :meth:`pandas.merge` returning inconsistent ordering in result for ``how=right`` and ``how=left`` (:issue:`35382`)
+- Bug in :func:`merge_ordered` couldn't handle list-like ``left_by`` or ``right_by`` (:issue:`35269`)
Sparse
^^^^^^
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -140,9 +140,7 @@ def _groupby_and_merge(by, on, left: "DataFrame", right: "DataFrame", merge_piec
# make sure join keys are in the merged
# TODO, should merge_pieces do this?
- for k in by:
- if k in merged:
- merged[k] = key
+ merged[by] = key
pieces.append(merged)
| BUG: merge_ordered fails when left_by is set to more than one column
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
import io, pandas
l = pandas.read_csv(io.StringIO('''
G H T
g h 1
g h 3
'''), delim_whitespace=True)
r = pandas.read_csv(io.StringIO('''
T
2
'''), delim_whitespace=True)
pandas.merge_ordered(l, r, on=['T'], left_by=['G', 'H'])
```
#### Problem description
This fails:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.8/site-packages/pandas/core/reshape/merge.py", line 290, in merge_ordered
result, _ = _groupby_and_merge(
File "/usr/lib/python3.8/site-packages/pandas/core/reshape/merge.py", line 162, in _groupby_and_merge
merged[k] = key
File "/usr/lib/python3.8/site-packages/pandas/core/frame.py", line 2938, in __setitem__
self._set_item(key, value)
File "/usr/lib/python3.8/site-packages/pandas/core/frame.py", line 3000, in _set_item
value = self._sanitize_column(key, value)
File "/usr/lib/python3.8/site-packages/pandas/core/frame.py", line 3636, in _sanitize_column
value = sanitize_index(value, self.index, copy=False)
File "/usr/lib/python3.8/site-packages/pandas/core/internals/construction.py", line 611, in sanitize_index
raise ValueError("Length of values does not match length of index")
ValueError: Length of values does not match length of index
```
#### Expected Output
Not failing. Should return:
G H T
0 g h 1
1 g h 3
For comparison, the above works fine if we use `left_by=['G']` and omit the `H` column entirely.
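For reference, the failure also reproduces without the CSV round-trip; a minimal sketch with inline frames (the column and value names are just illustrative):

```python
import pandas as pd

left = pd.DataFrame({"G": ["g", "g"], "H": ["h", "h"], "T": [1, 3]})
right = pd.DataFrame({"T": [2]})

# raised ValueError with a list-like left_by; once fixed, the group
# key is broadcast back into both G and H
result = pd.merge_ordered(left, right, on="T", left_by=["G", "H"])
print(result)
```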
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.8.3.final.0
python-bits : 64
OS : Linux
OS-release : 5.7.7-arch1-1
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_FYL.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.0.5
numpy : 1.19.0
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 49.1.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.5.1
html5lib : 1.1
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.16.1
pandas_datareader: None
bs4 : 4.9.1
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.5.1
matplotlib : 3.2.2
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : 1.5.0
sqlalchemy : 1.3.18
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None
</details>
| @Rufflewind can you simplify the example to remove the CSV stuff? Are you interested in investigating what's going on? | 2020-11-26T15:05:37Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.8/site-packages/pandas/core/reshape/merge.py", line 290, in merge_ordered
result, _ = _groupby_and_merge(
File "/usr/lib/python3.8/site-packages/pandas/core/reshape/merge.py", line 162, in _groupby_and_merge
merged[k] = key
File "/usr/lib/python3.8/site-packages/pandas/core/frame.py", line 2938, in __setitem__
self._set_item(key, value)
File "/usr/lib/python3.8/site-packages/pandas/core/frame.py", line 3000, in _set_item
value = self._sanitize_column(key, value)
File "/usr/lib/python3.8/site-packages/pandas/core/frame.py", line 3636, in _sanitize_column
value = sanitize_index(value, self.index, copy=False)
File "/usr/lib/python3.8/site-packages/pandas/core/internals/construction.py", line 611, in sanitize_index
raise ValueError("Length of values does not match length of index")
ValueError: Length of values does not match length of index
| 14,442 |
|||
pandas-dev/pandas | pandas-dev__pandas-38094 | 94179cd6af1d6ed7f65f7f7f98d976856532befc | diff --git a/doc/source/whatsnew/v1.1.5.rst b/doc/source/whatsnew/v1.1.5.rst
--- a/doc/source/whatsnew/v1.1.5.rst
+++ b/doc/source/whatsnew/v1.1.5.rst
@@ -20,6 +20,7 @@ Fixed regressions
- Fixed regression in inplace operations on :class:`Series` with ``ExtensionDtype`` with NumPy dtyped operand (:issue:`37910`)
- Fixed regression in metadata propagation for ``groupby`` iterator (:issue:`37343`)
- Fixed regression in indexing on a :class:`Series` with ``CategoricalDtype`` after unpickling (:issue:`37631`)
+- Fixed regression in :meth:`DataFrame.groupby` aggregation with out-of-bounds datetime objects in an object-dtype column (:issue:`36003`)
- Fixed regression in ``df.groupby(..).rolling(..)`` with the resulting :class:`MultiIndex` when grouping by a label that is in the index (:issue:`37641`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/_libs/reduction.pyx b/pandas/_libs/reduction.pyx
--- a/pandas/_libs/reduction.pyx
+++ b/pandas/_libs/reduction.pyx
@@ -44,7 +44,9 @@ cdef class _BaseGrouper:
Slider islider, Slider vslider):
if cached_typ is None:
cached_ityp = self.ityp(islider.buf)
- cached_typ = self.typ(vslider.buf, index=cached_ityp, name=self.name)
+ cached_typ = self.typ(
+ vslider.buf, dtype=vslider.buf.dtype, index=cached_ityp, name=self.name
+ )
else:
# See the comment in indexes/base.py about _index_data.
# We need this for EA-backed indexes that have a reference
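The change above passes `dtype=vslider.buf.dtype` so the cached Series keeps the sliced buffer's dtype instead of re-running constructor inference. A sketch of the inference being suppressed (plain `Series` constructor behavior, not the Cython code itself):

```python
import datetime

import numpy as np
import pandas as pd

buf = np.array([datetime.datetime(2005, 1, 1)], dtype=object)

inferred = pd.Series(buf)                 # constructor infers a datetime64 dtype
pinned = pd.Series(buf, dtype=buf.dtype)  # stays object, like the patched call

print(inferred.dtype, pinned.dtype)
```

Out-of-bounds values such as year 3005 cannot survive that inference, which is how the grouped object column ended up in the mixed state described in the report below.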
| REGR: Column with datetime values too big to be converted to pd.Timestamp leads to assertion error in groupby
- [X ] I have checked that this issue has not already been reported.
- [ X] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
Two different dates, one within the range of what `pd.Timestamp` can handle, the other outside of that range:
```
import pandas as pd
import datetime
df = pd.DataFrame({'A': ['X', 'Y'], 'B': [datetime.datetime(2005, 1, 1, 10, 30, 23, 540000),
datetime.datetime(3005, 1, 1, 10, 30, 23, 540000)]})
print(df.groupby('A').B.max())
```
#### Problem description
`pd.Timestamp` can't represent a date as far out as the year 3005, so to represent such a date I need to use the `datetime.datetime` type. Before 1.1.1 (1.1.0?) this wasn't an issue, but now this code throws an assertion error:
```
Traceback (most recent call last):
File "<ipython-input-38-8b8ec5e4e179>", line 5, in <module>
print(df.groupby('A').B.max())
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\groupby\groupby.py", line 1558, in max
numeric_only=numeric_only, min_count=min_count, alias="max", npfunc=np.max
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\groupby\groupby.py", line 1015, in _agg_general
result = self.aggregate(lambda x: npfunc(x, axis=self.axis))
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\groupby\generic.py", line 261, in aggregate
func, *args, engine=engine, engine_kwargs=engine_kwargs, **kwargs
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\groupby\groupby.py", line 1083, in _python_agg_general
result, counts = self.grouper.agg_series(obj, f)
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\groupby\ops.py", line 644, in agg_series
return self._aggregate_series_fast(obj, func)
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\groupby\ops.py", line 669, in _aggregate_series_fast
result, counts = grouper.get_result()
File "pandas\_libs\reduction.pyx", line 256, in pandas._libs.reduction.SeriesGrouper.get_result
File "pandas\_libs\reduction.pyx", line 74, in pandas._libs.reduction._BaseGrouper._apply_to_group
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\groupby\groupby.py", line 1060, in <lambda>
f = lambda x: func(x, *args, **kwargs)
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\groupby\groupby.py", line 1015, in <lambda>
result = self.aggregate(lambda x: npfunc(x, axis=self.axis))
File "<__array_function__ internals>", line 6, in amax
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\numpy\core\fromnumeric.py", line 2706, in amax
keepdims=keepdims, initial=initial, where=where)
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\numpy\core\fromnumeric.py", line 85, in _wrapreduction
return reduction(axis=axis, out=out, **passkwargs)
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\generic.py", line 11460, in stat_func
func, name=name, axis=axis, skipna=skipna, numeric_only=numeric_only
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\series.py", line 4220, in _reduce
delegate = self._values
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\series.py", line 572, in _values
return self._mgr.internal_values()
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\internals\managers.py", line 1615, in internal_values
return self._block.internal_values()
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\internals\blocks.py", line 2019, in internal_values
return self.array_values()
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\internals\blocks.py", line 2022, in array_values
return self._holder._simple_new(self.values)
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\arrays\datetimes.py", line 290, in _simple_new
assert values.dtype == "i8"
AssertionError
```
From testing with a mix of `pd.Timestamp` and `datetime.datetime` types, I presume pandas is converting applicable dates (first line in the example) to `pd.Timestamp` while leaving the others as `datetime.datetime`, leading to a mixed-type result column and the assertion error.
#### Expected Output
Since I'm explicitly operating with the datatype `datetime.datetime`, there should be no implicit conversion to `pd.Timestamp` if it's not assured that all values are within the range that `pd.Timestamp` allows.
#### Output of ``pd.show_versions()``
<details>
commit : f2ca0a2665b2d169c97de87b8e778dbed86aea07
python : 3.7.8.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19041
machine : AMD64
processor : Intel64 Family 6 Model 79 Stepping 1, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en
LOCALE : None.None
pandas : 1.1.1
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.2
setuptools : 50.0.0.post20200830
Cython : 0.29.21
pytest : None
hypothesis : None
sphinx : 3.2.1
blosc : None
feather : None
xlsxwriter : 1.3.3
lxml.etree : 4.5.2
html5lib : 1.1
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.18.1
pandas_datareader: None
bs4 : 4.9.1
bottleneck : None
fsspec : 0.8.0
fastparquet : 0.4.1
gcsfs : None
matplotlib : 3.3.1
numexpr : None
odfpy : None
openpyxl : 3.0.5
pandas_gbq : None
pyarrow : 1.0.1
pytables : None
pyxlsb : None
s3fs : None
scipy : 1.5.2
sqlalchemy : 1.3.19
tables : None
tabulate : 0.8.7
xarray : None
xlrd : 1.2.0
xlwt : None
numba : 0.51.1
</details>
| Thanks @Khris777. This seems to be a regression from 1.0.5:
```python
In [1]: import pandas as pd
...: import datetime
...:
   ...: df = pd.DataFrame({'A': ['X', 'Y'],
   ...:                    'B': [datetime.datetime(2005, 1, 1, 10, 30, 23, 540000),
   ...:                          datetime.datetime(3005, 1, 1, 10, 30, 23, 540000)]})
...: df.groupby("A")["B"].max()
...:
Out[1]:
A
X 2005-01-01 10:30:23.540000
Y 3005-01-01 10:30:23.540000
Name: B, dtype: object
In [2]: pd.__version__
Out[2]: '1.0.5'
```
Actually this was raising a TypeError after 4edcc5541ff3f6470f5e3c083cb83136119e6f0c but prior to the AssertionError.
cc @jbrockmendel
moved off 1.1.2 milestone (scheduled for this week) as no PRs to fix in the pipeline
> Actually this was raising a TypeError after [4edcc55](https://github.com/pandas-dev/pandas/commit/4edcc5541ff3f6470f5e3c083cb83136119e6f0c) but prior to the AssertionError.
#31182
> Actually this was raising a TypeError after [4edcc55](https://github.com/pandas-dev/pandas/commit/4edcc5541ff3f6470f5e3c083cb83136119e6f0c) but prior to the AssertionError.
can confirm
first bad commit: [4edcc5541ff3f6470f5e3c083cb83136119e6f0c] CLN: Make Series._values match Index._values (#31182)
https://github.com/simonjayhawkins/pandas/runs/1170442784?check_suite_focus=true
moved off 1.1.3 milestone (overdue) as no PRs to fix in the pipeline
moved off 1.1.4 milestone (scheduled for release tomorrow) as no PRs to fix in the pipeline
| 2020-11-26T16:46:02Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-38-8b8ec5e4e179>", line 5, in <module>
print(df.groupby('A').B.max())
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\groupby\groupby.py", line 1558, in max
numeric_only=numeric_only, min_count=min_count, alias="max", npfunc=np.max
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\groupby\groupby.py", line 1015, in _agg_general
result = self.aggregate(lambda x: npfunc(x, axis=self.axis))
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\groupby\generic.py", line 261, in aggregate
func, *args, engine=engine, engine_kwargs=engine_kwargs, **kwargs
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\groupby\groupby.py", line 1083, in _python_agg_general
result, counts = self.grouper.agg_series(obj, f)
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\groupby\ops.py", line 644, in agg_series
return self._aggregate_series_fast(obj, func)
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\groupby\ops.py", line 669, in _aggregate_series_fast
result, counts = grouper.get_result()
File "pandas\_libs\reduction.pyx", line 256, in pandas._libs.reduction.SeriesGrouper.get_result
File "pandas\_libs\reduction.pyx", line 74, in pandas._libs.reduction._BaseGrouper._apply_to_group
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\groupby\groupby.py", line 1060, in <lambda>
f = lambda x: func(x, *args, **kwargs)
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\groupby\groupby.py", line 1015, in <lambda>
result = self.aggregate(lambda x: npfunc(x, axis=self.axis))
File "<__array_function__ internals>", line 6, in amax
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\numpy\core\fromnumeric.py", line 2706, in amax
keepdims=keepdims, initial=initial, where=where)
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\numpy\core\fromnumeric.py", line 85, in _wrapreduction
return reduction(axis=axis, out=out, **passkwargs)
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\generic.py", line 11460, in stat_func
func, name=name, axis=axis, skipna=skipna, numeric_only=numeric_only
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\series.py", line 4220, in _reduce
delegate = self._values
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\series.py", line 572, in _values
return self._mgr.internal_values()
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\internals\managers.py", line 1615, in internal_values
return self._block.internal_values()
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\internals\blocks.py", line 2019, in internal_values
return self.array_values()
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\internals\blocks.py", line 2022, in array_values
return self._holder._simple_new(self.values)
File "C:\Users\My.Name\AppData\Local\Continuum\miniconda3\envs\main\lib\site-packages\pandas\core\arrays\datetimes.py", line 290, in _simple_new
assert values.dtype == "i8"
AssertionError
| 14,444 |
|||
pandas-dev/pandas | pandas-dev__pandas-38173 | efbcd68bd4bc899eb2270d4015bf32a32da00371 | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -769,6 +769,7 @@ Groupby/resample/rolling
- Bug in :meth:`DataFrame.groupby` dropped ``nan`` groups from result with ``dropna=False`` when grouping over a single column (:issue:`35646`, :issue:`35542`)
- Bug in :meth:`.DataFrameGroupBy.head`, :meth:`.DataFrameGroupBy.tail`, :meth:`SeriesGroupBy.head`, and :meth:`SeriesGroupBy.tail` would raise when used with ``axis=1`` (:issue:`9772`)
- Bug in :meth:`.DataFrameGroupBy.transform` would raise when used with ``axis=1`` and a transformation kernel (e.g. "shift") (:issue:`36308`)
+- Bug in :meth:`.DataFrameGroupBy.quantile` couldn't handle with arraylike ``q`` when grouping by columns (:issue:`33795`)
- Bug in :meth:`DataFrameGroupBy.rank` with ``datetime64tz`` or period dtype incorrectly casting results to those dtypes instead of returning ``float64`` dtype (:issue:`38187`)
Reshaping
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -2232,29 +2232,36 @@ def post_processor(vals: np.ndarray, inference: Optional[Type]) -> np.ndarray:
)
for qi in q
]
- result = concat(results, axis=0, keys=q)
+ result = concat(results, axis=self.axis, keys=q)
# fix levels to place quantiles on the inside
# TODO(GH-10710): Ideally, we could write this as
# >>> result.stack(0).loc[pd.IndexSlice[:, ..., q], :]
# but this hits https://github.com/pandas-dev/pandas/issues/10710
# which doesn't reorder the list-like `q` on the inner level.
- order = list(range(1, result.index.nlevels)) + [0]
+ order = list(range(1, result.axes[self.axis].nlevels)) + [0]
# temporarily saves the index names
- index_names = np.array(result.index.names)
+ index_names = np.array(result.axes[self.axis].names)
# set index names to positions to avoid confusion
- result.index.names = np.arange(len(index_names))
+ result.axes[self.axis].names = np.arange(len(index_names))
# place quantiles on the inside
- result = result.reorder_levels(order)
+ if isinstance(result, Series):
+ result = result.reorder_levels(order)
+ else:
+ result = result.reorder_levels(order, axis=self.axis)
# restore the index names in order
- result.index.names = index_names[order]
+ result.axes[self.axis].names = index_names[order]
# reorder rows to keep things sorted
- indices = np.arange(len(result)).reshape([len(q), self.ngroups]).T.flatten()
- return result.take(indices)
+ indices = (
+ np.arange(result.shape[self.axis])
+ .reshape([len(q), self.ngroups])
+ .T.flatten()
+ )
+ return result.take(indices, axis=self.axis)
@Substitution(name="groupby")
def ngroup(self, ascending: bool = True):
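The patch generalizes the post-processing to whichever axis was grouped; the two building blocks are `result.axes[self.axis]` and `DataFrame.reorder_levels`, which accepts an `axis` argument for column MultiIndexes. A small illustration (the names are made up):

```python
import pandas as pd

cols = pd.MultiIndex.from_product(
    [["sample", "trend"], [0.025, 0.975]], names=["scenario", "q"]
)
df = pd.DataFrame([[1, 2, 3, 4]], columns=cols)

# move the quantile level to the inside of the column MultiIndex,
# mirroring what the fixed groupby quantile does for axis=1
swapped = df.reorder_levels([1, 0], axis=1)
print(swapped.columns.names)  # ['q', 'scenario']
```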
| BUG: quantile with list of quantiles fails on MultiIndex column and groupby
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
import pandas as pd
import numpy as np
steps = 6
simulations = 20
alpha = 0.05
idx_interim_date_forecast = pd.IndexSlice["2020-04-01":]
df_return_forecast = pd.concat(
[
pd.DataFrame(
data=np.random.rand(steps, simulations),
index=pd.date_range(
start=idx_interim_date_forecast.start, periods=steps, freq="M"
),
columns=pd.MultiIndex.from_product(
[("sample",), range(0, simulations)], names=["scenario", "simulation"]
),
),
pd.DataFrame(
data=np.random.rand(steps, simulations),
index=pd.date_range(
start=idx_interim_date_forecast.start, periods=steps, freq="M"
),
columns=pd.MultiIndex.from_product(
[("trend",), range(0, simulations)], names=["scenario", "simulation"]
),
),
],
axis=1,
sort=True,
)
df_return_forecast.groupby(axis=1, level=0).quantile(q=alpha / 2)
df_return_forecast.groupby(axis=1, level=0).quantile(q=1 - alpha / 2)
df_return_forecast.groupby(axis=1, level=0).quantile(q=[alpha / 2, 1 - alpha / 2])
```
```python
Traceback (most recent call last):
File "<ipython-input-1-6a99a78036cb>", line 37, in <module>
df_return_forecast.groupby(axis=1, level=0).quantile(q=[alpha / 2, 1 - alpha / 2])
File "C:\Users\Kurt\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 1951, in quantile
indices = np.arange(len(result)).reshape([len(q), self.ngroups]).T.flatten()
```
#### Problem description
I would expect the ```quantile``` call with an iterable of quantiles to return results at each of the specified quantiles. Individual calls at the required quantiles return the correct values. The issue seems to be a result of the MultiIndex column/groupby combination.
If instead I do the following I am able to get the desired result without error.
```python
df_return_forecast.T.groupby(axis=0, level=0).quantile(q=[alpha / 2, 1 - alpha / 2]).T
```
So there is a workaround, but the behaviour is not as expected and I am not sure what performance cost it may induce for large dataframes.
#### Expected Output
As per result obtained by:
```python
df_return_forecast.T.groupby(axis=0, level=0).quantile(q=[alpha / 2, 1 - alpha / 2]).T
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.6.final.0
python-bits : 64
OS : Windows
OS-release : 10
machine : AMD64
processor : Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en
LOCALE : None.None
pandas : 1.0.3
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 46.1.3.post20200325
Cython : 0.29.16
pytest : 5.4.1
hypothesis : 5.10.4
sphinx : 3.0.2
blosc : None
feather : None
xlsxwriter : 1.2.8
lxml.etree : 4.5.0
html5lib : 1.0.1
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.13.0
pandas_datareader: None
bs4 : 4.9.0
bottleneck : 1.3.2
fastparquet : None
gcsfs : None
lxml.etree : 4.5.0
matplotlib : 3.2.1
numexpr : 2.7.1
odfpy : None
openpyxl : 3.0.3
pandas_gbq : None
pyarrow : 0.16.0
pytables : None
pytest : 5.4.1
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : 1.3.15
tables : 3.6.1
tabulate : None
xarray : None
xlrd : 1.2.0
xlwt : 1.3.0
xlsxwriter : 1.2.8
numba : 0.48.0
</details>
| Thanks for the report. Are you interested in working on a fix? | 2020-11-30T05:22:31Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-1-6a99a78036cb>", line 37, in <module>
df_return_forecast.groupby(axis=1, level=0).quantile(q=[alpha / 2, 1 - alpha / 2])
File "C:\Users\Kurt\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 1951, in quantile
indices = np.arange(len(result)).reshape([len(q), self.ngroups]).T.flatten()
```
#### Problem description
I would expect the ```quantile``` call with an iterable list of quantiles to return at the specified locations. Individual calls at the required locations returns the correct quantiles. The issue seems to be a result of the multiindex column/groupby.
| 14,465 |
|||
pandas-dev/pandas | pandas-dev__pandas-38220 | d0db009842e5a187193678c1d483af1127af1978 | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -140,7 +140,7 @@ Missing
MultiIndex
^^^^^^^^^^
--
+- Bug in :meth:`DataFrame.drop` raising ``TypeError`` when :class:`MultiIndex` is non-unique and no level is provided (:issue:`36293`)
-
I/O
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -4182,6 +4182,10 @@ def _drop_axis(
# GH 18561 MultiIndex.drop should raise if label is absent
if errors == "raise" and indexer.all():
raise KeyError(f"{labels} not found in axis")
+ elif isinstance(axis, MultiIndex) and labels.dtype == "object":
+ # Set level to zero in case of MultiIndex and label is string,
+ # because isin can't handle strings for MultiIndexes GH#36293
+ indexer = ~axis.get_level_values(0).isin(labels)
else:
indexer = ~axis.isin(labels)
# Check if label doesn't exist along axis
| BUG: `DataFrame.drop` fails when there is a multiIndex without any level provided
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
```python
>>> x = pd.DataFrame({"a": range(10), "b": range(10, 20), "d": ["a", "v"] * 5}, index=pd.MultiIndex(levels=[['lama', 'cow', 'falcon'],
... ['speed', 'weight', 'length']],
... codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2, 1],
... [0, 1, 2, 0, 1, 2, 0, 1, 2, 1]]))
>>> x
a b d
lama speed 0 10 a
weight 1 11 v
length 2 12 a
cow speed 3 13 v
weight 4 14 a
length 5 15 v
falcon speed 6 16 a
weight 7 17 v
length 8 18 a
cow weight 9 19 v
>>> x.drop(index='cow')
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Users/pgali/PycharmProjects/del/venv1/lib/python3.7/site-packages/pandas/core/frame.py", line 4169, in drop
errors=errors,
File "/Users/pgali/PycharmProjects/del/venv1/lib/python3.7/site-packages/pandas/core/generic.py", line 3884, in drop
obj = obj._drop_axis(labels, axis, level=level, errors=errors)
File "/Users/pgali/PycharmProjects/del/venv1/lib/python3.7/site-packages/pandas/core/generic.py", line 3933, in _drop_axis
indexer = ~axis.isin(labels)
File "/Users/pgali/PycharmProjects/del/venv1/lib/python3.7/site-packages/pandas/core/indexes/multi.py", line 3600, in isin
values = MultiIndex.from_tuples(values, names=self.names)._values
File "/Users/pgali/PycharmProjects/del/venv1/lib/python3.7/site-packages/pandas/core/indexes/multi.py", line 501, in from_tuples
arrays = list(lib.tuples_to_object_array(tuples).T)
File "pandas/_libs/lib.pyx", line 2471, in pandas._libs.lib.tuples_to_object_array
TypeError: Expected tuple, got str
```
#### Problem description
I'd have expected `drop` to drop all the rows with `cow` as an index label.
#### Expected Output
```python
>>> x.drop(index='cow')
a b d
lama speed 0 10 a
weight 1 11 v
length 2 12 a
falcon speed 6 16 a
weight 7 17 v
length 8 18 a
```
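For completeness, passing `level` explicitly already routed through the working code path (the same `get_level_values(level).isin` the fix reuses for string labels), so it served as a workaround on the non-unique index; a sketch with the frame from the report:

```python
import pandas as pd

mi = pd.MultiIndex(
    levels=[["lama", "cow", "falcon"], ["speed", "weight", "length"]],
    codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2, 1], [0, 1, 2, 0, 1, 2, 0, 1, 2, 1]],
)
x = pd.DataFrame(
    {"a": range(10), "b": range(10, 20), "d": ["a", "v"] * 5}, index=mi
)

# the four "cow" rows are removed, duplicates included
trimmed = x.drop(index="cow", level=0)
print(len(trimmed))  # 6
```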
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : 2a7d3326dee660824a8433ffd01065f8ac37f7d6
python : 3.7.3.final.0
python-bits : 64
OS : Darwin
OS-release : 19.6.0
Version : Darwin Kernel Version 19.6.0: Thu Jun 18 20:49:00 PDT 2020; root:xnu-6153.141.1~1/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : None
LOCALE : en_US.UTF-8
pandas : 1.1.2
numpy : 1.17.3
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 49.1.0
Cython : None
pytest : None
hypothesis : 5.29.0
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
</details>
| Hi,
thanks for your report. You have to specify a level when dropping parts of a MultiIndex, or give a list of tuples (the length of each tuple must match the number of levels in the MultiIndex).
We could improve the error reporting here.
@MarcoGorelli @phofl Can I please work on this issue? Will changing the exception message fix this issue?
@phofl I just did some digging and it looks like the issue is not just an error message, but rather the behavior when there is an index duplicate (notice `cow` is a duplicate). When there is no duplicate, dropping without specifying level works just fine. For example:
```
df = pd.DataFrame(
data=zip(range(9), range(10, 19)),
columns=['a', 'COL'],
index=pd.MultiIndex(
levels=[['lama', 'cow', 'falcon'], ['speed', 'weight', 'length']],
codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2], [0, 1, 2, 0, 1, 2, 0, 1, 2]]
)
)
print(df)
print(df.drop(index='lama'))
```
yields
```
a COL
lama speed 0 10
weight 1 11
length 2 12
cow speed 3 13
weight 4 14
length 5 15
falcon speed 6 16
weight 7 17
length 8 18
##########################
a COL
cow speed 3 13
weight 4 14
length 5 15
falcon speed 6 16
weight 7 17
length 8 18
```
Is this really the intended behavior?
Interesting, thanks very much. Don't know what the intended behavior is then.
Looked into it again:
We use https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.isin.html when the index is not unique. isin cannot handle plain strings as input here, because it creates a MultiIndex from the given inputs.
When the index is unique, we use drop, which can handle strings as input.
Part starts here:
https://github.com/pandas-dev/pandas/blob/2067d7e306ae720d455f356e4da21f282a8a762e/pandas/core/generic.py#L4115
Is this behavior intended, or should we search for a solution that is consistent between non-unique and unique MultiIndexes?
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Users/pgali/PycharmProjects/del/venv1/lib/python3.7/site-packages/pandas/core/frame.py", line 4169, in drop
errors=errors,
File "/Users/pgali/PycharmProjects/del/venv1/lib/python3.7/site-packages/pandas/core/generic.py", line 3884, in drop
obj = obj._drop_axis(labels, axis, level=level, errors=errors)
File "/Users/pgali/PycharmProjects/del/venv1/lib/python3.7/site-packages/pandas/core/generic.py", line 3933, in _drop_axis
indexer = ~axis.isin(labels)
File "/Users/pgali/PycharmProjects/del/venv1/lib/python3.7/site-packages/pandas/core/indexes/multi.py", line 3600, in isin
values = MultiIndex.from_tuples(values, names=self.names)._values
File "/Users/pgali/PycharmProjects/del/venv1/lib/python3.7/site-packages/pandas/core/indexes/multi.py", line 501, in from_tuples
arrays = list(lib.tuples_to_object_array(tuples).T)
File "pandas/_libs/lib.pyx", line 2471, in pandas._libs.lib.tuples_to_object_array
TypeError: Expected tuple, got str
| 14,474 |