issues
5 rows where repo = 13221727 and user = 2405019 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1044693438 | I_kwDOAMm_X84-RMG- | 5937 | DataArray.dt.seconds returns incorrect value for negative `timedelta64[ns]` | leifdenby 2405019 | closed | 0 | 4 | 2021-11-04T12:05:24Z | 2023-11-10T00:39:17Z | 2023-11-10T00:39:17Z | CONTRIBUTOR | What happened: For a negative `timedelta64[ns]` value, `DataArray.dt.seconds` returns an incorrect value.
What you expected to happen: The seconds component to be `0` in both cases in the example below.
Minimal Complete Verifiable Example:

```python
# coding: utf-8
import xarray as xr
import numpy as np

# number of nanoseconds
value = 42

da = xr.DataArray([np.timedelta64(value, "ns")])
print(da.dt.seconds)
assert da.dt.seconds == 0

da = xr.DataArray([np.timedelta64(-value, "ns")])
print(da.dt.seconds)
assert da.dt.seconds == 0
```

Anything else we need to know?: I've narrowed this down to the call to
I think the issue arises because pandas turns the numpy timedelta64 into a "minus one day plus a time". This actually does have a number of "seconds" in it, but the "total_seconds" has the expected value:
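A minimal sketch of that behaviour, using the same `-42 ns` value as the example above:

```python
import numpy as np
import pandas as pd

td = pd.to_timedelta(np.timedelta64(-42, "ns"))

# the "seconds" component of the normalised value is large and positive ...
print(td.seconds)          # 86399

# ... while the total duration in seconds keeps the sign and is tiny
print(td.total_seconds())  # -4.2e-08
```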
Which would correctly round to zero. I don't think the issue is in pandas, although the output from pandas is counter-intuitive:
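For reference, the pandas output being referred to looks roughly like this (again for the `-42 ns` example value):

```python
import numpy as np
import pandas as pd

# pandas normalises the negative value as "minus one day plus a time"
print(pd.to_timedelta(np.timedelta64(-42, "ns")))
# -1 days +23:59:59.999999958
```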
Maybe we should handle this as a special case by taking the absolute value before passing the values to pandas (and then applying the original sign again afterwards)?

Environment: Output of `xr.show_versions()`

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.7 (default, May 6 2020, 04:59:01) [Clang 4.0.1 (tags/RELEASE_401/final)]
python-bits: 64
OS: Darwin
OS-release: 19.6.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: en_GB.UTF-8
LANG: None
LOCALE: ('en_GB', 'UTF-8')
libhdf5: 1.10.4
libnetcdf: 4.6.2
xarray: 0.18.2
pandas: 1.3.4
numpy: 1.19.1
scipy: 1.5.0
netCDF4: 1.4.2
pydap: installed
h5netcdf: None
h5py: 2.9.0
Nio: None
zarr: 2.10.1
cftime: 1.5.1.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2021.09.1
distributed: 2021.09.1
matplotlib: 3.2.2
cartopy: 0.18.0
seaborn: 0.10.1
numbagg: None
fsspec: 2021.06.1
cupy: None
pint: 0.18
sparse: None
setuptools: 46.4.0.post20200518
pip: 21.1.2
conda: None
pytest: 6.0.1
IPython: 7.16.1
sphinx: None
```
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5937/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
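The special case suggested in the issue above (take the absolute value before passing the values to pandas, then re-apply the original sign) could look roughly like the sketch below; `signed_seconds` is a hypothetical helper for illustration, not xarray's actual implementation:

```python
import numpy as np
import pandas as pd

def signed_seconds(values):
    """Hypothetical helper: seconds component with the original sign re-applied."""
    values = np.asarray(values, dtype="timedelta64[ns]")
    sign = np.where(values < np.timedelta64(0, "ns"), -1, 1)
    # pandas computes the component on the (positive) magnitude ...
    seconds = pd.TimedeltaIndex(np.abs(values)).seconds
    # ... and the original sign is put back afterwards
    return sign * np.asarray(seconds)

print(signed_seconds([np.timedelta64(42, "ns")]))   # [0]
print(signed_seconds([np.timedelta64(-42, "ns")]))  # [0]
```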
798676679 | MDExOlB1bGxSZXF1ZXN0NTY1NDUxMzU3 | 4855 | Fix `bounds_error=True` ignored with 1D interpolation | leifdenby 2405019 | closed | 0 | 3 | 2021-02-01T20:18:27Z | 2021-02-10T21:42:07Z | 2021-02-10T21:42:07Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/4855 | Previously `bounds_error=True` was ignored with 1D interpolation; this fixes that.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4855/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
798676024 | MDU6SXNzdWU3OTg2NzYwMjQ= | 4854 | `bounds_error=True` ignored in 1D interpolation | leifdenby 2405019 | closed | 0 | 0 | 2021-02-01T20:17:34Z | 2021-02-10T21:42:06Z | 2021-02-10T21:42:06Z | CONTRIBUTOR | What happened: Attempted to interpolate outside the coordinate range while passing `bounds_error=True`. What you expected to happen: I expected a `ValueError` to be raised. Minimal Complete Verifiable Example:

```python
import xarray as xr
import numpy as np

da = xr.DataArray(
    np.sin(0.3 * np.arange(12).reshape(4, 3)),
    [("time", np.arange(4)), ("space", [0.1, 0.2, 0.3])],
)

# this should return nans, as the default is to fill with nans
da.interp(time=3.5)

# this should raise ValueError, but
```
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
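The example above is truncated after the second comment. A hedged guess at the missing call, assuming the `kwargs` pass-through that `DataArray.interp` provides for the underlying scipy interpolator (the exact line from the original issue is not recoverable here):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.sin(0.3 * np.arange(12).reshape(4, 3)),
    [("time", np.arange(4)), ("space", [0.1, 0.2, 0.3])],
)

# hypothetical reconstruction: out-of-range target with bounds_error requested;
# per the issue title this returned NaN instead of raising before the fix
da.interp(time=3.5, kwargs=dict(bounds_error=True))
```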
672662079 | MDU6SXNzdWU2NzI2NjIwNzk= | 4306 | Indexing datetime broken with pandas 1.1.0 | leifdenby 2405019 | closed | 0 | 2 | 2020-08-04T09:50:59Z | 2020-08-04T09:54:46Z | 2020-08-04T09:54:46Z | CONTRIBUTOR | Code below works with pandas versions before 1.1.0:

```python
import pandas as pd
import xarray as xr
import numpy as np

dates = pd.date_range("2000-01-01", periods=5)
ds = xr.Dataset(coords=dict(dates=dates))
ds['v'] = ("dates"), np.arange(ds.dates.count())
ds.sel(dates=ds.dates.values[2])
```

The `.sel` operation produces a KeyError in `pandas/core/indexes/datetimes.py`:

```
Traceback (most recent call last):
  File "datetime_problem.py", line 11, in <module>
    ds.sel(dates=ds.dates.values[2])
  File "/Users/leifdenby/miniconda3/envs/lagtraj/lib/python3.8/site-packages/xarray/core/dataset.py", line 2101, in sel
    pos_indexers, new_indexes = remap_label_indexers(
  File "/Users/leifdenby/miniconda3/envs/lagtraj/lib/python3.8/site-packages/xarray/core/coordinates.py", line 396, in remap_label_indexers
    pos_indexers, new_indexes = indexing.remap_label_indexers(
  File "/Users/leifdenby/miniconda3/envs/lagtraj/lib/python3.8/site-packages/xarray/core/indexing.py", line 270, in remap_label_indexers
    idxr, new_idx = convert_label_indexer(index, label, dim, method, tolerance)
  File "/Users/leifdenby/miniconda3/envs/lagtraj/lib/python3.8/site-packages/xarray/core/indexing.py", line 189, in convert_label_indexer
    indexer = index.get_loc(
  File "/Users/leifdenby/miniconda3/envs/lagtraj/lib/python3.8/site-packages/pandas/core/indexes/datetimes.py", line 622, in get_loc
    raise KeyError(key)
KeyError: 946857600000000000
```

Environment: Output of `xr.show_versions()`

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.5 | packaged by conda-forge | (default, Jul 24 2020, 01:06:20) [Clang 10.0.1 ]
python-bits: 64
OS: Darwin
OS-release: 18.0.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: en_GB.UTF-8
LANG: None
LOCALE: en_GB.UTF-8
libhdf5: 1.10.5
libnetcdf: 4.6.3
xarray: 0.16.0
pandas: 1.1.0
numpy: 1.19.1
scipy: 1.5.2
netCDF4: 1.5.4
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.2.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2.22.0
distributed: None
matplotlib: 3.3.0
cartopy: None
seaborn: None
numbagg: None
pint: None
setuptools: 49.2.0.post20200712
pip: 20.1.1
conda: None
pytest: 6.0.1
IPython: 7.16.1
sphinx: None
```

Apologies if this is a known issue. I tried to work out whether this is an issue with pandas or xarray (I assume it is with pandas), but couldn't find the right piece of code. Happy to fix the issue if someone could show me what needs to change. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
607678694 | MDU6SXNzdWU2MDc2Nzg2OTQ= | 4010 | Issue indexing by xarray's own time values + offset | leifdenby 2405019 | closed | 0 | 2 | 2020-04-27T16:20:34Z | 2020-04-28T11:03:06Z | 2020-04-28T08:20:16Z | CONTRIBUTOR | I'm struggling to work out how to index by a xarray time value + an offset (either created using `np.timedelta64` or `datetime.timedelta`). MCVE Code Sample

```python
import xarray as xr
import numpy as np
import datetime as dt

now = dt.datetime.now()

dt_array = xr.DataArray(
    range(10),
    dims=('time', ),
    coords=dict(time=[now + dt.timedelta(seconds=i) for i in range(10)])
)

# this works
dt_array.loc[dt_array.time.min():dt_array.time.max()].count() == 10

# this fails, only the first value is returned (adding
# the time delta appears to have no effect)
dt_array.loc[dt_array.time.min():dt_array.time.min() + np.timedelta64(seconds=4)].count() == 4

# this fails, an exception is raised when trying to add
# a datetime.timedelta to the xarray value
dt_array.loc[dt_array.time.min():dt_array.time.max() + dt.timedelta(seconds=4)].count() == 4

# also fails, I got the impression from issue #1240
# that
```
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4010/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
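As an aside (not part of the original report): `np.timedelta64` is normally constructed from a value and a unit string rather than keyword arguments, so a hedged sketch of the kind of offset-based slicing the issue is after might look like this; using `.values` to work with plain numpy scalars is an assumption for illustration, not the resolution recorded for the issue:

```python
import datetime as dt
import numpy as np
import xarray as xr

now = dt.datetime.now()
dt_array = xr.DataArray(
    range(10),
    dims=("time",),
    coords=dict(time=[now + dt.timedelta(seconds=i) for i in range(10)]),
)

# build the offset as (value, unit) and add it to a plain numpy datetime scalar
start = dt_array.time.values[0]
stop = start + np.timedelta64(4, "s")

# label-based slices include both endpoints, so this selects the first five values
print(int(dt_array.loc[start:stop].count()))
```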
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
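For reference, the listing at the top of the page corresponds to a filter like the one below; a minimal sketch assuming the underlying SQLite database is available locally (the `github.db` filename is an assumption):

```python
import sqlite3

conn = sqlite3.connect("github.db")  # assumed filename for the database behind this page
rows = conn.execute(
    """
    SELECT number, title, state, updated_at
    FROM issues
    WHERE repo = 13221727 AND [user] = 2405019
    ORDER BY updated_at DESC
    """
).fetchall()

for number, title, state, updated_at in rows:
    print(number, state, updated_at, title)
```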