issues
3 rows where user = 161133 sorted by updated_at descending
**Issue #5581: Error slicing CFTimeIndex with Pandas 1.3**

id: 937508115 · node_id: MDU6SXNzdWU5Mzc1MDgxMTU= · user: ScottWales (161133) · state: closed · locked: 0 · comments: 4
created_at: 2021-07-06T04:28:00Z · updated_at: 2021-07-23T21:53:51Z · closed_at: 2021-07-23T21:53:51Z · author_association: CONTRIBUTOR

**What happened:** Slicing a DataArray with a CFTime time axis gives an error.

**What you expected to happen:** The slice should return elements 31 to 180.

**Minimal Complete Verifiable Example:**

```python
import xarray as xr
import cftime
import numpy as np

units = 'days since 2000-01-01 00:00'
time_365 = cftime.num2date(np.arange(0, 10 * 365), units, '365_day')
da = xr.DataArray(np.arange(time_365.size), coords=[time_365], dims='time')
da.sel(time=slice('2000-02', '2000-06'))
```

**Anything else we need to know?:** It appears to be a compatibility issue between Pandas 1.3.0 and Xarray 0.18.2; with Pandas 1.2.5 and Xarray 0.18.2 the slice behaves normally. Possibly there has been an interface change that has broken CFTimeIndex. Using a pure Pandas time axis works fine:

```python
import xarray as xr
import pandas as pd
import numpy as np

time = pd.date_range('20000101', '20100101', freq='D')
da = xr.DataArray(np.arange(time.size), coords=[time], dims='time')
da.sel(time=slice('2000-02', '2000-06'))
```

**Environment:** Output of `xr.show_versions()`:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.5 | packaged by conda-forge | (default, Jun 19 2021, 00:32:32) [GCC 9.3.0]
python-bits: 64
OS: Linux
OS-release: 4.18.0-305.7.1.el8.nci.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.10.6
libnetcdf: 4.7.4
xarray: 0.18.2
pandas: 1.3.0
numpy: 1.21.0
scipy: 1.7.0
netCDF4: 1.5.6
pydap: installed
h5netcdf: 0.11.0
h5py: 3.3.0
Nio: None
zarr: 2.8.3
cftime: 1.4.1
nc_time_axis: 1.2.0
PseudoNetCDF: None
rasterio: 1.2.6
cfgrib: 0.9.9.0
iris: 2.4.0
bottleneck: 1.3.2
dask: 2021.06.2
distributed: 2021.06.2
matplotlib: 3.4.2
cartopy: 0.19.0.post1
seaborn: 0.11.1
numbagg: None
pint: 0.17
setuptools: 52.0.0.post20210125
pip: 21.1.3
conda: 4.10.3
pytest: 6.2.4
IPython: 7.25.0
sphinx: 4.0.3
```
reactions: +1 × 2 (total: 2)
state_reason: completed · repo: xarray (13221727) · type: issue
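The bounds the report above expects can be sketched with the standard library alone: partial string labels like '2000-02' select whole months, and on a 365_day (noleap) calendar the span '2000-02' through '2000-06' covers day indices 31 to 180. This is only an illustration of the intended partial-string-indexing semantics, not xarray or pandas API; `partial_string_slice` and `month_start_offset` are names invented for this sketch.

```python
# Illustrative only: month-label slicing on a 365_day (noleap) calendar.
# These helpers are invented for this sketch; they are not xarray API.
NOLEAP_DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def month_start_offset(year, month, base_year=2000):
    """Day offset of (year, month)'s first day from base_year-01-01."""
    return (year - base_year) * 365 + sum(NOLEAP_DAYS[:month - 1])

def partial_string_slice(start, stop):
    """Inclusive-start / exclusive-stop day-index bounds for 'YYYY-MM'
    labels, mimicking pandas-style partial string indexing (the stop
    month is included in the selection)."""
    sy, sm = map(int, start.split('-'))
    ey, em = map(int, stop.split('-'))
    lo = month_start_offset(sy, sm)
    # the stop month is inclusive, so the bound is the next month's start
    hi = month_start_offset(ey + (em == 12), em % 12 + 1)
    return lo, hi

lo, hi = partial_string_slice('2000-02', '2000-06')
# lo == 31, hi == 181: elements 31 through 180, as the report expects
```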
**Pull request #3033: ENH: keepdims=True for xarray reductions**

id: 457771111 · node_id: MDExOlB1bGxSZXF1ZXN0Mjg5NTExOTk2 · user: ScottWales (161133) · state: closed · locked: 0 · comments: 3
created_at: 2019-06-19T02:04:53Z · updated_at: 2019-06-23T09:18:42Z · closed_at: 2019-06-23T09:18:33Z · author_association: CONTRIBUTOR · draft: 0 · pull_request: pydata/xarray/pulls/3033

Body (inline code lost in this export): Add new option […] Coordinates that depend on the reduced dimensions will be removed from the Dataset/DataArray. The name […] The functionality has only been added to […]

reactions: none
repo: xarray (13221727) · type: pull
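The keepdims option named in the PR title can be illustrated without xarray (a sketch of the general semantics only, not the xarray implementation; `mean_axis0` is a made-up helper): with keepdims=True, a reduction retains the reduced axis with length one, so the result still aligns dimension-for-dimension with the input.

```python
def mean_axis0(arr, keepdims=False):
    """Mean over axis 0 of a nested-list "2-D array" (illustrative only)."""
    ncols = len(arr[0])
    col_means = [sum(row[j] for row in arr) / len(arr) for j in range(ncols)]
    # keepdims=True keeps the reduced axis with length one, so the result
    # has the same number of dimensions as the input
    return [col_means] if keepdims else col_means

mean_axis0([[1, 2], [3, 4]])                 # [2.0, 3.0]   (axis dropped)
mean_axis0([[1, 2], [3, 4]], keepdims=True)  # [[2.0, 3.0]] (axis kept, length 1)
```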
**Issue #2943: Rolling operations loose chunking with dask and bottleneck**

id: 440988633 · node_id: MDU6SXNzdWU0NDA5ODg2MzM= · user: ScottWales (161133) · state: closed · locked: 0 · comments: 1
created_at: 2019-05-07T01:52:05Z · updated_at: 2019-05-07T02:01:13Z · closed_at: 2019-05-07T02:01:13Z · author_association: CONTRIBUTOR

**Code Sample, a copy-pastable example if possible:** A "Minimal, Complete and Verifiable Example" will make it much easier for maintainers to help you: http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports

```python
import bottleneck
import xarray
import dask

data = dask.array.ones((100,), chunks=(10,))
da = xarray.DataArray(data, dims=['time'])
rolled = da.rolling(time=15).mean()

# Expect the 'rolled' dataset to be chunked approximately the same as 'data',
# however there is only one chunk in 'rolled' instead of 10
assert len(rolled.chunks[0]) > 1
```

**Problem description:** Rolling operations lose chunking over the rolled dimension when using dask datasets with bottleneck installed, which is a problem for large datasets where we don't want to load the entire thing. The issue appears to be caused by […]

**Expected Output:** Chunks should be preserved through […]

Output of […]

reactions: none
state_reason: completed · repo: xarray (13221727) · type: issue
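The windowed mean computed by `da.rolling(time=15).mean()` above can be sketched in plain Python. This is a sketch of the rolling semantics only, with `None` standing in for NaN on incomplete windows; it says nothing about the dask chunking behaviour the issue is about, and `rolling_mean` is a made-up name, not an xarray or bottleneck function.

```python
from collections import deque

def rolling_mean(values, window):
    """Trailing rolling mean over a 1-D sequence: None until a full
    window of `window` values is available (None stands in for NaN)."""
    out, buf, total = [], deque(), 0.0
    for v in values:
        buf.append(v)
        total += v
        if len(buf) > window:       # slide the window forward
            total -= buf.popleft()
        out.append(total / window if len(buf) == window else None)
    return out

rolling_mean([1, 2, 3, 4], 2)  # [None, 1.5, 2.5, 3.5]
```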
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
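The schema above can be exercised directly with Python's built-in `sqlite3` module. A minimal sketch, using a trimmed-down subset of the columns, reproducing this page's query (rows where user = 161133, most recently updated first):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
# Trimmed subset of the [issues] schema above, enough for the query
conn.executescript("""
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER,
   [updated_at] TEXT
);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
""")
rows = [
    (937508115, 5581, 'Error slicing CFTimeIndex with Pandas 1.3',
     161133, '2021-07-23T21:53:51Z'),
    (457771111, 3033, 'ENH: keepdims=True for xarray reductions',
     161133, '2019-06-23T09:18:42Z'),
    (440988633, 2943, 'Rolling operations loose chunking with dask and bottleneck',
     161133, '2019-05-07T02:01:13Z'),
]
conn.executemany('INSERT INTO issues VALUES (?, ?, ?, ?, ?)', rows)
# The query behind this page: rows for user 161133, newest first
# (ISO-8601 timestamps sort correctly as plain strings)
result = conn.execute(
    'SELECT number, title FROM issues WHERE [user] = ? ORDER BY updated_at DESC',
    (161133,),
).fetchall()
# result[0] is (5581, 'Error slicing CFTimeIndex with Pandas 1.3')
```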