issues
3 rows where repo = 13221727, state = "closed", and user = 5802846, sorted by updated_at descending
id: 887711474
node_id: MDU6SXNzdWU4ODc3MTE0NzQ=
number: 5290
title: Inconclusive error messages using to_zarr with regions
user: niowniow 5802846
state: closed
locked: 0
comments: 4
created_at: 2021-05-11T15:54:39Z
updated_at: 2023-11-05T06:28:39Z
closed_at: 2023-11-05T06:28:39Z
author_association: CONTRIBUTOR

body:

What happened:

The idea is to use an xarray dataset (stored as a dummy zarr file) which is subsequently filled with data region by region. It seems the current implementation is designed either to store coordinates for the whole dataset and write them to disk, or to write without coordinates. I failed to understand this from the documentation and tried to create a dataset without coordinates and fill it with a dataset subset that has coordinates. This gave some inconclusive errors depending on the actual code example (see below).

It might also be a bug, and it should in fact be possible to add a dataset with coordinates to a dummy dataset without coordinates; in that case there seems to be an issue with how the variables are handled while storing the region... or I might just have done it wrong, and I'm looking forward to suggestions.

What you expected to happen:

Either an error message telling me that I should use coordinates when creating the dummy dataset, or, if this is a bug and it should be possible, it should just work.

Minimal Complete Verifiable Example:

```python
import dask.array
import xarray as xr
import numpy as np

error = 1  # choose between 0 (no error), 1, 2, 3

dummies = dask.array.zeros(30, chunks=10)
# chunks in coords are not taken into account while saving!?
coord_x = dask.array.zeros(30, chunks=10)  # or coord_x = np.zeros((30,))

if error == 0:
    ds = xr.Dataset({"foo": ("x", dummies)}, coords={"x": coord_x})
else:
    ds = xr.Dataset({"foo": ("x", dummies)})
print(ds)

path = "./tmp/test.zarr"
ds.to_zarr(path, mode='w', compute=False, consolidated=True)

# create a new dataset to be input into a region
ds = xr.Dataset({"foo": ('x', np.arange(10))}, coords={"x": np.arange(10)})

if error == 1:
    ds.to_zarr(path, region={"x": slice(10, 20)})
    # ValueError: parameter 'value': expected array with shape (0,), got (10,)
elif error == 2:
    ds.to_zarr(path, region={"x": slice(0, 10)})
    ds.to_zarr(path, region={"x": slice(10, 20)})
    # ValueError: conflicting sizes for dimension 'x': length 10 on 'x' and length 30 on 'foo'
elif error == 3:
    ds.to_zarr(path, region={"x": slice(0, 10)})
    ds = xr.Dataset({"foo": ('x', np.arange(10))}, coords={"x": np.arange(10)})
    ds.to_zarr(path, region={"x": slice(10, 20)})
    # ValueError: parameter 'value': expected array with shape (0,), got (10,)
else:
    ds.to_zarr(path, region={"x": slice(10, 20)})

ds = xr.open_zarr(path)
print('reopen', ds['x'])
```

Anything else we need to know?:

Environment:

Output of <tt>xr.show_versions()</tt>:

INSTALLED VERSIONS
------------------
commit: None
python: 3.8.6 | packaged by conda-forge | (default, Oct 7 2020, 19:08:05) [GCC 7.5.0]
python-bits: 64
OS: Linux
OS-release: 4.19.0-16-amd64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: C.UTF-8
LOCALE: en_US.UTF-8
libhdf5: None
libnetcdf: None
xarray: 0.18.0
pandas: 1.2.3
numpy: 1.19.2
scipy: 1.6.2
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.8.1
cftime: 1.4.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2021.04.0
distributed: None
matplotlib: 3.4.1
cartopy: None
seaborn: None
numbagg: None
pint: None
setuptools: 49.6.0.post20210108
pip: 21.0.1
conda: None
pytest: None
IPython: None
sphinx: None

reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/5290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
state_reason: completed
repo: xarray 13221727
type: issue
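For orientation, here is a minimal sketch of the region-writing pattern this issue revolves around, corresponding to the error == 0 path in the MCVE: create the template store with its coordinate up front, write it lazily, then fill a region. The store path, sizes, and variable names are illustrative, and it assumes a reasonably recent xarray/zarr where to_zarr(..., region=...) is supported; it is not the fix that eventually closed the issue.

```python
import dask.array
import numpy as np
import xarray as xr

# Lazy "template" dataset that already carries the x coordinate;
# with compute=False only metadata and the coordinate land on disk.
template = xr.Dataset(
    {"foo": ("x", dask.array.zeros(30, chunks=10))},
    coords={"x": np.arange(30)},
)
path = "./tmp/region_demo.zarr"  # illustrative path
template.to_zarr(path, mode="w", compute=False, consolidated=True)

# Fill one region at a time; the slice must line up with the template's x axis.
chunk = xr.Dataset(
    {"foo": ("x", np.arange(10, 20, dtype=float))},
    coords={"x": np.arange(10, 20)},
)
chunk.to_zarr(path, region={"x": slice(10, 20)})

print(xr.open_zarr(path)["foo"].values)
```

The point of the sketch is that the region slice has to agree with the coordinate already present in the template, which is exactly the coupling the error messages quoted above fail to explain.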
id: 976207971
node_id: MDU6SXNzdWU5NzYyMDc5NzE=
number: 5727
title: Setting item with loc and boolean mask fails
user: niowniow 5802846
state: closed
locked: 0
comments: 3
created_at: 2021-08-21T19:41:56Z
updated_at: 2022-03-17T17:11:43Z
closed_at: 2022-03-17T17:11:43Z
author_association: CONTRIBUTOR

body:

What happened:

When setting items on a DataArray with .loc and a boolean mask, the assignment fails.

Minimal Complete Verifiable Example:

```python
import numpy as np
import xarray as xr

x = np.arange(10).astype(np.float64)
fx = np.arange(10).astype(np.float64)
da = xr.DataArray(fx, dims=['x'], coords={'x': x})
mask = np.zeros((10,))
mask[1::2] = 1
mask = mask.astype(bool)
da.loc[{'x': ~mask}] = np.arange(5) + 10
```

Anything else we need to know?:

<del>Could be fixed by replacing the line, but maybe this is not the cleanest solution.</del> I tried fixing it with the following, which works for the above code but fails for other cases.

Environment:

Output of <tt>xr.show_versions()</tt>:

INSTALLED VERSIONS
------------------
commit: None
python: 3.7.0 (default, Oct 9 2018, 10:31:47) [GCC 7.3.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-80-lowlatency
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: None
libnetcdf: None
xarray: 0.19.0
pandas: 1.0.1
numpy: 1.18.1
scipy: 1.4.1
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.5.0
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2.14.0
distributed: None
matplotlib: 3.1.3
cartopy: None
seaborn: None
numbagg: None
pint: None
setuptools: 46.1.3.post20200330
pip: 20.0.2
conda: None
pytest: None
IPython: 7.13.0
sphinx: None

reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/5727/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
state_reason: completed
repo: xarray 13221727
type: issue
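Not from the issue thread itself, but as a hedged illustration of one way around the failure while it was open: instead of handing .loc the boolean mask directly, select the coordinate labels with the mask and assign to those (or drop down to the underlying numpy array). Variable names mirror the MCVE above; this is a sketch, not the fix that was merged.

```python
import numpy as np
import xarray as xr

x = np.arange(10).astype(np.float64)
fx = np.arange(10).astype(np.float64)
da = xr.DataArray(fx, dims=['x'], coords={'x': x})

mask = np.zeros((10,), dtype=bool)
mask[1::2] = True

# Workaround sketch: index .loc with the labels selected by the mask
# instead of the boolean mask itself.
da.loc[{'x': x[~mask]}] = np.arange(5) + 10

# Alternative escape hatch: assign through the underlying numpy array.
# da.values[~mask] = np.arange(5) + 10

print(da.values)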
id: 536214141
node_id: MDExOlB1bGxSZXF1ZXN0MzUxNzg0Nzk5
number: 3610
title: Fix zarr append with groups
user: niowniow 5802846
state: closed
locked: 0
comments: 12
created_at: 2019-12-11T08:24:44Z
updated_at: 2020-03-02T12:19:17Z
closed_at: 2020-03-02T12:19:17Z
author_association: CONTRIBUTOR
draft: 0
pull_request: pydata/xarray/pulls/3610

body: Fixes the issue that

reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/3610/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
repo: xarray 13221727
type: pull
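The body above is truncated in the source data, so the exact change cannot be quoted, but the title points at to_zarr's group= and append_dim= arguments being used together. A minimal sketch of that combination, assuming a current xarray where the fix is in place (path and names are illustrative):

```python
import numpy as np
import xarray as xr

path = "./tmp/append_demo.zarr"  # illustrative path

ds = xr.Dataset({"temp": ("time", np.arange(3, dtype=float))})

# Initial write into a sub-group of the zarr store.
ds.to_zarr(path, group="measurements", mode="w")

# Append along "time" inside the same group; per the PR title, this
# combination of group= and append_dim= is what previously failed.
ds.to_zarr(path, group="measurements", append_dim="time")

print(xr.open_zarr(path, group="measurements").sizes)  # time should now be 6
```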
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
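As an illustration of how the filter described at the top of this page maps onto the schema above, here is a small sketch using Python's standard sqlite3 module; the file name github.db is hypothetical and stands in for a local copy of this database.

```python
import sqlite3

# Hypothetical local copy of the database behind this page.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, number, title, type, updated_at
    FROM issues
    WHERE repo = 13221727 AND state = 'closed' AND user = 5802846
    ORDER BY updated_at DESC
    """
).fetchall()
for row in rows:
    print(row)
conn.close()
```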