issues
3 rows where type = "issue" and user = 4801430 sorted by updated_at descending
863332419 | MDU6SXNzdWU4NjMzMzI0MTk= | #5197 | allow you to raise error on missing zarr chunks with open_dataset/open_zarr

- user: bolliger32 (4801430) · author_association: CONTRIBUTOR
- state: closed · state_reason: not_planned · comments: 1
- created_at: 2021-04-21T00:00:03Z · updated_at: 2023-11-24T22:14:18Z · closed_at: 2023-11-24T22:14:18Z
- repo: xarray (13221727) · type: issue · reactions: none

**Is your feature request related to a problem? Please describe.**

Currently, if a zarr store has a missing chunk, it is treated as all missing. This is upstream functionality, but one for which there may soon be a kwarg allowing you to instead raise an error in these instances (https://github.com/zarr-developers/zarr-python/pull/489). This is valuable in situations where you would like to distinguish intentional NaN data from I/O errors that caused some chunks not to be written. Here's an example of a problematic case in this situation (courtesy of @delgadom):

This prints:

```
data read into xarray
<xarray.DataArray 'myarr' (x: 2, y: 2)>
array([[ 0., nan],
       [ 2.,  3.]])
Coordinates:
  * x        (x) int64 0 1
  * y        (y) int64 0 1

structure of zarr store
myzarr.zarr:
myarr  x  y

myzarr.zarr/myarr:
0.0  0.1  1.0  1.1

myzarr.zarr/x:
0

myzarr.zarr/y:
0

remove a chunk
rm myzarr.zarr/myarr/1.0

data read into xarray
<xarray.DataArray 'myarr' (x: 2, y: 2)>
array([[ 0., nan],
       [nan,  3.]])
Coordinates:
  * x        (x) int64 0 1
  * y        (y) int64 0 1
```

**Describe the solution you'd like**

I'm not sure where a kwarg to the …
868352536 | MDU6SXNzdWU4NjgzNTI1MzY= | #5219 | Zarr encoding attributes persist after slicing data, raising error on `to_zarr`

- user: bolliger32 (4801430) · author_association: CONTRIBUTOR
- state: open · comments: 9
- created_at: 2021-04-27T01:34:52Z · updated_at: 2022-12-06T16:16:20Z
- repo: xarray (13221727) · type: issue · reactions: 1 (+1 × 1)

**What happened:**

Opened a dataset using …

**What you expected to happen:**

The file would save without needing to explicitly modify any …

**Minimal Complete Verifiable Example:**

```python
ds = xr.Dataset({"data": (("dimA", ), [10, 20, 30, 40])}, coords={"dimA": [1, 2, 3, 4]})
ds = ds.chunk({"dimA": 2})
ds.to_zarr("test.zarr", consolidated=True, mode="w")
ds2 = xr.open_zarr("test.zarr", consolidated=True).sel(dimA=[1,3]).persist()
ds2.to_zarr("test2.zarr", consolidated=True, mode="w")
```

This raises: …

Not sure if there is a good way around this (or perhaps this is even desired behavior?), but figured I would flag it, as it seemed unexpected and took us a second to diagnose. Once you've loaded the data from a zarr store, I feel like the default behavior should probably be to forget the encodings used to save that zarr, treating the in-memory dataset object just like any other in-memory dataset object that could have been loaded from any source. But maybe I'm in the minority, or missing some nuance about why you'd want the encoding to hang around.

**Environment:**

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.8 | packaged by conda-forge | (default, Feb 20 2021, 16:22:27) [GCC 9.3.0]
python-bits: 64
OS: Linux
OS-release: 5.4.89+
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: C.UTF-8
LANG: C.UTF-8
LOCALE: en_US.UTF-8
libhdf5: 1.10.6
libnetcdf: 4.7.4

xarray: 0.17.0
pandas: 1.2.4
numpy: 1.20.2
scipy: 1.6.2
netCDF4: 1.5.6
pydap: installed
h5netcdf: 0.11.0
h5py: 3.2.1
Nio: None
zarr: 2.7.1
cftime: 1.2.1
nc_time_axis: 1.2.0
PseudoNetCDF: None
rasterio: 1.2.2
cfgrib: 0.9.9.0
iris: 3.0.1
bottleneck: 1.3.2
dask: 2021.04.1
distributed: 2021.04.1
matplotlib: 3.4.1
cartopy: 0.19.0
seaborn: 0.11.1
numbagg: None
pint: 0.17
setuptools: 49.6.0.post20210108
pip: 21.0.1
conda: None
pytest: 6.2.3
IPython: 7.22.0
sphinx: 3.5.4
```
540578460 | MDU6SXNzdWU1NDA1Nzg0NjA= | #3648 | combine_by_coords should allow for missing panels in hypercube

- user: bolliger32 (4801430) · author_association: CONTRIBUTOR
- state: closed · state_reason: completed · comments: 0
- created_at: 2019-12-19T21:29:02Z · updated_at: 2019-12-24T13:46:28Z · closed_at: 2019-12-24T13:46:28Z
- repo: xarray (13221727) · type: issue · reactions: none

**MCVE Code Sample**

…

**Expected Output**

…

**Problem Description**

Currently, it throws the following error:

…

**Output of** …
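The issue was closed as completed a few days after filing; on current xarray releases, `combine_by_coords` accepts an incomplete hypercube and fills missing panels with `fill_value` (NaN by default). The original MCVE is elided above, so the following is a hypothetical reconstruction of the scenario, assuming a recent xarray: three of four 2×2 panels are supplied, and the missing corner is filled rather than raising.

```python
# Hypothetical sketch (assumes a current xarray release): combining an
# incomplete hypercube fills the missing panel with NaN instead of raising.
import numpy as np
import xarray as xr


def panel(ys, xs, val):
    """Build one 2x2 panel of a larger (y, x) grid with a constant value."""
    return xr.Dataset(
        {"temperature": (("y", "x"), np.full((2, 2), float(val)))},
        coords={"y": ys, "x": xs},
    )


# The panel at y=[2, 3], x=[2, 3] is deliberately missing from the list.
combined = xr.combine_by_coords(
    [panel([0, 1], [0, 1], 0), panel([0, 1], [2, 3], 1), panel([2, 3], [0, 1], 2)]
)
# combined spans the full 4x4 grid; the absent quadrant is NaN.
```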
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```