issues
2 rows where assignee = 35919497 sorted by updated_at descending
Row 1 of 2:

id: 847014702
node_id: MDU6SXNzdWU4NDcwMTQ3MDI=
number: 5098
title: open_dataset regression
user: dcherian 2448579
state: closed
locked: 0
assignee: aurghs 35919497
comments: 2
created_at: 2021-03-31T17:32:03Z
updated_at: 2021-04-15T12:11:34Z
closed_at: 2021-04-15T12:11:34Z
author_association: MEMBER
body:

What happened:

What you expected to happen: should replace

Minimal Complete Verifiable Example:

```python
import xarray as xr

da = xr.DataArray([1, 2, 3])
da.to_netcdf("~/bug_report.nc")
xr.open_dataarray("~/bug_report.nc")
```

Anything else we need to know?: works on 0.17.0, fails on master

reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/5098/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
state_reason: completed
repo: xarray 13221727
type: issue
Row 2 of 2:

id: 717410970
node_id: MDU6SXNzdWU3MTc0MTA5NzA=
number: 4496
title: Flexible backends - Harmonise zarr chunking with other backends chunking
user: aurghs 35919497
state: closed
locked: 0
assignee: aurghs 35919497
comments: 7
created_at: 2020-10-08T14:43:23Z
updated_at: 2020-12-10T10:51:09Z
closed_at: 2020-12-10T10:51:09Z
author_association: COLLABORATOR
body:

Is your feature request related to a problem? Please describe.

In #4309 we proposed to separate xarray and backend tasks, more or less in this way:
- the backend returns a dataset;
- xarray manages chunks and caching.

With the changes to open_dataset to also support zarr (#4187), we introduced a slightly different behavior for zarr chunking with respect to the other backends.

Behavior of all the backends except zarr:
- if chunks == {} or "auto", it uses dask with only one chunk per variable;
- if the user defines chunks for only some of the dimensions, it uses a single chunk along each of the remaining dimensions:
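The snippet that originally followed the list above is cut off in this export. As an illustration only (not the original snippet), here is a minimal sketch of the described non-zarr behavior; the file name "air.nc", the variable name "air", and the dimension names are hypothetical:

```python
import xarray as xr

# Hypothetical file, variable, and dimension names, for illustration only.

# chunks={} (or chunks="auto"): dask is used, with a single chunk
# spanning each whole variable.
ds = xr.open_dataset("air.nc", chunks={})

# Chunks given for only some dimensions ("time" here): the
# unspecified dimensions each get one full-size chunk.
ds = xr.open_dataset("air.nc", chunks={"time": 10})
print(ds["air"].chunks)  # one chunk along every dimension except "time"
```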
Describe the solution you'd like

We could easily extend the zarr behavior to all the backends (which, for now, don't use the field variable.encoding["chunks"]): if no chunks are defined in the encoding, we use the dimension size as the default; otherwise we use the encoded chunks (a rough sketch of this rule follows the question below). For now this would not change any external behavior, but the other backends can use this interface if needed. I have some additional notes:
One last question:
- In the new interface of open_dataset there is a new key, imported from open_zarr:
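The defaulting rule proposed under "Describe the solution you'd like" can be sketched in plain Python. `resolve_chunks` and its argument layout are hypothetical helpers for illustration, not xarray's actual implementation:

```python
def resolve_chunks(dim_sizes, encoded_chunks):
    """Hypothetical helper: pick default chunks for a variable.

    dim_sizes: dict mapping dimension name -> dimension length.
    encoded_chunks: dict mapping dimension name -> chunk size from the
    variable's encoding, or None when the encoding carries no chunks.
    """
    if not encoded_chunks:
        # No chunks in the encoding: default to the dimension size,
        # i.e. one chunk per dimension.
        return dict(dim_sizes)
    # The encoding carries chunks: use them, falling back to the full
    # dimension size for dimensions they do not cover.
    return {dim: encoded_chunks.get(dim, size) for dim, size in dim_sizes.items()}

print(resolve_chunks({"time": 365, "lat": 180}, None))
# -> {'time': 365, 'lat': 180}
print(resolve_chunks({"time": 365, "lat": 180}, {"time": 30}))
# -> {'time': 30, 'lat': 180}
```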
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4496/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
state_reason: completed
repo: xarray 13221727
type: issue
```sql
CREATE TABLE [issues] (
  [id] INTEGER PRIMARY KEY,
  [node_id] TEXT,
  [number] INTEGER,
  [title] TEXT,
  [user] INTEGER REFERENCES [users]([id]),
  [state] TEXT,
  [locked] INTEGER,
  [assignee] INTEGER REFERENCES [users]([id]),
  [milestone] INTEGER REFERENCES [milestones]([id]),
  [comments] INTEGER,
  [created_at] TEXT,
  [updated_at] TEXT,
  [closed_at] TEXT,
  [author_association] TEXT,
  [active_lock_reason] TEXT,
  [draft] INTEGER,
  [pull_request] TEXT,
  [body] TEXT,
  [reactions] TEXT,
  [performed_via_github_app] TEXT,
  [state_reason] TEXT,
  [repo] INTEGER REFERENCES [repos]([id]),
  [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
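The filtered listing at the top of this page ("2 rows where assignee = 35919497 sorted by updated_at descending") corresponds to a query against this schema along the following lines; this is a sketch assuming the data sits in a local SQLite file, with the file name "github.db" as an assumption:

```python
import sqlite3

# Hypothetical database file; the issues table matches the
# CREATE TABLE statement above.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, number, title, state, updated_at
    FROM issues
    WHERE assignee = ?
    ORDER BY updated_at DESC
    """,
    (35919497,),
).fetchall()
for row in rows:
    print(row)
```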