issues
2 rows where type = "issue" and user = 8982598, sorted by updated_at descending
id: 257079041 (node_id MDU6SXNzdWUyNTcwNzkwNDE=)
number: 1571
title: to_netcdf fails for engine=h5netcdf when using dask-backed arrays
user: jcmgray 8982598
state: closed · locked: 0 · comments: 2
created_at: 2017-09-12T15:08:27Z · updated_at: 2019-02-12T05:39:19Z · closed_at: 2019-02-12T05:39:19Z
author_association: CONTRIBUTOR · state_reason: completed
repo: xarray 13221727 · type: issue
reactions: total_count 0 (all reaction types 0) · https://api.github.com/repos/pydata/xarray/issues/1571/reactions

body:

When using dask-backed datasets/arrays it does not seem possible to use the 'h5netcdf' engine to write to disk:

[code elided in this export]

results in the error:

```bash
...
h5py/h5a.pyx in h5py.h5a.open()
KeyError: "Can't open attribute (can't locate attribute: 'dask')"
```

Not sure if this is an xarray or h5netcdf issue, or some inherent limitation, in which case apologies!
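The issue's reproduction snippet was elided from this export. As a hedged sketch, the following dask-backed write is the kind of call the report describes; the variable names, shapes, and the filename test.nc are illustrative assumptions, not taken from the issue:

```python
import numpy as np
import xarray as xr

# Illustrative dataset; the issue's actual reproduction code was elided above.
ds = xr.Dataset({"var": (("x", "y"), np.random.rand(4, 4))})

# Chunking backs the variable with a dask array (requires dask to be installed).
ds = ds.chunk({"x": 2})

# According to the report, a write like this raised the h5py KeyError
# quoted above; "test.nc" is an arbitrary placeholder filename.
ds.to_netcdf("test.nc", engine="h5netcdf")
```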
id: 130753818 (node_id MDU6SXNzdWUxMzA3NTM4MTg=)
number: 742
title: merge and align DataArrays/Datasets on different domains
user: jcmgray 8982598
state: closed · locked: 0 · comments: 11
created_at: 2016-02-02T17:27:17Z · updated_at: 2017-01-23T22:42:18Z · closed_at: 2017-01-23T22:42:18Z
author_association: CONTRIBUTOR · state_reason: completed
repo: xarray 13221727 · type: issue
reactions: total_count 0 (all reaction types 0) · https://api.github.com/repos/pydata/xarray/issues/742/reactions

body:

Firstly, I think [text elided in this export]

For example consider this setup:

```python
import xarray as xr

x1 = [100]
y1 = [1, 2, 3, 4, 5]
dat1 = [[101, 102, 103, 104, 105]]

x2 = [200]
y2 = [3, 4, 5, 6]  # different size and domain
dat2 = [[203, 204, 205, 206]]

da1 = xr.DataArray(dat1, dims=['x', 'y'], coords={'x': x1, 'y': y1})
da2 = xr.DataArray(dat2, dims=['x', 'y'], coords={'x': x2, 'y': y2})
```

I would like to aggregate such DataArrays into a new, single DataArray with [code elided in this export]

Here is a quick function I wrote to do so, but I would be worried about the performance of 'expanding' the new data to the old data's size every iteration (i.e. supposing that the first argument is a large DataArray that you are adding to, but which doesn't necessarily contain the dimensions already). [function elided in this export]

Might this be (or is this already!) possible in simpler form in xarray?
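The author's "quick function" itself was elided from this export. As one possible sketch of the requested aggregation (not necessarily the author's approach), DataArray.combine_first aligns both arrays on the union of their coordinates and fills the gaps with NaN:

```python
import xarray as xr

# Same two arrays as in the issue body above.
da1 = xr.DataArray([[101, 102, 103, 104, 105]], dims=['x', 'y'],
                   coords={'x': [100], 'y': [1, 2, 3, 4, 5]})
da2 = xr.DataArray([[203, 204, 205, 206]], dims=['x', 'y'],
                   coords={'x': [200], 'y': [3, 4, 5, 6]})

# combine_first aligns on the union of coordinates (x=[100, 200],
# y=[1..6]) and fills positions covered by neither array with NaN.
merged = da1.combine_first(da2)
print(merged.shape)  # (2, 6)
```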
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
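As a sketch, the filter described at the top of this page (type = "issue" and user = 8982598, sorted by updated_at descending) could be run against this schema with Python's sqlite3 module; the filename github.db is an assumption, as this export does not name the database file:

```python
import sqlite3

# "github.db" is an assumed filename; this export does not name the database.
conn = sqlite3.connect("github.db")

# Reproduce the page's filter: type = "issue", user = 8982598,
# ordered by updated_at descending.
rows = conn.execute(
    """
    SELECT id, number, title, state, updated_at
    FROM issues
    WHERE [type] = 'issue' AND [user] = 8982598
    ORDER BY updated_at DESC
    """
).fetchall()

for row in rows:
    print(row)
```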