issues
4 rows where user = 4992424 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
221387277 | MDU6SXNzdWUyMjEzODcyNzc= | 1372 | decode_cf() loads chunked arrays | darothen 4992424 | closed | 0 | | | 7 | 2017-04-12T20:52:48Z | 2018-04-12T23:38:02Z | 2018-04-12T23:38:02Z | NONE | | | | Currently using xarray version 0.9.2 and dask version 0.14.0. Suppose you load a NetCDF file with the chunks parameter: … The data is loaded as dask arrays, as expected. But if we then manually call … | { "url": "https://api.github.com/repos/pydata/xarray/issues/1372/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | completed | xarray 13221727 | issue
121740837 | MDU6SXNzdWUxMjE3NDA4Mzc= | 678 | Save to netCDF with record dimension? | darothen 4992424 | closed | 0 | | | 6 | 2015-12-11T16:20:35Z | 2018-01-08T20:11:27Z | 2018-01-08T20:11:27Z | NONE | | | | Is it currently possible in xray to identify a coordinate as a record dimension when saving to netCDF? Saving a Dataset to disk - even when it's a Dataset that was directly read in from a netCDF file with a record dimension - seems to destroy any indication that there was a record dimension. For instance, reading in CESM output tapes and then immediately saving them to disk demotes "time" from being a record dimension. | { "url": "https://api.github.com/repos/pydata/xarray/issues/678/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | completed | xarray 13221727 | issue
208215185 | MDExOlB1bGxSZXF1ZXN0MTA2NTkyMjUx | 1272 | Groupby-like API for resampling | darothen 4992424 | closed | 0 | | 0.10 2415632 | 27 | 2017-02-16T19:04:07Z | 2017-09-22T16:27:36Z | 2017-09-22T16:27:35Z | NONE | | 0 | pydata/xarray/pulls/1272 | This is a work-in-progress to resolve #1269. Openly welcome feedback/critiques on how I approached this. Subclassing … | { "url": "https://api.github.com/repos/pydata/xarray/issues/1272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | | xarray 13221727 | pull
220011864 | MDExOlB1bGxSZXF1ZXN0MTE0NjgwMzcy | 1356 | Add DatetimeAccessor for accessing datetime fields via `.dt` attribute | darothen 4992424 | closed | 0 | | | 9 | 2017-04-06T19:48:19Z | 2017-04-29T01:19:12Z | 2017-04-29T01:18:59Z | NONE | | 0 | pydata/xarray/pulls/1356 | This uses the Virtual time fields … Presumably this could be used to augment … | { "url": "https://api.github.com/repos/pydata/xarray/issues/1356/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | | xarray 13221727 | pull
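
Sketches of the behaviors the four rows above describe follow, in table order. First, issue #1372: a minimal reproduction of the eager-load report, assuming a hypothetical CF-compliant file `data.nc` with a `temperature` variable (both names are illustrative, not from the issue):

```python
import xarray as xr

# Opening with the chunks parameter returns variables backed by dask arrays.
# "data.nc" and "temperature" are hypothetical stand-ins.
ds = xr.open_dataset("data.nc", chunks={"time": 10})
print(type(ds["temperature"].data))  # dask array, as expected

# Per the report (xarray 0.9.2, dask 0.14.0), manually re-running CF
# decoding loaded the chunked variables into memory as numpy arrays:
decoded = xr.decode_cf(ds)
print(type(decoded["temperature"].data))
```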
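For issue #678, the report is that round-tripping a Dataset silently drops the record (unlimited) dimension. A sketch of that round-trip, plus the `unlimited_dims` keyword that later xarray releases added to `to_netcdf` (file names are hypothetical):

```python
import xarray as xr

# Hypothetical CESM history file whose "time" is a record dimension.
ds = xr.open_dataset("cesm_output.nc")

# As reported, writing back out demotes "time" to a fixed-size dimension:
ds.to_netcdf("roundtrip.nc")

# Later xarray versions accept unlimited_dims, which marks the named
# dimensions as record dimensions in the written file:
ds.to_netcdf("roundtrip.nc", unlimited_dims=["time"])
```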
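PR #1272 introduced the groupby-like resampling call style that xarray uses today; a small sketch with synthetic data (variable name and frequency string are illustrative):

```python
import numpy as np
import pandas as pd
import xarray as xr

# Toy daily time series for one year.
times = pd.date_range("2017-01-01", periods=365, freq="D")
ds = xr.Dataset({"tas": ("time", np.random.rand(365))},
                coords={"time": times})

# Groupby-like style from the PR; the older ds.resample("1M", dim="time",
# how="mean") signature was deprecated in its favor. Recent pandas spells
# the monthly frequency "1ME" rather than "1M".
monthly = ds.resample(time="1M").mean()
```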
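Finally, PR #1356 added the `.dt` accessor, building on xarray's virtual time fields; a short usage sketch:

```python
import pandas as pd
import xarray as xr

# Hourly timestamps wrapped in a DataArray.
times = xr.DataArray(
    pd.date_range("2017-01-01", periods=48, freq="h"), dims="time"
)

# Pandas-style datetime fields exposed through the accessor:
print(times.dt.hour.values)
print(times.dt.dayofweek.values)
```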
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
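
Given that schema, the view at the top of this page (4 rows where user = 4992424, sorted by updated_at descending) corresponds to a straightforward query over the indexed [user] column. A sketch using Python's sqlite3, assuming the backing SQLite file is named github.db (a hypothetical name):

```python
import sqlite3

# "github.db" stands in for whatever SQLite file backs this page.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, number, title, state, updated_at, type
    FROM issues
    WHERE [user] = ?
    ORDER BY updated_at DESC
    """,
    (4992424,),
).fetchall()
for row in rows:
    print(row)
```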