issues
6 rows where milestone = 987654, state = "closed" and type = "issue" sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
46049691 | MDU6SXNzdWU0NjA0OTY5MQ== | 255 | Add Dataset.to_pandas() method | shoyer 1217238 | closed | 0 | 0.5 987654 | 2 | 2014-10-17T00:01:36Z | 2021-05-04T13:56:00Z | 2021-05-04T13:56:00Z | MEMBER | This would be the complement of the DataArray constructor, converting an xray.DataArray into a 1D series, 2D DataFrame or 3D panel, whichever is appropriate.
{ "url": "https://api.github.com/repos/pydata/xarray/issues/255/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
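For context on #255 above: the request is for a single dispatcher that picks the pandas container by dimensionality. A minimal sketch of that idea, using the modern xarray package name and the existing per-shape converters; the variable names are illustrative, and a `to_pandas()` method would presumably choose between these based on `da.ndim`:

```
import numpy as np
import xarray as xr

da1d = xr.DataArray(np.arange(3), coords={"x": list("abc")}, dims="x")
da2d = xr.DataArray(np.arange(6).reshape(2, 3),
                    coords={"x": [0, 1], "y": list("abc")}, dims=("x", "y"))

series = da1d.to_series()        # 1D -> pandas.Series indexed by "x"
frame = da2d.to_dataframe("v")   # 2D -> pandas.DataFrame with a MultiIndex on ("x", "y")
```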
48301141 | MDU6SXNzdWU0ODMwMTE0MQ== | 277 | Creation of an empty DataArray | andreas-h 358378 | closed | 0 | 0.5 987654 | 11 | 2014-11-10T19:07:55Z | 2020-03-06T12:38:08Z | 2020-03-06T12:38:07Z | CONTRIBUTOR | I'd like to create an empty |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/277/reactions", "total_count": 10, "+1": 10, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
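The body of #277 above is truncated, but the title asks about creating an empty DataArray. A hedged sketch of the usual workaround, preallocating a NumPy array and wrapping it (modern xarray package name; shape and coordinate names are illustrative):

```
import numpy as np
import xarray as xr

# Preallocate with NaN and wrap; values can be filled in afterwards.
empty = xr.DataArray(np.full((4, 3), np.nan),
                     coords={"x": np.arange(4), "y": np.arange(3)},
                     dims=["x", "y"])
empty.loc[dict(x=0, y=1)] = 42.0
```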
69216911 | MDU6SXNzdWU2OTIxNjkxMQ== | 394 | Checklist for releasing a version of xray with dask support | shoyer 1217238 | closed | 0 | 0.5 987654 | 3 | 2015-04-17T21:02:10Z | 2015-06-01T18:27:49Z | 2015-06-01T18:27:49Z | MEMBER | For dask:
- [x] default threadpool for dask.array
- [x] fix indexing bugs for dask.array
- [x] make a decision on (and if necessary implement) renaming "block" to "chunk"
- [x] fix repeated use of

For xray:
- [x] update xray for the updated dask (https://github.com/xray/xray/pull/395)
- [x] figure out how to handle caching with the

Things we can add in an incremental release:
- make non-aggregating grouped operations more useable
- automatic lazy apply for grouped operations on xray objects

CC @mrocklin |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/394/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
72145600 | MDU6SXNzdWU3MjE0NTYwMA== | 406 | millisecond and microseconds support | jsignell 4806877 | closed | 0 | 0.5 987654 | 5 | 2015-04-30T12:38:27Z | 2015-05-01T20:33:10Z | 2015-05-01T20:33:10Z | CONTRIBUTOR | netcdf4python supports milliseconds and microseconds: https://github.com/Unidata/netcdf4-python/commit/22d439d6d3602171dc2c23bca0ade31d3c49ad20 would it be possible to support in X-ray? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/406/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
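For #406 above, the question is about decoding sub-second time units of the netCDF form "&lt;unit&gt; since &lt;epoch&gt;". A minimal sketch of that decoding done by hand with pandas; this is not xray's decoder, and the values and units string are made up:

```
import numpy as np
import pandas as pd

raw = np.array([0, 1, 250, 1000])            # values as stored in the file
units = "milliseconds since 2015-01-01"      # hypothetical units attribute

unit, _, epoch = units.partition(" since ")
step = {"milliseconds": "ms", "microseconds": "us"}[unit]
times = pd.Timestamp(epoch) + pd.to_timedelta(raw, unit=step)
print(times)  # DatetimeIndex with millisecond resolution
```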
58310637 | MDU6SXNzdWU1ODMxMDYzNw== | 328 | Support out-of-core computation using dask | shoyer 1217238 | closed | 0 | 0.5 987654 | 7 | 2015-02-20T05:02:22Z | 2015-04-17T21:03:12Z | 2015-04-17T21:03:12Z | MEMBER | Dask is a library for out of core computation somewhat similar to biggus in conception, but with slightly grander aspirations. For examples of how Dask could be applied to weather data, see this blog post by @mrocklin: http://matthewrocklin.com/blog/work/2015/02/13/Towards-OOC-Slicing-and-Stacking/

It would be interesting to explore using dask internally in xray, so that we can implement lazy/out-of-core aggregations, concat and groupby to complement the existing lazy indexing. This functionality would be quite useful for xray, and even more so than merely supporting datasets-on-disk (#199). A related issue is #79: we can easily imagine using Dask with groupby/apply to power out-of-core and multi-threaded computation.

Todos for xray:
- [x] refactor

Todos for dask (to be clear, none of these are blockers for a proof of concept):
- [x] support for NaN skipping aggregations
- [x] ~~support for interleaved concatenation (necessary for transformations by group, which are quite common)~~ (turns out to be a one-liner with concatenate and take, see below)
- [x] ~~support for something like |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
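A minimal sketch of the dask.array model that #328 above builds on: a chunked array evaluates lazily, so aggregations run blockwise and out of core. The array shape and chunk sizes here are illustrative:

```
import numpy as np
import dask.array as da

x = da.from_array(np.random.random((10000, 100)), chunks=(1000, 100))
lazy_mean = x.mean(axis=0)     # only builds a task graph
result = lazy_mean.compute()   # evaluates chunk by chunk
```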
58288666 | MDU6SXNzdWU1ODI4ODY2Ng== | 326 | DataArray.groupby.apply with a generic ndarray function | IamJeffG 2002703 | closed | 0 | 0.5 987654 | 1 | 2015-02-19T23:37:34Z | 2015-02-20T04:41:08Z | 2015-02-20T04:41:08Z | CONTRIBUTOR | Need to apply a transformation function across one dimension of a DataArray, where that non-xray function speaks in ndarrays. Currently the only ways to do this involve wrapping the function. An example:

```
import numpy as np
import xray
from scipy.ndimage.morphology import binary_opening

da = xray.DataArray(np.random.random_integers(0, 1, (10, 10, 3)),
                    dims=['row', 'col', 'time'])

# I want to apply an operation to the 2D image at each point in time
da.groupby('time').apply(binary_opening)
# AttributeError: 'numpy.ndarray' object has no attribute 'dims'

def wrap_binary_opening(da, **kwargs):
    return xray.DataArray(binary_opening(da.values, **kwargs), da.coords)

da.groupby('time').apply(wrap_binary_opening)
da.groupby('time').apply(wrap_binary_opening, iterations=2)  # func may take custom args
```

My proposed solution is that |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
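Given the schema above, the query behind this page (milestone = 987654, state = "closed" and type = "issue", sorted by updated_at descending) can be reproduced against the underlying SQLite database. A hedged sketch, where the database filename github.db is an assumption:

```
import sqlite3

# Connect to the SQLite file this page is served from (filename assumed).
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, number, title, state, updated_at
    FROM issues
    WHERE milestone = 987654 AND state = 'closed' AND [type] = 'issue'
    ORDER BY updated_at DESC
    """
).fetchall()
for row in rows:
    print(row)
```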