
issues


6 rows where milestone = 987654, repo = 13221727 and type = "issue" sorted by updated_at descending

#255 Add Dataset.to_pandas() method
id 46049691 · node_id MDU6SXNzdWU0NjA0OTY5MQ== · opened by shoyer (1217238) · state: closed · locked: 0 · milestone: 0.5 (987654) · 2 comments · created 2014-10-17T00:01:36Z · updated 2021-05-04T13:56:00Z · closed 2021-05-04T13:56:00Z · MEMBER

This would be the complement of the DataArray constructor, converting an xray.DataArray into a 1D series, 2D DataFrame or 3D panel, whichever is appropriate.

to_pandas would also make sense for Dataset, if it could convert 0d datasets to series, e.g., pd.Series({k: v.item() for k, v in ds.items()}) (there is currently no direct way to do this), and revert to to_dataframe for higher dimensional input.

- [x] DataArray method
- [ ] Dataset method
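A minimal sketch of the dispatch-by-dimensionality idea described above; dataarray_to_pandas is an illustrative stand-in, not the actual xray implementation:

```python
import numpy as np
import pandas as pd

# Illustrative sketch of the proposal: pick the pandas type matching ndim.
# (pandas Panel, the 3D case at the time this issue was filed, has since
# been removed, so this sketch stops at 2D.)
def dataarray_to_pandas(values, dims, coords):
    if values.ndim == 1:
        return pd.Series(values, index=coords[dims[0]])
    if values.ndim == 2:
        return pd.DataFrame(values, index=coords[dims[0]], columns=coords[dims[1]])
    raise ValueError("no pandas equivalent for ndim > 2")

s = dataarray_to_pandas(np.array([1.0, 2.0]), ['x'], {'x': ['a', 'b']})
```

The 0d Dataset case from the comment above would follow the same pattern, mapping each scalar variable into one Series entry.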

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/255/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#277 Creation of an empty DataArray
id 48301141 · node_id MDU6SXNzdWU0ODMwMTE0MQ== · opened by andreas-h (358378) · state: closed · locked: 0 · milestone: 0.5 (987654) · 11 comments · created 2014-11-10T19:07:55Z · updated 2020-03-06T12:38:08Z · closed 2020-03-06T12:38:07Z · CONTRIBUTOR

I'd like to create an empty DataArray, i.e., one with only NA values. The docstring of DataArray says that data=None is allowed, if a dataset argument is provided. However, the docstring doesn't say anything about a dataset argument.

1. I think there's a bug in the docstring.
2. I'd like to pass data=None and get a DataArray with the coords/dims set up properly (as defined by the coords and dims kwargs), but with a values array of NA.
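The requested behaviour can be sketched without touching xray at all: given coords and dims, allocate a NaN-filled values array of the matching shape. empty_values below is an illustrative helper, not xray API:

```python
import numpy as np

# Illustrative helper (not xray API): build the all-NA values array that
# DataArray(data=None, coords=..., dims=...) would be expected to wrap.
def empty_values(coords, dims):
    shape = tuple(len(coords[d]) for d in dims)
    return np.full(shape, np.nan)

vals = empty_values({'x': [10, 20], 'y': ['a', 'b', 'c']}, ('x', 'y'))
```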

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/277/reactions",
    "total_count": 10,
    "+1": 10,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#394 Checklist for releasing a version of xray with dask support
id 69216911 · node_id MDU6SXNzdWU2OTIxNjkxMQ== · opened by shoyer (1217238) · state: closed · locked: 0 · milestone: 0.5 (987654) · 3 comments · created 2015-04-17T21:02:10Z · updated 2015-06-01T18:27:49Z · closed 2015-06-01T18:27:49Z · MEMBER

For dask:

- [x] default threadpool for dask.array
- [x] fix indexing bugs for dask.array
- [x] make a decision on (and if necessary implement) renaming "block" to "chunk"
- [x] fix repeated use of da.insert

For xray:

- [x] update xray for the updated dask (https://github.com/xray/xray/pull/395)
- [x] figure out how to handle caching with the .load() method on dask arrays
- [x] clean up the xray documentation on dask arrays
- [x] write an introductory blog post

Things we can add in an incremental release:

- make non-aggregating grouped operations more usable
- automatic lazy apply for grouped operations on xray objects

CC @mrocklin

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/394/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#406 millisecond and microseconds support
id 72145600 · node_id MDU6SXNzdWU3MjE0NTYwMA== · opened by jsignell (4806877) · state: closed · locked: 0 · milestone: 0.5 (987654) · 5 comments · created 2015-04-30T12:38:27Z · updated 2015-05-01T20:33:10Z · closed 2015-05-01T20:33:10Z · CONTRIBUTOR

netcdf4-python supports milliseconds and microseconds:

https://github.com/Unidata/netcdf4-python/commit/22d439d6d3602171dc2c23bca0ade31d3c49ad20

Would it be possible to support them in xray?
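Decoding such units is mechanically simple, which is one reason supporting them is plausible. A minimal sketch follows; decode_times is illustrative, not the netcdf4-python or xray implementation:

```python
import numpy as np

# Illustrative sketch: a netCDF time coordinate is a numeric array plus a
# units string such as "milliseconds since 2015-01-01"; decoding is just a
# scaled offset from the epoch. numpy's datetime64/timedelta64 already
# handle ms and us resolutions.
def decode_times(nums, unit, epoch):
    np_unit = {'milliseconds': 'ms', 'microseconds': 'us'}[unit]
    return np.datetime64(epoch) + nums.astype('timedelta64[{}]'.format(np_unit))

times = decode_times(np.array([0, 1500]), 'milliseconds', '2015-01-01')
```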

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/406/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#328 Support out-of-core computation using dask
id 58310637 · node_id MDU6SXNzdWU1ODMxMDYzNw== · opened by shoyer (1217238) · state: closed · locked: 0 · milestone: 0.5 (987654) · 7 comments · created 2015-02-20T05:02:22Z · updated 2015-04-17T21:03:12Z · closed 2015-04-17T21:03:12Z · MEMBER

Dask is a library for out of core computation somewhat similar to biggus in conception, but with slightly grander aspirations. For examples of how Dask could be applied to weather data, see this blog post by @mrocklin: http://matthewrocklin.com/blog/work/2015/02/13/Towards-OOC-Slicing-and-Stacking/

It would be interesting to explore using dask internally in xray, so that we can implement lazy/out-of-core aggregations, concat and groupby to complement the existing lazy indexing. This functionality would be quite useful for xray, even more so than merely supporting datasets-on-disk (#199).

A related issue is #79: we can easily imagine using Dask with groupby/apply to power out-of-core and multi-threaded computation.

Todos for xray:

- [x] refactor Variable.concat to make use of functions like concatenate and stack instead of in-place array modification (dask arrays do not support mutation, for good reasons)
- [x] refactor reindex_variables to not make direct use of mutation (e.g., by using da.insert below)
- [x] add some sort of internal abstraction to represent "computable" arrays that are not necessarily numpy.ndarray objects (done: this is the data attribute)
- [x] expose reblock in the public API
- [x] load datasets into dask arrays from disk
- [x] load datasets from multiple files into dask
- [x] ~~some sort of API for user-controlled lazy apply on dask arrays (using groupby, most likely)~~ (not necessary for initial release)
- [x] save from dask arrays
- [x] an API for lazy ufuncs like sin and sqrt
- [x] robustly handle indexing along orthogonal dimensions if dask can't handle it directly

Todos for dask (to be clear, none of these are blockers for a proof of concept):

- [x] support for NaN-skipping aggregations
- [x] ~~support for interleaved concatenation (necessary for transformations by group, which are quite common)~~ (turns out to be a one-liner with concatenate and take, see below)
- [x] ~~support for something like take_nd from pandas: like np.take, but with -1 as a sentinel value for "missing" (necessary for many alignment operations)~~ da.insert, modeled after np.insert, would solve this problem
- [x] ~~support "orthogonal" MATLAB-like array-based indexing along multiple dimensions~~ (taking along one axis at a time is close enough)
- [x] broadcast_to: see https://github.com/numpy/numpy/pull/5371
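The "one-liner with concatenate and take" for interleaved concatenation can be illustrated with plain numpy (the sample groups and positions here are made up for illustration; dask.array offers the same concatenate/take interface):

```python
import numpy as np

# Interleaved concatenation via concatenate + take: concatenate the
# per-group results, then take them back into the original element order.
groups = [np.array([10, 30]), np.array([20, 40])]    # results for each group
positions = [np.array([0, 2]), np.array([1, 3])]     # original indices of each group

order = np.argsort(np.concatenate(positions))
interleaved = np.take(np.concatenate(groups), order)
```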

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/328/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#326 DataArray.groupby.apply with a generic ndarray function
id 58288666 · node_id MDU6SXNzdWU1ODI4ODY2Ng== · opened by IamJeffG (2002703) · state: closed · locked: 0 · milestone: 0.5 (987654) · 1 comment · created 2015-02-19T23:37:34Z · updated 2015-02-20T04:41:08Z · closed 2015-02-20T04:41:08Z · CONTRIBUTOR

I need to apply a transformation function across one dimension of a DataArray, where that non-xray function operates on ndarrays. Currently the only ways to do this involve wrapping the function. An example:

```
import numpy as np
import xray
from scipy.ndimage.morphology import binary_opening

da = xray.DataArray(np.random.random_integers(0, 1, (10, 10, 3)),
                    dims=['row', 'col', 'time'])

# I want to apply an operation to the 2D image at each point in time:
da.groupby('time').apply(binary_opening)
# AttributeError: 'numpy.ndarray' object has no attribute 'dims'

# Currently this requires wrapping the function:
def wrap_binary_opening(da, **kwargs):
    return xray.DataArray(binary_opening(da.values, **kwargs), da.coords)

da.groupby('time').apply(wrap_binary_opening)
da.groupby('time').apply(wrap_binary_opening, iterations=2)  # func may take custom args
```

My proposed solution is that apply would automatically coerce func's return value to a DataArray.
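The proposed coercion can be sketched with a stand-in class; LabeledArray and apply_with_coercion below are hypothetical, not xray API:

```python
import numpy as np

# LabeledArray is a hypothetical stand-in for xray.DataArray.
class LabeledArray:
    def __init__(self, values, coords):
        self.values = values
        self.coords = coords

# Sketch of the proposal: if func returns a bare ndarray, wrap it back up
# with the input's coords instead of raising AttributeError.
def apply_with_coercion(func, arr, **kwargs):
    result = func(arr.values, **kwargs)
    if isinstance(result, np.ndarray):
        result = LabeledArray(result, arr.coords)
    return result

out = apply_with_coercion(np.negative, LabeledArray(np.array([1, 2]), {'x': [0, 1]}))
```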

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/326/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
Powered by Datasette · About: xarray-datasette