issues


2 rows where type = "issue" and user = 2783717 sorted by updated_at descending
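The SQL behind this view is presumably along these lines (a reconstruction from the description above, using the bracket-quoted identifiers from the schema at the bottom of the page; not copied from the page itself):

``` sql
select * from [issues]
where [type] = 'issue' and [user] = 2783717
order by [updated_at] desc;
```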

143551401 · MDU6SXNzdWUxNDM1NTE0MDE= · issue #804: Compute multiple dask backed arrays at once
jcrist (user 2783717) · state: closed (completed) · not locked · 5 comments · repo: xarray (13221727) · author_association: NONE
created 2016-03-25T17:48:03Z · updated 2018-03-07T01:40:49Z · closed 2018-03-07T01:40:42Z

In dask, a user can compute multiple arrays in a single scheduler run using the dask.compute function:

``` python
a_computed, b_computed = dask.compute(a, b)
```

This is nice when a and b might share intermediates. The same can currently be done in xarray if a and b are first put into a dataset:

``` python
both = xr.Dataset(dict(a=a, b=b))
both.load()  # Compute all the arrays in a single pass
```

This is fine, but it might also be nice to be able to do this without first putting everything into a dataset. I'm not sure what a good API is here, as xarray objects mutate when computed. Perhaps just adding an xr.compute(*args) function that fully realizes all dask-backed variables:

``` python
xr.compute(a, b)  # a and b now contain numpy arrays, not dask arrays
```
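For illustration, a minimal sketch of such a helper, built on the Dataset trick above (the compute_together name and everything else here are hypothetical, not xarray API; unlike the proposed xr.compute, it returns new objects rather than mutating its arguments):

``` python
import dask.array as da
import xarray as xr

def compute_together(*arrays):
    # Hypothetical helper in the spirit of the proposed xr.compute(*args):
    # pack everything into a throwaway Dataset so dask evaluates the whole
    # graph in a single scheduler run (shared intermediates computed once),
    # then unpack the now numpy-backed results.
    tmp = xr.Dataset({f"v{i}": arr for i, arr in enumerate(arrays)})
    tmp.load()  # one pass over the combined graph; loads in place
    return tuple(tmp[f"v{i}"] for i in range(len(arrays)))

a = xr.DataArray(da.ones((1000, 1000), chunks=(500, 500)), dims=["x", "y"])
b = (a * 2).sum("y")  # b's graph shares a's chunks as intermediates

a_done, b_done = compute_together(a, b)  # numpy-backed copies of a and b
```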

reactions: none (total_count 0)
132818023 · MDU6SXNzdWUxMzI4MTgwMjM= · issue #758: No check for dimension compatibility on DataArray creation
jcrist (user 2783717) · state: closed (completed) · not locked · 1 comment · repo: xarray (13221727) · author_association: NONE
created 2016-02-10T21:08:32Z · updated 2016-02-15T02:21:33Z · closed 2016-02-15T02:21:33Z

When creating a DataArray with an iterable of coordinates, no check is made that the dimensions of the coordinates match the shape of the data. I'm not sure whether there's a reason this isn't checked in xarray (I'd expect it to be); this may also just be me using xarray wrong. I got bit by this earlier with a typo in my code:

``` python
In [1]: import xarray as xr

In [2]: import numpy as np

In [3]: a = np.arange(6).reshape((2, 3))

In [4]: x = np.arange(3)

In [5]: y = np.arange(2, 4)

In [6]: xa = xr.DataArray(a, coords=[x, x], dims=['x', 'y'])  # Oops, gave x twice, instead of [x, y]

In [7]: xa.shape
Out[7]: (2, 3)

In [8]: xa.coords
Out[8]:
Coordinates:
  * x        (x) int64 0 1 2
  * y        (y) int64 0 1 2
```
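For reference, the missing validation itself is simple; a standalone sketch of the check the issue asks for (check_coords is a made-up name for illustration, not xarray code):

``` python
import numpy as np

def check_coords(data, coords, dims):
    # Sketch of the validation the issue asks for: each coordinate's
    # length must match the size of the corresponding data axis.
    for axis, (dim, coord) in enumerate(zip(dims, coords)):
        if len(coord) != data.shape[axis]:
            raise ValueError(
                f"conflicting sizes for dimension {dim!r}: coordinate has "
                f"length {len(coord)}, but data has size {data.shape[axis]}"
            )

a = np.arange(6).reshape((2, 3))
x = np.arange(3)
check_coords(a, [x, x], ['x', 'y'])
# ValueError: conflicting sizes for dimension 'x': coordinate has length 3,
# but data has size 2
```

The issue was closed as completed; current xarray versions raise a ValueError about conflicting sizes for the example above.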

reactions: none (total_count 0)


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);