
issues


3 rows where "created_at" is on date 2019-10-01 and user = 2448579, sorted by updated_at descending
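
A rough equivalent of the query behind this page, run with Python's sqlite3 module against a local copy of the database (the filename github.db is an assumption), might look like this:

import sqlite3

conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

rows = conn.execute(
    """
    SELECT id, number, title, type, state, created_at, updated_at
    FROM issues
    WHERE date(created_at) = '2019-10-01'
      AND user = 2448579
    ORDER BY updated_at DESC
    """
).fetchall()

for row in rows:
    print(dict(row))

Here date(created_at) uses SQLite's built-in date() function to compare only the calendar-date portion of the stored ISO timestamp.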


Facets:
  • type: pull (2), issue (1)
  • state: closed (3)
  • repo: xarray (3)

Row 1 of 3 · issue #3363: user-friendly additions for dask usage
  • id: 501108453 · node_id: MDU6SXNzdWU1MDExMDg0NTM=
  • user: dcherian (2448579) · author_association: MEMBER
  • state: closed · locked: 0 · comments: 3 · assignee: (none) · milestone: (none)
  • created_at: 2019-10-01T19:48:27Z · updated_at: 2021-04-19T03:34:18Z · closed_at: 2021-04-19T03:34:18Z
  • active_lock_reason: (none) · performed_via_github_app: (none) · draft: (none) · pull_request: (none)
  • body:

Any thoughts on adding

  1. .chunksize or .nbytes_chunk
  2. .ntasks : this would be len(obj.__dask_graph__())
  • reactions:

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3363/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  • state_reason: completed · repo: xarray (13221727) · type: issue
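
As an aside (not part of the issue body above), the quantities proposed in #3363 can be computed by hand for a dask-backed object today. The following is a minimal sketch that assumes xarray and dask are installed; the variable names are illustrative only:

import numpy as np
import xarray as xr

# an illustrative dask-backed DataArray, chunked along one dimension
da = xr.DataArray(np.zeros((100, 100)), dims=("x", "y")).chunk({"x": 10})

# bytes per chunk: elements in one chunk times the dtype's itemsize
chunk_shape = tuple(sizes[0] for sizes in da.data.chunks)
nbytes_chunk = int(np.prod(chunk_shape)) * da.dtype.itemsize

# task count, as suggested in the issue: the length of the dask graph
ntasks = len(da.__dask_graph__())

print(chunk_shape, nbytes_chunk, ntasks)
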
Row 2 of 3 · pull request #3364: Make concat more forgiving with variables that are being merged.
  • id: 501150299 · node_id: MDExOlB1bGxSZXF1ZXN0MzIzNDgwMDky
  • user: dcherian (2448579) · author_association: MEMBER
  • state: closed · locked: 0 · comments: 3 · assignee: (none) · milestone: (none)
  • created_at: 2019-10-01T21:15:54Z · updated_at: 2019-10-17T01:30:32Z · closed_at: 2019-10-14T18:06:54Z
  • active_lock_reason: (none) · performed_via_github_app: (none) · draft: 0 · pull_request: pydata/xarray/pulls/3364
  • body:

  • [x] Closes #508
  • [x] Tests added
  • [x] Passes black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Downstream issue: https://github.com/marbl-ecosys/cesm2-marbl/issues/1

Basically, we are currently raising an error when attempting to merge variables that are present in some datasets but not others that are provided to concat. This seems unnecessarily strict and it turns out we had an issue for it! (#508)

  • reactions:

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3364/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  • state_reason: (none) · repo: xarray (13221727) · type: pull
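
To illustrate the behaviour change described in #3364 (this example is not taken from the PR and assumes a recent enough xarray in which the change has landed), concat fills in a variable that is missing from some inputs rather than raising:

import xarray as xr

ds1 = xr.Dataset({"a": ("x", [1, 2]), "b": ("x", [10, 20])})
ds2 = xr.Dataset({"a": ("x", [3, 4])})  # "b" is absent here

# before this change, the mismatch in variables raised an error;
# afterwards the missing "b" values are filled (NaN by default)
combined = xr.concat([ds1, ds2], dim="x")
print(combined["b"].values)
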
Row 3 of 3 · pull request #3362: Fix concat bug when concatenating unlabeled dimensions.
  • id: 501059947 · node_id: MDExOlB1bGxSZXF1ZXN0MzIzNDA1OTEz
  • user: dcherian (2448579) · author_association: MEMBER
  • state: closed · locked: 0 · comments: 3 · assignee: (none) · milestone: (none)
  • created_at: 2019-10-01T18:10:22Z · updated_at: 2019-10-08T22:30:38Z · closed_at: 2019-10-08T22:13:48Z
  • active_lock_reason: (none) · performed_via_github_app: (none) · draft: 0 · pull_request: pydata/xarray/pulls/3362
  • body:

This fixes the following behaviour. (downstream issue https://github.com/xgcm/xgcm/issues/154)

def test_concat(self, data):
    split_data = [
        data.isel(dim1=slice(3)),
        data.isel(dim1=3),  # this wouldn't work on master
        data.isel(dim1=slice(4, None)),
    ]
    assert_identical(data, concat(split_data, "dim1"))

  • reactions:

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3362/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  • state_reason: (none) · repo: xarray (13221727) · type: pull


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
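
The foreign keys above can be followed with ordinary joins. The sketch below again assumes a local github.db copy and a users table with id and login columns (the users schema is not shown on this page):

import sqlite3

conn = sqlite3.connect("github.db")
query = """
    SELECT issues.number, issues.title, users.login
    FROM issues
    JOIN users ON users.id = issues.user
    WHERE date(issues.created_at) = '2019-10-01' AND issues.user = 2448579
    ORDER BY issues.updated_at DESC
"""
for number, title, login in conn.execute(query):
    print(number, title, login)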