
issues


1 row where repo = 13221727 and user = 885575 sorted by updated_at descending
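
The filter above selects rows from the issues table by the integer IDs in its repo (13221727) and user (885575) columns, which reference the repos and users tables per the foreign keys in the schema at the bottom of this page. A minimal sketch of an equivalent query, assuming a local SQLite copy of this database saved as github.db (the file name, and the exact SQL Datasette generates for this view, are assumptions):

import sqlite3

# Assumed local copy of the database behind this page.
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

# Same filter and ordering as the view: repo 13221727 (xarray),
# user 885575 (tsupinie), newest update first.
rows = conn.execute(
    """
    SELECT *
    FROM issues
    WHERE repo = :repo AND user = :user
    ORDER BY updated_at DESC
    """,
    {"repo": 13221727, "user": 885575},
).fetchall()

for row in rows:
    print(row["number"], row["title"], row["state"])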

Suggested facets: created_at (date), updated_at (date), closed_at (date)

Facets:
  • type: pull (1)
  • state: closed (1)
  • repo: xarray (1)

id: 164948082
node_id: MDExOlB1bGxSZXF1ZXN0NzcwMzE1NzI=
number: 895
title: Tweaks for opening datasets
user: tsupinie (885575)
state: closed
locked: 0
assignee: (none)
milestone: (none)
comments: 8
created_at: 2016-07-11T22:08:05Z
updated_at: 2020-11-08T20:18:01Z
closed_at: 2020-11-08T20:18:01Z
author_association: NONE
active_lock_reason: (none)
draft: 0
pull_request: pydata/xarray/pulls/895
performed_via_github_app: (none)
state_reason: (none)
repo: xarray (13221727)
type: pull

reactions:

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/895/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}

body:

I tweaked the open_dataset() and open_mfdataset() functions for better performance with the PyNIO engine.

1. I use a lot of HDF4 files with what I guess I'll call "malformed names," where the file name does not end with ".hdf". PyNIO is able to figure out the format of the file, but it prints an annoying warning. As far as I can tell, the only way to shut off the warning is to tell it the format of the file, which I've added a format option for.

2. I added an option called only_variables which specifies which variables to load from the dataset in the event you don't want to load all variables. Say, for example, I have a dataset with 47 variables in it, but I only need 3 of them. If the data are not cached, then only loading the 3 I need cuts the I/O time in half. If they are cached, then loading only the 3 takes 20% of the time to load the full dataset. The only_variables option behaves pretty similarly to drop_variables. The default is to load all variables.
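
The PR body above proposes two keyword arguments for open_dataset() and open_mfdataset(): format, passed through so PyNIO stops warning when a file name lacks a recognizable extension, and only_variables, which loads only the named variables (the complement of drop_variables, defaulting to loading everything). The PR is closed, so these keywords should not be assumed to exist in xarray's current API; the sketch below only illustrates the proposal, and the file name, format value, and variable names are invented for illustration:

import xarray as xr

# Hypothetical call using the options proposed in this PR (not released API).
# "format" tells PyNIO the file type up front, so it does not warn about the
# missing ".hdf" suffix; the value here is illustrative.
ds = xr.open_dataset("granule_20160711", engine="pynio", format="hdf")

# "only_variables" loads just the listed variables instead of all of them,
# cutting I/O when only a few of the file's variables are needed.
subset = xr.open_dataset(
    "granule_20160711",
    engine="pynio",
    format="hdf",
    only_variables=["temperature", "latitude", "longitude"],
)

As described, only_variables mirrors drop_variables: drop_variables names what to skip, only_variables names what to keep, and omitting both loads every variable in the file.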

Table schema:

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
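
The reactions column is declared TEXT but stores a JSON blob (the one shown for row 895 above), so individual reaction counts can be read in SQL with SQLite's JSON functions. A minimal sketch, reusing the assumed github.db file from the earlier example and assuming a SQLite build with the JSON1 functions available (the default in current builds):

import sqlite3

conn = sqlite3.connect("github.db")  # assumed local copy, as above

# Pull the "+1" reaction count out of the JSON stored in the reactions column.
# The key is quoted inside the JSON path because it starts with "+".
for number, title, plus_ones in conn.execute(
    """
    SELECT number, title, json_extract(reactions, '$."+1"')
    FROM issues
    ORDER BY updated_at DESC
    """
):
    print(number, title, plus_ones)
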
Powered by Datasette · About: xarray-datasette