
issue_comments

3 rows where issue = 393214032 and user = 2443309 sorted by updated_at descending

id: 453799948
html_url: https://github.com/pydata/xarray/issues/2624#issuecomment-453799948
issue_url: https://api.github.com/repos/pydata/xarray/issues/2624
node_id: MDEyOklzc3VlQ29tbWVudDQ1Mzc5OTk0OA==
user: jhamman (2443309)
created_at: 2019-01-13T03:54:07Z
updated_at: 2019-01-13T03:54:07Z
author_association: MEMBER
body: I'm going to close this as the original issue (error in compression/codecs) has been resolved. @ktyle - I'd be happy to continue this discussion on the Pangeo issue tracker if you'd like to discuss optimal chunk layout more.
reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Xarray to Zarr error (in compress / numcodecs functions) (393214032)

id: 451206728
html_url: https://github.com/pydata/xarray/issues/2624#issuecomment-451206728
issue_url: https://api.github.com/repos/pydata/xarray/issues/2624
node_id: MDEyOklzc3VlQ29tbWVudDQ1MTIwNjcyOA==
user: jhamman (2443309)
created_at: 2019-01-03T16:59:06Z
updated_at: 2019-01-03T16:59:06Z
author_association: MEMBER
body: @ktyle - glad to hear things are moving for you. I'm pretty sure the last chunk in each of your datasets is smaller than the rest. So after concatenation, you end up with a small chunk in the middle and at the end of the time dimension. I bet if you used a chunk size of 172 (divides evenly into 2924), you wouldn't need to rechunk.
reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Xarray to Zarr error (in compress / numcodecs functions) (393214032)
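
The chunk-size arithmetic in the comment above can be checked directly. A minimal sketch in plain Python (the 2924-step time axis comes from the comment; the split into four 731-step input files is an assumption for illustration):

```python
# Chunk layout an axis of `length` gets when split into `chunksize` pieces,
# mirroring how dask chunks an array axis: full chunks plus one remainder.
def chunk_layout(length, chunksize):
    full, rem = divmod(length, chunksize)
    return [chunksize] * full + ([rem] if rem else [])

# Assumed per-file time lengths (4 x 731 = 2924); only the total is from the issue.
files = [731, 731, 731, 731]

# A non-divisor chunk size (100 here) leaves a ragged 31-step chunk at the end
# of every file, which lands mid-array after concatenation -- the "small chunk
# in the middle and at the end of the time dimension" described above.
print([c for n in files for c in chunk_layout(n, 100)])

# 172 divides 2924 evenly (17 * 172 = 2924), so the concatenated axis chunks
# uniformly and no rechunk is needed.
print(chunk_layout(2924, 172))
```
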
id: 449184291
html_url: https://github.com/pydata/xarray/issues/2624#issuecomment-449184291
issue_url: https://api.github.com/repos/pydata/xarray/issues/2624
node_id: MDEyOklzc3VlQ29tbWVudDQ0OTE4NDI5MQ==
user: jhamman (2443309)
created_at: 2018-12-21T00:14:22Z
updated_at: 2018-12-21T00:14:22Z
author_association: MEMBER
body: You can also rechunk your dataset after the fact using the chunk method:

```python
ds = ds.chunk({'time': 1})
```

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Xarray to Zarr error (in compress / numcodecs functions) (393214032)
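
As a self-contained illustration of that rechunk-then-write pattern, here is a sketch with a synthetic dataset (the variable name, spatial sizes, and output path are placeholders; dask and zarr must be installed):

```python
import numpy as np
import xarray as xr

# Synthetic stand-in for the concatenated dataset from the issue;
# the variable name and spatial sizes are assumptions.
ds = xr.Dataset(
    {"air": (("time", "lat", "lon"), np.zeros((2924, 4, 4)))},
    coords={"time": np.arange(2924)},
)

# Rechunk after the fact with the chunk method, as suggested in the
# comment above; 172 divides the 2924-step time axis evenly.
ds = ds.chunk({"time": 172})

# Write the uniformly chunked dataset to a Zarr store (placeholder path).
ds.to_zarr("example.zarr", mode="w")
```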

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
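
Given this schema, the row selection shown at the top of the page can be reproduced with Python's sqlite3 module. This is a sketch only: the database filename github.db is an assumption about how the local SQLite file is named.

```python
import sqlite3

conn = sqlite3.connect("github.db")  # assumed filename for the local database

# The query behind "3 rows where issue = 393214032 and user = 2443309
# sorted by updated_at descending".
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, body
    FROM issue_comments
    WHERE issue = 393214032 AND user = 2443309
    ORDER BY updated_at DESC
    """
).fetchall()

for comment_id, created, updated, body in rows:
    print(comment_id, updated, body[:60])
```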