
issue_comments


3 rows where author_association = "MEMBER", issue = 617476316 and user = 1217238 sorted by updated_at descending



Comment 628816425 · shoyer (user 1217238) · MEMBER
created_at / updated_at: 2020-05-14T18:37:40Z
https://github.com/pydata/xarray/issues/4055#issuecomment-628816425
issue: Automatic chunking of arrays ? (617476316)

If we think we can improve an error message by adding additional context, the right solution is to use `raise Exception(...) from original_error`: https://stackoverflow.com/a/16414892/809705

On the other hand, if xarray doesn't have anything more to add on top of the original error message, it is best not to add any wrapper at all. Users will just see the original error from dask.
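A minimal sketch of the `raise ... from` pattern described above (the function names here are hypothetical stand-ins, not xarray's actual code):

```python
def estimate_chunks(dtype):
    # Stand-in for an underlying dask call that fails on object dtype.
    if dtype == "object":
        raise NotImplementedError(
            "Can not use auto rechunking with object dtype."
        )
    return (100,)

def chunk_with_context(dtype):
    try:
        return estimate_chunks(dtype)
    except NotImplementedError as err:
        # Re-raise with extra context; "from err" preserves the original
        # exception as __cause__, so the full traceback chain is shown.
        raise ValueError(
            f"automatic chunking failed for dtype {dtype!r}"
        ) from err
```

Because the original exception is attached as `__cause__`, users see both the added context and the original dask error in the traceback.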

Reactions: +1 (1)
Comment 628747933 · shoyer (user 1217238) · MEMBER
created_at / updated_at: 2020-05-14T16:31:39Z
https://github.com/pydata/xarray/issues/4055#issuecomment-628747933
issue: Automatic chunking of arrays ? (617476316)

The error message from dask is already pretty descriptive: `NotImplementedError: Can not use auto rechunking with object dtype. We are unable to estimate the size in bytes of object data`

I don't think we have much to add on top of that?

Reactions: +1 (1)
Comment 628319690 · shoyer (user 1217238) · MEMBER
created_at / updated_at: 2020-05-14T00:43:22Z
https://github.com/pydata/xarray/issues/4055#issuecomment-628319690
issue: Automatic chunking of arrays ? (617476316)

Agreed, this would be very welcome!

`chunks='auto'` isn't supported only because xarray support for dask predates it :)

Reactions: +1 (1)

Table schema:
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 4164.742ms · About: xarray-datasette