issue_comments

2 rows where author_association = "MEMBER", issue = 238284894, and user = 306380, sorted by updated_at descending

id: 311114129
html_url: https://github.com/pydata/xarray/issues/1464#issuecomment-311114129
issue_url: https://api.github.com/repos/pydata/xarray/issues/1464
node_id: MDEyOklzc3VlQ29tbWVudDMxMTExNDEyOQ==
user: mrocklin (306380)
created_at: 2017-06-26T16:39:24Z
updated_at: 2017-06-26T16:39:24Z
author_association: MEMBER
issue: Writing directly to a netCDF file while using distributed (238284894)
reactions: none

Presumably there is some object in the task graph that we don't know how to serialize. This can be fixed either in xarray, by not including such an object in the graph (recreating it in each task or wrapping it), or in Dask, by teaching it how to (de)serialize that object.
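
A minimal sketch of the "recreate it each time" option, assuming the unserializable object is an open file handle; load_slice, data.nc, and the temperature variable are placeholder names, not taken from the issue:

import dask
import netCDF4

@dask.delayed
def load_slice(path, index):
    # Open the file inside the task: only the picklable path string travels
    # through the task graph, never an open (unpicklable) file handle.
    with netCDF4.Dataset(path) as ds:
        return ds["temperature"][index, ...]

results = dask.compute(*[load_slice("data.nc", i) for i in range(4)])

The alternative, teaching Dask to (de)serialize the type, amounts to registering custom serialization functions for that type with distributed (see the Dask documentation on extending serialization).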

id: 310817771
html_url: https://github.com/pydata/xarray/issues/1464#issuecomment-310817771
issue_url: https://api.github.com/repos/pydata/xarray/issues/1464
node_id: MDEyOklzc3VlQ29tbWVudDMxMDgxNzc3MQ==
user: mrocklin (306380)
created_at: 2017-06-24T06:17:52Z
updated_at: 2017-06-24T06:17:52Z
author_association: MEMBER
issue: Writing directly to a netCDF file while using distributed (238284894)
reactions: none

It's failing to serialize something in the task graph, though I'm not sure what (I'm also surprised that the except clause didn't trigger and log the input). My first guess is that there is an open netCDF file object floating around within the task graph. If so, then we should endeavor to avoid doing this (or have some file-object proxy that is (de)serializable).
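
A small diagnostic sketch for finding the offending object, assuming a reasonably recent Dask in which collections expose __dask_graph__() and that the write has been captured as a delayed object (e.g. via to_netcdf(..., compute=False)); find_unpicklable_tasks is a hypothetical helper, not part of Dask:

import pickle

def find_unpicklable_tasks(dask_obj):
    # The low-level graph maps keys to task tuples; each task has to be
    # serializable before distributed can ship it to a worker process.
    bad = []
    for key, task in dict(dask_obj.__dask_graph__()).items():
        try:
            pickle.dumps(task)
        except Exception as exc:  # deliberately broad: we only want to report failures
            bad.append((key, exc))
    return bad

Printing the keys and exceptions returned here usually points directly at the object that cannot be serialized.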

As a short-term workaround you might try starting a local cluster within the same process.

from dask.distributed import Client

client = Client(processes=False)

This might help you to avoid serialization issues. Generally we should resolve the issue regardless though.

cc'ing @rabernat, who seems to have the most experience here.

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
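
As a hedged example of querying this schema directly, the filtered view above can be reproduced with Python's sqlite3 module; the filename github.db is an assumption, since the actual database file is not named on this page:

import sqlite3

conn = sqlite3.connect("github.db")  # assumed filename
rows = conn.execute(
    """
    SELECT id, [user], created_at, updated_at
    FROM issue_comments
    WHERE author_association = ? AND issue = ? AND [user] = ?
    ORDER BY updated_at DESC
    """,
    ("MEMBER", 238284894, 306380),
).fetchall()
for comment_id, user_id, created, updated in rows:
    print(comment_id, user_id, updated)
conn.close()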