issue_comments


4 rows where issue = 96732359 sorted by updated_at descending


id 124219424 · rabernat (1197350) · MEMBER · 2015-07-23T19:34:52Z
https://github.com/pydata/xarray/issues/489#issuecomment-124219424
issue: problems with big endian DataArrays (96732359)

Thanks for looking into it. In the meantime, I decided to try writing a custom backend for my data instead.
id 124199070 · shoyer (1217238) · MEMBER · 2015-07-23T18:28:32Z
https://github.com/pydata/xarray/issues/489#issuecomment-124199070
issue: problems with big endian DataArrays (96732359)

This is a bug in bottleneck: https://github.com/kwgoodman/bottleneck/issues/104

You can work around this issue for now by uninstalling bottleneck. This will have a slight performance cost for little endian arrays, but it shouldn't be a big deal.

I'll also add a check to ensure we never try to pass big endian arrays to bottleneck.
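The check described above can be sketched with NumPy's `dtype.byteorder` flag; the helper name below is hypothetical, not xray's actual function:

```python
import numpy as np

def safe_for_bottleneck(arr):
    """Hypothetical guard: True only for arrays in native byte order.

    NumPy reports native-order dtypes as '=' (or '|' when byte order is
    irrelevant, e.g. int8); a non-native order shows up as '<' or '>'.
    Non-native arrays should fall back to plain NumPy reductions.
    """
    return arr.dtype.byteorder in ('=', '|')

big = np.arange(5, dtype='>f8')     # explicitly big endian
native = np.arange(5, dtype='f8')   # machine byte order
```

On a typical little endian machine, `safe_for_bottleneck(big)` is False and `safe_for_bottleneck(native)` is True, so only `native` would be handed to bottleneck.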
id 124129560 · rabernat (1197350) · MEMBER · 2015-07-23T14:47:31Z
https://github.com/pydata/xarray/issues/489#issuecomment-124129560
issue: problems with big endian DataArrays (96732359)

I apparently do have bottleneck installed, although I was unaware of it until now. What is the relationship between bottleneck and this issue?
id 124002256 · shoyer (1217238) · MEMBER · 2015-07-23T07:13:26Z
https://github.com/pydata/xarray/issues/489#issuecomment-124002256
issue: problems with big endian DataArrays (96732359)

Do you have bottleneck installed?

I've seen error messages from summing big endian arrays before, but never silently wrong results.

We resolved many of these issues for netcdf3 files by coercing arrays to little endian upon reading them from disk. We might even extend this to all arrays loaded into xray.
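The coerce-on-read approach can be sketched as follows (the function name is illustrative, not xray's actual helper):

```python
import numpy as np

def to_native_byteorder(arr):
    """Illustrative sketch: return arr in the machine's native byte order,
    byte-swapping (and copying) only when the dtype is non-native."""
    if arr.dtype.byteorder in ('=', '|'):  # already native, or order-free (e.g. int8)
        return arr
    return arr.astype(arr.dtype.newbyteorder('='))

big = np.arange(4, dtype='>i4')    # big endian, e.g. as read from a netcdf3 file
fixed = to_native_byteorder(big)
assert fixed.dtype.byteorder in ('=', '|')
assert (fixed == big).all()        # values are unchanged, only the byte layout differs
```

Applying this once at load time means downstream libraries such as bottleneck only ever see native-order arrays.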

Table schema:

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 9.581ms · About: xarray-datasette