issue_comments


5 rows where issue = 218260909 and user = 1217238 sorted by updated_at descending

id: 298433889
html_url: https://github.com/pydata/xarray/issues/1340#issuecomment-298433889
issue_url: https://api.github.com/repos/pydata/xarray/issues/1340
node_id: MDEyOklzc3VlQ29tbWVudDI5ODQzMzg4OQ==
user: shoyer (1217238)
created_at: 2017-05-01T21:11:15Z
updated_at: 2017-05-01T21:11:15Z
author_association: MEMBER
body:

@karenamckinnon In this case, it was in the file paths, i.e., /glade/apps/opt/pandas/0.14.0/gnu/4.8.2/lib/python2.7/site-packages/pandas-0.14.0-py2.7-linux-x86_64.egg/pandas/core/index.pyc

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: round-trip performance with save_mfdataset / open_mfdataset (218260909)

id: 297572440
html_url: https://github.com/pydata/xarray/issues/1340#issuecomment-297572440
issue_url: https://api.github.com/repos/pydata/xarray/issues/1340
node_id: MDEyOklzc3VlQ29tbWVudDI5NzU3MjQ0MA==
user: shoyer (1217238)
created_at: 2017-04-26T23:48:29Z
updated_at: 2017-04-26T23:48:29Z
author_association: MEMBER
body:

@karenamckinnon From your traceback, it looks like you're using pandas 0.14, but xarray requires at least pandas 0.15.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: round-trip performance with save_mfdataset / open_mfdataset (218260909)
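
The two comments above trace the error to an old pandas (0.14.0) being picked up from the environment rather than the 0.15+ that xarray requires. A minimal sketch of how to confirm which pandas is actually being imported, using standard module attributes:

import pandas

print(pandas.__version__)  # xarray needs at least 0.15; the traceback above shows 0.14.0
print(pandas.__file__)     # reveals which installed copy (e.g. an old site-packages egg) is being imported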

id: 297566576
html_url: https://github.com/pydata/xarray/issues/1340#issuecomment-297566576
issue_url: https://api.github.com/repos/pydata/xarray/issues/1340
node_id: MDEyOklzc3VlQ29tbWVudDI5NzU2NjU3Ng==
user: shoyer (1217238)
created_at: 2017-04-26T23:08:55Z
updated_at: 2017-04-26T23:08:55Z
author_association: MEMBER
body:

@karenamckinnon could you please share a traceback for the error?

reactions:
{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: round-trip performance with save_mfdataset / open_mfdataset (218260909)

id: 290480036
html_url: https://github.com/pydata/xarray/issues/1340#issuecomment-290480036
issue_url: https://api.github.com/repos/pydata/xarray/issues/1340
node_id: MDEyOklzc3VlQ29tbWVudDI5MDQ4MDAzNg==
user: shoyer (1217238)
created_at: 2017-03-30T17:18:22Z
updated_at: 2017-03-30T17:18:22Z
author_association: MEMBER
body:

Indeed, it's not. We should add some way to pipe these arguments through auto_combine on to concat.

reactions:
{
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: round-trip performance with save_mfdataset / open_mfdataset (218260909)

id: 290477014
html_url: https://github.com/pydata/xarray/issues/1340#issuecomment-290477014
issue_url: https://api.github.com/repos/pydata/xarray/issues/1340
node_id: MDEyOklzc3VlQ29tbWVudDI5MDQ3NzAxNA==
user: shoyer (1217238)
created_at: 2017-03-30T17:07:50Z
updated_at: 2017-03-30T17:07:50Z
author_association: MEMBER
body:

My strong suspicion is that the bottleneck here is xarray checking all the coordinates for equality in concat, when deciding whether to add a "time" dimension or not.

Try passing coords='minimal' and see if that speeds things up. See the concat documentation for details: http://xarray.pydata.org/en/stable/generated/xarray.concat.html#xarray.concat

This was a convenient check for small/in-memory datasets but possibly it's not a good one going forward. It's generally slow to load all the coordinate data for comparisons, but it's even worse with the current implementation, which computes pair-wise comparisons of arrays with dask instead of doing them in parallel all at once.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: round-trip performance with save_mfdataset / open_mfdataset (218260909)
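
As the comments above note, concat accepts coords='minimal' but auto_combine / open_mfdataset could not yet forward that option at the time. A minimal workaround sketch, assuming hypothetical file paths and a "time" concatenation dimension: open the files individually and call concat directly.

import glob
import xarray as xr

# Hypothetical file pattern and dimension name; adjust to the actual dataset.
paths = sorted(glob.glob("output_*.nc"))
datasets = [xr.open_dataset(p) for p in paths]

# coords="minimal" avoids the coordinate equality comparisons described above.
combined = xr.concat(datasets, dim="time", coords="minimal")
combined.to_netcdf("combined.nc")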

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
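
Given the schema above, the filter shown at the top of this page (issue = 218260909, user = 1217238, newest updated_at first) corresponds to a straightforward query. A minimal sketch against a local copy of the database (the filename github.db is an assumption):

import sqlite3

conn = sqlite3.connect("github.db")  # assumed local copy of this database
rows = conn.execute(
    """
    SELECT id, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 218260909 AND [user] = 1217238
    ORDER BY updated_at DESC
    """
).fetchall()

for comment_id, updated_at, association, body in rows:
    # Print a one-line summary per comment.
    print(comment_id, updated_at, association, body.split("\n", 1)[0])

conn.close()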