
issue_comments


2 rows where author_association = "CONTRIBUTOR" and issue = 274308380 sorted by updated_at descending




id: 345310488
html_url: https://github.com/pydata/xarray/issues/1720#issuecomment-345310488
issue_url: https://api.github.com/repos/pydata/xarray/issues/1720
node_id: MDEyOklzc3VlQ29tbWVudDM0NTMxMDQ4OA==
user: WeatherGod (291576)
created_at: 2017-11-17T17:33:13Z
updated_at: 2017-11-17T17:33:13Z
author_association: CONTRIBUTOR
body:

Awesome! Thanks!

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Possible regression with PyNIO data not being lazily loaded (274308380)
id: 345124033
html_url: https://github.com/pydata/xarray/issues/1720#issuecomment-345124033
issue_url: https://api.github.com/repos/pydata/xarray/issues/1720
node_id: MDEyOklzc3VlQ29tbWVudDM0NTEyNDAzMw==
user: WeatherGod (291576)
created_at: 2017-11-17T02:08:50Z
updated_at: 2017-11-17T02:08:50Z
author_association: CONTRIBUTOR
body:

Is there a convenient sentinel I can check for loaded-ness? The only reason I noticed this was that I was debugging another problem with my processing of HRRR files (~600 MB each) and the memory usage shot up (did you know that top will report memory usage as fractions of terabytes when you get high enough?). I could test this with some smaller netCDF4 files if I could just loop through the variables and assert some sentinel.

On Thu, Nov 16, 2017 at 8:57 PM, Stephan Hoyer notifications@github.com wrote:

@WeatherGod https://github.com/weathergod can you verify that you don't get immediate loading when loading netCDF files, e.g., with scipy or netCDF4-python?

We did change how loading of data works with printing in this release (#1532 https://github.com/pydata/xarray/pull/1532), but if anything the changes should go the other way, to do less loading of data.

I'm having trouble debugging this locally because I can't seem to get a working version of pynio installed from conda-forge on OS X (running into various ABI incompatibility issues when I try this in a new conda environment).


reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Possible regression with PyNIO data not being lazily loaded (274308380)
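The question in the comment above, looping through a dataset's variables and asserting a "sentinel" for loaded-ness, can be sketched generically. This is a minimal illustration, not xarray's API: `LazyBackendArray` and `eagerly_loaded` are hypothetical names, and xarray's real lazy wrappers are private backend classes.

```python
class LazyBackendArray:
    """Hypothetical stand-in for a backend's lazy, not-yet-read array wrapper."""
    def __init__(self, shape):
        self.shape = shape

def eagerly_loaded(variables):
    """Return names of variables whose data is already a concrete
    in-memory container rather than the lazy wrapper."""
    return sorted(name for name, data in variables.items()
                  if not isinstance(data, LazyBackendArray))

variables = {
    "temperature": LazyBackendArray((600, 400)),  # still lazy, on disk
    "latitude": [40.0, 40.5, 41.0],               # already in memory
}
print(eagerly_loaded(variables))  # prints ['latitude']
```

Under this sketch, an unexpected eager load would show up as extra names in the returned list, which is exactly the kind of assertion the comment asks for.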

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
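The schema above can be exercised with Python's stdlib sqlite3 to reproduce this page's filter (author_association = "CONTRIBUTOR", issue = 274308380, sorted by updated_at descending). A minimal sketch: the REFERENCES clauses are dropped because the users and issues tables are out of scope here, and only the columns the query touches are populated, using the two rows shown on this page.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Rebuild the table and its indexes (foreign-key clauses omitted).
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# The two rows from this page (subset of columns).
rows = [
    (345310488, 291576, "2017-11-17T17:33:13Z", "CONTRIBUTOR", 274308380),
    (345124033, 291576, "2017-11-17T02:08:50Z", "CONTRIBUTOR", 274308380),
]
conn.executemany(
    "INSERT INTO issue_comments (id, user, updated_at, author_association, issue)"
    " VALUES (?, ?, ?, ?, ?)", rows)

# ISO-8601 timestamps sort correctly as text, so ORDER BY works on TEXT columns.
ids = [r[0] for r in conn.execute(
    "SELECT id FROM issue_comments"
    " WHERE author_association = 'CONTRIBUTOR' AND issue = 274308380"
    " ORDER BY updated_at DESC")]
print(ids)  # prints [345310488, 345124033]
```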
Powered by Datasette · About: xarray-datasette