issue_comments
where author_association = "MEMBER", issue = 140291221, and user = 1217238, sorted by updated_at descending
Columns: id, html_url, issue_url, node_id, user, created_at, updated_at, author_association, body, reactions, performed_via_github_app, issue
id: 199547343
html_url: https://github.com/pydata/xarray/issues/793#issuecomment-199547343
issue_url: https://api.github.com/repos/pydata/xarray/issues/793
node_id: MDEyOklzc3VlQ29tbWVudDE5OTU0NzM0Mw==
user: shoyer (1217238)
created_at: 2016-03-22T00:01:52Z
updated_at: 2016-03-22T00:01:52Z
author_association: MEMBER
body: This should be pretty easy -- we'll just need to add […]. The only subtlety is that this needs to be done in a way that depends on the version of dask, because the keyword argument is new -- something like […]
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: dask.async.RuntimeError: NetCDF: HDF error on xarray to_netcdf (140291221)
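The version-dependent keyword handling this comment alludes to can be sketched in pure Python. Everything here is an assumption for illustration: the helper name `store_kwargs`, the cutoff version `(0, 8)`, and the use of a plain `threading.Lock` are hypothetical, not the actual change that landed in xarray.

```python
import threading

def store_kwargs(dask_version):
    """Build the extra kwargs to pass to dask.array.store, adding the
    (then-new) ``lock`` argument only when the installed dask is recent
    enough to accept it. The cutoff version here is illustrative."""
    major_minor = tuple(int(part) for part in dask_version.split(".")[:2])
    kwargs = {}
    if major_minor >= (0, 8):  # hypothetical first version with ``lock``
        kwargs["lock"] = threading.Lock()
    return kwargs
```

The point of the pattern is that older dask would raise `TypeError` on an unknown keyword, so the caller only adds it when the version check passes.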
id: 196924992
html_url: https://github.com/pydata/xarray/issues/793#issuecomment-196924992
issue_url: https://api.github.com/repos/pydata/xarray/issues/793
node_id: MDEyOklzc3VlQ29tbWVudDE5NjkyNDk5Mg==
user: shoyer (1217238)
created_at: 2016-03-15T17:04:57Z
updated_at: 2016-03-15T17:27:29Z
author_association: MEMBER
body: I did a little digging into this and I'm pretty sure the issue here is that HDF5 cannot do multi-threading -- at all. Moreover, many HDF5 builds are not thread safe. Right now, we use a single shared lock for all reads with xarray, but for writes we rely on dask.array.store, which uses a separate lock for each array it writes. Because @pwolfram's HDF5 file includes multiple variables, each of these gets written with its own thread lock -- which means we end up writing to the same file simultaneously from multiple threads. So what we could really use here is a […]
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: dask.async.RuntimeError: NetCDF: HDF error on xarray to_netcdf (140291221)
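The failure mode this comment describes -- one lock per variable, so writes to different variables in the same file can interleave -- and the proposed fix of one lock shared across all write tasks can be sketched with plain threading. `FakeHDF5File` and `store_all` are hypothetical stand-ins for illustration, not xarray or dask API.

```python
import threading

class FakeHDF5File:
    """Stand-in for a non-thread-safe file handle: it refuses to let
    two threads be inside write() at the same time."""
    def __init__(self):
        self._busy = False
        self.written = []

    def write(self, name, data):
        if self._busy:
            raise RuntimeError("NetCDF: HDF error (concurrent write)")
        self._busy = True
        self.written.append((name, data))
        self._busy = False

def store_all(f, variables, lock):
    """Write every variable from its own thread, but serialize all of
    the writes through one shared lock -- the fix the comment sketches,
    instead of a fresh lock per variable."""
    def write_one(name, data):
        with lock:
            f.write(name, data)
    threads = [threading.Thread(target=write_one, args=item)
               for item in variables.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return f
```

With per-variable locks, nothing stops two threads from entering `write()` at once; with the single shared lock, every write is serialized even though each variable still gets its own thread.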
id: 195637636
html_url: https://github.com/pydata/xarray/issues/793#issuecomment-195637636
issue_url: https://api.github.com/repos/pydata/xarray/issues/793
node_id: MDEyOklzc3VlQ29tbWVudDE5NTYzNzYzNg==
user: shoyer (1217238)
created_at: 2016-03-12T02:19:18Z
updated_at: 2016-03-12T02:19:18Z
author_association: MEMBER
body: I'm pretty sure we now have a thread lock around all writes to NetCDF files, but it's possible that isn't aggressive enough (maybe we can't safely read and write a different file at the same time?). If your script works with synchronous execution, I'll take another look.
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: dask.async.RuntimeError: NetCDF: HDF error on xarray to_netcdf (140291221)
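The debugging suggestion in this comment -- rerun with synchronous execution and see whether the failure goes away -- was done in dask of that era with something like `dask.set_options(get=dask.async.get_sync)` (in current dask, `dask.config.set(scheduler="synchronous")`). The isolation technique itself, sketched without dask; `run_tasks` is a hypothetical helper for illustration:

```python
import threading

def run_tasks(tasks, synchronous=False):
    """Run callables either one at a time on the main thread or
    concurrently on worker threads. If a job fails when threaded but
    succeeds when synchronous, the bug is a thread-safety problem in
    the code being called, not a logic problem in the tasks."""
    if synchronous:
        for task in tasks:
            task()
    else:
        threads = [threading.Thread(target=task) for task in tasks]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
```

Running the same task graph both ways is exactly what separates "HDF5 is being entered from two threads at once" from "the write itself is wrong."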
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
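The filter shown at the top of this page can be reproduced against that schema with Python's sqlite3 module. A minimal sketch, assuming an in-memory copy of the table; the foreign-key clauses are dropped because the referenced users/issues tables are not part of this export, and `member_comments` is a name invented here:

```python
import sqlite3

# Schema from the page, minus the REFERENCES clauses.
SCHEMA = """
CREATE TABLE issue_comments (
    html_url TEXT, issue_url TEXT, id INTEGER PRIMARY KEY, node_id TEXT,
    user INTEGER, created_at TEXT, updated_at TEXT, author_association TEXT,
    body TEXT, reactions TEXT, performed_via_github_app TEXT, issue INTEGER
);
"""

def member_comments(conn, issue_id, user_id):
    """The page's filter as SQL: MEMBER comments on one issue by one
    user, newest update first."""
    return conn.execute(
        "SELECT id, updated_at FROM issue_comments"
        " WHERE author_association = 'MEMBER' AND issue = ? AND user = ?"
        " ORDER BY updated_at DESC",
        (issue_id, user_id),
    ).fetchall()
```

ISO-8601 timestamps stored as TEXT sort correctly under plain string comparison, which is why `ORDER BY updated_at DESC` gives newest-first without any date parsing.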