issue_comments
1 row where author_association = "MEMBER", issue = 202964277, and user = 2443309, sorted by updated_at descending
id: 326138431
html_url: https://github.com/pydata/xarray/issues/1225#issuecomment-326138431
issue_url: https://api.github.com/repos/pydata/xarray/issues/1225
node_id: MDEyOklzc3VlQ29tbWVudDMyNjEzODQzMQ==
user: jhamman (2443309)
created_at: 2017-08-30T22:36:14Z
updated_at: 2017-08-30T22:36:14Z
author_association: MEMBER
body: @tbohn - What is happening here is that xarray is storing the netCDF4 chunk size from the input file. For the `LAI` variable, that chunk size is kept in the variable's `chunksizes` encoding attribute, and those integers correspond to the dimensions of `LAI`. When you slice your dataset, you end up with lat/lon dimensions that are now smaller than the stored `chunksizes`. The logical fix is to validate this encoding attribute and either 1) throw an informative error if something isn't going to work, or 2) change the `chunksizes` to something that will work.
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: “ValueError: chunksize cannot exceed dimension size” when trying to write xarray to netcdf (202964277)
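For background on the mechanism the comment describes, here is a minimal sketch of the failure mode and workaround using the public xarray API. The file names, variable name, and dimension sizes are invented for illustration; only `encoding['chunksizes']` and the slice-then-write sequence come from the discussion above.

```python
import numpy as np
import xarray as xr

# Write a small file with explicit netCDF4 chunking (a stand-in for the
# original input file; names and sizes here are illustrative).
ds = xr.Dataset({"LAI": (("time", "lat", "lon"), np.zeros((5, 160, 160)))})
ds.to_netcdf("lai.nc", encoding={"LAI": {"chunksizes": (1, 160, 160)}})

# Re-opening keeps the on-disk chunk size in the variable's encoding.
ds2 = xr.open_dataset("lai.nc")
print(ds2["LAI"].encoding["chunksizes"])  # (1, 160, 160)

# Slicing shrinks lat/lon below the stored chunk size, but the stale
# encoding is carried along, so on affected versions a plain to_netcdf()
# raises "ValueError: chunksize cannot exceed dimension size".
subset = ds2.isel(lat=slice(0, 100), lon=slice(0, 100))

# Workaround: drop (or shrink) the stale chunksizes before writing.
subset["LAI"].encoding.pop("chunksizes", None)
subset.to_netcdf("subset.nc")
```

Fix option 2) from the comment would amount to clamping each stored chunk size to the corresponding dimension length at write time, rather than raising an error.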
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
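The table above can also be queried directly with Python's standard sqlite3 module. A sketch of the query behind this page, assuming the schema lives in a local database file (the `github.db` name is hypothetical):

```python
import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical local copy of this database
conn.row_factory = sqlite3.Row

# Same filter as the page header: MEMBER comments on issue 202964277
# by user 2443309, newest update first.
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = 'MEMBER'
      AND issue = 202964277
      AND [user] = 2443309
    ORDER BY updated_at DESC
    """
).fetchall()

for row in rows:
    print(row["id"], row["updated_at"])

conn.close()
```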