issue_comments
4 rows where author_association = "MEMBER" and issue = 257400162 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
329286600 | https://github.com/pydata/xarray/issues/1572#issuecomment-329286600 | https://api.github.com/repos/pydata/xarray/issues/1572 | MDEyOklzc3VlQ29tbWVudDMyOTI4NjYwMA== | shoyer 1217238 | 2017-09-13T20:25:33Z | 2017-09-13T20:25:33Z | MEMBER | You could do scale-offset encoding on the variable by setting | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Modifying data set resulting in much larger file size 257400162 |
329232225 | https://github.com/pydata/xarray/issues/1572#issuecomment-329232225 | https://api.github.com/repos/pydata/xarray/issues/1572 | MDEyOklzc3VlQ29tbWVudDMyOTIzMjIyNQ== | fmaussion 10050469 | 2017-09-13T17:01:09Z | 2017-09-13T17:04:12Z | MEMBER | Yes, your file uses lossy compression, which is lost in the conversion to the type double. You can either use lossy compression again, or store your data as float instead of double to reduce the output file size. (http://xarray.pydata.org/en/latest/io.html#writing-encoded-data) | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Modifying data set resulting in much larger file size 257400162 |
329232732 | https://github.com/pydata/xarray/issues/1572#issuecomment-329232732 | https://api.github.com/repos/pydata/xarray/issues/1572 | MDEyOklzc3VlQ29tbWVudDMyOTIzMjczMg== | jhamman 2443309 | 2017-09-13T17:02:57Z | 2017-09-13T17:02:57Z | MEMBER | Thanks. So, as you can see, the In the next version of xarray (0.10) we will have an improved version of `where` that will help with some of this. @fmaussion also has some good suggestions. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Modifying data set resulting in much larger file size 257400162 |
329228614 | https://github.com/pydata/xarray/issues/1572#issuecomment-329228614 | https://api.github.com/repos/pydata/xarray/issues/1572 | MDEyOklzc3VlQ29tbWVudDMyOTIyODYxNA== | jhamman 2443309 | 2017-09-13T16:48:35Z | 2017-09-13T16:48:35Z | MEMBER | @jamesstidard - can you compare the output of | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Modifying data set resulting in much larger file size 257400162 |
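
shoyer's comment above points at scale-offset encoding. A minimal sketch of what that could look like, assuming a dataset `ds` with a hypothetical variable named `temperature`:

```python
import numpy as np
import xarray as xr

# Hypothetical stand-in for the variable discussed in the issue.
ds = xr.Dataset({"temperature": (("time",), np.linspace(250.0, 320.0, 1000))})

# Pack the values into 16-bit integers on disk; netCDF readers (including
# xarray) apply scale_factor and add_offset when decoding the data back.
ds["temperature"].encoding.update(
    {
        "dtype": "int16",
        "scale_factor": 0.01,
        "add_offset": 273.15,
        "_FillValue": -9999,
    }
)
ds.to_netcdf("temperature_packed.nc")
```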
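fmaussion's suggestion of storing float instead of double uses the same encoding mechanism at write time. A sketch with assumed names (`precip`, `modified_float32.nc`); the `zlib`/`complevel` options add lossless deflate compression on top of the dtype change:

```python
import numpy as np
import xarray as xr

# Hypothetical stand-in for the modified dataset; xarray/numpy default to float64.
ds = xr.Dataset({"precip": (("time",), np.random.rand(10_000))})

# Write single-precision values instead of double, roughly halving the data
# payload, and deflate-compress the variable losslessly.
ds.to_netcdf(
    "modified_float32.nc",
    encoding={"precip": {"dtype": "float32", "zlib": True, "complevel": 4}},
)
```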
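jhamman's (truncated) comment appears to refer to `where` masking, which upcasts integer and single-precision data to float64 so that NaN can serve as the fill value; that upcast, together with the loss of the original encoding, is what inflates the output file. A small illustration with hypothetical data:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(10, dtype="int16"), dims="x")

# Masked positions become NaN, so the result is upcast from int16 to float64.
masked = da.where(da > 5)
print(da.dtype, "->", masked.dtype)  # int16 -> float64
```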
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
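
For reference, a sketch of reproducing this page's filter directly against the schema above with Python's sqlite3 module; `github.db` is an assumed path to the backing database:

```python
import sqlite3

# "github.db" is an assumed path to the SQLite file backing this table.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = ? AND issue = ?
    ORDER BY updated_at DESC
    """,
    ("MEMBER", 257400162),
).fetchall()
for comment_id, user_id, created, updated, body in rows:
    print(comment_id, user_id, updated)
```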