issue_comments
4 rows where author_association = "MEMBER" and issue = 924676925 sorted by updated_at descending
All four comments below belong to issue **Nan/ changed values in output when only reading data, saving and reading again** (924676925), with issue_url https://api.github.com/repos/pydata/xarray/issues/5490. None has any reactions, and none was performed via a GitHub App.

---

**1531496369** · kmuehlbauer (5821660) · MEMBER
created/updated 2023-05-02T13:38:49Z · node_id `IC_kwDOAMm_X85bSMex`
https://github.com/pydata/xarray/issues/5490#issuecomment-1531496369

> This is indeed an issue with That is not a problem per se, but those attributes are obviously different for different files. When concatenating, only the first file's attributes survive. That might already be the source of the above problem, as it might slightly change values. An even bigger problem arises when the dynamic ranges of the decoded data (min/max) don't overlap: then the data might be folded from the lower border to the upper border or vice versa. I've put an example into #5739. The suggestion for now is, as @keewis commented, to drop the encoding in such cases and use floating-point values for writing. You might use the available compression options for floating-point data.

---

**1531465011** · kmuehlbauer (5821660) · MEMBER
created/updated 2023-05-02T13:20:46Z · node_id `IC_kwDOAMm_X85bSE0z`
https://github.com/pydata/xarray/issues/5490#issuecomment-1531465011

> Xref: #5739

---

**864386633** · kmuehlbauer (5821660) · MEMBER
created/updated 2021-06-19T10:18:21Z · node_id `MDEyOklzc3VlQ29tbWVudDg2NDM4NjYzMw==`
https://github.com/pydata/xarray/issues/5490#issuecomment-864386633

> @lthUniBonn You would need to use

---

**864131761** · keewis (14808389) · MEMBER
created/updated 2021-06-18T15:52:18Z · node_id `MDEyOklzc3VlQ29tbWVudDg2NDEzMTc2MQ==`
https://github.com/pydata/xarray/issues/5490#issuecomment-864131761

> related to that there's also #5082, which proposes to drop the encoding more aggressively.
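The "folding" that comment 1531496369 describes can be illustrated without xarray: when data is packed with a `scale_factor`/`add_offset` encoding taken from one file, values outside that encoding's dynamic range wrap around the integer storage type. The sketch below simulates CF-style int16 packing in pure Python; the particular scale, offset, and data values are hypothetical, chosen only to show the wraparound.

```python
INT16_MIN = -2**15  # int16 storage range is [-32768, 32767]

def pack_int16(value, scale_factor, add_offset):
    """CF-style packing, with a C-style cast to int16 simulated in pure Python."""
    packed = round((value - add_offset) / scale_factor)
    # Out-of-range integers wrap modulo 2**16, just like a hardware int16 cast.
    return ((packed - INT16_MIN) % 2**16) + INT16_MIN

def unpack(packed, scale_factor, add_offset):
    """Inverse of pack_int16 (ignoring the rounding loss)."""
    return packed * scale_factor + add_offset

# This encoding can represent roughly [-327.68, 327.67].
scale, offset = 0.01, 0.0

ok = unpack(pack_int16(300.0, scale, offset), scale, offset)
print(ok)      # inside the representable range: round-trips to 300.0

folded = unpack(pack_int16(400.0, scale, offset), scale, offset)
print(folded)  # 400.0 is out of range and "folds" to a large negative value
```

This is the failure mode the comment warns about: reusing the first file's encoding for concatenated data whose min/max lies outside that encoding's range silently corrupts values, which is why dropping the encoding and writing floats is the safer path.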
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
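The schema above can be exercised with Python's built-in `sqlite3` module. This sketch recreates the `issue_comments` table (stubbing out the `users`/`issues` foreign-key targets so it is self-contained) and replays the query behind this page: MEMBER comments on issue 924676925, newest update first. The two inserted rows are reduced copies of rows shown above, carrying only the columns the query touches.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [users] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issues] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY, [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT, [updated_at] TEXT,
   [author_association] TEXT, [body] TEXT,
   [reactions] TEXT, [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# Two of the rows shown on this page, reduced to the relevant columns.
conn.executemany(
    "INSERT INTO issue_comments (id, user, updated_at, author_association, issue)"
    " VALUES (?, ?, ?, ?, ?)",
    [
        (1531496369, 5821660, "2023-05-02T13:38:49Z", "MEMBER", 924676925),
        (864131761, 14808389, "2021-06-18T15:52:18Z", "MEMBER", 924676925),
    ],
)

# The page's query: MEMBER comments on issue 924676925, sorted by
# updated_at descending (ISO 8601 timestamps sort correctly as text).
rows = conn.execute(
    "SELECT id, updated_at FROM issue_comments"
    " WHERE author_association = 'MEMBER' AND issue = 924676925"
    " ORDER BY updated_at DESC"
).fetchall()
print(rows)  # most recently updated comment first
```

Note that sorting text timestamps works here only because they are ISO 8601 strings in a single timezone, which is how the GitHub API emits them.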