issue_comments
4 rows where issue = 935818279 sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 874205134 | https://github.com/pydata/xarray/issues/5567#issuecomment-874205134 | https://api.github.com/repos/pydata/xarray/issues/5567 | MDEyOklzc3VlQ29tbWVudDg3NDIwNTEzNA== | andreall 25382032 | 2021-07-05T15:48:50Z | 2021-07-05T15:48:50Z | NONE | oh I get it now. Thanks. Indeed it works now when chunking lat and lon from the start. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | quantile to_netcdf loading original data 935818279 |
| 873148006 | https://github.com/pydata/xarray/issues/5567#issuecomment-873148006 | https://api.github.com/repos/pydata/xarray/issues/5567 | MDEyOklzc3VlQ29tbWVudDg3MzE0ODAwNg== | dcherian 2448579 | 2021-07-02T17:21:13Z | 2021-07-02T17:21:13Z | MEMBER | It has to compute the quantile first before overwriting | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | quantile to_netcdf loading original data 935818279 |
| 873123273 | https://github.com/pydata/xarray/issues/5567#issuecomment-873123273 | https://api.github.com/repos/pydata/xarray/issues/5567 | MDEyOklzc3VlQ29tbWVudDg3MzEyMzI3Mw== | andreall 25382032 | 2021-07-02T16:37:03Z | 2021-07-02T16:37:03Z | NONE | But if I am doing | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | quantile to_netcdf loading original data 935818279 |
| 873094334 | https://github.com/pydata/xarray/issues/5567#issuecomment-873094334 | https://api.github.com/repos/pydata/xarray/issues/5567 | MDEyOklzc3VlQ29tbWVudDg3MzA5NDMzNA== | dcherian 2448579 | 2021-07-02T15:48:53Z | 2021-07-02T15:48:53Z | MEMBER | I suspect this is making your entire dataset one big chunk. I would chunk along | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | quantile to_netcdf loading original data 935818279 |
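For context, the fix the thread converges on can be sketched in a few lines of xarray. This is a minimal illustration, not the poster's actual code: the file names, dimension names (time/lat/lon), chunk sizes, and quantile value are placeholders, not taken from the issue.

```python
import xarray as xr

# Chunk along lat and lon from the start, as suggested in the thread,
# so each task works on a small spatial block. Chunk sizes are illustrative.
ds = xr.open_dataset("data.nc", chunks={"lat": 100, "lon": 100})

# Reduce over the unchunked dimension (here assumed to be "time").
# With dask-backed data this only builds a lazy task graph.
q = ds.quantile(0.95, dim="time")

# Writing triggers the computation; the quantile has to be computed
# before anything is written, as noted in the thread.
q.to_netcdf("quantile.nc")
```

Because only lat and lon are chunked, time stays in a single chunk, which dask-backed quantile requires; to_netcdf then computes and writes the reduced result block by block instead of materializing the full original data at once.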
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
```
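A minimal sketch of querying this schema directly, reproducing the "4 rows where issue = 935818279 sorted by updated_at descending" view above. It assumes Python's stdlib sqlite3; the database file name is an assumption, since this page does not name the file.

```python
import sqlite3

# Hypothetical database file name.
conn = sqlite3.connect("github.db")

# Same filter and ordering as the view above; the WHERE clause can be
# served by idx_issue_comments_issue.
rows = conn.execute(
    """
    SELECT [id], [user], [created_at], [author_association], [body]
    FROM [issue_comments]
    WHERE [issue] = ?
    ORDER BY [updated_at] DESC
    """,
    (935818279,),
).fetchall()

for comment_id, user_id, created_at, association, body in rows:
    print(comment_id, user_id, created_at, association, body)

conn.close()
```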