issue_comments
4 rows where issue = 393214032 ("Xarray to Zarr error (in compress / numcodecs functions)", pydata/xarray#2624) and user = 1197350 (rabernat), sorted by updated_at descending
Comment 449184669 · rabernat (1197350) · MEMBER · 2018-12-21T00:16:40Z
https://github.com/pydata/xarray/issues/2624#issuecomment-449184669

Not a good idea in this case. The original 49 GB chunks will still exist in the task graph and will have to be computed before the rechunking step.
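The point above, as a minimal sketch (the file name and chunk sizes are hypothetical, chosen to match the figures discussed in this thread):

```python
import xarray as xr

# Suppose the dataset was opened with oversized dask chunks:
ds = xr.open_dataset("data.nc", chunks={"time": 1460})  # hypothetical file

# Rechunking after the fact does not help: the original ~49 GB chunks
# remain in the dask task graph and are computed before the rechunk step.
ds_rechunked = ds.chunk({"time": 3})

# Instead, request the smaller chunks up front, when opening the file:
ds = xr.open_dataset("data.nc", chunks={"time": 3})
```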
Comment 449151325 · rabernat (1197350) · MEMBER · 2018-12-20T22:09:20Z
https://github.com/pydata/xarray/issues/2624#issuecomment-449151325

So the key information is this: your dask chunk size is 1460 x 32 x 361 x 720 (x 4 bytes for float32 data), i.e. roughly 49 GB per chunk. Furthermore, the dask chunks will be automatically mapped to zarr chunks by xarray, and zarr chunks that big would be much too large to be useful. The Zarr docs say "at least 1 MB"; in my example notebook I recommended 10-100 MB.

For both zarr and dask, you can think of a chunk as an amount of data that can be comfortably held in memory and passed around the network. (That's where the 10-100 MB estimate comes from.) It is also the minimum amount of data that can be read from the dataset at once: even if you only need one single value, the whole chunk has to be read into memory and decompressed.

I would recommend you chunk along the time dimension. You can accomplish this by requesting smaller chunks when you open the dataset.

I imagine that will fix most of your issues.
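A back-of-the-envelope check of those numbers (the shape and 4-byte element size are taken from the comment above; chunking time in steps of 3 is one illustrative choice that lands in the recommended range):

```python
import numpy as np

shape = (1460, 32, 361, 720)             # full dask chunk, all four dims
itemsize = np.dtype("float32").itemsize  # 4 bytes per element
print(np.prod(shape) * itemsize / 1e9)   # ~48.6 -> the ~49 GB chunk

# Chunking along the time dimension brings this into the 10-100 MB range:
small = (3, 32, 361, 720)
print(np.prod(small) * itemsize / 1e6)   # ~99.8 MB per chunk
```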
Comment 449145011 · rabernat (1197350) · MEMBER · 2018-12-20T21:43:40Z
https://github.com/pydata/xarray/issues/2624#issuecomment-449145011

The syntax and an example for specifying a compressor are given in the docs here: http://xarray.pydata.org/en/latest/io.html#zarr-compressors-and-filters. It needs to be part of the encoding argument passed when writing to zarr.
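A minimal sketch of what that looks like (the variable name "precip", the file name, and the codec settings are hypothetical; the linked docs give the authoritative form):

```python
import numcodecs
import xarray as xr

ds = xr.open_dataset("data.nc", chunks={"time": 3})  # hypothetical file

# The compressor goes in the per-variable `encoding` dict passed to to_zarr:
compressor = numcodecs.Blosc(cname="zstd", clevel=3,
                             shuffle=numcodecs.Blosc.SHUFFLE)
ds.to_zarr("output.zarr", encoding={"precip": {"compressor": compressor}})
```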
Comment 449144275 · rabernat (1197350) · MEMBER · 2018-12-20T21:40:44Z
https://github.com/pydata/xarray/issues/2624#issuecomment-449144275

@ktyle - it sounds like your chunks are too big. Can you post xarray's representation of your dataset before writing it to zarr? Printing the Dataset object will show it.

p.s. I edited your comment a bit to put the code into code blocks.
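The kind of output being asked for, as a sketch (file name assumed):

```python
import xarray as xr

ds = xr.open_dataset("data.nc", chunks={"time": 3})
print(ds)  # repr lists dims, coords, data variables, and dask chunk layout
```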
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```