issue_comments
5 rows where author_association = "MEMBER" and issue = 466994138 sorted by updated_at descending
Columns: id, html_url, issue_url, node_id, user, created_at, updated_at (sort column, descending), author_association, body, reactions, performed_via_github_app, issue

id: 1092436439
html_url: https://github.com/pydata/xarray/issues/3096#issuecomment-1092436439
issue_url: https://api.github.com/repos/pydata/xarray/issues/3096
node_id: IC_kwDOAMm_X85BHUHX
user: max-sixty (5635139)
created_at: 2022-04-08T04:43:14Z
updated_at: 2022-04-08T04:43:14Z
author_association: MEMBER
body: I think this was closed by https://github.com/pydata/xarray/pull/4035 (which I'm going to start using shortly!), so I'll close this, but feel free to reopen if I missed something.
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: Support parallel writes to zarr store (466994138)

id: 730446943
html_url: https://github.com/pydata/xarray/issues/3096#issuecomment-730446943
issue_url: https://api.github.com/repos/pydata/xarray/issues/3096
node_id: MDEyOklzc3VlQ29tbWVudDczMDQ0Njk0Mw==
user: rabernat (1197350)
created_at: 2020-11-19T15:22:41Z
updated_at: 2020-11-19T15:22:41Z
author_association: MEMBER
body: Just a note that #4035 provides a new way to do parallel writing to zarr stores. @VincentDehaye & @cdibble, would you be willing to test this out and see if it meets your needs?
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } 
performed_via_github_app:
issue: Support parallel writes to zarr store (466994138)

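PR #4035 referenced in the comment above added the `region` argument to `Dataset.to_zarr`, which is the mechanism that enables this kind of parallel write. The following is a minimal illustrative sketch of that pattern, not code from the thread; the store path, array shape, chunk size, and the serial loop standing in for parallel workers are all assumptions.

```python
import numpy as np
import xarray as xr

STORE = "example.zarr"  # hypothetical target path

# 1. Write a lazy "template" dataset that defines the full shape, dtype and
#    chunking of the store without computing any data (compute=False).
template = xr.Dataset(
    {"temperature": (("time", "x"), np.zeros((100, 10)))}
).chunk({"time": 10})
template.to_zarr(STORE, compute=False, mode="w")

# 2. Each worker then fills in its own chunk-aligned slab along "time".
def write_block(start, stop):
    block = xr.Dataset(
        {"temperature": (("time", "x"), np.random.rand(stop - start, 10))}
    )
    block.to_zarr(STORE, region={"time": slice(start, stop)})

# Run serially here for brevity; in practice each call would run in a
# separate process, task, or dask worker.
for start in range(0, 100, 10):
    write_block(start, start + 10)
```

Keeping each region aligned with the zarr chunk boundaries (here, multiples of 10 along "time") is what lets the writers operate without coordinating with each other.
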
id: 516047812
html_url: https://github.com/pydata/xarray/issues/3096#issuecomment-516047812
issue_url: https://api.github.com/repos/pydata/xarray/issues/3096
node_id: MDEyOklzc3VlQ29tbWVudDUxNjA0NzgxMg==
user: rabernat (1197350)
created_at: 2019-07-29T15:47:13Z
updated_at: 2019-07-29T15:47:13Z
author_association: MEMBER
body: @VincentDehaye - we are eager to help you. But it is difficult to hit a moving target. I would like to politely suggest that we keep this issue on topic: making sure that parallel append to zarr store works as expected. Your latest post revealed that you did not try our suggested resolution (use …). I recommend you open a new, separate issue related to "storing different variables being indexed by the same dimension".
reactions: { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: Support parallel writes to zarr store (466994138)

id: 511174605
html_url: https://github.com/pydata/xarray/issues/3096#issuecomment-511174605
issue_url: https://api.github.com/repos/pydata/xarray/issues/3096
node_id: MDEyOklzc3VlQ29tbWVudDUxMTE3NDYwNQ==
user: shoyer (1217238)
created_at: 2019-07-14T05:28:22Z
updated_at: 2019-07-14T05:28:43Z
author_association: MEMBER
body: Yes, this is the suggested workflow! It is definitely possible to create a zarr dataset and then write to it in parallel with a bunch of processes, but not via xarray's …
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: Support parallel writes to zarr store (466994138)

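The body above is truncated in this export after "not via xarray's". As an illustration of the workflow it describes (create the zarr dataset first, then have independent processes fill disjoint pieces outside of xarray), here is a hedged sketch using the zarr-python API directly; the store path, array size, chunking, and pool size are assumptions, not details from the thread.

```python
import multiprocessing

import numpy as np
import zarr

STORE = "parallel-example.zarr"  # hypothetical store path
N, CHUNK = 100, 10


def init_store():
    # One process creates the array up front with its final shape and chunking.
    zarr.open(STORE, mode="w", shape=(N,), chunks=(CHUNK,), dtype="f8")


def write_slice(start):
    # Each worker reopens the store and writes a chunk-aligned slice,
    # so no two processes ever touch the same chunk.
    z = zarr.open(STORE, mode="r+")
    z[start:start + CHUNK] = np.random.rand(CHUNK)


if __name__ == "__main__":
    init_store()
    with multiprocessing.Pool(4) as pool:
        pool.map(write_slice, range(0, N, CHUNK))
```
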
id: 510659320
html_url: https://github.com/pydata/xarray/issues/3096#issuecomment-510659320
issue_url: https://api.github.com/repos/pydata/xarray/issues/3096
node_id: MDEyOklzc3VlQ29tbWVudDUxMDY1OTMyMA==
user: rabernat (1197350)
created_at: 2019-07-11T21:23:33Z
updated_at: 2019-07-11T21:23:33Z
author_association: MEMBER
body: Hi @VincentDehaye. Thanks for being an early adopter! We really appreciate your feedback. I'm sorry it didn't work as expected. We are in really new territory with this feature. I'm a bit confused about why you are using the multiprocessing module here. The recommended way of parallelizing xarray operations is via the built-in dask support. There are no guarantees that multiprocessing like you're doing will work right. When we talk about parallel append, we are always talking about dask. Your MCVE is not especially helpful for debugging because the two key functions (make_xarray_dataset and upload_to_s3) are not shown. Could you try simplifying your example a bit? I know it is hard when cloud is involved. But try to let us see more of what is happening under the hood. If you are creating a dataset for the first time, you probably don't want append. You want to do … If you are using a dask cluster, this will automatically parallelize everything.
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: Support parallel writes to zarr store (466994138)

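The code block that followed "You want to do" in the comment above did not survive this export. A hedged sketch of the general dask-backed workflow it points to (open the sources lazily, choose the target chunking, write once with to_zarr instead of appending) might look like the following; the file pattern, chunk size, and store path are assumptions.

```python
import xarray as xr

# Open all source files lazily as one dask-backed dataset.
ds = xr.open_mfdataset("source_files/*.nc")

# Rechunk to the chunking the zarr store should have.
ds = ds.chunk({"time": 100})

# A single write; with a dask cluster attached, the chunks are written in
# parallel, so no append step is needed for the initial write.
ds.to_zarr("target.zarr", mode="w")
```
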
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);