issue_comments: 533801682
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---
https://github.com/pydata/xarray/issues/2912#issuecomment-533801682 | https://api.github.com/repos/pydata/xarray/issues/2912 | 533801682 | MDEyOklzc3VlQ29tbWVudDUzMzgwMTY4Mg== | 668201 | 2019-09-21T14:21:17Z | 2019-09-21T14:21:17Z | NONE | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 435535284

body:

@jhamman Could you elaborate on these ways? I am seeing severe slow-downs when writing Datasets block by block (backed by dask), and I have noticed that the slowdowns do not occur when writing to a ramdisk. Here are the timings of

The workaround suggested here works, but the datasets may not always fit in memory, and loading them defeats the essential purpose of dask. Note: I am using dask 2.3.0 and xarray 0.12.3.
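For reference, a minimal sketch of the two write paths the comment contrasts: streaming a dask-backed Dataset to disk block by block versus the load-first workaround. The synthetic data, chunk sizes, and file names are illustrative assumptions, not from the original comment.

```python
import dask.array as da
import xarray as xr

# Build a chunked (dask-backed) Dataset with synthetic data.
data = da.random.random((1000, 1000), chunks=(100, 1000))
ds = xr.Dataset({"var": (("x", "y"), data)})

# Direct write: each dask block is computed and streamed to disk.
# This is the path reported as slow on disk (but not on ramdisk).
ds.to_netcdf("direct.nc")

# Workaround: materialize the whole Dataset in memory first, then write.
# Avoids the slowdown, but requires everything to fit in RAM, which
# defeats the point of using dask for larger-than-memory data.
ds.load().to_netcdf("loaded.nc")
```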