issue_comments: 485460901
| field | value |
|---|---|
| html_url | https://github.com/pydata/xarray/issues/2912#issuecomment-485460901 |
| issue_url | https://api.github.com/repos/pydata/xarray/issues/2912 |
| id | 485460901 |
| node_id | MDEyOklzc3VlQ29tbWVudDQ4NTQ2MDkwMQ== |
| user | 1217238 |
| created_at | 2019-04-22T16:06:50Z |
| updated_at | 2019-04-22T16:06:50Z |
| author_association | MEMBER |
| body | You're using dask, so the Dataset is being lazily computed. If one part of your pipeline is very expensive (perhaps reading the original data from disk?) then the process of saving can be very slow. I would suggest doing some profiling, e.g., as shown in this example: http://docs.dask.org/en/latest/diagnostics-local.html#example Once we know what the slow part is, that will hopefully make opportunities for improvement more obvious. |
| reactions | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
| performed_via_github_app |  |
| issue | 435535284 |
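
The comment body above suggests profiling the lazy dask computation with dask's local diagnostics. A minimal sketch of what that could look like when a lazily built Dataset is written to disk; the file paths, chunking, and profiler settings here are illustrative assumptions, not details from the original report:

```python
import xarray as xr
from dask.diagnostics import Profiler, ResourceProfiler, ProgressBar, visualize

# Hypothetical inputs -- the original issue's data source is not shown here.
ds = xr.open_mfdataset("data/*.nc", chunks={"time": 100})

# Nothing has been computed yet; the write below triggers the whole pipeline.
# Wrap it in dask's local diagnostics to see where the time actually goes.
with Profiler() as prof, ResourceProfiler(dt=0.25) as rprof, ProgressBar():
    ds.to_netcdf("output.nc")

# Renders task timings plus CPU/memory usage (requires bokeh), which should
# show whether reading the source data, computing, or writing dominates.
visualize([prof, rprof])
```

This follows the pattern in the linked dask documentation (http://docs.dask.org/en/latest/diagnostics-local.html#example); the resulting plots are usually enough to tell whether the slow step is the original read, an intermediate computation, or the write itself.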