issue_comments: 364488847
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/1854#issuecomment-364488847 | https://api.github.com/repos/pydata/xarray/issues/1854 | 364488847 | MDEyOklzc3VlQ29tbWVudDM2NDQ4ODg0Nw== | 1797906 | 2018-02-09T16:45:51Z | 2018-02-09T16:45:51Z | NONE | That run was killed with the output `Process finished with exit code 137 (interrupted by signal 9: SIGKILL)`. I wasn't watching the machine at the time, but I assume that's it falling over due to memory pressure. Hi @jhamman, I'm just using whatever the default scheduler is, as that's pretty much all the code I've got written above. I'm unsure how to do a performance check as the dataset can't even be fully loaded currently. I've tried different chunk sizes in the past hoping to stumble on a magic size, but have been unsuccessful with that. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 291332965 |
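
The comment above mentions relying on dask's default scheduler and experimenting with chunk sizes. A minimal sketch of what that setup typically looks like with xarray is below; the file path, dimension name, and chunk sizes are hypothetical illustrations, not taken from the issue.

```python
import dask
import xarray as xr

# Pin dask to the single-threaded ("synchronous") scheduler, whose memory
# behaviour is easier to reason about than the default threaded scheduler.
dask.config.set(scheduler="synchronous")

# Hypothetical path and chunking -- the issue does not state the actual
# dataset layout. Passing `chunks` makes xarray back each variable with
# lazy dask arrays instead of loading the whole file eagerly.
ds = xr.open_dataset("data.nc", chunks={"time": 100})

# Reductions then stream through the chunks rather than materialising the
# full array in memory at once.
print(ds.mean(dim="time").compute())
```

Exit code 137 is 128 + 9, i.e. the process was terminated by SIGKILL, which on Linux is typically the kernel's OOM killer; keeping chunks small enough that each task's working set fits in memory is the usual way to avoid it.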