issue_comments: 531617569
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
https://github.com/pydata/xarray/issues/3306#issuecomment-531617569 | https://api.github.com/repos/pydata/xarray/issues/3306 | 531617569 | MDEyOklzc3VlQ29tbWVudDUzMTYxNzU2OQ== | 15016780 | 2019-09-16T01:22:09Z | 2019-09-16T01:22:09Z | NONE | Thanks @rabernat. I tried what you suggested (with a small subset, since the source files are quite large), and it seems to work on smaller subsets when writing locally. This leads me to suspect that running the same process on larger datasets is overloading memory, but I can't confirm the root cause yet. This isn't blocking my current strategy, so I'm closing for now. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 493058488
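The original suggestion from @rabernat is not reproduced in this record, but the comment describes a subset-first workflow: open the large source files lazily, verify the pipeline on a small slice written locally, and only then scale up. A minimal sketch of that pattern with xarray is below; the file paths, dimension names, and chunk sizes are hypothetical, not taken from the issue.

```python
import xarray as xr

# Open the source files lazily with dask-backed chunks so nothing is
# loaded into memory up front (paths and chunk sizes are hypothetical).
ds = xr.open_mfdataset("source_*.nc", combine="by_coords", chunks={"time": 100})

# Test the pipeline on a small subset first, as the comment describes;
# running the full dataset eagerly is what may exhaust memory.
subset = ds.isel(time=slice(0, 10))

# Write the subset locally; to_netcdf streams the dask chunks to disk
# rather than materializing the whole array in memory.
subset.to_netcdf("subset_local.nc")
```

If the small-subset write succeeds but the full-size run fails, that is consistent with the memory-pressure hypothesis in the comment: the fix is usually smaller chunks or a streaming write target, not a change to the computation itself.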