issue_comments: 627375551
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/4043#issuecomment-627375551 | https://api.github.com/repos/pydata/xarray/issues/4043 | 627375551 | MDEyOklzc3VlQ29tbWVudDYyNzM3NTU1MQ== | 48764870 | 2020-05-12T14:19:24Z | 2020-05-12T14:19:24Z | NONE | @rabernat - Thank you! I will review the code (thank you for the extra comments, I really appreciate that) and follow your instructions to test the chunk size. Just for my understanding: is it theoretically impossible to make big requests without using chunking? The THREDDS server is under our management, and we want to know whether these errors can be solved through a specific configuration of the THREDDS service. Thank you in advance! | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 614144170 |
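For context on the chunking question raised in the comment: the idea behind chunked requests is to split one large read along a dimension into several smaller ones, which is conceptually what passing `chunks=` to `xarray.open_dataset` does. A minimal sketch of that splitting logic (a hypothetical helper, not code from this thread) might look like:

```python
def iter_chunks(total_size, chunk_size):
    """Yield (start, stop) index pairs covering range(total_size)
    in consecutive chunks of at most chunk_size elements."""
    for start in range(0, total_size, chunk_size):
        yield start, min(start + chunk_size, total_size)

# Example: a 1000-step time axis requested 256 steps at a time,
# instead of as one large request that a remote server might reject.
slices = list(iter_chunks(1000, 256))
# Each (start, stop) pair would back one smaller server request.
```

In practice one would not write this by hand: passing e.g. `chunks={"time": 256}` when opening a remote dataset lets xarray/dask issue these smaller reads automatically, with the chunk size tuned to whatever request size the server tolerates.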