issue_comments: 363931288
| html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| https://github.com/pydata/xarray/issues/1895#issuecomment-363931288 | https://api.github.com/repos/pydata/xarray/issues/1895 | 363931288 | MDEyOklzc3VlQ29tbWVudDM2MzkzMTI4OA== | 1217238 | 2018-02-07T22:22:40Z | 2018-02-07T22:22:40Z | MEMBER | I don't think there's any caching here. All of these objects are stateless, though<br>No, not particularly, though potentially opening a zarr store could be a little expensive. I'm mostly not sure how this would be done. Currently, we open files, create array objects, do some lazy decoding and then create dask arrays with | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | 295270362 |
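The pipeline the comment describes (open a store cheaply, create array objects, decode lazily at access time, with no caching because the objects are stateless) can be illustrated with a minimal stdlib-only sketch. The names `LazyArray`, `open_store`, and the scale/offset decode function are hypothetical stand-ins for illustration, not xarray's or zarr's actual API:

```python
class LazyArray:
    """Wraps raw values and decodes on every access.

    No result is cached, mirroring the comment's point that these
    objects are stateless. (Illustrative only, not xarray's API.)
    """

    def __init__(self, raw, decode):
        self._raw = raw        # e.g. undecoded values read from a store chunk
        self._decode = decode  # e.g. a CF-style scale/offset conversion

    def values(self):
        # Decoding happens here, at access time, not at "open" time.
        return [self._decode(x) for x in self._raw]


def open_store(raw_chunks):
    # Stand-in for opening a file or zarr store: cheap, does no
    # decoding itself, just returns lazy array objects per chunk.
    return [
        LazyArray(chunk, decode=lambda x: x * 0.5 + 10.0)
        for chunk in raw_chunks
    ]


# "Opening" is inexpensive; work is deferred until .values() is called,
# analogous to wrapping the lazy arrays in dask chunks afterwards.
arrays = open_store([[0, 1], [2, 3]])
flat = [v for a in arrays for v in a.values()]
print(flat)
```

Here each call to `values()` re-runs the decode, so repeated access repeats the work; in the real pipeline that cost is managed by wrapping these lazy objects in dask arrays rather than by caching inside them.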