issue_comments: 393479854
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/2159#issuecomment-393479854 | https://api.github.com/repos/pydata/xarray/issues/2159 | 393479854 | MDEyOklzc3VlQ29tbWVudDM5MzQ3OTg1NA== | 10668114 | 2018-05-31T09:57:01Z | 2018-05-31T09:57:01Z | NONE | Just wanted to add the same request ;)
I also do not understand what the real complexity of implementing it is. As I understand the problem, the initial full dataset is some sort of N-d hypercube that has been split into parts along any number of dimensions. When reading multiple files that are just parts of this hypercube, shouldn't it be enough to find the possible dimension values, form a hypercube, and place each file's content into the correct slot? What am I missing here? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
324350248 |
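The approach the commenter sketches (collect all coordinate values per dimension, allocate the full hypercube, then slot each file's tile into place) can be illustrated with a minimal NumPy sketch. This is only an illustration of the idea, not xarray's actual implementation; the `assemble_hypercube` helper and its tile representation are hypothetical, and it assumes each tile covers a contiguous block of the coordinate space.

```python
import numpy as np

def assemble_hypercube(tiles):
    """Assemble tiles into one array by matching coordinate values.

    ``tiles`` is a list of ``(coords, data)`` pairs, where ``coords``
    is a tuple of sorted 1-D coordinate arrays (one per dimension) and
    ``data`` holds the tile's values. Hypothetical sketch of the idea
    in the comment above, not xarray's real algorithm.
    """
    ndim = len(tiles[0][0])
    # 1. Find all possible coordinate values along each dimension.
    axes = [sorted({v for coords, _ in tiles for v in coords[d]})
            for d in range(ndim)]
    # Map each coordinate value to its position on the global axis.
    index = [{v: i for i, v in enumerate(ax)} for ax in axes]
    # 2. Allocate the full hypercube (NaN marks missing slots).
    cube = np.full([len(ax) for ax in axes], np.nan)
    # 3. Place each tile's content into its slot, assuming each tile
    #    spans a contiguous range of the global coordinates.
    for coords, data in tiles:
        slices = tuple(slice(index[d][coords[d][0]],
                             index[d][coords[d][-1]] + 1)
                       for d in range(ndim))
        cube[slices] = data
    return axes, cube
```

For example, two 2x2 tiles that split a 2x4 grid along the second dimension are recombined like this:

```python
t00 = ((np.array([0, 1]), np.array([0, 1])), np.ones((2, 2)))
t01 = ((np.array([0, 1]), np.array([2, 3])), 2 * np.ones((2, 2)))
axes, cube = assemble_hypercube([t00, t01])
# cube has shape (2, 4): ones in the left half, twos in the right
```

The part this sketch glosses over, and where the real complexity lives, is everything xarray must handle in general: tiles that overlap, coordinates that are unsorted or non-contiguous, lazily loaded data, and metadata/attribute conflicts.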