issue_comments: 1306300937
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/2799#issuecomment-1306300937 | https://api.github.com/repos/pydata/xarray/issues/2799 | 1306300937 | IC_kwDOAMm_X85N3JIJ | 61931826 | 2022-11-07T22:16:55Z | 2022-11-07T22:16:55Z | NONE | I'm really not understanding why indexing is so slow. My DataArray has 2 dims, one axis 1.5 million long ('node') and the other 1500 ('time'). Trying to pull a single timeseries by indexing 1 node takes 16 seconds; the Variable workaround and playing around with chunking don't change anything. The only thing loading into memory should be an array of 1500 values. Not sure what's going on under the hood, but there may be a way to specify that you're only looking to optimize indexing along 1 dim. Once it gets indexed it becomes a very tiny data set. I would think | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 416962458 |
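
For context, a minimal sketch of the access pattern the comment describes. The dimension sizes match the description; the array contents, chunk sizes, and node index are illustrative assumptions, not details from the commenter's dataset (which was presumably read from a file rather than built in memory):

```python
import dask.array as dsa
import xarray as xr

# Synthetic stand-in for the described data: 1500 time steps x
# 1.5 million nodes, kept lazy via dask so nothing loads up front.
da = xr.DataArray(
    dsa.zeros((1500, 1_500_000), dtype="float32", chunks=(1500, 10_000)),
    dims=("time", "node"),
)

# Pull one node's full timeseries; only ~1500 values should need
# to materialize, yet the commenter reports ~16 s on their data.
ts = da.isel(node=123_456).compute()

# The "Variable workaround" mentioned upthread: index the lower-level
# Variable directly, bypassing DataArray coordinate/index machinery.
ts_var = da.variable[:, 123_456].compute()
```

Note the chunking choice here (full 'time' extent, 10,000 nodes per chunk) is one way to make a single-node selection touch only one chunk; per the comment, though, chunking adjustments reportedly did not help in their case.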