issues: 517799069
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
517799069 | MDU6SXNzdWU1MTc3OTkwNjk= | 3486 | Should performance be equivalent when opening with chunks or re-chunking a dataset? | 7799184 | open | 0 |  |  | 2 | 2019-11-05T14:14:58Z | 2021-08-31T15:28:04Z |  | CONTRIBUTOR |  |  |  | I was wondering if the chunking behaviour would be expected to be equivalent under two different use cases: (1) when opening a dataset using the chunks argument, or (2) when re-chunking the dataset after opening it. I'm interested in performance for slicing across different dimensions. In my case the performance is quite different; please see the example below. Open dataset with one single chunk along | { "url": "https://api.github.com/repos/pydata/xarray/issues/3486/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  |  | 13221727 | issue |
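
The two code paths being compared in the issue body can be illustrated with a minimal sketch. The file name, variable name, and dimension names below are placeholders, not the ones from the original report; the chunk sizes are illustrative only.

```python
import xarray as xr

PATH = "example.nc"  # placeholder: any NetCDF file with dims ("time", "x", "y")

# (1) Chunk at open time: variables are wrapped in dask arrays with the
#     requested chunk sizes as the dataset is first opened.
ds_open = xr.open_dataset(PATH, chunks={"time": 1000})

# (2) Open first, then re-chunk the resulting dataset afterwards.
ds_rechunk = xr.open_dataset(PATH).chunk({"time": 1000})

# Both datasets report the same chunk layout...
print(ds_open["var"].chunks)
print(ds_rechunk["var"].chunks)

# ...but slicing across a different dimension (e.g. extracting a point
# time series) may perform quite differently between the two:
ts_open = ds_open["var"].isel(x=0, y=0).compute()
ts_rechunk = ds_rechunk["var"].isel(x=0, y=0).compute()
```

Timing the two `.compute()` calls (for example with `%timeit` in IPython) is one way to reproduce the kind of comparison the issue describes.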