issue_comments: 892477231
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/5655#issuecomment-892477231 | https://api.github.com/repos/pydata/xarray/issues/5655 | 892477231 | IC_kwDOAMm_X841Mh8v | 14371165 | 2021-08-04T08:39:53Z | 2021-08-04T08:39:53Z | MEMBER | I'm not so sure it simplifies that considerably. The linked PR is the minimal set of changes I had to make to get it working for my use cases, and most of the changes were just removing unnecessary … My files have 2000+ variables, each with around 8 attributes, and it starts taking a while when you have to read every one of those. At the moment, reading from file to Dataset takes about 2 s, of which roughly 600 ms is spent reading attributes. With the PR I got it down to 200 ms. Not as much as I'd hoped, but I think I can make my LazyDict implementation much faster. Changing file formats is too large a change. We have used HDF5 files for many years, and switching to a different file format is not something you can do painlessly without a (fast) backwards-compatible alternative. It's hard to motivate a switch to xarray if the old alternative reads the files faster. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 957201551 |
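
For context, a minimal sketch of the "LazyDict" idea mentioned in the comment: a mapping that defers reading HDF5 attribute values until they are actually accessed, so opening a file with thousands of variables does not pay the full attribute-read cost up front. The class name `LazyAttrs` and the direct h5py usage are illustrative assumptions, not the code from the linked PR.

```python
from collections.abc import Mapping

import h5py


class LazyAttrs(Mapping):
    """Illustrative lazy attribute mapping: values are read from the HDF5
    object only on first access and cached afterwards."""

    def __init__(self, h5obj):
        self._h5obj = h5obj   # an h5py Dataset or Group
        self._keys = None     # attribute names, fetched once on demand
        self._cache = {}      # values already read from disk

    def _names(self):
        if self._keys is None:
            self._keys = list(self._h5obj.attrs.keys())
        return self._keys

    def __getitem__(self, key):
        if key not in self._cache:
            # The actual HDF5 attribute read happens here, not at open time.
            self._cache[key] = self._h5obj.attrs[key]
        return self._cache[key]

    def __iter__(self):
        return iter(self._names())

    def __len__(self):
        return len(self._names())


# Hypothetical usage: attribute values are only read when accessed.
# with h5py.File("data.h5", "r") as f:
#     attrs = LazyAttrs(f["some_variable"])
#     units = attrs.get("units")   # triggers a single attribute read
```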