Comment 806863336 by user 2418513 (author association: NONE) on pydata/xarray issue #2857, created 2021-03-25T14:35:28Z, updated 2021-03-25T17:15:06Z:
https://github.com/pydata/xarray/issues/2857#issuecomment-806863336

I wonder if it would help to use the same underlying h5py.File or h5netcdf.File when appending.

I don't think it's about what's happening in the current Python process, or about which instances are being cached; it's about the general logic.

For instance, in the example above, run it once (e.g. with the range set to 50), then run it again with the block that clears the file commented out and the range set to 50-100. The very first dataset written in the second run is already very slow, slower than the last dataset written in the first run, which means the slowdown is not about reusing the same File instance.
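The two-run experiment above can be sketched directly against h5py, without xarray in the loop. This is a hypothetical reconstruction, not the original script from the issue: the file name, group layout, and array size are assumptions chosen only to illustrate the point. A fresh `h5py.File` is opened for every write, so if per-write time still grows with the number of groups already in the file, a cached File instance cannot be the cause.

```python
import os
import time

import h5py
import numpy as np

FNAME = "append_bench.h5"  # hypothetical benchmark file


def write_groups(start, stop, clear=False):
    """Append one small dataset per group and return per-write wall times.

    A brand-new h5py.File instance is opened and closed for each write,
    so no File object (or its caches) survives between iterations.
    """
    if clear and os.path.exists(FNAME):
        os.remove(FNAME)  # mimics the "clear the file" block in the example
    data = np.zeros(1000)
    times = []
    for i in range(start, stop):
        t0 = time.perf_counter()
        with h5py.File(FNAME, "a") as f:
            f.create_dataset(f"group{i}/var", data=data)
        times.append(time.perf_counter() - t0)
    return times


# First run: start from an empty file, write groups 0-49.
first = write_groups(0, 50, clear=True)
# Second run: do NOT clear the file, write groups 50-99 on top.
second = write_groups(50, 100, clear=False)
```

If the comment's observation holds, the entries of `second` start out slower than the tail of `first`, even though every write used its own freshly opened File.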
