issue_comments: 735849936


Comment on pydata/xarray issue #4045 by user 2418513 (author_association: NONE)
https://github.com/pydata/xarray/issues/4045#issuecomment-735849936
Created 2020-11-30T15:18:55Z · Updated 2020-11-30T15:21:02Z

> In principle we should be able to handle this (contributions are welcome)

I don't mind contributing, but since I don't know the netCDF machinery inside out, I'm not sure I have a clear picture of the proper way to do it. My use case is very simple: I have an in-memory xr.Dataset that I want to save() and then load() without losses.
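
For concreteness, a minimal sketch of that round trip with the existing xarray API (the file name is arbitrary; whether nanosecond precision actually survives the default encoding is exactly what this issue is about):

```python
import numpy as np
import xarray as xr

# In-memory dataset with nanosecond-precision timestamps.
times = np.array(["2020-11-30T15:18:55.123456789"], dtype="M8[ns]")
ds = xr.Dataset({"t": ("x", times)})

ds.to_netcdf("data.nc")                # "save()"
restored = xr.open_dataset("data.nc")  # "load()"

# The hope: this holds with no loss of nanosecond precision.
print((restored["t"].values == times).all())
```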

Should it just be an xr.save(..., m8=True) (or whatever that flag would be called), so that all of numpy's M8[...] and m8[...] values would be serialized transparently (as int64, that is) without passing through the whole cftime pipeline? It would then be nice, of course, if xr.load were also aware of this convention (via some special attribute or some other means) and could convert them back, e.g. with .view('M8[ns]'), when loading. I also think xarray should throw an exception if it detects timestamps/timedeltas of nanosecond precision that it can't serialize without going through the int-float-int round trip (or automatically fall back to this transparent but netCDF-incompatible mode).
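
To illustrate what such a "transparent" mode could look like, here is a rough sketch done by hand in user code. The `_m8_dtype` attribute name and the `encode_m8`/`decode_m8` helpers are made up for this example and are not an existing xarray API; writing int64 also assumes an engine that supports it (e.g. netCDF4).

```python
import numpy as np
import xarray as xr

def encode_m8(ds: xr.Dataset) -> xr.Dataset:
    """View M8[...]/m8[...] data variables as raw int64 and record the
    original dtype in a (made-up) attribute, bypassing the cftime pipeline."""
    out = ds.copy()
    for name, var in ds.data_vars.items():
        if var.dtype.kind in ("M", "m"):  # datetime64 / timedelta64
            out[name] = (var.dims, var.values.view("int64"))
            out[name].attrs["_m8_dtype"] = str(var.dtype)
    return out

def decode_m8(ds: xr.Dataset) -> xr.Dataset:
    """Restore variables marked by encode_m8 via a zero-copy .view()."""
    out = ds.copy()
    for name, var in ds.data_vars.items():
        dtype = var.attrs.get("_m8_dtype")
        if dtype is not None:
            out[name] = (var.dims, var.values.view(dtype))
            out[name].attrs.pop("_m8_dtype", None)
    return out

times = np.array(["2020-11-30T15:18:55.123456789"], dtype="M8[ns]")
ds = xr.Dataset({"t": ("x", times)})

encode_m8(ds).to_netcdf("roundtrip.nc")               # int64 on disk
restored = decode_m8(xr.open_dataset("roundtrip.nc"))
assert (restored["t"].values == times).all()          # no precision lost
```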

Maybe this is not the proper way to do it, so ideas are welcome. (There's also an open PR, #4400; mind checking that out?)

Reactions: 1 total (+1: 1)