issue_comments: 744103639

html_url: https://github.com/pydata/xarray/issues/4045#issuecomment-744103639
issue_url: https://api.github.com/repos/pydata/xarray/issues/4045
id: 744103639
node_id: MDEyOklzc3VlQ29tbWVudDc0NDEwMzYzOQ==
user: 6628425
created_at: 2020-12-14T00:50:46Z
updated_at: 2020-12-14T00:50:46Z
author_association: MEMBER

@half-adder I've verified that #4684 fixes your initial issue. Note, however, that outside of the time you referenced, your Dataset contained times that required nanosecond precision, e.g.:

```python
data.time.isel(animal=0, timepoint=0, pair=-1, wavelength=0)
<xarray.DataArray 'time' ()>
array('2017-02-22T16:24:14.722999999', dtype='datetime64[ns]')
Coordinates:
    wavelength         <U3 '410'
    strain             object 'HD233'
    stage_x            float64 1.64e+04
    stage_y            float64 -429.0
    stage_z            float64 2.155e+04
    bin_x              float64 4.0
    bin_y              float64 4.0
    exposure           float64 90.0
    mvmt-anterior      uint8 0
    mvmt-posterior     uint8 0
    mvmt-sides_of_tip  uint8 0
    mvmt-tip           uint8 0
    experiment_id      object '2017_02_22-HD233_SAY47'
    time               datetime64[ns] 2017-02-22T16:24:14.722999999
    animal_            uint64 0
```
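To illustrate why this matters, here is a minimal numpy sketch (using the timestamp from the repr above, outside the original comment): a value with a non-zero nanosecond remainder cannot be expressed as a whole number of microseconds, so encoding it with microsecond units must truncate it.

```python
import numpy as np

# The nanosecond-precision timestamp from the repr above.
t = np.datetime64("2017-02-22T16:24:14.722999999", "ns")

# datetime64[ns] values are int64 nanosecond offsets from the epoch.
ns = int(t.astype("int64"))

# The offset is not a whole number of microseconds, so encoding with
# "microseconds since ..." units would silently drop the last 999 ns.
print(ns % 1000)  # 999
```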

So in order for the times to be round-tripped exactly, you will need to override the original units in the dataset's encoding with nanoseconds instead of microseconds. This was not possible before, but now is with #4684.

```python
data.time.encoding["units"] = "nanoseconds since 1900-01-01"
```

With #4684 you could also simply delete the original units, and xarray will now automatically choose appropriate units so that the datetimes can be serialized as int64 values (and hence round-trip exactly).

```python
del data.time.encoding["units"]
```
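As a sanity check on why int64 nanosecond encoding round-trips exactly, here is a minimal numpy sketch (separate from xarray's actual encoding machinery): datetime64[ns] values are already int64 nanosecond offsets, so converting to int64 and back is lossless.

```python
import numpy as np

t = np.datetime64("2017-02-22T16:24:14.722999999", "ns")

# Encode as int64 nanoseconds since the numpy epoch (1970-01-01)...
encoded = t.astype("int64")

# ...and decode back; no precision is lost.
decoded = encoded.astype("datetime64[ns]")
assert decoded == t
```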

Reactions: +1 ×2, hooray ×1 (total 3)