issue_comments: 510144707

html_url: https://github.com/pydata/xarray/issues/2501#issuecomment-510144707
issue_url: https://api.github.com/repos/pydata/xarray/issues/2501
id: 510144707
node_id: MDEyOklzc3VlQ29tbWVudDUxMDE0NDcwNw==
user: 1872600
created_at: 2019-07-10T16:59:12Z
updated_at: 2019-07-11T11:47:02Z
author_association: NONE

@TomAugspurger , I sat down here at Scipy with @rabernat and he instantly realized that we needed to drop the feature_id coordinate to prevent open_mfdataset from trying to harmonize that coordinate from all the chunks.

So if I use this code, the open_mfdataset command finishes:

```python
def drop_coords(ds):
    ds = ds.drop(['reference_time', 'feature_id'])
    return ds.reset_coords(drop=True)
```

and I can then add back in the dropped coordinate values at the end:

```python
dsets = [xr.open_dataset(f) for f in files[:3]]
ds.coords['feature_id'] = dsets[0].coords['feature_id']
```
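The workflow above can be sketched end to end without real files: apply `drop_coords` to each in-memory dataset and concatenate, which emulates what `open_mfdataset(files, preprocess=drop_coords)` does across file chunks. This is a minimal sketch, not the original code — the `streamflow` variable, the synthetic `feature_id` values, and the use of `drop_vars` (the current spelling of the older `ds.drop` in the comment) are all assumptions for illustration.

```python
import numpy as np
import xarray as xr

def drop_coords(ds):
    # Drop the coordinates that open_mfdataset would otherwise try to
    # align across every chunk; drop_vars replaces the older ds.drop.
    ds = ds.drop_vars(['reference_time', 'feature_id'])
    return ds.reset_coords(drop=True)

# Two tiny synthetic "files" sharing an identical feature_id coordinate.
feature_id = np.array([101, 102, 103])
dsets = [
    xr.Dataset(
        {'streamflow': (('time', 'feature_id'), np.full((1, 3), float(t)))},
        coords={
            'time': [t],
            'feature_id': feature_id,
            'reference_time': np.datetime64('2019-07-10', 'ns'),
        },
    )
    for t in range(2)
]

# Equivalent of open_mfdataset(files, preprocess=drop_coords):
combined = xr.concat([drop_coords(ds) for ds in dsets], dim='time')

# Add the dropped coordinate values back in at the end, as in the
# comment above, taking them from the first dataset.
combined.coords['feature_id'] = dsets[0].coords['feature_id']
```

Because every chunk's `feature_id` is dropped before combining, open_mfdataset never has to compare that coordinate across files, which is what made the original call hang.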

I'm now running into memory issues when I write the zarr data -- but I should raise that as a new issue, right?
