issue_comments: 510167911


html_url: https://github.com/pydata/xarray/issues/2501#issuecomment-510167911
issue_url: https://api.github.com/repos/pydata/xarray/issues/2501
id: 510167911
user: 1312546
created_at: 2019-07-10T18:05:07Z
updated_at: 2019-07-10T18:05:07Z
author_association: MEMBER
issue: 372848074

Great, thanks. I’ll look into the memory issue when writing. We may already have an issue for it.

On Jul 10, 2019, at 10:59, Rich Signell <notifications@github.com> wrote:

@TomAugspurger, I sat down here at SciPy with @rabernat and he instantly realized that we needed to drop the feature_id coordinate to prevent open_mfdataset from trying to harmonize that coordinate from all the chunks.

So if I use this code, the open_mfdataset command finishes:

```python
def drop_coords(ds):
    ds = ds.drop(['reference_time', 'feature_id'])
    return ds.reset_coords(drop=True)
```

and I can then add back in the dropped coordinate values at the end:

```python
dsets = [xr.open_dataset(f) for f in files[:3]]
ds.coords['feature_id'] = dsets[0].coords['feature_id']
```

I'm now running into memory issues when I write the zarr data -- but I should raise that as a new issue, right?
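For context, here is a minimal sketch of how a preprocess function like drop_coords plugs into open_mfdataset, together with the coordinate re-attachment and zarr write described above. The glob pattern, output path, and file list are illustrative assumptions, not taken from this thread:

```python
import glob
import xarray as xr

# Hypothetical input list; the actual file names are not given in the thread.
files = sorted(glob.glob('nwm_*.nc'))

def drop_coords(ds):
    # Dropping these coordinates stops open_mfdataset from trying to
    # compare and align them across every input file.
    ds = ds.drop(['reference_time', 'feature_id'])
    return ds.reset_coords(drop=True)

# `preprocess` applies drop_coords to each file before concatenation.
ds = xr.open_mfdataset(files, preprocess=drop_coords)

# Re-attach the dropped coordinate values from the first file.
ds.coords['feature_id'] = xr.open_dataset(files[0]).coords['feature_id']

# The memory issue mentioned above arises at this write step.
ds.to_zarr('output.zarr')
```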

