issue_comments: 510659320
Comment 510659320 on issue pydata/xarray#3096 by user 1197350 (MEMBER), created 2019-07-11T21:23:33Z
https://github.com/pydata/xarray/issues/3096#issuecomment-510659320

Hi @VincentDehaye. Thanks for being an early adopter! We really appreciate your feedback. I'm sorry it didn't work as expected. We are in really new territory with this feature.

I'm a bit confused about why you are using the multiprocessing module here. The recommended way of parallelizing xarray operations is via the built-in dask support. We make no guarantees that using the multiprocessing module the way you are will work correctly. When we talk about parallel append, we are always talking about dask.
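To illustrate what "built-in dask support" means, here is a minimal sketch: passing `chunks` (or calling `.chunk()`) makes the dataset dask-backed, so computations run lazily and in parallel across chunks under dask's scheduler, with no use of the multiprocessing module. The variable and dimension names are made up for the example.

```python
import numpy as np
import xarray as xr

# A small in-memory dataset; chunking it turns the underlying
# arrays into dask arrays (hypothetical names and sizes).
ds = xr.Dataset(
    {"temperature": (("time", "x"), np.random.rand(8, 4))}
).chunk({"time": 2})

# The reduction is built lazily over chunks; .compute() runs it
# in parallel via dask's scheduler.
result = ds.temperature.mean("time").compute()
```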

Your MCVE is not especially helpful for debugging because the two key functions (make_xarray_dataset and upload_to_s3) are not shown. Could you try simplifying your example a bit? I know this is hard when cloud storage is involved, but try to let us see more of what is happening under the hood.

If you are creating a dataset for the first time, you probably don't want append. You want to do

```python
ds = xr.open_mfdataset(all_the_source_files)
ds.to_zarr(s3fs_target)
```

If you are using a dask cluster, this will automatically parallelize everything.
