issue_comments: 362673511

html_url: https://github.com/pydata/xarray/pull/1793#issuecomment-362673511
issue_url: https://api.github.com/repos/pydata/xarray/issues/1793
id: 362673511
node_id: MDEyOklzc3VlQ29tbWVudDM2MjY3MzUxMQ==
user: 306380
created_at: 2018-02-02T18:56:16Z
updated_at: 2018-02-02T18:56:16Z
author_association: MEMBER
performed_via_github_app: (none)
body:

SerializableLock isn't appropriate here if you want inter-process locking. Dask's distributed Lock is probably better if you're running with the distributed scheduler.
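
A minimal sketch of the distinction (not from the thread; the file and lock names are illustrative). dask.utils.SerializableLock pickles down to a token, and each process that unpickles it rebuilds its own in-process lock, so it only serializes threads within a single process; distributed.Lock is held on the scheduler, so it also serializes access across worker processes:

    from dask.utils import SerializableLock
    from distributed import Client, Lock

    thread_lock = SerializableLock()   # serializes threads in one process only
    with thread_lock:
        pass  # write here; other threads wait, other processes do not

    client = Client()                  # connect to / start a distributed scheduler
    cluster_lock = Lock("my-file.nc")  # same name => same lock, cluster-wide
    with cluster_lock:
        pass  # write here; other worker processes wait too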

On Feb 2, 2018 1:38 PM, "Joe Hamman" notifications@github.com wrote:

The test failures indicate that the netcdf4/h5netcdf libraries cannot open the file in write/append mode, seemingly because the file is already open in another process.

Two questions:

  1. autoclose is False in to_netcdf. That generally makes sense to me, but I'm concerned that we're not being explicit enough about closing the file after each process is done interacting with it. Do we have a way to lock until the file is closed? (One possible pattern is sketched after this list.)
  2. The lock we're using is dask's SerializableLock. Is that the correct Lock to be using? There is also the distributed.Lock.
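
For question 1, a sketch of one possible pattern (not necessarily what this PR does): hold a scheduler-backed lock for the entire write, so no other process can open the file until to_netcdf has finished and closed it. Assumes an active distributed client; the lock name and path are illustrative:

    import xarray as xr
    from distributed import Client, Lock

    client = Client()
    ds = xr.Dataset({"x": ("points", [1, 2, 3])})

    write_lock = Lock("out.nc")        # one named lock per target file
    with write_lock:
        # with the default compute=True, to_netcdf writes and closes the
        # file before returning, so the lock is held until the file is closed
        ds.to_netcdf("out.nc", mode="w")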

xref: dask/dask#1892 (https://github.com/dask/dask/issues/1892)

— Reply to this email directly or view it on GitHub: https://github.com/pydata/xarray/pull/1793#issuecomment-362657475

reactions: none (total_count 0)
issue: 283388962