
issue_comments


3 rows where author_association = "CONTRIBUTOR" and issue = 927617256 sorted by updated_at descending




id: 1546942397
html_url: https://github.com/pydata/xarray/issues/5511#issuecomment-1546942397
node_id: IC_kwDOAMm_X85cNHe9
user: josephnowak (25071375)
created_at: 2023-05-14T16:41:38Z
updated_at: 2023-05-14T17:03:57Z
author_association: CONTRIBUTOR

Hi @shoyer, sorry for bothering you with this issue again. I know it is old by now, but I ran into it again a few days ago, and I have also noticed the same problem when using the region parameter. Based on this issue I opened on Zarr (https://github.com/zarr-developers/zarr-python/issues/1414), I think it would be good to implement one of these options to solve the problem:

  1. A warning in the docs indicating that you need to add a synchronizer if you want to append or update data in a Zarr store, or that you need to manually align the chunks based on the size of the missing data in the last chunk to get independent writes (see the sketch after this list).

  2. Automatically align the chunks to get independent writes (which I think can produce slower writes due to the modification of the chunks).

  3. Raise an error if there is no synchronizer and the chunks are not properly aligned; I think this error could be controlled through the safe_chunks parameter that the to_zarr method already offers.
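
A minimal sketch of the manual alignment from option 1, assuming the sizes used in the example later in this thread (152 existing elements, chunks of 30, 156 new values) and a partially filled last chunk; the variable names are illustrative:

```py
import numpy as np
import dask.array as da

# With 152 elements in chunks of 30, the last zarr chunk holds only 2 of its
# 30 slots, so the first 28 appended values land in a chunk that already
# exists and two writers can touch it concurrently. Splitting the new data so
# its first dask chunk exactly fills that tail keeps every write independent.
n_old, chunk = 152, 30
new_values = np.array([50.3] * 156)

tail = (-n_old) % chunk                # 28 missing slots in the last chunk
rest = len(new_values) - tail          # 128 values left after filling the tail
blocks = (tail,) + (chunk,) * (rest // chunk)
if rest % chunk:
    blocks += (rest % chunk,)          # final partial block of 8
aligned = da.from_array(new_values, chunks=(blocks,))  # (28, 30, 30, 30, 30, 8)
```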

reactions: none
issue: Appending data to a dataset stored in Zarr format produce PermissonError or NaN values in the final result (927617256)
id: 869196682
html_url: https://github.com/pydata/xarray/issues/5511#issuecomment-869196682
node_id: MDEyOklzc3VlQ29tbWVudDg2OTE5NjY4Mg==
user: josephnowak (25071375)
created_at: 2021-06-27T17:15:20Z
updated_at: 2021-06-27T17:15:20Z
author_association: CONTRIBUTOR

Hi again. I checked the behavior of Zarr and Dask a little more, and I found that the problem only occurs when the lock option of the da.store method is set to None or False. I think this is the problem when xarray writes to Zarr with Dask (I saw in the code that it uses lock = None by default); if you set lock = True, all the problems disappear. Below you can find an example:

```py
import numpy as np
import zarr
import dask.array as da

# Write a small zarr array with 42.2 as the value
z1 = zarr.open('data/example.zarr', mode='w', shape=(152), chunks=(30), dtype='f4')
z1[:] = 42.2

# Resize the array to make room for the appended data
z2 = zarr.open('data/example.zarr', mode='a')
z2.resize(308)

# New data to append
append_data = da.from_array(np.array([50.3] * 156), chunks=(30))

# If you pass None or False to the lock parameter you will get the
# PermissionError or some 0s in the final result
da.store(append_data, z2, regions=[tuple([slice(152, 308)])], lock=None)

# The result can contain many 0s or throw an error
print(z2[:])
```

Hope this helps to fix the bug.
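
As a usage note, reusing append_data and z2 from the snippet above, the lock=True variant the comment describes would look like this:

```py
# With an explicit lock, dask serializes the writes that touch the shared
# chunk, so the appended region comes back intact.
da.store(append_data, z2, regions=[tuple([slice(152, 308)])], lock=True)
print(z2[:])  # 152 values of 42.2 followed by 156 values of 50.3
```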

reactions: none
issue: Appending data to a dataset stored in Zarr format produce PermissonError or NaN values in the final result (927617256)
id: 867715379
html_url: https://github.com/pydata/xarray/issues/5511#issuecomment-867715379
node_id: MDEyOklzc3VlQ29tbWVudDg2NzcxNTM3OQ==
user: josephnowak (25071375)
created_at: 2021-06-24T15:08:47Z
updated_at: 2021-06-24T15:08:47Z
author_association: CONTRIBUTOR

Hi (sorry if this sounds annoying). I checked the code used to append data to Zarr stores a little, and from my perspective the logic is correct: it takes into account the case where the last chunks have different shapes, because it works with the shape of the unmodified array and then resizes it to write in regions with Dask.

I ran the same code that I left in the previous comment, but I passed a synchronizer to the to_zarr method (synchronizer=zarr.ThreadSynchronizer()), and all the problems related to the NaNs and PermissionErrors disappeared, so this looks more like a synchronization problem between Zarr and Dask.

Hope this helps to fix the bug.
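
A hypothetical reconstruction of that experiment at the xarray level (the dataset, sizes, and store path are assumptions based on the dask snippet in the previous comment):

```py
import numpy as np
import xarray as xr
import zarr

sync = zarr.ThreadSynchronizer()

# Initial write: 152 values of 42.2 in chunks of 30, as in the dask example.
ds = xr.Dataset({"x": ("t", np.full(152, 42.2, dtype="f4"))}).chunk({"t": 30})
ds.to_zarr("data/example.zarr", mode="w", synchronizer=sync)

# Append 156 values of 50.3; with the ThreadSynchronizer the NaN values and
# PermissionErrors described above no longer appear.
new = xr.Dataset({"x": ("t", np.full(156, 50.3, dtype="f4"))}).chunk({"t": 30})
new.to_zarr("data/example.zarr", append_dim="t", synchronizer=sync)
```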

reactions: none
issue: Appending data to a dataset stored in Zarr format produce PermissonError or NaN values in the final result (927617256)


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
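
For reference, a short sketch of querying this table directly from the underlying SQLite database; the github.db filename is an assumption, and the query mirrors the filter and ordering described at the top of this page:

```py
import sqlite3

# Rows where author_association = 'CONTRIBUTOR' and issue = 927617256,
# sorted by updated_at descending, matching the page header above.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    select id, user, created_at, updated_at, body
    from issue_comments
    where author_association = ? and issue = ?
    order by updated_at desc
    """,
    ("CONTRIBUTOR", 927617256),
).fetchall()
for comment_id, user, created, updated, body in rows:
    print(comment_id, user, updated)
conn.close()
```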