issue_comments
4 rows where author_association = "NONE" and issue = 342531772 sorted by updated_at descending
Columns: id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue
673565228 | https://github.com/pydata/xarray/issues/2300#issuecomment-673565228 | https://api.github.com/repos/pydata/xarray/issues/2300 | MDEyOklzc3VlQ29tbWVudDY3MzU2NTIyOA== | LunarLanding 4441338 | 2020-08-13T16:04:04Z | 2020-08-13T16:04:04Z | NONE |

I arrived here due to a different use case / problem, which ultimately I solved, but I think there's value in documenting it here. My use case is the following workflow:

1. take raw data, build a dataset, append it to a zarr store Z
2. analyze the data on Z, then maybe goto 1.

Step 2's performance is much better when data on Z is chunked properly along the appending dimension 'frame' (chunks of size 50); however, step 1 only adds 1 element along it. I end up with Z having chunks (1,1,1,1,1...) on 'frame'.

On xarray 0.16.0, this seems solvable via the encoding parameter, if we take care to only use it on the store creation. Before that version, I was using something like the monkey patch posted by @chrisbarber. Code:

```python
import shutil
import xarray as xr
import numpy as np
import tempfile

zarr_path = tempfile.mkdtemp()

def append_test(ds, chunks):
    shutil.rmtree(zarr_path)
```

sometime before 0.16.0

```python
import contextlib

@contextlib.contextmanager
def change_determine_zarr_chunks(chunks):
    orig_determine_zarr_chunks = xr.backends.zarr._determine_zarr_chunks
    try:
        def new_determine_zarr_chunks(enc_chunks, var_chunks, ndim, name):
            da = ds[name]
            zchunks = tuple(
                chunks[dim] if (dim in chunks and chunks[dim] is not None) else da.shape[i]
                for i, dim in enumerate(da.dims)
            )
            return zchunks
        xr.backends.zarr._determine_zarr_chunks = new_determine_zarr_chunks
        yield
    finally:
        xr.backends.zarr._determine_zarr_chunks = orig_determine_zarr_chunks

chunks = {'frame': 10, 'other': 50}
ds = xr.Dataset({'data': xr.DataArray(data=np.random.rand(100, 100), dims=('frame', 'other'))})
append_test(ds, chunks)
with change_determine_zarr_chunks(chunks):
    append_test(ds, chunks)
```

with 0.16.0

```python
def append_test_encoding(ds, chunks):
    shutil.rmtree(zarr_path)

append_test_encoding(ds, chunks)
```
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
zarr and xarray chunking compatibility and `to_zarr` performance 342531772 | |
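The bodies of append_test and append_test_encoding above are truncated in this export. What follows is a minimal sketch, not the comment's original script, of the 0.16.0 approach it describes: pass encoding only when the store is first created, then append without it. The helper name one_frame and the chunk sizes are illustrative assumptions.

```python
# Sketch of the encoding-on-creation approach (assumes xarray >= 0.16 with zarr installed).
import tempfile

import numpy as np
import xarray as xr

zarr_path = tempfile.mkdtemp()  # illustrative throw-away store location


def one_frame():
    # stand-in for step 1: a dataset that adds a single element along 'frame'
    return xr.Dataset(
        {"data": xr.DataArray(np.random.rand(1, 100), dims=("frame", "other"))}
    )


# store creation: fix the on-disk zarr chunking along 'frame' via encoding
one_frame().to_zarr(zarr_path, mode="w", encoding={"data": {"chunks": (50, 100)}})

# later single-frame appends reuse the chunking chosen at creation,
# instead of producing chunks of size 1 along 'frame'
for _ in range(9):
    one_frame().to_zarr(zarr_path, mode="a", append_dim="frame")

print(xr.open_zarr(zarr_path)["data"].encoding["chunks"])  # expected: (50, 100)
```

Unlike the pre-0.16 monkey patch, nothing internal to xarray is touched; only the first to_zarr call needs the encoding argument.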
493408428 | https://github.com/pydata/xarray/issues/2300#issuecomment-493408428 | https://api.github.com/repos/pydata/xarray/issues/2300 | MDEyOklzc3VlQ29tbWVudDQ5MzQwODQyOA== | tinaok 46813815 | 2019-05-17T10:37:35Z | 2019-05-17T10:37:35Z | NONE |

Hi, I'm new to xarray & zarr.

After reading a zarr file, I re-chunk the data using xarray.Dataset.chunk, then try to store the newly chunked data as a zarr file with xarray.Dataset.to_zarr. But I get this error message:

'NotImplementedError: Specified zarr chunks (200, 100, 1) would overlap multiple dask chunks ((50, 50, 50, 50), (25, 25, 25, 25), (10000,)). This is not implemented in xarray yet. Consider rechunking the data using chunk() or specifying different chunks in encoding.'

Then why do I get the 'NotImplementedError'? Do I have to use 'del dsread.data.encoding['chunks']' each time before calling 'Dataset.to_zarr' as a workaround? Probably I am missing something; I hope someone can point it out to me. I made a notebook here for reproducing the problem. Thanks for your help, regards Tina
{ "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
zarr and xarray chunking compatibility and `to_zarr` performance 342531772 | |
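For context, a small self-contained illustration of the pattern behind this error and the encoding['chunks'] workaround the comment asks about. The shapes and store paths below are made up and are not taken from the linked notebook.

```python
# A variable read from zarr keeps the old chunk layout in encoding['chunks'];
# to_zarr compares the new dask chunks against that stale encoding, not against
# the re-chunked layout, which is what triggers the error quoted above.
import numpy as np
import xarray as xr

# write a store whose zarr chunks are (100, 100)
ds = xr.Dataset(
    {"data": xr.DataArray(np.random.rand(100, 100), dims=("x", "y"))}
).chunk({"x": 100, "y": 100})
ds.to_zarr("src.zarr", mode="w")

dsread = xr.open_zarr("src.zarr")      # carries encoding['chunks'] == (100, 100)
rechunked = dsread.chunk({"x": 30})    # new dask chunks (30, 30, 30, 10) along x

# rechunked.to_zarr("dst.zarr", mode="w") would fail here on the xarray version
# quoted above, because the stale encoding (100, 100) overlaps the new dask chunks.
rechunked["data"].encoding.pop("chunks", None)  # the del ...encoding['chunks'] workaround
rechunked.to_zarr("dst.zarr", mode="w")         # now the write goes through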
406732486 | https://github.com/pydata/xarray/issues/2300#issuecomment-406732486 | https://api.github.com/repos/pydata/xarray/issues/2300 | MDEyOklzc3VlQ29tbWVudDQwNjczMjQ4Ng== | chrisbarber 1530840 | 2018-07-20T21:33:08Z | 2018-07-20T21:33:08Z | NONE |

I took a closer look and noticed my one-dimensional fields of size 505359 were reporting a chunksize of 63170. Turns out that's enough to come up with a minimal repro:

```python
```
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
zarr and xarray chunking compatibility and `to_zarr` performance 342531772 | |
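The repro code itself did not survive this export. As a rough, hedged guess at its general shape (not the original code): 505359 elements split into auto-chosen chunks of 63170 leave seven full chunks plus a ragged final chunk of 63169, and the chunk-compatibility check in the xarray releases of that era rejected such a layout when the data was written back out; current releases accept a final chunk that is smaller than the zarr chunk size.

```python
# Guessed shape of the repro described above; paths are made up.
import numpy as np
import xarray as xr

ds = xr.Dataset({"foo": (("x",), np.random.rand(505359))})
ds.to_zarr("repro1.zarr", mode="w")    # no chunks given: zarr's heuristic decides

ds2 = xr.open_zarr("repro1.zarr")      # reopened lazily; encoding records the store's chunks
print(ds2["foo"].encoding["chunks"])   # e.g. (63170,), as observed in the comment

# 505359 = 7 * 63170 + 63169, so the final chunk is one element short; the 2018-era
# compatibility check rejected this on a round-trip write, while current xarray
# accepts a smaller final chunk and writes successfully.
ds2.to_zarr("repro2.zarr", mode="w")
```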
406705740 | https://github.com/pydata/xarray/issues/2300#issuecomment-406705740 | https://api.github.com/repos/pydata/xarray/issues/2300 | MDEyOklzc3VlQ29tbWVudDQwNjcwNTc0MA== | chrisbarber 1530840 | 2018-07-20T19:36:08Z | 2018-07-20T19:38:03Z | NONE |

Ah, that's great. I do see some improvement. Specifically, I can now set chunks using xarray, and successfully write to zarr, and reopen it. However, when reopening it I do find that the chunks have been inconsistently applied (some fields have the expected chunksize whereas some small fields have the entire variable in one chunk). Furthermore, trying to write a second time with

I also tried loading my entire dataset into memory, allowing the initial

Curious: Is there any downside in xarray to using datasets with inconsistent chunks? I take it that it is a supported configuration because xarray allows it to happen, but just outputs that error when calling

One other thing to add: it might be nice to have an option to allow zarr auto-chunking even when
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
zarr and xarray chunking compatibility and `to_zarr` performance 342531772 |
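A small illustration, not taken from the issue, of what "inconsistent chunks" means in xarray and why the dataset-level chunks property raises while the individual variables remain usable (assumes dask is installed).

```python
# Two variables sharing a dimension but carrying different dask chunk sizes.
import numpy as np
import xarray as xr

a = xr.DataArray(np.zeros(100), dims="x").chunk({"x": 10})
b = xr.DataArray(np.zeros(100), dims="x").chunk({"x": 30})
ds = xr.Dataset({"a": a, "b": b})      # construction itself is allowed

print(ds["a"].chunks)                  # ((10, 10, ..., 10),)
print(ds["b"].chunks)                  # ((30, 30, 30, 10),)

# The per-variable views are fine, but the dataset-level property has no single
# chunk size per dimension to report, so it raises:
try:
    ds.chunks
except ValueError as err:
    print(err)                         # e.g. "Object has inconsistent chunks along dimension x"
```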