
I'm trying to picture some usage scenarios based on incrementally adding timesteps to data in a store. I hope these might help to answer the questions from above. In particular, I think the `append` and `region` options of `to_zarr` imply different usage patterns, so they might lead to different answers, and mixing the terms might lead to confusion.

I'll use the following dataset for demonstration code:

```python
import xarray as xr

ds = xr.Dataset(
    {"T": (("time", "x"), [[1., 2., 3.], [11., 12., 13.]])},
    coords={
        "time": (("time",), [21., 22.]),
        "x": (("x",), [100., 200., 300.]),
    },
).chunk({"time": 1})
```

## append

The purpose of `append` is to add (one or many) elements along one dimension after the end of all currently existing elements. This implies a read-modify-write cycle on at least the total shape of the array. Furthermore, the place to write new chunks is determined by the current shape of the existing array. Due to these implications, it doesn't seem useful to try appending in parallel (it would become ambiguous where to write), and it doesn't seem very useful (though possible) to write only some of the variables defined on the append dimension, because all other variables would implicitly be filled with `fill_value`, and those places couldn't be filled by another append anymore.

As a consequence, append-mode writes will always have to be sequential, and writes to objects touched by multiple append calls will always have a defined behaviour, even if they are modified / overwritten with each call. Creating and appending works as follows:

```python
# writes 0-sized time dimension, so only metadata and non-time-dependent variables
ds.isel(time=slice(0, 0)).to_zarr("test_append.zarr")

!tree -a test_append.zarr

ds.isel(time=slice(0, 1)).to_zarr("test_append.zarr", mode="a", append_dim="time")
ds.isel(time=slice(1, 2)).to_zarr("test_append.zarr", mode="a", append_dim="time")

print()
print("final dataset:")
!tree -a test_append.zarr
```

Output:

```
test_append.zarr
├── .zattrs
├── .zgroup
├── .zmetadata
├── T
│   ├── .zarray
│   └── .zattrs
├── time
│   ├── .zarray
│   └── .zattrs
└── x
    ├── .zarray
    ├── .zattrs
    └── 0

3 directories, 10 files

final dataset:
test_append.zarr
├── .zattrs
├── .zgroup
├── .zmetadata
├── T
│   ├── .zarray
│   ├── .zattrs
│   ├── 0.0
│   └── 1.0
├── time
│   ├── .zarray
│   ├── .zattrs
│   ├── 0
│   └── 1
└── x
    ├── .zarray
    ├── .zattrs
    └── 0

3 directories, 14 files
```

In this case, `x` would be overwritten with each append call, but the behaviour is well defined: as we only ever append sequentially, whatever the last write puts into `x` will be the final result, e.g. `[1, 2, 3]` in the following case:

```python
ds.isel(time=slice(0, 1)).to_zarr("test_append.zarr", mode="a", append_dim="time")
ds2 = ds.assign(x=[1, 2, 3])
ds2.isel(time=slice(1, 2)).to_zarr("test_append.zarr", mode="a", append_dim="time")
```

If instead `x` shouldn't be overwritten, it's possible to append using:

```python
ds.drop(["x"]).isel(time=slice(0, 1)).to_zarr("test_append.zarr", mode="a", append_dim="time")
ds.drop(["x"]).isel(time=slice(1, 2)).to_zarr("test_append.zarr", mode="a", append_dim="time")
```

This already works with current xarray and has well-defined behaviour. However, if there are many time-independent variables, it might be easier if something like `.drop_if_not("time")` or similar were available.
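Just to sketch what I mean (the helper below is hypothetical, not existing xarray API), something like this would already cover the append case:

```python
def drop_if_not(ds, dim):
    # hypothetical convenience helper: drop every variable that does not
    # have `dim` among its dimensions
    to_drop = [name for name, var in ds.variables.items() if dim not in var.dims]
    return ds.drop_vars(to_drop)

# usage in the append workflow above
drop_if_not(ds.isel(time=slice(1, 2)), "time").to_zarr(
    "test_append.zarr", mode="a", append_dim="time"
)
```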

## region

`region` behaves quite differently from `append`. It does not modify the shape of the arrays, and it does not depend on the current shape to determine where to write new data (it requires user input for that). This generally enables parallel writes to the same dataset (as long as only distinct chunks are touched). But as metadata (e.g. the shape) is still shared, updates to metadata must be done in a coordinated (likely sequential) manner.

Generally, the workflow with `region` would imply writing the metadata once, maybe updating it from time to time but sequentially (e.g. on a coordinating node), and writing all the chunks in parallel on worker nodes, while carefully ensuring that no common chunks are overwritten. Let's see how this might look:

```python
ds.to_zarr("test.zarr", compute=False, encoding={"time": {"chunks": [1]}})
!rm test.zarr/time/0
!rm test.zarr/time/1

!tree -a test.zarr

# NOTE: these may run in parallel (even if that's not useful in time only,
# but a region might also be in time and space)
ds.drop(['x']).isel(time=slice(0, 1)).to_zarr("test.zarr", mode="r+", region={"time": slice(0, 1)})
ds.drop(['x']).isel(time=slice(1, 2)).to_zarr("test.zarr", mode="r+", region={"time": slice(1, 2)})

print()
print("final dataset:")
!tree -a test.zarr
```

Output:

```
test.zarr
├── .zattrs
├── .zgroup
├── .zmetadata
├── T
│   ├── .zarray
│   └── .zattrs
├── time
│   ├── .zarray
│   └── .zattrs
└── x
    ├── .zarray
    ├── .zattrs
    └── 0

3 directories, 10 files

final dataset:
test.zarr
├── .zattrs
├── .zgroup
├── .zmetadata
├── T
│   ├── .zarray
│   ├── .zattrs
│   ├── 0.0
│   └── 1.0
├── time
│   ├── .zarray
│   ├── .zattrs
│   ├── 0
│   └── 1
└── x
    ├── .zarray
    ├── .zattrs
    └── 0

3 directories, 14 files
```

The above works and, as far as I understand, does what we'd want for parallel writes. It also avoids the mentioned ambiguous cases (due to the `drop(['x'])` statements). However, this case is even more cumbersome to write than the append case. The parallel writes might again benefit from something like `.drop_if_not("time")` (which probably can't be optional in this case, due to ambiguity). But what's even more problematic is the initial write of the array metadata. In order to start building the dataset, I have to scaffold a (potentially not yet computed) Dataset of full size and use `compute=False` to write only the metadata. However, this fails for coordinate variables (like `time`), because those are eagerly loaded and will still be written out. That's why I've removed those chunks in the example above.
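To make the scaffolding step concrete, here's a minimal sketch of what I mean (the sizes and the `scaffold.zarr` path are made up, and it assumes dask is installed):

```python
import dask.array as da
import numpy as np
import xarray as xr

# scaffold a full-size dataset without computing any data values
nt, nx = 100, 3
scaffold = xr.Dataset(
    {"T": (("time", "x"), da.full((nt, nx), np.nan, chunks=(1, nx)))},
    coords={
        "time": ("time", np.arange(nt, dtype="f8")),
        "x": ("x", [100.0, 200.0, 300.0]),
    },
)

# compute=False writes only the metadata for "T", but the coordinate
# variables are loaded eagerly, so their chunks still end up on disk
scaffold.to_zarr("scaffold.zarr", compute=False)
```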

If region should be used for parallel append, then there must be some process on a coordinating node which updates the metadata keys (at least by increasing the shape). I don't yet see how that could be written nicely using xarray.
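Outside of xarray, one way I could picture that coordinating step is to grow the shape metadata with zarr-python (v2 API) directly; this is only a rough sketch of the idea (the `n_new` value is illustrative), not a proposal for xarray API:

```python
import zarr

group = zarr.open_group("test.zarr", mode="r+")
n_new = 1  # number of timesteps to make room for (illustrative)

for name, arr in group.arrays():
    dims = arr.attrs.get("_ARRAY_DIMENSIONS", [])  # dimension names as written by xarray
    if "time" in dims:
        axis = dims.index("time")
        new_shape = list(arr.shape)
        new_shape[axis] += n_new
        arr.resize(*new_shape)  # only rewrites the .zarray metadata; existing chunks stay put

# if consolidated metadata (.zmetadata) is used, it has to be rewritten as well
zarr.consolidate_metadata("test.zarr")
```

Workers could then fill the newly created region with `to_zarr(..., mode="r+", region=...)` as above.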


So based on these two kinds of tasks, it seems to me that the actual `append` and `region` write modes of `to_zarr` are already doing what they should do, but some more convenience functions could make those tasks much simpler:

  • some method like `drop_if_not` (maybe with a better name) which drops all the things we don't want to keep (maybe we should call it `keep` instead of `drop`). This method would essentially result in, and simplify, mode 1 in @shoyer's answer, which I'd argue is what we actually want in both use cases, because the dropped data would already have been written by the coordinating process. I don't think mode 1 should be the default for `to_zarr` though, because silently dropping data from being written isn't nice to the user.
  • some better tooling for writing and updating zarr dataset metadata (I don't know if that would fit in the realm of xarray though, as it looks like handling Datasets without content. For "appending" metadata, I really don't know how I'd picture this properly in the xarray world.)