issues
3 rows where user = 3780274 sorted by updated_at descending
Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at (sort column, descending), closed_at, author_association, active_lock_reason, draft, pull_request, body, reactions, performed_via_github_app, state_reason, repo, type. Each of the three rows appears below as a record of its non-empty fields, followed by the body.
Issue #3536: Update dataset attributes and sync to disk store (open)
id: 523041080 | node_id: MDU6SXNzdWU1MjMwNDEwODA= | user: eddienko (3780274) | author_association: NONE | locked: 0 | comments: 1 | created_at: 2019-11-14T18:52:05Z | updated_at: 2023-09-21T23:09:18Z | repo: xarray (13221727) | reactions: 0

Body: Summary: I would like to be able to update dataset/DataArray attributes and sync the updates to the on-disk store. With the Zarr backend in xarray, once a dataset has been read from disk all modifications live in memory only. For the particular case of metadata stored as attributes, updating an attribute on disk therefore leaves two choices: rewrite the whole dataset, or write the attribute directly with the zarr library. The former is prohibitive, in particular for large datasets, and in general it is not good practice to have to rewrite all of the pixel data just to modify one attribute. The second is possible, and I could certainly write a custom function to do it, but is that something that should be available in the xarray library itself? Something like:
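The code example that followed "Something like:" is not included in this export. As a hedged illustration of the second choice above (updating an attribute directly with the zarr library, without rewriting the data), the following sketch assumes a store path of `example.zarr` and an unconsolidated store; neither detail comes from the issue.

```python
import xarray as xr
import zarr

# Assumed setup (not from the issue): a small dataset already on disk as Zarr.
ds = xr.Dataset({"pixels": (("y", "x"), [[1, 2], [3, 4]])}, attrs={"version": 1})
ds.to_zarr("example.zarr", mode="w", consolidated=False)

# Choice two: update the attribute in place with the zarr library,
# without touching any of the pixel data.
group = zarr.open_group("example.zarr", mode="r+")
group.attrs["version"] = 2

# Reopening with xarray shows the updated attribute.
print(xr.open_zarr("example.zarr", consolidated=False).attrs["version"])
```

The same idea applies to per-variable attributes by assigning to the attrs of the corresponding zarr array (for example `group["pixels"].attrs`) instead of the group-level attrs.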
Pull request #3526: Allow nested dictionaries in the Zarr backend (#3517) (closed)
id: 522519084 | node_id: MDExOlB1bGxSZXF1ZXN0MzQwNzA1Njc2 | user: eddienko (3780274) | author_association: NONE | locked: 0 | draft: 0 | comments: 7 | created_at: 2019-11-13T22:51:47Z | updated_at: 2023-09-14T03:04:11Z | closed_at: 2023-09-14T03:04:10Z | pull_request: pydata/xarray/pulls/3526 | repo: xarray (13221727) | reactions: 0

Body: Closes #3517. I have tried to touch as little code as possible while leaving the defaults as they are now. The code now works by redefining the value to be saved as a list containing the dictionary. However, when writing to disk the dictionary is saved correctly, without the list (I do not understand 100% how this works, but it does, so someone should double check!).
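The observation that the dictionary ends up on disk without the surrounding list is consistent with how zarr stores attributes: they are serialized as JSON, so nested dictionaries round-trip natively once xarray's validation lets them through. A small zarr-only sketch illustrating this; the store path `attrs_demo.zarr` is an illustration, not taken from the pull request.

```python
import zarr

# zarr keeps group/array attributes as JSON (.zattrs), so a nested dict
# is a valid attribute value at the zarr level.
group = zarr.open_group("attrs_demo.zarr", mode="w")
group.attrs["nested"] = {"bar": 1, "inner": {"baz": 2}}

# Reading the attribute back returns the nested structure unchanged.
print(dict(zarr.open_group("attrs_demo.zarr", mode="r").attrs))
```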
Issue #3517: Support for nested dictionaries in atttributes for Zarr (open)
id: 522187284 | node_id: MDU6SXNzdWU1MjIxODcyODQ= | user: eddienko (3780274) | author_association: NONE | locked: 0 | comments: 2 | created_at: 2019-11-13T12:28:58Z | updated_at: 2019-11-13T22:53:43Z | repo: xarray (13221727) | reactions: 1 (+1: 1)

Body: Similar to #2868, but for Zarr. In this case Zarr should support saving nested dictionaries.

Error message:

```
TypeError                                 Traceback (most recent call last)
<ipython-input-11-4e7ad4f0a599> in <module>
----> 1 ds.to_zarr('ll.zarr', mode='w')

/srv/conda/envs/notebook/lib/python3.7/site-packages/xarray/core/dataset.py in to_zarr(self, store, mode, synchronizer, group, encoding, compute, consolidated, append_dim)
   1614             compute=compute,
   1615             consolidated=consolidated,
-> 1616             append_dim=append_dim,
   1617         )
   1618

/srv/conda/envs/notebook/lib/python3.7/site-packages/xarray/backends/api.py in to_zarr(dataset, store, mode, synchronizer, group, encoding, compute, consolidated, append_dim)
   1299     # validate Dataset keys, DataArray names, and attr keys/values
   1300     _validate_dataset_names(dataset)
-> 1301     _validate_attrs(dataset)
   1302
   1303     if mode == "a":

/srv/conda/envs/notebook/lib/python3.7/site-packages/xarray/backends/api.py in _validate_attrs(dataset)
    209     # Check attrs on the dataset itself
    210     for k, v in dataset.attrs.items():
--> 211         check_attr(k, v)
    212
    213     # Check attrs on each variable within the dataset

/srv/conda/envs/notebook/lib/python3.7/site-packages/xarray/backends/api.py in check_attr(name, value)
    204                 "a string, an ndarray or a list/tuple of "
    205                 "numbers/strings for serialization to netCDF "
--> 206                 "files".format(value)
    207             )
    208

TypeError: Invalid value for attr: {'bar': 1} must be a number, a string, an ndarray or a list/tuple of numbers/strings for serialization to netCDF files
```

Additionally, the error message mentions netCDF even though I am writing a Zarr file. The workaround is to save the dictionary wrapped in a list, i.e.:
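The workaround snippet that followed "i.e.:" is not included in this export. A hedged sketch of the list-wrapping workaround, reusing the `{'bar': 1}` attribute and the `ll.zarr` path from the traceback; whether the plain dict still fails, and whether the wrapped form still passes validation, depends on the xarray version, so this only mirrors the behaviour reported in this issue and in #3526.

```python
import xarray as xr

ds = xr.Dataset()

# A nested dict is rejected by xarray's attribute validation at write time:
#   ds.attrs["foo"] = {"bar": 1}
#   ds.to_zarr("ll.zarr", mode="w")   # TypeError: Invalid value for attr ...

# Workaround reported in the issue: wrap the dictionary in a list. The
# validation shown in the traceback checks the top-level container type,
# so the write goes through (as reported in #3526).
ds.attrs["foo"] = [{"bar": 1}]
ds.to_zarr("ll.zarr", mode="w")

# At the time of the report the attribute reads back as a one-element list
# holding the dictionary.
print(xr.open_zarr("ll.zarr", consolidated=False).attrs["foo"])
```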
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
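The view at the top of this page (3 rows where user = 3780274, newest update first) can be reproduced directly against this table. A minimal sketch follows; the database filename `github.db` is an assumption, since the export does not name the file.

```python
import sqlite3

# "github.db" is an assumed filename; the export does not say what the
# SQLite database is called.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, number, title, state, updated_at, type
    FROM issues
    WHERE [user] = ?
    ORDER BY updated_at DESC
    LIMIT 3
    """,
    (3780274,),
).fetchall()
for row in rows:
    print(row)
```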