issue_comments
9 rows where author_association = "CONTRIBUTOR", issue = 1077079208 and user = 6574622, sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
1059405550 | https://github.com/pydata/xarray/issues/6069#issuecomment-1059405550 | https://api.github.com/repos/pydata/xarray/issues/6069 | IC_kwDOAMm_X84_JT7u | d70-t 6574622 | 2022-03-04T18:16:57Z | 2022-03-04T18:16:57Z | CONTRIBUTOR | I'll set up a new issue. @Boorhin, I couldn't confirm the weirdness with the small example, but will put in a note to your comment. If you can reproduce the weirdness on the minimal example, would you make a comment to the new issue? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
to_zarr: region not recognised as dataset dimensions 1077079208 | |
1059378287 | https://github.com/pydata/xarray/issues/6069#issuecomment-1059378287 | https://api.github.com/repos/pydata/xarray/issues/6069 | IC_kwDOAMm_X84_JNRv | d70-t 6574622 | 2022-03-04T17:39:24Z | 2022-03-04T17:39:24Z | CONTRIBUTOR | I've made a simpler example of the problem. The workaround: …
@dcherian, @Boorhin, should we make a new (CF-related) issue out of this and try to keep focussing on the append and region use-cases here, which seemed to be the initial problem in this thread (probably by going further through your example, @Boorhin)? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
to_zarr: region not recognised as dataset dimensions 1077079208 | |
1059078961 | https://github.com/pydata/xarray/issues/6069#issuecomment-1059078961 | https://api.github.com/repos/pydata/xarray/issues/6069 | IC_kwDOAMm_X84_IEMx | d70-t 6574622 | 2022-03-04T11:27:12Z | 2022-03-04T11:27:44Z | CONTRIBUTOR | btw, as a work-around it works when removing the encoding: …
But still, this might call for another issue to solve. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
to_zarr: region not recognised as dataset dimensions 1077079208 | |
1059076885 | https://github.com/pydata/xarray/issues/6069#issuecomment-1059076885 | https://api.github.com/repos/pydata/xarray/issues/6069 | IC_kwDOAMm_X84_IDsV | d70-t 6574622 | 2022-03-04T11:23:56Z | 2022-03-04T11:23:56Z | CONTRIBUTOR | Ok, I believe I've now reproduced your error:

```python
import xarray as xr
from rasterio.enums import Resampling
import numpy as np

ds = xr.tutorial.open_dataset('air_temperature').isel(time=0)
ds = ds.rio.write_crs('EPSG:4326')
dst = ds.rio.reproject('EPSG:3857', shape=(250, 250), resampling=Resampling.bilinear, nodata=np.nan)
dst.air.encoding = {}
dst = dst.assign(air=dst.air.expand_dims("time"), time=dst.time.expand_dims("time"))

m = {}
dst.to_zarr(m)
dst.to_zarr(m, append_dim="time")
```

This seems to be due to handling of CF conventions, which might go wrong in the append case: the … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
to_zarr: region not recognised as dataset dimensions 1077079208 | |
1059063397 | https://github.com/pydata/xarray/issues/6069#issuecomment-1059063397 | https://api.github.com/repos/pydata/xarray/issues/6069 | IC_kwDOAMm_X84_IAZl | d70-t 6574622 | 2022-03-04T11:05:07Z | 2022-03-04T11:05:07Z | CONTRIBUTOR | This error is unrelated to region or append writes. The dataset … but still carries encoding-information from … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
to_zarr: region not recognised as dataset dimensions 1077079208 | |
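The comment above attributes the error to encoding carried over from an earlier serialization. A generic way to see and clear such leftovers (a minimal sketch with invented values, not the thread's exact dataset) is to inspect `.encoding` on each variable and reset it:

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"air": (("x",), np.zeros(3))}, coords={"x": [0, 1, 2]})

# Simulate encoding left over from a previous on-disk representation.
ds.air.encoding = {"chunks": (512,), "dtype": "int16"}
print(ds.air.encoding)  # stale entries that can conflict with the next write

# Clear encoding on every variable before re-serializing.
for name in ds.variables:
    ds[name].encoding = {}
```

After the loop, `to_zarr` falls back to freshly chosen defaults instead of the stale settings.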
1059025444 | https://github.com/pydata/xarray/issues/6069#issuecomment-1059025444 | https://api.github.com/repos/pydata/xarray/issues/6069 | IC_kwDOAMm_X84_H3Ik | d70-t 6574622 | 2022-03-04T10:13:40Z | 2022-03-04T10:13:40Z | CONTRIBUTOR | 🤷 can't help any further without a minimal reproducible example here... |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
to_zarr: region not recognised as dataset dimensions 1077079208 | |
1058381922 | https://github.com/pydata/xarray/issues/6069#issuecomment-1058381922 | https://api.github.com/repos/pydata/xarray/issues/6069 | IC_kwDOAMm_X84_FaBi | d70-t 6574622 | 2022-03-03T18:56:13Z | 2022-03-03T18:56:13Z | CONTRIBUTOR | I don't yet know a proper answer, but there are three observations I have:
* The … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
to_zarr: region not recognised as dataset dimensions 1077079208 | |
1052252098 | https://github.com/pydata/xarray/issues/6069#issuecomment-1052252098 | https://api.github.com/repos/pydata/xarray/issues/6069 | IC_kwDOAMm_X84-uBfC | d70-t 6574622 | 2022-02-26T16:07:56Z | 2022-02-26T16:07:56Z | CONTRIBUTOR | While testing a bit further, I found another case which might potentially be dangerous:

```python
# ds is the same as above, but chunksize is {"time": 1, "x": 1}

# once on the coordinator
ds.to_zarr("test.zarr", compute=False, encoding={"time": {"chunks": [1]}, "x": {"chunks": [1]}})

# in parallel
ds.isel(time=slice(0,1), x=slice(0,1)).to_zarr("test.zarr", mode="r+", region={"time": slice(0,1), "x": slice(0,1)})
ds.isel(time=slice(0,1), x=slice(1,2)).to_zarr("test.zarr", mode="r+", region={"time": slice(0,1), "x": slice(1,2)})
ds.isel(time=slice(0,1), x=slice(2,3)).to_zarr("test.zarr", mode="r+", region={"time": slice(0,1), "x": slice(2,3)})
ds.isel(time=slice(1,2), x=slice(0,1)).to_zarr("test.zarr", mode="r+", region={"time": slice(1,2), "x": slice(0,1)})
ds.isel(time=slice(1,2), x=slice(1,2)).to_zarr("test.zarr", mode="r+", region={"time": slice(1,2), "x": slice(1,2)})
ds.isel(time=slice(1,2), x=slice(2,3)).to_zarr("test.zarr", mode="r+", region={"time": slice(1,2), "x": slice(2,3)})
```

This example doesn't produce any error, but the … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
to_zarr: region not recognised as dataset dimensions 1077079208 | |
1052240616 | https://github.com/pydata/xarray/issues/6069#issuecomment-1052240616 | https://api.github.com/repos/pydata/xarray/issues/6069 | IC_kwDOAMm_X84-t-ro | d70-t 6574622 | 2022-02-26T15:58:48Z | 2022-02-26T15:58:48Z | CONTRIBUTOR | I'm trying to picture some usage scenarios based on incrementally adding timesteps to data on a store. I hope these might help to answer the questions from above. In particular, I think that … I'll use the following dataset for demonstration code: … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
to_zarr: region not recognised as dataset dimensions 1077079208 |