
issue_comments


4 rows where author_association = "CONTRIBUTOR", issue = 340192831 ("can't store zarr after open_zarr and isel"), and user = 11750960 (apatlpo), sorted by updated_at descending


404873326 · apatlpo · CONTRIBUTOR · created 2018-07-13T15:48:46Z
https://github.com/pydata/xarray/issues/2278#issuecomment-404873326

Could you please be more specific about where this is done for netCDF?

404503718 · apatlpo · CONTRIBUTOR · created 2018-07-12T12:59:44Z, updated 2018-07-12T13:00:01Z
https://github.com/pydata/xarray/issues/2278#issuecomment-404503718

Note that there is also a fix for case 2: simply `del ds['v'].encoding['chunks']` prior to storing the data.

404503025 · apatlpo · CONTRIBUTOR · created 2018-07-12T12:57:17Z, updated 2018-07-12T12:57:35Z
https://github.com/pydata/xarray/issues/2278#issuecomment-404503025

With the same setup I get another error message, which may or may not reflect the same issue; maybe you can tell me. I am posting it because the error message is different.

Starting from the same dataset:

```python
nx, ny, nt = 32, 32, 64
ds = xr.Dataset({}, coords={'x': np.arange(nx), 'y': np.arange(ny), 't': np.arange(nt)})
ds = ds.assign(v=ds.t*np.cos(np.pi/180./100*ds.x)*np.cos(np.pi/180./50*ds.y))
ds = ds.chunk({'t': 1, 'x': nx/2, 'y': ny/2})
ds.to_zarr('data.zarr', mode='w')
```

Case 1 works fine:

```python
ds = ds.chunk({'t': nt, 'x': nx/4, 'y': ny/4})
ds.to_zarr('data_rechunked.zarr', mode='w')
```

Case 2 breaks:

```python
ds = xr.open_zarr('data.zarr')
ds = ds.chunk({'t': nt, 'x': nx/4, 'y': ny/4})
ds.to_zarr('data_rechunked.zarr', mode='w')
```

with the following error message:

```
...
NotImplementedError: Specified zarr chunks (1, 16, 16) would overlap multiple dask chunks ((64,), (8, 8, 8, 8), (8, 8, 8, 8)). This is not implemented in xarray yet. Consider rechunking the data using `chunk()` or specifying different chunks in encoding.
```

404415760 · apatlpo · CONTRIBUTOR · created 2018-07-12T07:25:36Z
https://github.com/pydata/xarray/issues/2278#issuecomment-404415760

Thanks for the workaround suggestion. Apparently you also need to delete chunks for the `t` singleton coordinate, though. The workaround ends up looking like:

```python
ds = xr.open_zarr('data.zarr')
del ds['v'].encoding['chunks']
del ds['t'].encoding['chunks']
ds.isel(t=0).to_zarr('data_t0.zarr', mode='w')
```

Any idea how serious this is and/or where it's coming from?



Table schema:

```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
```
Powered by Datasette · Queries took 650.764ms · About: xarray-datasette