issue_comments

3 rows where issue = 614785886 sorted by updated_at descending

id: 1113981994
html_url: https://github.com/pydata/xarray/issues/4046#issuecomment-1113981994
issue_url: https://api.github.com/repos/pydata/xarray/issues/4046
node_id: IC_kwDOAMm_X85CZgQq
user: stale[bot] (26384082)
created_at: 2022-04-30T12:37:47Z
updated_at: 2022-04-30T12:37:47Z
author_association: NONE
body:

In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity.

If this issue remains relevant, please comment here or remove the stale label; otherwise it will be marked as closed automatically.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: automatic chunking of zarr archive (614785886)
id: 625893739
html_url: https://github.com/pydata/xarray/issues/4046#issuecomment-625893739
issue_url: https://api.github.com/repos/pydata/xarray/issues/4046
node_id: MDEyOklzc3VlQ29tbWVudDYyNTg5MzczOQ==
user: apatlpo (11750960)
created_at: 2020-05-08T16:17:52Z
updated_at: 2020-05-08T16:17:52Z
author_association: CONTRIBUTOR
body:

Thanks for this speedy reply @rabernat!

Improving the docs is still within my reach (I hope), and I will give it a shot. Could this improvement go in the description of the encoding parameter of xarray.Dataset.to_zarr?

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: automatic chunking of zarr archive (614785886)
id: 625857192
html_url: https://github.com/pydata/xarray/issues/4046#issuecomment-625857192
issue_url: https://api.github.com/repos/pydata/xarray/issues/4046
node_id: MDEyOklzc3VlQ29tbWVudDYyNTg1NzE5Mg==
user: rabernat (1197350)
created_at: 2020-05-08T14:59:43Z
updated_at: 2020-05-08T14:59:43Z
author_association: MEMBER
body:

Thanks for raising this useful issue.

There are two ways to control Zarr chunks:
  • Specify chunks in encoding (always takes precedence)
  • Determine chunks based on Dask chunks

If neither of these is present, Xarray creates the Zarr arrays with no chunks specified. In this case, Zarr will choose the chunks automatically for you. This behavior is described in the Zarr docs: https://zarr.readthedocs.io/en/stable/tutorial.html#chunk-size-and-shape

"If you are feeling lazy, you can let Zarr guess a chunk shape for your data by providing chunks=True, although please note that the algorithm for guessing a chunk shape is based on simple heuristics and may be far from optimal."

You can override this default per variable by specifying a single global chunk in encoding:

    ds.foo.encoding['chunks'] = -1

or, at write time:

    ds.to_zarr('test.zarr', mode='w', encoding={'foo': {'chunks': -1}})

I agree that none of this is described well in the Xarray docs. A PR to improve the docs would be most welcome. 😉

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: automatic chunking of zarr archive (614785886)
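
To make the two approaches described in the comment above concrete, here is a minimal sketch (not part of the original thread); it assumes xarray, dask, and zarr are installed, and the variable name, store paths, and toy data are illustrative only:

import numpy as np
import xarray as xr

# Toy dataset; "foo" and the store paths below are made-up examples.
ds = xr.Dataset({"foo": ("x", np.arange(1000))})

# Option 1: chunk with Dask first; to_zarr derives the Zarr chunks from
# the Dask chunks (here, 4 chunks of 250 along "x").
ds.chunk({"x": 250}).to_zarr("dask_chunked.zarr", mode="w")

# Option 2: set chunks explicitly in encoding, which takes precedence;
# -1 means a single chunk spanning the whole variable.
ds.to_zarr("encoded.zarr", mode="w", encoding={"foo": {"chunks": -1}})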


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
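
As a side note, the query behind this page ("3 rows where issue = 614785886 sorted by updated_at descending") can be reproduced against a local SQLite copy of this table. The sketch below is hypothetical; the database filename github.db is an assumption:

import sqlite3

# Hypothetical local copy of the database behind this Datasette instance.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    "select id, user, created_at, updated_at, author_association "
    "from issue_comments where issue = ? order by updated_at desc",
    (614785886,),
).fetchall()
for row in rows:
    print(row)
conn.close()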