
issue_comments


2 rows where issue = 1221393104 and user = 3019665 sorted by updated_at descending



id: 1122811102
html_url: https://github.com/pydata/xarray/pull/6542#issuecomment-1122811102
issue_url: https://api.github.com/repos/pydata/xarray/issues/6542
node_id: IC_kwDOAMm_X85C7Lze
user: jakirkham (3019665)
created_at: 2022-05-10T20:06:06Z
updated_at: 2022-05-10T20:06:06Z
author_association: NONE
body:

> @jakirkham were you thinking a reference to the dask docs for more info on optimal chunk sizing and aligning with storage?

It could make sense to refer to it, or if similar ideas come up here, it may be worth mentioning them in this change.

> or are you suggesting the proposed docs change is too complex?

Not at all.

I was trying to address the lack of documentation on specifying chunks within a zarr array for non-dask arrays/coordinates, but also to cover the weedsy (but common) case of datasets with a mix of dask and in-memory arrays/coords, as in my example. I have been frustrated by zarr stores I've written that end up with a couple dozen array chunks and thousands of coordinate chunks for this reason, but it's definitely a gnarly topic to cover concisely :P

If there's anything you need help with or would like to discuss, please don't hesitate to raise a Zarr issue. We also enabled GH discussions over there so if that fits better feel free to use that 🙂
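The dozen-array-chunks-vs-thousands-of-coordinate-chunks mismatch described above can be made concrete with a quick chunk count; the helper and the shapes/chunk sizes below are illustrative, not taken from the thread:

```python
import math

def n_chunks(shape, chunks):
    """Total number of zarr chunks for an array of `shape` stored with `chunks`."""
    return math.prod(math.ceil(s / c) for s, c in zip(shape, chunks))

# A dask-backed data variable in large chunks: only a handful of chunks.
print(n_chunks((2000, 2000), (1000, 1000)))  # 4

# An in-memory coordinate of the same length written with a tiny chunk size
# becomes thousands of chunks unless its chunks are set explicitly.
print(n_chunks((2000,), (1,)))     # 2000
print(n_chunks((2000,), (2000,)))  # 1
```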

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: docs on specifying chunks in to_zarr encoding arg (1221393104)
id: 1121430268
html_url: https://github.com/pydata/xarray/pull/6542#issuecomment-1121430268
issue_url: https://api.github.com/repos/pydata/xarray/issues/6542
node_id: IC_kwDOAMm_X85C16r8
user: jakirkham (3019665)
created_at: 2022-05-09T18:23:03Z
updated_at: 2022-05-09T18:23:03Z
author_association: NONE
body:

FWIW there's a similar doc page about chunk size in Dask that may be worth borrowing from

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 1
}
issue: docs on specifying chunks in to_zarr encoding arg (1221393104)
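As a sketch of the `encoding` argument the issue title refers to: `to_zarr` accepts a per-variable mapping of zarr settings, and a `"chunks"` entry can pin the chunking of in-memory coordinates. The variable names and chunk sizes here are hypothetical, and the actual write is left commented out:

```python
# Hypothetical variable names and chunk sizes; the shape of the mapping is the
# point. A dask-backed variable normally inherits its dask chunks, while an
# in-memory coordinate can end up with many small chunks (as described in the
# thread) unless its chunks are pinned explicitly.
encoding = {
    "temperature": {"chunks": (365, 100, 100)},  # dask-backed data variable
    "time": {"chunks": (3650,)},                 # in-memory coord: one big chunk
}
# ds.to_zarr("out.zarr", encoding=encoding)  # requires xarray + zarr; not run here
print(encoding["time"]["chunks"])  # (3650,)
```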

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
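The page's row filter (`issue = 1221393104 and user = 3019665`, sorted by `updated_at` descending) can be reproduced against this schema with plain SQLite. A minimal sketch, with the foreign-key `REFERENCES` clauses dropped so it stands alone, and only the filtered/sorted columns of the two comments above inserted:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
   [author_association] TEXT, [body] TEXT, [reactions] TEXT,
   [performed_via_github_app] TEXT, [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# The two comments shown above, reduced to id, user, updated_at, issue.
rows = [
    (1122811102, 3019665, "2022-05-10T20:06:06Z", 1221393104),
    (1121430268, 3019665, "2022-05-09T18:23:03Z", 1221393104),
]
conn.executemany(
    "INSERT INTO issue_comments (id, user, updated_at, issue) VALUES (?, ?, ?, ?)",
    rows,
)

# The query behind "2 rows where issue = 1221393104 and user = 3019665
# sorted by updated_at descending".
result = conn.execute(
    "SELECT id FROM issue_comments "
    "WHERE issue = 1221393104 AND user = 3019665 "
    "ORDER BY updated_at DESC"
).fetchall()
print(result)  # [(1122811102,), (1121430268,)]
```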
Powered by Datasette · About: xarray-datasette