issue_comments

1 row where issue = 717410970 and user = 23487320 sorted by updated_at descending

Comment 721466404 by weiji14 (user 23487320) · CONTRIBUTOR · created 2020-11-04T01:47:30Z · updated 2020-11-04T01:49:39Z
https://github.com/pydata/xarray/issues/4496#issuecomment-721466404
On issue: Flexible backends - Harmonise zarr chunking with other backends chunking (717410970)

Just a general comment on the `xr.open_dataset(engine="zarr")` part: I'd prefer to keep or reduce the number of `chunks=` options (i.e. Option 1) rather than add another `chunks="encoded"` option.

For those who are confused, this is the current state of `xr.open_mfdataset` (correct me if I'm wrong):

| :arrow_down: engine \ chunks :arrow_right: | None (default) | 'auto' | {} | -1 |
|---|---|---|---|---|
| None (i.e. default for NetCDF) | np.ndarray | dask.Array (produces original chunks as in NetCDF obj??) | dask.Array (rechunked into 1 chunk) | dask.Array (rechunked into 1 chunk) |
| zarr | np.ndarray | dask.Array (original chunks as in Zarr obj) | dask.Array (original chunks as in Zarr obj) | dask.Array (rechunked into 1 chunk + UserWarning) |
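The `None` and `{}` columns of the table can be checked locally without a remote dataset. Here is a minimal sketch of my own (the `example.zarr` path, chunk sizes, and variable name are illustrative, not from the thread) that writes a tiny zarr store and reopens it both ways:

```python
import numpy as np
import xarray as xr

# Write a small dataset to a throwaway zarr store with an explicit
# on-disk chunk layout of (5, 4).
ds = xr.Dataset({"tas": (("time", "x"), np.random.rand(10, 4))})
ds.to_zarr("example.zarr", mode="w", encoding={"tas": {"chunks": (5, 4)}})

# chunks=None (the default): variables load as plain numpy arrays.
opened = xr.open_dataset("example.zarr", engine="zarr")
print(type(opened.tas.data))  # <class 'numpy.ndarray'>

# chunks={}: variables become dask arrays that keep the on-disk
# zarr chunking instead of being rechunked into one big chunk.
opened = xr.open_dataset("example.zarr", engine="zarr", chunks={})
print(opened.tas.data.chunksize)  # (5, 4)
```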

Sample code to test (run in a Jupyter notebook to see the dask chunk visual):

```python
import xarray as xr
import fsspec

# Opening a NetCDF dataset over OPeNDAP
dataset: xr.Dataset = xr.open_dataset(
    "http://thredds.ucar.edu/thredds/dodsC/grib/NCEP/HRRR/CONUS_2p5km/Best",
    chunks={},
)
dataset.Temperature_height_above_ground.data

# Opening a Zarr store from Google Cloud Storage
zstore = fsspec.get_mapper(
    url="gs://cmip6/CMIP/NCAR/CESM2/historical/r9i1p1f1/Amon/tas/gn/"
)
dataset: xr.Dataset = xr.open_dataset(
    filename_or_obj=zstore,
    engine="zarr",
    chunks={},
    backend_kwargs=dict(consolidated=True),
)
dataset.tas.data
```
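To also exercise the -1 column of the table for zarr, here is a variation on the same snippet (my own sketch; it assumes the UserWarning noted in the table is raised through the standard warnings machinery):

```python
import warnings

import fsspec
import xarray as xr

zstore = fsspec.get_mapper(
    url="gs://cmip6/CMIP/NCAR/CESM2/historical/r9i1p1f1/Amon/tas/gn/"
)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    dataset: xr.Dataset = xr.open_dataset(
        filename_or_obj=zstore,
        engine="zarr",
        chunks=-1,
        backend_kwargs=dict(consolidated=True),
    )
print([str(w.message) for w in caught])  # expect the rechunking UserWarning
print(dataset.tas.data.npartitions)      # 1: collapsed into a single dask chunk
```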

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
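Given that schema, the single row rendered above is a filter on the two indexed columns. A sketch of the equivalent lookup from Python (github.db is a hypothetical filename for a local copy of the database; Datasette generates its own SQL):

```python
import sqlite3

# Hypothetical local copy of the database behind this page.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, [user], created_at, updated_at, author_association
    FROM issue_comments
    WHERE issue = ? AND [user] = ?
    ORDER BY updated_at DESC
    """,
    (717410970, 23487320),
).fetchall()
print(rows)  # expect the single comment row shown above
```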