
issue_comments

1 row where user = 23300143 sorted by updated_at descending

id: 984014394
html_url: https://github.com/pydata/xarray/issues/6036#issuecomment-984014394
issue_url: https://api.github.com/repos/pydata/xarray/issues/6036
node_id: IC_kwDOAMm_X846pt46
user: DonjetaR (23300143)
created_at: 2021-12-01T20:08:46Z
updated_at: 2021-12-01T20:12:25Z
author_association: NONE

body:

@dcherian thanks for your reply. I know Xarray can't do anything about the Dask computations of the chunks. My question was whether it is possible to save the Dask chunk information in the Zarr metadata so that it is not necessary to recalculate it, i.e. run the `getem()` function from Dask, which takes too long to run and uses too much memory.

The following example runs out of memory on my computer, which has 16 GB of RAM.

```python
import dask.array
import xarray as xr

# One billion (1, 1, 1) Zarr chunks, backed by a single in-memory Dask chunk.
chunks = (1, 1, 1)
ds = xr.Dataset(data_vars={"foo": (("x", "y", "z"),
                                   dask.array.empty((1000, 1000, 1000),
                                                    chunks=(1000, 1000, 1000)))})
ds.to_zarr(store="data", group="ds.zarr", compute=False,
           encoding={"foo": {"chunks": chunks}})
# Re-opening lazily builds a Dask graph with one key per Zarr chunk.
ds_loaded = xr.open_zarr(store="data", group="ds.zarr")
```
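One way to sidestep the open-time graph construction, sketched below, is to open the store without Dask at all and re-chunk coarsely afterwards. This is not the metadata caching asked about above, just a workaround; it assumes the store written in the previous snippet, and the re-chunk sizes are illustrative.

```python
import xarray as xr

# chunks=None returns lazily indexed arrays with no Dask graph, so nothing
# is computed per Zarr chunk at open time.
ds = xr.open_zarr(store="data", group="ds.zarr", chunks=None)

# Re-chunk coarsely: 8 Dask chunks instead of one per Zarr chunk. Reads
# still pass through the tiny (1, 1, 1) chunks on disk, so I/O stays slow.
ds = ds.chunk({"x": 500, "y": 500, "z": 500})
```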

reactions:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: `xarray.open_zarr()` takes too long to lazy load when the data arrays contain a large number of Dask chunks. (1068225524)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
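For reference, a minimal sketch of the query behind the "1 row where user = 23300143 sorted by updated_at descending" view above, using Python's built-in sqlite3 module; the database filename `github.db` is an assumption:

```python
import sqlite3

# "github.db" is a hypothetical name for the SQLite file Datasette serves here.
conn = sqlite3.connect("github.db")

# Comments by user 23300143, most recently updated first.
rows = conn.execute(
    """
    SELECT id, html_url, [user], created_at, updated_at, body
    FROM issue_comments
    WHERE [user] = 23300143
    ORDER BY updated_at DESC
    """
).fetchall()

for row in rows:
    print(row)
```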