issue_comments


3 rows where issue = 1318369110 sorted by updated_at descending


Issue: xarray.DataArray.str.cat() doesn't work on chunked data (3 comments)
mathause (MEMBER) · 2022-07-27T08:22:25Z
https://github.com/pydata/xarray/issues/6828#issuecomment-1196413171

Yes good point - just calling compute may be the better solution.

Reactions: none
tbloch1 (NONE) · 2022-07-27T08:19:28Z
https://github.com/pydata/xarray/issues/6828#issuecomment-1196410247

Thanks for the workaround @mathause!

Is there a benefit to your approach, rather than calling compute() on each DataArray? It seems like calling compute() twice is faster for the MVCE example (but maybe it won't scale that way).

But either way, it would be nice if the function raised a warning or error when given dask arrays!
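(For reference, the compute-first alternative discussed here can be sketched as follows. `dac` is a hypothetical stand-in for the issue's chunked array, since the original MVCE is not reproduced in this thread.)

```python
import numpy as np
import xarray as xr

# Hypothetical small chunked DataArray standing in for the issue's `dac`
dac = xr.DataArray(np.array(["foo", "bar"]), dims="x").chunk({"x": 1})

# Compute each operand first, then concatenate eagerly in memory,
# sidestepping dask's dtype/length inference entirely
result = dac.compute().str.cat(dac.compute(), sep="--")
```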

Reactions: none
mathause (MEMBER) · 2022-07-26T17:02:50Z
https://github.com/pydata/xarray/issues/6828#issuecomment-1195744976

Thanks for your report. I think the issue is that dask cannot correctly infer the dtype of the result, or at least not its length (maybe because it does not do value-based casting? not sure).

As a workaround you could do an intermediate cast to an object:

```python
dac.astype(object).str.cat(dac, sep='--').astype("U").compute()
```
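(The underlying dtype problem can be reproduced with plain NumPy, independent of dask: forcing a longer concatenated string back into the original fixed-width unicode dtype silently truncates, while a round-trip through object dtype preserves the full result. A standalone sketch, not the issue's actual MVCE:)

```python
import numpy as np

fixed = np.array(["foo", "bar"])  # dtype '<U3' (3-character fixed width)

# Casting the concatenated result back to the too-narrow U3 dtype
# silently truncates each string to 3 characters
truncated = np.char.add(np.char.add(fixed, "--"), fixed).astype(fixed.dtype)

# Round-tripping through object dtype keeps the full strings;
# the final .astype("U") picks a wide-enough fixed width ('<U8')
obj = fixed.astype(object)
full = (obj + "--" + obj).astype("U")
```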

Reactions: none


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · About: xarray-datasette