issue_comments


6 rows where author_association = "MEMBER" and issue = 617476316 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at author_association body reactions performed_via_github_app issue
628816425 https://github.com/pydata/xarray/issues/4055#issuecomment-628816425 https://api.github.com/repos/pydata/xarray/issues/4055 MDEyOklzc3VlQ29tbWVudDYyODgxNjQyNQ== shoyer 1217238 2020-05-14T18:37:40Z 2020-05-14T18:37:40Z MEMBER

If we think we can improve an error message by adding additional context, the right solution is to use raise Exception(...) from original_error: https://stackoverflow.com/a/16414892/809705

On the other hand, if xarray doesn't have anything more to add on top of the original error message, it is best not to add any wrapper at all. Users will just see the original error from dask.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic chunking of arrays ? 617476316
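
A minimal sketch of the raise ... from pattern described in the comment above. compute_chunks is a hypothetical stand-in for whatever dask call fails; only the chaining pattern itself comes from the comment:

def compute_chunks(array):
    # Hypothetical stand-in for a dask call that rejects object dtypes.
    raise NotImplementedError("Can not use auto rechunking with object dtype.")

try:
    compute_chunks(None)
except NotImplementedError as original_error:
    # Chaining with "from" keeps the dask error attached as __cause__, so
    # users see the added context and the original traceback together.
    raise ValueError("automatic chunking failed; see the dask error above") from original_error
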
628747933 https://github.com/pydata/xarray/issues/4055#issuecomment-628747933 https://api.github.com/repos/pydata/xarray/issues/4055 MDEyOklzc3VlQ29tbWVudDYyODc0NzkzMw== shoyer 1217238 2020-05-14T16:31:39Z 2020-05-14T16:31:39Z MEMBER

The error message from dask is already pretty descriptive: NotImplementedError: Can not use auto rechunking with object dtype. We are unable to estimate the size in bytes of object data

I don't think we have much to add on top of that?

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic chunking of arrays ? 617476316
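
The dask error quoted above can be reproduced along these lines, assuming dask and numpy are installed (the alias dsa avoids clashing with the da used for DataArrays in this thread):

import numpy as np
import dask.array as dsa

# Object-dtype data (what a cftime coordinate is backed by): dask cannot
# estimate its size in bytes, so chunks="auto" raises NotImplementedError.
values = np.array([1, "two", None], dtype=object)
try:
    dsa.from_array(values, chunks="auto")
except NotImplementedError as err:
    print(err)  # Can not use auto rechunking with object dtype. ...
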
628582552 https://github.com/pydata/xarray/issues/4055#issuecomment-628582552 https://api.github.com/repos/pydata/xarray/issues/4055 MDEyOklzc3VlQ29tbWVudDYyODU4MjU1Mg== dcherian 2448579 2020-05-14T11:51:21Z 2020-05-14T11:51:21Z MEMBER

is_scalar(chunks) might be the appropriate condition. is_scalar is already imported from .utils in dataset.py

This seems to work fine in a lot of cases, except automatic chunking isn't implemented for object dtypes at the moment, so it fails if you pass a cftime coordinate, for example.

Can we catch this error and re-raise specifying "automatic chunking fails for object arrays. These include cftime DataArrays" or something like that?

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic chunking of arrays ? 617476316
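
A rough illustration of the is_scalar condition proposed above. normalize_chunks_arg is a hypothetical helper, but is_scalar is the real utility named in the comment:

from xarray.core.utils import is_scalar

def normalize_chunks_arg(chunks, dims):
    # is_scalar is True for numbers and for strings such as "auto", so a
    # bare chunk("auto") would expand to a mapping over every dimension.
    if is_scalar(chunks):
        return dict.fromkeys(dims, chunks)
    return chunks

print(normalize_chunks_arg("auto", ("x", "y")))  # {'x': 'auto', 'y': 'auto'}
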
628319690 https://github.com/pydata/xarray/issues/4055#issuecomment-628319690 https://api.github.com/repos/pydata/xarray/issues/4055 MDEyOklzc3VlQ29tbWVudDYyODMxOTY5MA== shoyer 1217238 2020-05-14T00:43:22Z 2020-05-14T00:43:22Z MEMBER

Agreed, this would be very welcome!

chunks='auto' isn't supported only because xarray's support for dask predates it :)

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic chunking of arrays ? 617476316
628231949 https://github.com/pydata/xarray/issues/4055#issuecomment-628231949 https://api.github.com/repos/pydata/xarray/issues/4055 MDEyOklzc3VlQ29tbWVudDYyODIzMTk0OQ== dcherian 2448579 2020-05-13T20:35:49Z 2020-05-13T20:35:49Z MEMBER

Awesome! Please see https://xarray.pydata.org/en/stable/contributing.html for docs on contributing

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic chunking of arrays ? 617476316
628065564 https://github.com/pydata/xarray/issues/4055#issuecomment-628065564 https://api.github.com/repos/pydata/xarray/issues/4055 MDEyOklzc3VlQ29tbWVudDYyODA2NTU2NA== dcherian 2448579 2020-05-13T15:26:10Z 2020-05-13T15:26:49Z MEMBER

So da.chunk({dim_name: "auto"}) works but da.chunk("auto") does not. Fixing the latter is relatively easy; we just need to update the condition here: https://github.com/pydata/xarray/blob/bd84186acbd84bd386134a5b60111596cee2d8ec/xarray/core/dataset.py#L1736-L1737

A PR would be very welcome if you have the time, @AndrewWilliams3142

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic chunking of arrays ? 617476316
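
The two call forms contrasted above, as a usage sketch (assuming xarray with dask installed; in releases that include the fix, both forms succeed):

import numpy as np
import xarray as xr

da = xr.DataArray(np.zeros((4, 6)), dims=("x", "y"))

da.chunk({"x": "auto"})  # worked already: per-dimension mapping
da.chunk("auto")         # the form that failed before the condition was updated
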

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);