issue_comments


5 rows where author_association = "CONTRIBUTOR" and issue = 617476316 sorted by updated_at descending

Columns: id · html_url · issue_url · node_id · user · created_at · updated_at · author_association · body · reactions · performed_via_github_app · issue
628797255 https://github.com/pydata/xarray/issues/4055#issuecomment-628797255 https://api.github.com/repos/pydata/xarray/issues/4055 MDEyOklzc3VlQ29tbWVudDYyODc5NzI1NQ== AndrewILWilliams 56925856 2020-05-14T18:01:45Z 2020-05-14T18:01:45Z CONTRIBUTOR

I also thought that; after the dask error message, it's pretty easy to look at the dataset and check which dimension is the problem.

In general though, is that the kind of layout you'd suggest for catching and re-raising errors? Using raise Exception()?
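
For reference, a minimal sketch of the usual catch-and-re-raise layout, assuming only that var has a .chunk method (as xarray variables do); chaining with "from err" preserves the original traceback, and a specific built-in exception is generally preferred to a bare Exception:

```python
# A sketch of the general pattern, not xarray's actual implementation.
def chunk_with_context(var, chunks):
    try:
        return var.chunk(chunks)
    except NotImplementedError as err:
        # "from err" chains the exceptions, so the original dask
        # traceback is preserved alongside the clearer message.
        raise ValueError(
            "Automatic chunking failed; object-dtype arrays "
            "(e.g. cftime coordinates) cannot be auto-chunked."
        ) from err
```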

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic chunking of arrays ? 617476316
628616379 https://github.com/pydata/xarray/issues/4055#issuecomment-628616379 https://api.github.com/repos/pydata/xarray/issues/4055 MDEyOklzc3VlQ29tbWVudDYyODYxNjM3OQ== AndrewILWilliams 56925856 2020-05-14T12:57:21Z 2020-05-14T17:50:31Z CONTRIBUTOR

Nice, that's neater! Would this work in the maybe_chunk() call? Sorry about the basic questions!

```python
def maybe_chunk(name, var, chunks):
    # selkeys, tokenize, token, name_prefix, and lock come from the
    # enclosing scope in xarray's Dataset.chunk implementation.
    chunks = selkeys(chunks, var.dims)
    if not chunks:
        chunks = None
    if var.ndim > 0:
        # when rechunking by different amounts, make sure dask names change
        # by providing chunks as an input to tokenize.
        # subtle bugs result otherwise. see GH3350
        token2 = tokenize(name, token if token else var._data, chunks)
        name2 = f"{name_prefix}{name}-{token2}"
        try:
            return var.chunk(chunks, name=name2, lock=lock)
        except NotImplementedError as err:
            raise Exception(
                "Automatic chunking fails for object arrays. "
                "These include cftime DataArrays."
            ) from err
    else:
        return var
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic chunking of arrays ? 617476316
628513777 https://github.com/pydata/xarray/issues/4055#issuecomment-628513777 https://api.github.com/repos/pydata/xarray/issues/4055 MDEyOklzc3VlQ29tbWVudDYyODUxMzc3Nw== AndrewILWilliams 56925856 2020-05-14T09:26:24Z 2020-05-14T09:26:24Z CONTRIBUTOR

Also, the contributing docs have been super clear so far! Thanks! :)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic chunking of arrays ? 617476316
628513443 https://github.com/pydata/xarray/issues/4055#issuecomment-628513443 https://api.github.com/repos/pydata/xarray/issues/4055 MDEyOklzc3VlQ29tbWVudDYyODUxMzQ0Mw== AndrewILWilliams 56925856 2020-05-14T09:25:48Z 2020-05-14T09:25:48Z CONTRIBUTOR

Cheers! Just had a look; is it as simple as changing this line to the following, @dcherian?

```python
if isinstance(chunks, Number) or chunks == "auto":
    chunks = dict.fromkeys(self.dims, chunks)
```
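
For illustration, here is what that normalization does for a hypothetical set of dimensions; both a plain number and the string "auto" get fanned out to every dimension:

```python
from numbers import Number

dims = ("time", "lat", "lon")  # hypothetical dimension names
for chunks in (5, "auto"):
    if isinstance(chunks, Number) or chunks == "auto":
        chunks = dict.fromkeys(dims, chunks)
    print(chunks)
# {'time': 5, 'lat': 5, 'lon': 5}
# {'time': 'auto', 'lat': 'auto', 'lon': 'auto'}
```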

This seems to work fine in a lot of cases, except that automatic chunking isn't implemented for object dtypes at the moment, so it fails if you pass a cftime coordinate, for example.
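
A minimal reproduction of that failure with dask alone (the exact message may vary between dask versions): "auto" chunking needs a size in bytes per element, which object dtype cannot provide.

```python
import numpy as np
import dask.array as da

# Object dtype has no fixed itemsize, so dask cannot estimate chunk
# sizes in bytes and refuses "auto" chunking.
obj = np.array(["a", "bb", "ccc"], dtype=object)
try:
    da.from_array(obj, chunks="auto")
except NotImplementedError as err:
    print("NotImplementedError:", err)
```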

One option would be to apply self = xr.decode_cf(self) automatically if the input dataset uses cftime? Or we could just raise an error.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic chunking of arrays ? 617476316
628212516 https://github.com/pydata/xarray/issues/4055#issuecomment-628212516 https://api.github.com/repos/pydata/xarray/issues/4055 MDEyOklzc3VlQ29tbWVudDYyODIxMjUxNg== AndrewILWilliams 56925856 2020-05-13T19:56:34Z 2020-05-13T19:56:34Z CONTRIBUTOR

Oh OK, I didn't know about this; I'll take a look and read the contributing docs tomorrow! It'll be my first PR, so I may need a bit of hand-holding when it comes to tests. Willing to try, though!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic chunking of arrays ? 617476316

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
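
Given that schema, the query behind this page can be reproduced with Python's sqlite3 module. This is a sketch assuming a local copy of the database saved as github.db (the file name is an assumption); the filter and sort values are the ones shown at the top of the page:

```python
import sqlite3

# Reproduce this page's query locally; "github.db" is an assumed file name.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = ? AND issue = ?
    ORDER BY updated_at DESC
    """,
    ("CONTRIBUTOR", 617476316),
).fetchall()
for comment_id, created, updated, body in rows:
    print(comment_id, updated, body[:40])
```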