issue_comments

6 rows where issue = 859577556 sorted by updated_at descending

Issue: multiple arrays with common nan-shaped dimension · 6 comments

dcherian (MEMBER) · 2021-04-24T02:44:34Z · https://github.com/pydata/xarray/issues/5168#issuecomment-826022291

Related: #2801

chrisroat (CONTRIBUTOR) · 2021-04-16T22:38:08Z · https://github.com/pydata/xarray/issues/5168#issuecomment-821655160

It may run even deeper -- there seem to be several checks on dimension sizes that would need special casing. Even simply doing a variable[dim] lookup fails!
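Those size checks trip over the nans that dask uses for unknown chunk sizes. A minimal dask-only sketch (no xarray involved, assuming dask and numpy are installed) of where the nans come from:

```python
import numpy as np
import dask.array as da

# Boolean masking produces an array whose chunk sizes -- and hence
# its shape -- are unknown until the mask is computed.
x = da.from_array(np.arange(10), chunks=5)
y = x[x > 4]

assert np.isnan(y.shape[0])               # shape is (nan,)
assert all(np.isnan(c) for c in y.chunks[0])  # every chunk size is nan

# Any code path that compares or looks up dimension sizes, as
# xarray's internals do, then has to special-case these nans.
```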

max-sixty (MEMBER) · 2021-04-16T15:04:17Z (edited 2021-04-16T17:43:22Z) · https://github.com/pydata/xarray/issues/5168#issuecomment-821241108

~Currently xarray requires known dimension sizes. Unless anyone has any insight about its interaction with dask that I'm not familiar with?~ Edit: better informed views below

dcherian (MEMBER) · 2021-04-16T16:28:09Z · https://github.com/pydata/xarray/issues/5168#issuecomment-821294184

I'm not sure about writing to zarr, but it seems possible to support nan-sized dimensions when unindexed. We could skip alignment when the dimension is nan-sized for all variables in an Xarray object.

~/kitchen_sync/xarray/xarray/core/alignment.py in align(join, copy, indexes, exclude, fill_value, *objects)
    283         for dim in obj.dims:
    284             if dim not in exclude:
--> 285                 all_coords[dim].append(obj.coords[dim])
    286                 try:
    287                     index = obj.indexes[dim]

For alignment, it may be as easy as adding the name of the nan-sized dimension to exclude.
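The existing exclude behaviour can be sketched with today's xr.align on ordinary indexed dimensions; extending it to nan-sized dimensions is the proposal here, not current behaviour (assumes xarray and numpy are installed):

```python
import numpy as np
import xarray as xr

# Two arrays whose "x" coordinates are disjoint.
a = xr.DataArray([1, 2, 3], dims="x", coords={"x": [0, 1, 2]})
b = xr.DataArray([4, 5], dims="x", coords={"x": [10, 11]})

# The default inner join shrinks both to the (empty) intersection...
a2, b2 = xr.align(a, b)
assert a2.sizes["x"] == 0

# ...but excluding "x" skips alignment on that dimension entirely,
# returning both inputs with their original sizes.
a3, b3 = xr.align(a, b, exclude=["x"])
assert a3.sizes["x"] == 3 and b3.sizes["x"] == 2
```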

chrisroat (CONTRIBUTOR) · 2021-04-16T16:13:09Z · https://github.com/pydata/xarray/issues/5168#issuecomment-821285344

There seems to be some support, but now you have me worried. I have used xarray mainly for labelling, not for much computation -- I'm dropping into dask because I need map_overlap.

FWIW, calling dask.compute(arr) works with unknown chunk sizes, but now I see arr.compute() does not. This fooled me into thinking I could use unknown chunk sizes. Now I see that writing to zarr does not work, either. This might torpedo my current design.

I see the compute_chunk_sizes method, but that seems to trigger computation. I'm running on a dask cluster -- is there anything I can do to salvage the pattern arr_with_nan_shape.to_dataset().to_zarr(compute=False) (with or without xarray)?
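One salvage pattern under these constraints is to resolve the unknown sizes explicitly before writing: compute_chunk_sizes does one pass to pin down the shape, while the data itself stays lazy. A sketch with dask.array (the commented to_zarr line is illustrative, not tested):

```python
import numpy as np
import dask.array as da

x = da.from_array(np.arange(100), chunks=10)
y = x[x % 7 == 0]          # unknown chunk sizes: shape is (nan,)
assert np.isnan(y.shape[0])

# One (possibly distributed) pass resolves the chunk sizes in place;
# the values themselves are not materialised and remain lazy.
y = y.compute_chunk_sizes()
assert y.shape == (15,)

# With known chunks the usual lazy pattern works again, e.g.:
#   xr.DataArray(y, dims="t").to_dataset(name="v").to_zarr(store, compute=False)
```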

keewis (MEMBER) · 2021-04-16T15:30:59Z · https://github.com/pydata/xarray/issues/5168#issuecomment-821258307

This also came up in #4659 and dask/dask#6058. In #4659 we settled for computing the chunk sizes for now, since supporting unknown chunk sizes seems like a bigger change.


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 14.91ms · About: xarray-datasette