issue_comments

2 rows where issue = 1318826485 sorted by updated_at descending


id: 1215321340
html_url: https://github.com/pydata/xarray/issues/6835#issuecomment-1215321340
issue_url: https://api.github.com/repos/pydata/xarray/issues/6835
node_id: IC_kwDOAMm_X85IcFT8
user: dcherian (2448579)
created_at: 2022-08-15T16:35:35Z
updated_at: 2022-08-15T16:35:35Z
author_association: MEMBER
body:

I like the idea of just passing tuples through and letting the index deal with it. Just as with a MultiIndex, there may be other cases where this makes sense.

For the current PandasIndex maybe we can raise a nicer error in .sel?
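
A rough sketch of the kind of friendlier check this suggests (added for illustration; the helper and its message are hypothetical, not xarray's actual `PandasIndex.sel` code):

```python
# Hypothetical helper, not real xarray code: the sort of check that
# PandasIndex.sel could run so a tuple label fails with a clear message.
def check_label_not_tuple(label):
    if isinstance(label, tuple):
        raise TypeError(
            "cannot select with a tuple on a plain PandasIndex; "
            "use a list to select several labels, or a multi-index "
            "if the tuple is meant as a single key"
        )
    return label
```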

reactions:
{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Using a tuple as a sequence in DataArray.sel no longer supported? (1318826485)

id: 1199194614
html_url: https://github.com/pydata/xarray/issues/6835#issuecomment-1199194614
issue_url: https://api.github.com/repos/pydata/xarray/issues/6835
node_id: IC_kwDOAMm_X85HekH2
user: benbovy (4160723)
created_at: 2022-07-29T11:59:40Z
updated_at: 2022-07-29T11:59:40Z
author_association: MEMBER
body:

Thanks for the report @momchil-flex. That's definitely a regression.
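
A minimal sketch of the kind of call the report describes (added for illustration; the data and labels here are assumptions), assuming a plain, single-level index:

```python
import xarray as xr

da = xr.DataArray(range(3), dims="x", coords={"x": ["a", "b", "c"]})

# Older releases interpreted the tuple like a list of labels; the report
# is that recent releases no longer accept it.
da.sel(x=["a", "b"])  # list of labels: still supported
da.sel(x=("a", "b"))  # tuple of labels: the case in question
```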

However, I wonder what we should do: deprecate interpreting tuples as sequences and always treat them as "scalar" values, or continue interpreting them differently depending on the case?

For example, tuple indexer values were (and still are) assumed to be single-element values when selecting on a dimension coordinate with a multi-index (although the multi-index dimension coordinate might eventually be deprecated in xarray):

```python
import xarray as xr

da = xr.DataArray(
    data=range(3),
    dims="x",
    coords={"a": ("x", ["a", "a", "c"]), "b": ("x", [0, 1, 2])},
).set_index(x=["a", "b"])

da
# <xarray.DataArray (x: 3)>
# array([0, 1, 2])
# Coordinates:
#   * x        (x) object MultiIndex
#   * a        (x) <U1 'a' 'a' 'c'
#   * b        (x) int64 0 1 2

da.sel(x=("a", 1))
# <xarray.DataArray ()>
# array(1)
# Coordinates:
#     x        object ('a', 1)
#     a        <U1 'a'
#     b        int64 1
```

Pros of always treating a tuple as a 1-element indexer value:

  • Clearer
  • Fewer special cases to maintain internally in Xarray

Cons:

  • With flexible indexes, Xarray currently just passes the indexers to the corresponding (custom) indexes, leaving it to those indexes to process them as they want. Although we might have some control over the behavior of the PandasIndex and PandasMultiIndex built into Xarray, we have no control over 3rd-party indexes (see the sketch after this list), unless we somehow formalize the semantics of the indexer values passed to .sel(). That could be challenging, as there can be many kinds of indexers (scalar types, tuples, lists, slices, numpy arrays, xarray Variable or DataArray objects, etc.).
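
A hypothetical, duck-typed sketch of this point (added for illustration; not a real xarray `Index` subclass): each custom index is free to give tuples its own meaning, so Xarray cannot assume one interpretation.

```python
# Hypothetical third-party index (duck-typed sketch, not xarray's Index API):
# it chooses to read a tuple as a (lo, hi) interval query, whereas a
# PandasMultiIndex reads a tuple as a single key. The same indexer type can
# mean different things to different indexes.
class IntervalLookupIndex:
    def __init__(self, bounds):
        # one (lo, hi) pair per position along the indexed dimension
        self.bounds = bounds

    def sel(self, labels):
        value = labels["x"]
        if isinstance(value, tuple):
            lo, hi = value
            # return the positions whose interval falls inside [lo, hi]
            return [i for i, (a, b) in enumerate(self.bounds) if lo <= a and b <= hi]
        raise TypeError(f"unsupported label type: {type(value)!r}")
```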

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Using a tuple as a sequence in DataArray.sel no longer supported? (1318826485)

Table schema:

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
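
For reference, a minimal sketch of the query behind this page, run against the underlying SQLite database with Python's sqlite3 module (the filename github.db is an assumption):

```python
import sqlite3

# Reproduce "2 rows where issue = 1318826485 sorted by updated_at descending".
# "github.db" is an assumed filename for the Datasette database.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    "SELECT id, user, created_at, updated_at, author_association "
    "FROM issue_comments WHERE issue = ? ORDER BY updated_at DESC",
    (1318826485,),
).fetchall()
for row in rows:
    print(row)
```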