issue_comments

11 rows where issue = 357156174 sorted by updated_at descending

Issue: DataArray.loc fails for duplicates where DataFrame works
stale[bot] · NONE · 2020-08-12T03:24:32Z · https://github.com/pydata/xarray/issues/2399#issuecomment-672548285

In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity.

If this issue remains relevant, please comment here or remove the stale label; otherwise it will be marked as closed automatically.

horta · CONTRIBUTOR · 2018-09-11T22:24:14Z · https://github.com/pydata/xarray/issues/2399#issuecomment-420446624

Yes, I'm working on that doc now, aiming to come up with definitions that are as precise and as simple as possible.

fujiisoup · MEMBER · 2018-09-11T22:16:32Z · https://github.com/pydata/xarray/issues/2399#issuecomment-420444668

Sorry that I couldn't join the discussion here.

Thanks, @horta, for putting together the nice document. We tried to use consistent terminology in the docs, but I agree that it would be nice to have a list of the definitions. I think it might be better to discuss that in another issue; see #2410.

On the loc and sel issues, one thing I don't agree with is:

> The result of d.loc[i] is equal to d.sel(x=i). Also, it seems reasonable to expect its result should be the same as d0.sel(x=i) for d0 given by

xarray inherits not only from pandas but also from numpy's multi-dimensional arrays, so we need to be very consistent about the resulting shape of indexing. It would be confusing if selections from arrays of different dimensionality produced the same result.

> I do think that handling duplicate matches with indexing is an important use-case. This comes up with nearest neighbor matching as well -- it would be useful to be able to return the full set of matches within a given distance, not just the nearest match.

I also think this functionality is what is lacking in xarray. Any interest in helping us with this?

shoyer · MEMBER · 2018-09-11T18:28:43Z · https://github.com/pydata/xarray/issues/2399#issuecomment-420373780

CC @fujiisoup who implemented much of this. I will also take a look at your doc when I have the chance.

I do think that handling duplicate matches with indexing is an important use-case. This comes up with nearest neighbor matching as well -- it would be useful to be able to return the full set of matches within a given distance, not just the nearest match.

I wonder if it would be more productive to consider a new indexing API for one -> many matches. sel/loc is already quite complex.

horta · CONTRIBUTOR · 2018-09-11T17:52:29Z · https://github.com/pydata/xarray/issues/2399#issuecomment-420362244

Hi again. I'm working on a precise definition of xarray and indexing. I find the official one a bit hard to understand. It might help me come up with a reasonable way to handle duplicate indices. https://drive.google.com/file/d/1uJ_U6nedkNe916SMViuVKlkGwPX-mGK7/view?usp=sharing

Reactions: +1 × 1
horta · CONTRIBUTOR · 2018-09-09T13:04:12Z · https://github.com/pydata/xarray/issues/2399#issuecomment-419714631

I see. Now that I've read about it, let me give it another shot.

Let i be

```
<xarray.DataArray (y: 1, z: 1)>
array([['a']], dtype='<U1')
Dimensions without coordinates: y, z
```

and d be

```
<xarray.DataArray (x: 2)>
array([0, 1])
Coordinates:
  * x        (x) <U1 'a' 'a'
```

The result of d.loc[i] is equal to d.sel(x=i). Also, it seems reasonable to expect that its result should be the same as d0.sel(x=i) for d0 given by

```
<xarray.DataArray (x: 2, dim_1: 1)>
array([[0],
       [1]])
Coordinates:
  * x        (x) <U1 'a' 'a'
Dimensions without coordinates: dim_1
```

as per the column-vector representation assumption.

Answer

Laying down the first dimension gives

| y | z | x |
|---|---|---|
| a | a | a |
|   |   | a |

By order, x will match with y, and therefore we will append a new dimension after x to match with z:

| y | z | x | dim_1 |
|---|---|---|-------|
| a | a | a | ?     |
|   |   | a | ?     |

where ? means any. Joining the first and second halves of the table gives

| y | z | x | dim_1 |
|---|---|---|-------|
| a | a | a | ?     |
| a | a | a | ?     |

And here is my suggestion: use the mapping y |-> x and z |-> dim_1 to decide which axis to expand for the additional element. I will choose the y-axis because the additional a was originally appended to the x-axis.

The answer is

```
<xarray.DataArray (y: 2, z: 1)>
array([[0],
       [1]])
Coordinates:
    x        (y, z) <U1 'a' 'a'
Dimensions without coordinates: y, z
```

for

```
>>> ans.coords["x"]
<xarray.DataArray 'x' (y: 2, z: 1)>
array([['a'],
       ['a']], dtype='<U1')
Coordinates:
    x        (y, z) <U1 'a' 'a'
Dimensions without coordinates: y, z
```

shoyer · MEMBER · 2018-09-07T22:15:33Z · https://github.com/pydata/xarray/issues/2399#issuecomment-419580420

Please take a look at xarray's detailed indexing rules: http://xarray.pydata.org/en/stable/indexing.html#indexing-rules

> I will ignore the dimension names for now as I don't have much experience with xarray yet.

I think this is the crux of the problem. Put another way: why should the result of indexing be a 1x2 array instead of a 2x1 array? Currently (with the exception of indexing by a scalar with an index with duplicates), xarray determines the shape/dimensions resulting from indexing from the shape/dimensions of the indexers, not the array being indexed.
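This shape rule mirrors numpy's fancy indexing, where the result shape also comes from the indexer rather than the array being indexed; a minimal numpy sketch:

```python
import numpy as np

# A 1-D array indexed with a (2, 1) integer indexer.
arr = np.array([10, 20, 30])
idx = np.array([[0], [2]])  # indexer shape: (2, 1)

result = arr[idx]
# The result takes the indexer's shape, not the array's.
print(result.shape)  # (2, 1)
```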

horta · CONTRIBUTOR · 2018-09-07T09:39:01Z · https://github.com/pydata/xarray/issues/2399#issuecomment-419383633

Now I see the problem. But I think it is solvable.

I will ignore the dimension names for now as I don't have much experience with xarray yet.

The code

```python
import xarray as xr

da_nonunique = xr.DataArray([0, 1], dims=['x'], coords={'x': ['a', 'a']})
indexer = xr.DataArray([['a']], dims=['y', 'z'])
```

can be understood as defining two indexed arrays: [a, a] and [[a]]. As we are allowing for non-unique indexing, I will denote the array elements as [e_0, e_1] and [[r_0]] interchangeably.

Algorithm:

  1. Align. [[a], [a]] and [[a]].
  2. Ravel. [(a,a), (a,a)] and [(a,a)].
  3. Join. [(a,a), (a,a)]. I.e., [e_0, e_1].
  4. Unravel. [[e_0, e_1]]. Notice that [e_0, e_1] has been picked up by r_0.
  5. Reshape. [[e_0, e_1]] (solution).

Concretely, the solution is a bi-dimensional, 1x2 array:

| 0 1 |.

There is another relevant example. Let the code be

```python
da_nonunique = xr.DataArray([0, 1, 2], dims=['x'], coords={'x': ['a', 'a', 'b']})
indexer = xr.DataArray([['a', 'b']], dims=['y', 'z'])
```

We have [a, a, b] and [[a, b]], also denoted as [e_0, e_1, e_2] and [[r_0, r_1]].

Algorithm:

  1. Align. [[a], [a], [b]] and [[a, b]].
  2. Ravel. [(a,a), (a,a), (b,b)] and [(a,a), (b,b)].
  3. Join. [(a,a), (a,a), (b,b)]. I.e., [e_0, e_1, e_2].
  4. Unravel. [[e_0, e_1, e_2]]. Notice now that [e_0, e_1] has been picked up by r_0 and [e_2] by r_1.
  5. Reshape. [[e_0, e_1, e_2]].

The solution is a bi-dimensional, 1x3 array:

| 0 1 2 |

Explanation

  1. Align recursively adds a new dimension in the array with lower dimensionality.
  2. Ravel recursively removes a dimension by converting elements into tuples.
  3. SQL Join operation: Cartesian product plus match.
  4. Unravel performs the inverse of 2.
  5. Reshape converts it to the indexer's dimensionality.
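The five steps above can be sketched in plain Python (a hypothetical `duplicate_loc` helper, not part of xarray; it only handles the 1×n indexers used in the two examples):

```python
def duplicate_loc(index, values, indexer_rows):
    """Sketch of the Align/Ravel/Join/Unravel/Reshape pipeline for a
    1-D indexed array and a 2-D (1 x n) label indexer."""
    # Ravel: flatten the indexer's labels into one sequence.
    flat_labels = [label for row in indexer_rows for label in row]
    # Join: Cartesian product plus match (an SQL INNER JOIN on labels),
    # keeping every value whose index label matches, in indexer order.
    matched = [v for label in flat_labels
               for i, v in zip(index, values) if i == label]
    # Unravel/Reshape: return at the indexer's dimensionality,
    # expanding along the matched axis.
    return [matched]

# First example: index ['a', 'a'], indexer [['a']] -> 1x2 result.
print(duplicate_loc(['a', 'a'], [0, 1], [['a']]))          # [[0, 1]]
# Second example: index ['a', 'a', 'b'], indexer [['a', 'b']] -> 1x3 result.
print(duplicate_loc(['a', 'a', 'b'], [0, 1, 2], [['a', 'b']]))  # [[0, 1, 2]]
```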
shoyer · MEMBER · 2018-09-06T17:18:08Z · https://github.com/pydata/xarray/issues/2399#issuecomment-419173479

Let me give a more concrete example of the issue for multi-dimensional indexing:

```python
da_unique = xr.DataArray([0, 1], dims=['x'], coords={'x': ['a', 'b']})
da_nonunique = xr.DataArray([0, 1], dims=['x'], coords={'x': ['a', 'a']})
indexer = xr.DataArray([['a']], dims=['y', 'z'])
```

With a unique index, notice how the result takes on the dimensions of the indexer:

```
>>> da_unique.loc[indexer]
<xarray.DataArray (y: 1, z: 1)>
array([[0]])
Coordinates:
    x        (y, z) object 'a'
Dimensions without coordinates: y, z
```

What would you propose for the result of `da_nonunique.loc[indexer]`?

horta · CONTRIBUTOR · 2018-09-06T16:56:44Z · https://github.com/pydata/xarray/issues/2399#issuecomment-419166914

Thanks for the feedback!

  1. You can count on indexing if the is_unique flag is checked beforehand. The way pandas does indexing seems to be both clear to the user and powerful. It seems clear because indexing is the result of a Cartesian product after filtering for matching values. It is powerful because it allows indexing as complex as an SQL INNER JOIN, which covers the trivial case of unique elements. For example, the following operation

```python
import pandas as pd

df = pd.DataFrame(data=[0, 1, 2], index=list("aab"))
print(df.loc[list("ab")])
```

```
   0
a  0
a  1
b  2
```

is an INNER JOIN between the two indexes:

INNER((a, b) x (a, a, b)) = INNER(aa, aa, ab, ba, ba, bb) = (aa, aa, bb)

Another example:

```python
import pandas as pd

df = pd.DataFrame(data=[0, 1], index=list("aa"))
print(df.loc[list("aa")])
```

```
   0
a  0
a  1
a  0
a  1
```

is again an INNER JOIN between the two indexes:

INNER((a, a) x (a, a)) = INNER(aa, aa, aa, aa) = (aa, aa, aa, aa)
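The INNER JOIN reading above can be checked against pandas directly; a small sketch (the `key` column name is introduced here only for the comparison):

```python
import pandas as pd

df = pd.DataFrame(data=[0, 1, 2], index=list("aab"))

# Label-based lookup with a duplicated index...
via_loc = df.loc[list("ab")]

# ...matches an inner join of the requested labels against the index.
requested = pd.DataFrame({"key": list("ab")})
table = df.reset_index().rename(columns={"index": "key"})
via_join = requested.merge(table, on="key", how="inner")

print(via_loc[0].tolist())   # [0, 1, 2]
print(via_join[0].tolist())  # [0, 1, 2]
```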

  2. Assume a bidimensional array with the following indexing:

```
   0  1
a  !  @
a  #  $
```

This translates into a unidimensional index: (a, 0), (a, 1), (a, 0), (a, 1). As such, it can be treated as usual. Assume you index the above matrix using [('a', 0), ('a', 0)]. This implies

INNER( ((a, 0), (a, 0)) x ((a, 0), (a, 1), (a, 0), (a, 1)) ) = INNER( (a,0)(a,0), (a,0)(a,1), (a,0)(a,0), (a,0)(a,1), (a,0)(a,0), (a,0)(a,1), (a,0)(a,0), (a,0)(a,1) ) = ((a,0)(a,0), (a,0)(a,0), (a,0)(a,0), (a,0)(a,0))

Converting it back to the matrix representation:

```
   0  0
a  !  !
a  #  #
```

In summary, my suggestion is to consider the possibility of defining indexing B by using A (i.e., B.loc[A]) as a Cartesian product followed by match filtering, or in SQL terms, an INNER JOIN.

Multi-dimensional indexing, as far as I can see, can always be transformed into the unidimensional case and treated as such.
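The flattening step described above can be sketched in plain Python (a hypothetical `ravel_labels` helper, not xarray API):

```python
def ravel_labels(row_labels, col_labels):
    """Flatten a 2-D labelled grid into a 1-D index of (row, col) tuples,
    row-major, as in the comment above."""
    return [(r, c) for r in row_labels for c in col_labels]

# The 2x2 grid with rows ['a', 'a'] and columns [0, 1] becomes:
print(ravel_labels(['a', 'a'], [0, 1]))
# [('a', 0), ('a', 1), ('a', 0), ('a', 1)]
```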

shoyer · MEMBER · 2018-09-05T15:41:21Z · https://github.com/pydata/xarray/issues/2399#issuecomment-418778596

Thanks for the report!

This was actually a somewhat intentional omission in xarray, but it would not be particularly difficult to add this feature if we want it. At the very least, we should note this deviation somewhere in the docs.

There are two potentially problematic aspects to the pandas behavior:

  1. It means that you cannot count on indexing a dataframe with its own index to return something equivalent to the original dataframe; e.g., consider df.loc[['a', 'a']] in your example, which returns a dataframe with 4 rows.
  2. More generally, it means you can't count on indexing a dataframe with an array to return an object of the same size as the indexer. This is particularly problematic for xarray, because we support vectorized indexing with multi-dimensional indexers. I don't know how we could define a multi-dimensional equivalent of this -- what shape should the result have if you indexed with a multi-dimensional array instead, e.g., da.loc[{"dim_0": xr.DataArray([['a']])}]? With multiple dimensions involved, it's not clear where the extra introduced dimensions should go.

Now that you bring this up, I wonder how the existing support for indexing like da.loc[{"dim_0": "a"}] would work if there are other multi-dimensional indexers. I don't know if we have test coverage for this...
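Point 1 above can be reproduced in a couple of lines (a minimal check of current pandas behavior, using a stand-in two-row frame):

```python
import pandas as pd

# A 2-row frame whose index labels are duplicated.
df = pd.DataFrame({"value": [0, 1]}, index=["a", "a"])

# Indexing the frame with its own index labels duplicates every match:
result = df.loc[["a", "a"]]
print(len(result))               # 4
print(result["value"].tolist())  # [0, 1, 0, 1]
```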

