

issue_comments


33 rows where issue = 241578773 and user = 6815844 sorted by updated_at descending


id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
337805095 https://github.com/pydata/xarray/pull/1473#issuecomment-337805095 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMzNzgwNTA5NQ== fujiisoup 6815844 2017-10-19T05:40:57Z 2017-10-19T05:40:57Z MEMBER

I'm happy with this :)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  WIP: indexing with broadcasting 241578773
335319183 https://github.com/pydata/xarray/pull/1473#issuecomment-335319183 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMzNTMxOTE4Mw== fujiisoup 6815844 2017-10-09T23:45:55Z 2017-10-10T12:12:04Z MEMBER

@shoyer, thanks for your review.

If we try to make behavior "intuitive" for 80% of use-cases, it only makes the remaining 20% more baffling and error prone.

OK. That makes sense to me as well. Merging your PR.

I would be OK with adding a warning that there are still a few unresolved edge cases involving MultiIndex.

Actually, vectorized label-indexing currently does not work with MultiIndex in almost all cases. I can think of the following cases where appropriate error messages are required:

```python
In [1]: import xarray as xr
   ...: import pandas as pd
   ...:
   ...: midx = pd.MultiIndex.from_tuples(
   ...:     [(1, 'a'), (2, 'b'), (3, 'c')],
   ...:     names=['x0', 'x1'])
   ...: da = xr.DataArray([0, 1, 2], dims=['x'],
   ...:                   coords={'x': midx})
   ...: da
Out[1]:
<xarray.DataArray (x: 3)>
array([0, 1, 2])
Coordinates:
  * x        (x) MultiIndex
  - x0       (x) int64 1 2 3
  - x1       (x) object 'a' 'b' 'c'
```

  • da.sel(x=[(1, 'a'), (2, 'b')])
  • da.sel(x0='a')

work as expected,

  • da.sel(x0=[1, 2])

fails without an appropriate error message, and

  • da.sel(x=xr.DataArray([np.array(midx[:2]), np.array(midx[-2:])], dims=['y', 'z']))

silently destroys the MultiIndex structure.

I will add better Exceptions later today.
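For concreteness, here is a minimal, hypothetical reproduction of the setup above against a released xarray (the WIP branch discussed here may behave differently); only the tuple and single-level selections are exercised, since those are the cases reported to work:

```python
import pandas as pd
import xarray as xr

midx = pd.MultiIndex.from_tuples([(1, 'a'), (2, 'b'), (3, 'c')],
                                 names=['x0', 'x1'])
da = xr.DataArray([0, 1, 2], dims=['x'], coords={'x': midx})

# Selecting with a full tuple of level values returns a scalar element.
sel_tuple = da.sel(x=(1, 'a'))

# Selecting on a single level by name is also supported.
sel_level = da.sel(x0=2)
```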

  WIP: indexing with broadcasting 241578773
334024556 https://github.com/pydata/xarray/pull/1473#issuecomment-334024556 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMzNDAyNDU1Ng== fujiisoup 6815844 2017-10-04T01:20:13Z 2017-10-04T01:20:13Z MEMBER

@jhamman Thanks for the review (and sorry for my late reply). I made some modifications.

@shoyer Do you have further comments about the coordinate conflicts?

Limitations of the current implementation are:

  • Coordinate conflict handling and attachment related to reindex is still left out. I think it should go in another PR.
  • I could not solve the 2nd issue of this comment. In your example, `mda.sel(x=xr.DataArray(mda.indexes['x'][:3], dims='x'))` works as expected, but `mda.sel(x=xr.DataArray(mda.indexes['x'][:3], dims='z'))` will attach a coordinate z.

  WIP: indexing with broadcasting 241578773
331920149 https://github.com/pydata/xarray/pull/1473#issuecomment-331920149 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMzMTkyMDE0OQ== fujiisoup 6815844 2017-09-25T15:35:03Z 2017-09-25T15:35:22Z MEMBER

I think it's ready. I appreciate any further comments.

  WIP: indexing with broadcasting 241578773
331480985 https://github.com/pydata/xarray/pull/1473#issuecomment-331480985 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMzMTQ4MDk4NQ== fujiisoup 6815844 2017-09-22T15:32:58Z 2017-09-22T15:32:58Z MEMBER

I think it would be better to update the reindex method in another PR, as this PR is already large, so I ported this suggestion to #1553.

  WIP: indexing with broadcasting 241578773
330168943 https://github.com/pydata/xarray/pull/1473#issuecomment-330168943 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMzMDE2ODk0Mw== fujiisoup 6815844 2017-09-18T09:27:38Z 2017-09-18T09:27:38Z MEMBER

Another case that might be confusing:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.randn(3), dims=['x'],
                  coords={'x': ['a', 'b', 'c']})
index_ds = xr.Dataset({}, coords={'x': [0, 1]})

# this results in the coordinate conflict
da.isel(x=index_ds['x'])
```

As `index_ds['x']` has a coordinate of its own, this results in the coordinate conflict error, but it is clear that the user does not want to attach it as a coordinate. I think in such a case the indexer's coordinate should be silently dropped.

  WIP: indexing with broadcasting 241578773
330133651 https://github.com/pydata/xarray/pull/1473#issuecomment-330133651 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMzMDEzMzY1MQ== fujiisoup 6815844 2017-09-18T05:49:26Z 2017-09-18T05:49:26Z MEMBER

For the second issue pointed out in this comment, I noticed it is due to the somewhat irregular xr.DataArray construction behavior with MultiIndex,

```python
In [1]: import numpy as np
   ...: import xarray as xr
   ...: import pandas as pd
   ...:
   ...: midx = pd.MultiIndex.from_product([list('abc'), [0, 1]],
   ...:                                   names=('one', 'two'))
   ...: # midx is automatically converted to a coordinate
   ...: xr.DataArray(midx[:3], dims='z')
Out[1]:
<xarray.DataArray (z: 3)>
array([('a', 0), ('a', 1), ('b', 0)], dtype=object)
Coordinates:
  * z        (z) MultiIndex
  - one      (z) object 'a' 'a' 'b'
  - two      (z) int64 0 1 0

In [2]: # If a coordinate is explicitly specified, midx will be data
   ...: xr.DataArray(midx[:3], dims='z', coords={'z': [0, 1, 2]})
Out[2]:
<xarray.DataArray (z: 3)>
array([('a', 0), ('a', 1), ('b', 0)], dtype=object)
Coordinates:
  * z        (z) int64 0 1 2
```

I added tests for this case.

  WIP: indexing with broadcasting 241578773
330048656 https://github.com/pydata/xarray/pull/1473#issuecomment-330048656 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMzMDA0ODY1Ng== fujiisoup 6815844 2017-09-17T14:08:28Z 2017-09-17T14:08:35Z MEMBER

Xarray doesn't check names for other functionality

OK. I adopted `isinstance(k, _ThisArray)` rather than the name comparison.

  WIP: indexing with broadcasting 241578773
330022824 https://github.com/pydata/xarray/pull/1473#issuecomment-330022824 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMzMDAyMjgyNA== fujiisoup 6815844 2017-09-17T05:42:01Z 2017-09-17T05:42:01Z MEMBER

How about the following case?

```python
target = Dataset({}, coords={'x': np.arange(3)})
indexer = DataArray([0, 1], dims=['x'], coords={'x': [2, 4]})
actual = target['x'].isel(x=indexer)
```

Based on the above criteria, this will raise an IndexError, but I feel it should not raise an error, as it is clear which coordinate should take precedence.

However, `target.isel(x=indexer)` should raise an error.

I would like to add an additional rule to take care of the first case, which might be valid only for DataArray:

  • If k == self.name, drop the conflicting coordinate silently.

  WIP: indexing with broadcasting 241578773
330018569 https://github.com/pydata/xarray/pull/1473#issuecomment-330018569 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMzMDAxODU2OQ== fujiisoup 6815844 2017-09-17T04:17:28Z 2017-09-17T04:17:28Z MEMBER

Sorry for my late reply, and thanks for the information.

It occurs to me now that we actually have a pre-existing merge feature

It sounds great if we could use the pre-existing criteria (and maybe the logic, too). I will look deeply into xarray's merge logic.

  WIP: indexing with broadcasting 241578773
328508165 https://github.com/pydata/xarray/pull/1473#issuecomment-328508165 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMyODUwODE2NQ== fujiisoup 6815844 2017-09-11T12:02:25Z 2017-09-11T12:02:25Z MEMBER

To be honest, it's still not clear to me which is the right choice.

It's not yet clear to me either. I think, in such a case, we should choose the simplest rule, so that we can explain it easily, and we can add more rules later if necessary.

I think "the indexer's coordinates should not conflict" is the simplest rule. The other extreme would be "we don't care about the indexer's coordinates at all", but I prefer the former.

  WIP: indexing with broadcasting 241578773
328403754 https://github.com/pydata/xarray/pull/1473#issuecomment-328403754 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMyODQwMzc1NA== fujiisoup 6815844 2017-09-11T03:04:05Z 2017-09-11T03:04:05Z MEMBER

Thank you for the review.

I thought we agreed that these cases should raise an error, i.e., to require exact alignment?

I thought we had agreed to simply ignore the coordinate conflict (comment).

Yes, but now I agree that raising an IndexError is clearer for users. I will revert it.

The interaction with MultiIndex indexing seems to be somewhat off. Compare:

I forgot to consider MultiIndex. I will fix it.

(Maybe later this week.)

  WIP: indexing with broadcasting 241578773
327802435 https://github.com/pydata/xarray/pull/1473#issuecomment-327802435 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMyNzgwMjQzNQ== fujiisoup 6815844 2017-09-07T13:40:30Z 2017-09-07T13:40:30Z MEMBER

Added, with some code clean-ups.

(Although I prepared tests for it,) I agree that boolean indexing with a different dimension name is a rare use case.

But I personally think this rule adds additional complexity. From an analogy with np.ndarray indexing, `da.values[(da.y > -1).values]` looks like just a mis-coding of `da.values[:, (da.y > -1).values]`, and this error may be the user's responsibility.

I think we should recommend `da.isel(y=(da.y > -1))` instead.
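The NumPy side of that analogy can be sketched without xarray at all (array shapes here are illustrative):

```python
import numpy as np

values = np.random.randn(3, 4)               # stand-in for da.values, dims ('x', 'y')
mask = np.array([True, False, True, False])  # boolean along 'y' (length 4)

# values[mask] is the mis-coding: a length-4 mask applied to axis 0 (length 3).
# Modern NumPy raises an IndexError for this shape mismatch.
try:
    values[mask]
    mismatch_raised = False
except IndexError:
    mismatch_raised = True

# The dimension-explicit version the comment recommends (da.isel(y=...) in xarray):
subset = values[:, mask]
```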

  WIP: indexing with broadcasting 241578773
327194050 https://github.com/pydata/xarray/pull/1473#issuecomment-327194050 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMyNzE5NDA1MA== fujiisoup 6815844 2017-09-05T14:31:28Z 2017-09-05T14:31:28Z MEMBER

Thank you for the careful review. I updated most of the parts you pointed out, but not all. I will finish it up tomorrow.

  WIP: indexing with broadcasting 241578773
326857965 https://github.com/pydata/xarray/pull/1473#issuecomment-326857965 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMyNjg1Nzk2NQ== fujiisoup 6815844 2017-09-04T03:24:49Z 2017-09-04T03:24:49Z MEMBER

So for now, let's raise a FutureWarning

OK. Done.

if supplying a DataArray with `array.coords[dim].values != array.values`.

I think the condition is something like `array.dims != (dim, )`, where in a future version we will consider the dimensions of indexers.

  WIP: indexing with broadcasting 241578773
326778667 https://github.com/pydata/xarray/pull/1473#issuecomment-326778667 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMyNjc3ODY2Nw== fujiisoup 6815844 2017-09-03T01:30:49Z 2017-09-03T02:00:46Z MEMBER

for inexact indexing (e.g., method='nearest'), the result of reindex copies the index from the indexers, whereas the result of sel copies the index from the object being indexed

Yes, this is another difference, but if the indexer of sel has a coordinate, the behavior becomes closer to reindex.

I don't know quite what it would mean to reindex with a multi-dimensional indexer

I thought this when working with the power-user example,

It would be really nice to also have a power-user example of pointwise indexing with 2D indexers and nearest-neighbor lookups, e.g., to switch to another coordinate projection. Something like ds.sel(latitude=latitude_grid, longitude=longitude_grid, method='nearest', tolerance=0.1).

For this example, I considered the rasm data, where the original object lives on the logical coordinates x and y. If we have conversion DataArrays, such as tables of x and y values as functions of the target coordinates lat and lon, then the coordinate projection from (x, y) to (lat, lon) can be done with .sel(x=x, y=y, method='nearest'). This might be a kind of multi-dimensional reindex?

In such a use case, it would be better for sel (or multi-dimensional reindex) to return NaN than to raise an error.
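A toy version of this projection workflow (synthetic stand-ins for the rasm grid; x_tab and y_tab are hypothetical conversion tables):

```python
import numpy as np
import xarray as xr

# Data on logical coordinates x and y.
da = xr.DataArray(np.arange(9.0).reshape(3, 3), dims=['x', 'y'],
                  coords={'x': [0.0, 1.0, 2.0], 'y': [0.0, 1.0, 2.0]})

# 2D tables giving the x and y positions of each (lat, lon) target point.
x_tab = xr.DataArray([[0.1, 1.9], [0.9, 1.1]], dims=['lat', 'lon'])
y_tab = xr.DataArray([[0.0, 0.2], [2.1, 1.8]], dims=['lat', 'lon'])

# Pointwise nearest-neighbor lookup: the result lives on the target grid.
projected = da.sel(x=x_tab, y=y_tab, method='nearest')
```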

From a practical perspective, writing a version of vectorized indexing that fills in NaN could be non-trivial.

I agree. I would also like to try that, but it might be a bit tough, and it would be better to do it after the next release.

Maybe I need to switch to a much easier task in this example. Do you have any suggestions?

  WIP: indexing with broadcasting 241578773
326753554 https://github.com/pydata/xarray/pull/1473#issuecomment-326753554 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMyNjc1MzU1NA== fujiisoup 6815844 2017-09-02T16:10:59Z 2017-09-02T16:10:59Z MEMBER

API question.

Should .sel(x=[0.0, 1.0], method='nearest', tolerance=0.1) work exactly the same as .reindex(x=[0.0, 1.0], method='nearest', tolerance=0.1)? Currently, the .sel method raises a KeyError if there is no corresponding value in x, while reindex returns np.nan where there is no matching value.

My preference is to make sel work as reindex currently does and to gradually deprecate the reindex method, because the difference between the two methods is now very small.
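The difference being discussed, as it stands in released xarray (toy data; the tolerance excludes the unmatched label):

```python
import numpy as np
import xarray as xr

da = xr.DataArray([10.0, 20.0], dims=['x'], coords={'x': [0.0, 1.0]})

# reindex fills labels with no match within tolerance with NaN:
r = da.reindex(x=[0.0, 0.5], method='nearest', tolerance=0.1)

# sel raises KeyError for the same unmatched label:
try:
    da.sel(x=[0.0, 0.5], method='nearest', tolerance=0.1)
    sel_raised = False
except KeyError:
    sel_raised = True
```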

  WIP: indexing with broadcasting 241578773
326334294 https://github.com/pydata/xarray/pull/1473#issuecomment-326334294 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMyNjMzNDI5NA== fujiisoup 6815844 2017-08-31T15:36:49Z 2017-08-31T15:39:24Z MEMBER
  • [x] Closes #1444, #1436
  • [x] Tests added / passed
  • [x] Passes git diff master | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

I think I am getting close.

See the docs for details, but the essential change of this PR is that indexing ([], .loc[], .sel(), .isel()) now considers the indexers' dimensions. By passing xr.DataArray objects as indexers, we can realize many types of advanced indexing that were previously done by the special methods isel_points, sel_points, and reindex. (isel_points and sel_points are deprecated by this PR.)

If indexers have no named dimension (e.g. np.ndarray, integer, slice), the indexing behaves exactly the same way as in the current version, so this change should be compatible with almost all existing code.
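A small sketch of the behavior described here, as it works in released xarray: plain arrays still index orthogonally, while DataArray indexers sharing a new dimension broadcast against each other, replacing the old isel_points (the dimension name 'points' is arbitrary):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(12).reshape(3, 4), dims=['x', 'y'])

# Unnamed indexers (lists/arrays) behave as before: orthogonal indexing.
ortho = da.isel(x=[0, 1], y=[0, 1])   # shape (2, 2)

# DataArray indexers sharing the dim 'points' select pointwise instead.
ix = xr.DataArray([0, 1, 2], dims='points')
iy = xr.DataArray([0, 1, 2], dims='points')
points = da.isel(x=ix, y=iy)          # the "diagonal": shape (3,)
```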

Now all the existing tests pass, and I have added as many test cases as I could think of. However, I would like to ask members to use this branch for their daily work and make sure there is no inconvenience, because indexing is very fundamental and a single bug would affect every user significantly.

Any comments or thoughts are welcome.

(I largely refactored indexing.rst according to this change. I would also very much appreciate it if anyone could point out confusing or unnatural sentences.)

I am looking forward to seeing it in v.0.10 :)

  WIP: indexing with broadcasting 241578773
325141121 https://github.com/pydata/xarray/pull/1473#issuecomment-325141121 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMyNTE0MTEyMQ== fujiisoup 6815844 2017-08-26T15:59:18Z 2017-08-26T16:06:11Z MEMBER

I still think I would prefer including all coordinates from indexers unless there is already existing coordinates of the same name.

OK, I agree. It might be the best option. I will update the code.

But I can't yet imagine all the cases that are incompatible with the existing code. I am just wondering if we can make such a sudden change without warning.

Do we need an API-change warning period?

  WIP: indexing with broadcasting 241578773
321213528 https://github.com/pydata/xarray/pull/1473#issuecomment-321213528 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMyMTIxMzUyOA== fujiisoup 6815844 2017-08-09T10:10:58Z 2017-08-09T11:10:14Z MEMBER

@shoyer

Overwrite da['x'] by ind['x'] seems bad.

Agreed. I changed the code not to overwrite coordinate.

My inclination would be that it's OK to add coordinates from the indices as long as they aren't conflicting.

My current implementation does this, but I am still worried that even in this situation there will be similar unexpected behavior, e.g. because an unexpected coordinate was attached by a previous indexing operation, the coordinate to be attached would be silently neglected.

I think we may need a careful API decision. I am currently thinking (assuming indexers = {k: ind}):

  • If ind.dims == (k, ) (the indexing DataArray has the same dimension as the dimension being indexed along), we ignore ind.coords[k].
  • If ind.dims != (k, ) and ind.dims is not in da.dims, then we attach a new coordinate ind.coords[ind.dims].
  • If ind.dims != (k, ) and ind.dims is in da.dims, then raise an error.
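The three proposed rules can be written down as a plain decision function over dimension names (a sketch of the proposal only, not xarray's actual internals; the function name and return strings are illustrative):

```python
def coord_attach_action(k, ind_dims, da_dims):
    """Sketch of the proposed rules for an indexer ``ind`` used as
    ``da.isel(**{k: ind})``; operates on dimension-name tuples only."""
    if ind_dims == (k,):
        # rule 1: indexer shares the indexed dimension; ind.coords[k] is ignored
        return 'drop indexer coordinate'
    new_dims = [d for d in ind_dims if d != k]
    if all(d not in da_dims for d in new_dims):
        # rule 2: indexer introduces genuinely new dims; attach their coords
        return 'attach new coordinates'
    # rule 3: indexer dim collides with an existing dim of da
    return 'error'
```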

  WIP: indexing with broadcasting 241578773
321119003 https://github.com/pydata/xarray/pull/1473#issuecomment-321119003 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMyMTExOTAwMw== fujiisoup 6815844 2017-08-09T00:32:46Z 2017-08-09T00:32:46Z MEMBER

Another case I am wondering about is when the indexing DataArray has a coordinate with the same name but different values:

```python
In [1]: import numpy as np
   ...: import xarray as xr
   ...:
   ...: da = xr.DataArray(np.arange(3 * 2).reshape(3, 2), dims=['x', 'y'],
   ...:                   coords={'x': [0, 1, 2], 'y': ['a', 'b']})
   ...: da  # indexed DataArray
Out[1]:
<xarray.DataArray (x: 3, y: 2)>
array([[0, 1],
       [2, 3],
       [4, 5]])
Coordinates:
  * x        (x) int64 0 1 2
  * y        (y) <U1 'a' 'b'

In [2]: ind = xr.DataArray([2, 1], dims=['x'], coords={'x': [0.1, 0.2]})
   ...: ind  # indexing DataArray. This also has 'x'
Out[2]:
<xarray.DataArray (x: 2)>
array([2, 1])
Coordinates:
  * x        (x) float64 0.1 0.2

In [3]: da.isel(x=ind.variable)
Out[3]:
<xarray.DataArray (x: 2, y: 2)>
array([[4, 5],
       [2, 3]])
Coordinates:
  * x        (x) int64 2 1
  * y        (y) <U1 'a' 'b'

In [4]: da.isel(x=ind)  # Overwrite da['x'] by ind['x']
Out[4]:
<xarray.DataArray (x: 2, y: 2)>
array([[4, 5],
       [2, 3]])
Coordinates:
  * x        (x) float64 0.1 0.2
  * y        (y) <U1 'a' 'b'
```

Currently, the original coordinate is overwritten by the indexer's coordinate, but this may cause an unintentional change of the coordinate values. Maybe we should keep the original coordinate in such a case, or raise an exception?

  WIP: indexing with broadcasting 241578773
321115367 https://github.com/pydata/xarray/pull/1473#issuecomment-321115367 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMyMTExNTM2Nw== fujiisoup 6815844 2017-08-09T00:06:10Z 2017-08-09T00:06:10Z MEMBER

I am wondering what indexing by a DataArray should look like, in particular when the indexing DataArray has coordinates.

In my current implementation, it behaves as follows:

```python
In [1]: import numpy as np
   ...: import xarray as xr
   ...:
   ...: da = xr.DataArray(np.arange(3 * 2).reshape(3, 2), dims=['x', 'y'],
   ...:                   coords={'x': [0, 1, 2], 'y': ['a', 'b']})
   ...: da
Out[1]:
<xarray.DataArray (x: 3, y: 2)>
array([[0, 1],
       [2, 3],
       [4, 5]])
Coordinates:
  * x        (x) int64 0 1 2
  * y        (y) <U1 'a' 'b'

In [2]: ind = xr.DataArray([2, 1], dims=['a'],
   ...:                    coords={'a': [0.1, 0.2], 'time': (('a', ), [10, 20])})
   ...: ind  # 'a': coordinate, 'time': non-dimension coordinate
Out[2]:
<xarray.DataArray (a: 2)>
array([2, 1])
Coordinates:
  * a        (a) float64 0.1 0.2
    time     (a) int64 10 20

In [3]: da.isel(x=ind)
Out[3]:
<xarray.DataArray (a: 2, y: 2)>
array([[4, 5],
       [2, 3]])
Coordinates:
    x        (a) int64 2 1
  * y        (y) <U1 'a' 'b'
  * a        (a) float64 0.1 0.2
```

I think we should keep the (indexed version of the) original coordinate (da['x']) even after the indexing. We may also need ind['a'] as a new coordinate. Can we ignore the non-dimensional coordinate in the indexer (ind['time'])?

I am slightly worried that after repeated indexing the number of coordinates may unintentionally increase.

  WIP: indexing with broadcasting 241578773
320493296 https://github.com/pydata/xarray/pull/1473#issuecomment-320493296 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMyMDQ5MzI5Ng== fujiisoup 6815844 2017-08-06T08:28:14Z 2017-08-06T08:28:14Z MEMBER

@shoyer Thanks for your help! I will take a look.

  WIP: indexing with broadcasting 241578773
319028665 https://github.com/pydata/xarray/pull/1473#issuecomment-319028665 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMxOTAyODY2NQ== fujiisoup 6815844 2017-07-31T10:22:44Z 2017-07-31T10:23:04Z MEMBER

Thanks, @shoyer. I will take a look shortly.

Added pointwise indexing support for dask using vindex.

Thanks. It's a great help! I think similar logic (flatten -> lookup -> reshape) will be necessary to improve the .sel method (or the indexing.get_indexer() function), as our new sel should support multi-dimensional lookup.
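The flatten -> lookup -> reshape pattern mentioned here, sketched with plain NumPy (the function name is illustrative, not xarray's actual API):

```python
import numpy as np

def pointwise_lookup(data, indexer_nd):
    """Apply a pointwise lookup along axis 0 of ``data`` using an
    n-dimensional integer indexer: flatten, look up, restore shape."""
    flat = indexer_nd.ravel()                  # flatten the nd indexer
    picked = data[flat]                        # 1D fancy-index lookup
    return picked.reshape(indexer_nd.shape + data.shape[1:])

data = np.arange(10) * 10                      # values 0, 10, ..., 90
idx = np.array([[0, 3], [7, 9]])               # 2D indexer
out = pointwise_lookup(data, idx)
```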

  WIP: indexing with broadcasting 241578773
318813583 https://github.com/pydata/xarray/pull/1473#issuecomment-318813583 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMxODgxMzU4Mw== fujiisoup 6815844 2017-07-29T08:26:30Z 2017-07-29T08:26:30Z MEMBER

@shoyer

Thanks for the detailed review.

I don't think we want to index non-xarray types with IndexerTuple subclasses. It's probably best to convert them into base tuple() objects before indexing.

Yes. Actually, some backends seem to check something like `type(key) is tuple` when key is empty. So, in my previous implementation, I manually converted the instance type in the case of an empty tuple. I have now added a to_tuple() method to the IndexerTuple class and call it in all the basic ArrayWrappers. I think this made the code a bit cleaner.

  WIP: indexing with broadcasting 241578773
317902607 https://github.com/pydata/xarray/pull/1473#issuecomment-317902607 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMxNzkwMjYwNw== fujiisoup 6815844 2017-07-25T23:29:10Z 2017-07-25T23:29:10Z MEMBER

No, for 1D boolean arrays I think we should insist that sizes match exactly.

OK. Thanks for the suggestion.

I'm slightly hesitant to deprecate this indexing in this PR. I guess it should go in another issue. (Some tests assume this indexing behavior.)

  WIP: indexing with broadcasting 241578773
317742791 https://github.com/pydata/xarray/pull/1473#issuecomment-317742791 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMxNzc0Mjc5MQ== fujiisoup 6815844 2017-07-25T13:48:12Z 2017-07-25T13:48:12Z MEMBER

@shoyer Thanks for the suggestion. As you suggested, I prepared our own indexer class. I think the code became much cleaner.

I struggled with the boolean index behavior of `np.random.randn(10, 20)[np.arange(8) < 5]`, which works on my laptop but fails on Travis. Maybe this behavior was deprecated in numpy?

In my current PR, a boolean index is simply converted to an integer array by the .nonzero() method, so xarray works with such a boolean array of a different size. Is this what we want?
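The .nonzero() conversion described above, in plain NumPy:

```python
import numpy as np

arr = np.random.randn(10, 20)
mask = np.arange(8) < 5        # length-8 boolean: wrong size for axis 0

# Converting the mask to integer positions sidesteps NumPy's size check
# (modern NumPy raises IndexError for the mismatched boolean itself):
int_idx = mask.nonzero()[0]    # positions of the True entries
subset = arr[int_idx]          # plain integer-array indexing: works
```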

  WIP: indexing with broadcasting 241578773
317247671 https://github.com/pydata/xarray/pull/1473#issuecomment-317247671 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMxNzI0NzY3MQ== fujiisoup 6815844 2017-07-23T11:51:00Z 2017-07-23T11:51:00Z MEMBER

 1. Backends support only "basic indexing" (int and slice). This is pretty common.
 2. Backends support some of the "advanced indexing" use cases but not everything (e.g., restricted to at most one list). This is also pretty common (e.g., dask and h5py).
 3. Backends support "orthogonal indexing" instead of advanced indexing. NetCDF4 does this (but performance can be pretty terrible).
 4. Backends support NumPy's fully vectorized "advanced indexing". This is quite rare -- I've only seen this for backends that actually store their data in the form of NumPy arrays (e.g., scipy.io.netcdf).

I am wondering what the cleanest design is. Because cases 3 and 4 you suggested are pretty much exclusive, I tried to distinguish cases 1, 2, and 4 in the Variable._broadcast_indexes(key) method. For backends that only accept orthogonal indexing, I think case-4 indexers can be orthogonalized in each ArrayWrapper (by the indexing._unbroadcast_indexes function).

  WIP: indexing with broadcasting 241578773
317137217 https://github.com/pydata/xarray/pull/1473#issuecomment-317137217 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMxNzEzNzIxNw== fujiisoup 6815844 2017-07-21T23:54:38Z 2017-07-21T23:54:38Z MEMBER

Currently, this line in backends/rasterio_.py fails. This is because the new indexing logic converts integer arrays into slices as much as possible, e.g. [0, 2] is converted to slice(0, 3, 2), which is currently regarded as an invalid indexer in rasterio.

However, other array wrappers seem to require the automatic slice conversion. As I am not familiar with rasterio, I would appreciate it if anyone could help.
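The kind of slice conversion at issue can be sketched as follows (a hypothetical helper, not xarray's actual implementation):

```python
import numpy as np

def as_slice_if_possible(indices):
    """Turn an evenly spaced 1D integer array like [0, 2] into
    slice(0, 3, 2); return the array unchanged otherwise."""
    indices = np.asarray(indices)
    if indices.ndim != 1 or indices.size == 0:
        return indices
    if indices.size == 1:
        i = int(indices[0])
        return slice(i, i + 1)
    steps = np.diff(indices)
    if (steps == steps[0]).all() and steps[0] > 0:
        # evenly spaced with positive step: representable as a slice
        return slice(int(indices[0]), int(indices[-1]) + 1, int(steps[0]))
    return indices
```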

  WIP: indexing with broadcasting 241578773
317012255 https://github.com/pydata/xarray/pull/1473#issuecomment-317012255 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMxNzAxMjI1NQ== fujiisoup 6815844 2017-07-21T14:13:44Z 2017-07-21T14:13:44Z MEMBER

I think we'll also want to make a "vectorized to orthogonal" indexing adapter that we can use with netCDF4.Variable

I implemented BroadcastIndexedAdapter, which converts broadcast indexers back into orthogonal indexers. The former LazilyIndexedArray is renamed to OrthogonalLazilyIndexedArray, and the new LazilyIndexedArray now accepts broadcast indexers. (Some backend-related tests still fail.)

Now some array adapters accept orthogonal indexers and others accept broadcast indexers. I think this is a little confusing. Maybe clearer terminology is necessary?

  WIP: indexing with broadcasting 241578773
315746067 https://github.com/pydata/xarray/pull/1473#issuecomment-315746067 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMxNTc0NjA2Nw== fujiisoup 6815844 2017-07-17T12:49:46Z 2017-07-17T12:50:55Z MEMBER

@shoyer Thanks for your help.

Let's not worry about supporting every indexing type with dask.

Yes. Thanks to your patch, dask-based variable is now indexed fine.

Some replies to your comments on the outdated code:

  • Multidimensional boolean indexer: agreed. I added a sanity check that raises an IndexError in the case of a multi-dimensional boolean array.
  • Indexer type in DaskIndexingAdapter: because I changed how indexing.broadcasted_indexable (formerly indexing.orthogonally_indexable) is called, the indexers passed to DaskIndexingAdapter are already broadcast to Variables (in the case of _broadcast_indexes_advanced).

I will try to adapt the other array wrappers, LazilyIndexedArray, CopyOnWriteArray, and MemoryCachedArray, to the broadcast indexers; their tests currently fail. (Maybe I will need more help.)

  WIP: indexing with broadcasting 241578773
315593569 https://github.com/pydata/xarray/pull/1473#issuecomment-315593569 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMxNTU5MzU2OQ== fujiisoup 6815844 2017-07-16T08:19:11Z 2017-07-16T08:19:11Z MEMBER

I just realized that dask's indexing is limited, e.g. it does not support nd-array indexing. I will try to work around this issue, but I am not very familiar with dask. If anyone can give me ideas on this, it would be helpful.

  WIP: indexing with broadcasting 241578773
314718320 https://github.com/pydata/xarray/pull/1473#issuecomment-314718320 https://api.github.com/repos/pydata/xarray/issues/1473 MDEyOklzc3VlQ29tbWVudDMxNDcxODMyMA== fujiisoup 6815844 2017-07-12T10:13:48Z 2017-07-12T10:13:48Z MEMBER

Thanks @shoyer. I updated the _broadcast_indexes method based on your reference script.

As you pointed out, we may need a better error message here. I guess we should raise our own exception class in as_variable and replace the message here? Duplicating the as_variable function only for a better exception message sounds like a bad idea.

  WIP: indexing with broadcasting 241578773

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);