
issue_comments


46 rows where author_association = "CONTRIBUTOR" and user = 3958036 sorted by updated_at descending


issue 20

  • Support multiple dimensions in DataArray.argmin() and DataArray.argmax() methods 13
  • Surface plots 5
  • Releasing memory? 4
  • Inconsistency in whether index is created with new dimension coordinate? 3
  • cmap.set_under() does not work as expected 2
  • Fix contourf set under 2
  • Control attrs of result in `merge()`, `concat()`, `combine_by_coords()` and `combine_nested()` 2
  • Adding new DataArray to a Dataset removes attrs of existing coord 2
  • cumulative_integrate() method 2
  • Argmin indexes 1
  • Bring xr.align objects specification in line with other top-level functions? 1
  • plot.pcolormesh fails with shading='gouraud' 1
  • Rename ordered_dict_intersection -> compat_dict_intersection 1
  • Add missing_dims argument allowing isel() to ignore missing dimensions 1
  • Index 3D array with index of last axis stored in 2D array 1
  • Incoherencies between docs in open_mfdataset and combine_by_coords and its behaviour. 1
  • workaround for file with variable and dimension having same name 1
  • Add Dataset.plot.streamplot() method 1
  • plot_surface() wrapper 1
  • Use broadcast_like for 2d plot coordinates 1

user 1

  • johnomotani · 46

author_association 1

  • CONTRIBUTOR · 46
id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
1245036513 https://github.com/pydata/xarray/issues/4417#issuecomment-1245036513 https://api.github.com/repos/pydata/xarray/issues/4417 IC_kwDOAMm_X85KNb_h johnomotani 3958036 2022-09-13T07:54:32Z 2022-09-13T07:54:32Z CONTRIBUTOR

Thanks @benbovy - with explicit methods now to produce the result with or without index, I think we can close this now :smiley:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Inconsistency in whether index is created with new dimension coordinate? 698577111
1000305491 https://github.com/pydata/xarray/issues/4456#issuecomment-1000305491 https://api.github.com/repos/pydata/xarray/issues/4456 IC_kwDOAMm_X847n3NT johnomotani 3958036 2021-12-23T13:28:20Z 2021-12-23T13:28:20Z CONTRIBUTOR

I didn't find any solution in xarray - I think I ended up just dropping the conflicting variable...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  workaround for file with variable and dimension having same name 708337538
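The workaround described above - dropping the conflicting variable - can be sketched like this (a minimal sketch with a hypothetical in-memory dataset standing in for the problematic file; when reading from disk, `xr.open_dataset(path, drop_variables=["x"])` achieves the same effect):

```python
import numpy as np
import xarray as xr

# Hypothetical dataset where a coordinate variable shares its name with a dimension.
ds = xr.Dataset({"a": ("x", np.arange(3.0))}, coords={"x": [10.0, 20.0, 30.0]})

# Drop the clashing variable; the dimension itself is untouched.
dropped = ds.drop_vars("x")

assert "x" not in dropped.variables
assert dropped.sizes["x"] == 3
```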
830637894 https://github.com/pydata/xarray/pull/5153#issuecomment-830637894 https://api.github.com/repos/pydata/xarray/issues/5153 MDEyOklzc3VlQ29tbWVudDgzMDYzNzg5NA== johnomotani 3958036 2021-05-01T14:01:36Z 2021-05-01T14:01:36Z CONTRIBUTOR

The vote seems to be for a separate cumulative_integrate method - I've made that change.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  cumulative_integrate() method 857378504
829239055 https://github.com/pydata/xarray/pull/5101#issuecomment-829239055 https://api.github.com/repos/pydata/xarray/issues/5101 MDEyOklzc3VlQ29tbWVudDgyOTIzOTA1NQ== johnomotani 3958036 2021-04-29T13:29:55Z 2021-04-29T14:43:04Z CONTRIBUTOR

One thing that could be nice is an example in http://xarray.pydata.org/en/stable/plotting.html - up to you.

It seemed odd to add a surface example when contour and contourf weren't shown - so I've added a new subsection for 'Other types of plot' and shown all three.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Surface plots 847489988
828034654 https://github.com/pydata/xarray/pull/5101#issuecomment-828034654 https://api.github.com/repos/pydata/xarray/issues/5101 MDEyOklzc3VlQ29tbWVudDgyODAzNDY1NA== johnomotani 3958036 2021-04-27T23:51:19Z 2021-04-27T23:51:19Z CONTRIBUTOR

Thanks @mathause - master merged now.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Surface plots 847489988
823167082 https://github.com/pydata/xarray/pull/5099#issuecomment-823167082 https://api.github.com/repos/pydata/xarray/issues/5099 MDEyOklzc3VlQ29tbWVudDgyMzE2NzA4Mg== johnomotani 3958036 2021-04-20T10:31:19Z 2021-04-20T10:31:19Z CONTRIBUTOR

Thanks @mathause! I've used your suggestion to turn my MVCE into a test which fails on master, but passes with the fix in this PR.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Use broadcast_like for 2d plot coordinates 847199398
819120258 https://github.com/pydata/xarray/pull/5153#issuecomment-819120258 https://api.github.com/repos/pydata/xarray/issues/5153 MDEyOklzc3VlQ29tbWVudDgxOTEyMDI1OA== johnomotani 3958036 2021-04-13T23:43:12Z 2021-04-13T23:43:12Z CONTRIBUTOR

There's a cumtrapz in https://github.com/fujiisoup/xr-scipy/blob/master/xrscipy/integrate.py. Does that help?

I did a quick check and it seems like applying scipy.integrate.cumtrapz to a dask array returns a numpy array, whereas the duck_array_ops.cumulative_trapezoid that I copied from duck_array_ops.trapz does return a dask array (so I assume it must not force a compute?). Maybe @fujiisoup has thoughts though?

BTW the test failures are in test_cftimeindex.py so I assume they're unrelated to these changes.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  cumulative_integrate() method 857378504
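For reference, the cumulative trapezoidal rule that `cumtrapz`/`cumulative_trapezoid` implement can be sketched in plain numpy (an illustration of the rule only, not xarray's actual `duck_array_ops` code):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 5)
y = x ** 2

# Cumulative trapezoidal integral of y(x): each step adds the area of one
# trapezoid; the result has the same length as the input, starting at 0.
dx = np.diff(x)
cumulative = np.concatenate([[0.0], np.cumsum(dx * (y[:-1] + y[1:]) / 2.0)])

assert cumulative.shape == y.shape
```

The final entry approximates the full integral of x² over [0, 1], which the trapezoidal rule slightly overestimates (0.34375 vs the exact 1/3 for 5 points).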
812012953 https://github.com/pydata/xarray/pull/5101#issuecomment-812012953 https://api.github.com/repos/pydata/xarray/issues/5101 MDEyOklzc3VlQ29tbWVudDgxMjAxMjk1Mw== johnomotani 3958036 2021-04-01T16:10:01Z 2021-04-01T16:10:01Z CONTRIBUTOR

Tests pass! Think this is ready for review now @pydata/xarray

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Surface plots 847489988
811935990 https://github.com/pydata/xarray/pull/5101#issuecomment-811935990 https://api.github.com/repos/pydata/xarray/issues/5101 MDEyOklzc3VlQ29tbWVudDgxMTkzNTk5MA== johnomotani 3958036 2021-04-01T14:11:35Z 2021-04-01T14:11:35Z CONTRIBUTOR

Seems like the way I've implemented surface plots requires a slightly newer version than the oldest supported matplotlib to use the "3d" projection. matplotlib-3.2.0 basically works, but some of the unit tests require matplotlib-3.3.0 to pass. I'll try to add a check and skip some tests where necessary.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Surface plots 847489988
811519561 https://github.com/pydata/xarray/pull/5101#issuecomment-811519561 https://api.github.com/repos/pydata/xarray/issues/5101 MDEyOklzc3VlQ29tbWVudDgxMTUxOTU2MQ== johnomotani 3958036 2021-03-31T22:59:59Z 2021-03-31T22:59:59Z CONTRIBUTOR

Note, this PR includes the bugfix in #5099, because it needed to build on that change. #5099 is smaller, so I guess it will naturally be merged first.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Surface plots 847489988
811261936 https://github.com/pydata/xarray/issues/5084#issuecomment-811261936 https://api.github.com/repos/pydata/xarray/issues/5084 MDEyOklzc3VlQ29tbWVudDgxMTI2MTkzNg== johnomotani 3958036 2021-03-31T17:12:23Z 2021-03-31T17:12:23Z CONTRIBUTOR

I started working on this, and it seems to be almost trivial, except that plot_surface() requires its x and y arguments to be 2d arrays. My guess is that I want to special-case the broadcasting in newplotfunc() here https://github.com/pydata/xarray/blob/ddc352faa6de91f266a1749773d08ae8d6f09683/xarray/plot/plot.py#L678-L684 but it's not immediately obvious to me how this bit works:
  • Why is it only ever xval that needs broadcasting? Couldn't the function have been called with a 2d x and a 1d y?
  • Why is it OK to check if xval.shape[0] == yval.shape[0]? I'm probably missing something, but I don't see how this check can work without referring to the actual dimensions of x and y - what if the single dimension of x was actually the second dimension of y, but happened to have the same size as the first dimension of y?

My first thought was that I just want to do x = x.broadcast_like(z) and y = y.broadcast_like(z). Actually, I think this is a bug. I'll make a new issue...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  plot_surface() wrapper 842583817
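The broadcasting discussed above - expanding 1d x and y coordinates to match a 2d z before plotting - looks like this in plain numpy (an illustration of the idea, not xarray's actual newplotfunc code):

```python
import numpy as np

x = np.arange(3.0)      # 1d coordinate along the second axis of z
y = np.arange(4.0)      # 1d coordinate along the first axis of z
z = np.zeros((4, 3))    # 2d data, shape (len(y), len(x))

# Expand both coordinates to z's shape - the numpy equivalent of the
# x.broadcast_like(z) / y.broadcast_like(z) suggestion above.
X = np.broadcast_to(x[np.newaxis, :], z.shape)
Y = np.broadcast_to(y[:, np.newaxis], z.shape)

assert X.shape == z.shape and Y.shape == z.shape
```

Unlike the shape[0] check questioned above, this expansion is driven by which axis each coordinate belongs to, not by coincidental size matches.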
809264761 https://github.com/pydata/xarray/pull/5003#issuecomment-809264761 https://api.github.com/repos/pydata/xarray/issues/5003 MDEyOklzc3VlQ29tbWVudDgwOTI2NDc2MQ== johnomotani 3958036 2021-03-29T10:22:57Z 2021-03-29T10:22:57Z CONTRIBUTOR

Thanks @mathause - I've merged master now.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add Dataset.plot.streamplot() method 823290488
735419937 https://github.com/pydata/xarray/issues/3002#issuecomment-735419937 https://api.github.com/repos/pydata/xarray/issues/3002 MDEyOklzc3VlQ29tbWVudDczNTQxOTkzNw== johnomotani 3958036 2020-11-29T16:28:32Z 2020-11-29T16:28:32Z CONTRIBUTOR

I had the same problem. I found that da.plot.pcolormesh('lon', 'lat', shading='gouraud', infer_intervals=False) does work. Would be nice if shading='gouraud' worked by default though.

I guess the problem is that 'gouraud' needs the coordinate positions at the grid points, and I assume infer_intervals=True is creating coordinate values at the grid-cell corners (so an n x m DataArray is plotted using coordinates with length (n+1) and (m+1)). I haven't looked into what infer_intervals actually does though. Would it be safe to just turn it off if shading='gouraud' is used?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  plot.pcolormesh fails with shading='gouraud' 453126577
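The guess above about infer_intervals - that it builds n+1 cell-edge coordinates from n grid-point coordinates - can be illustrated like this (a sketch of the idea only; xarray's actual _infer_interval_breaks may differ in details):

```python
import numpy as np

centers = np.array([0.5, 1.5, 2.5])   # n grid-point coordinates

# Midpoints between neighbouring centres, extrapolated by half a cell at
# each end: n values in, n + 1 cell edges out.
mids = (centers[:-1] + centers[1:]) / 2.0
first = centers[0] - (mids[0] - centers[0])
last = centers[-1] + (centers[-1] - mids[-1])
edges = np.concatenate([[first], mids, [last]])

assert len(edges) == len(centers) + 1
```

This is why gouraud shading fails: it wants coordinates at the n grid points, not at the n+1 edges.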
734487380 https://github.com/pydata/xarray/issues/2779#issuecomment-734487380 https://api.github.com/repos/pydata/xarray/issues/2779 MDEyOklzc3VlQ29tbWVudDczNDQ4NzM4MA== johnomotani 3958036 2020-11-26T21:44:09Z 2020-11-26T21:44:09Z CONTRIBUTOR

:+1: I ran into this issue today, and found the error message strange and confusing.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Bring xr.align objects specification in line with other top-level functions? 412080974
691109135 https://github.com/pydata/xarray/issues/4417#issuecomment-691109135 https://api.github.com/repos/pydata/xarray/issues/4417 MDEyOklzc3VlQ29tbWVudDY5MTEwOTEzNQ== johnomotani 3958036 2020-09-11T13:54:01Z 2020-09-11T13:54:01Z CONTRIBUTOR

I think I was trying to fix this in #4108

:+1: On a quick glance it looks to me like #4108 would fix my issue here.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Inconsistency in whether index is created with new dimension coordinate? 698577111
691045790 https://github.com/pydata/xarray/issues/4417#issuecomment-691045790 https://api.github.com/repos/pydata/xarray/issues/4417 MDEyOklzc3VlQ29tbWVudDY5MTA0NTc5MA== johnomotani 3958036 2020-09-11T11:45:40Z 2020-09-11T11:45:40Z CONTRIBUTOR

I haven't managed to get set_index to line up the results. This:

```
import numpy as np
import xarray as xr

ds = xr.Dataset()
ds['a'] = ('x', np.linspace(0, 1))
ds['b'] = ('x', np.linspace(3, 4))
ds = ds.rename(b='x')
ds = ds.set_coords('x')
ds = ds.set_index(x='x')

print(ds)
print('indexes', ds.indexes)
```

produces a Dataset that now has an index `x`, but no `x` coordinate. The 'assignment of 1D variable' version produces both a coordinate and an index.

One suggested solution in #2461 was to use swap_dims(), and the following does produce the desired result:

```
import numpy as np
import xarray as xr

ds = xr.Dataset()
ds['a'] = ('x', np.linspace(0, 1))
ds['b'] = ('x', np.linspace(3, 4))
ds = ds.swap_dims({'x': 'b'})
ds = ds.rename(b='x')

print(ds)
print('indexes', ds.indexes)
```

But (1) having to use `swap_dims()` to create a coordinate seems bizarre, and (2) I think it's a bit ugly to get rid of the `x` dimension with `swap_dims()`, and then have to rename the new dimension back to `x`, when what I wanted was to add a coordinate to `x`.

I found the behaviour confusing because I wasn't aware of the index variables at all... I create a coordinate with set_coords(); then do a bunch of manipulations involving slicing pieces out of the Dataset and re-combining them with concat (at least I think this is the relevant part of my code...); finally test the result by taking slices again (with isel()) of the result and comparing them to slices of the original Dataset with xarray.testing.assert_identical(), which failed because of the missing indexes. I guess somewhere in the manipulations I did, some operation created the coordinate in a new Dataset object by assignment, at which point it generated an index for the coordinate too.

I may well be missing other context that makes this an undesirable thing to do, but for my use-case at least, I think it would make more sense if set_coords() created an index if the coordinate is a dimension coordinate (or whatever the actual criterion is for assignment of a 1d variable to create an index).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Inconsistency in whether index is created with new dimension coordinate? 698577111
690538286 https://github.com/pydata/xarray/issues/4415#issuecomment-690538286 https://api.github.com/repos/pydata/xarray/issues/4415 MDEyOklzc3VlQ29tbWVudDY5MDUzODI4Ng== johnomotani 3958036 2020-09-10T17:29:28Z 2020-09-10T17:29:28Z CONTRIBUTOR

Apparently it's possible to work around this by using merge() instead of assigning. The following produces the expected output:

```
import numpy as np
import xarray as xr

ds = xr.Dataset()
ds["a"] = xr.DataArray(np.linspace(0., 1.), dims="x")

ds["x"] = xr.DataArray(np.linspace(0., 2., len(ds["x"])), dims="x")
ds["x"].attrs["foo"] = "bar"

print(ds["x"])

# ds["b"] = xr.DataArray(np.linspace(0., 1.), dims="x")  # assigning drops the attrs
ds = ds.merge(xr.DataArray(np.linspace(0., 1.), dims="x", name="b"))

print(ds["x"])
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Adding new DataArray to a Dataset removes attrs of existing coord 698263021
690535154 https://github.com/pydata/xarray/issues/4415#issuecomment-690535154 https://api.github.com/repos/pydata/xarray/issues/4415 MDEyOklzc3VlQ29tbWVudDY5MDUzNTE1NA== johnomotani 3958036 2020-09-10T17:24:06Z 2020-09-10T17:24:06Z CONTRIBUTOR

Unfortunately I don't have time at the moment to dig into why this is happening, but it was confusing because I found a bug where attrs had disappeared, and eventually chased it down to where some new variables were added to my Dataset, which I would not expect to change existing coordinates!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Adding new DataArray to a Dataset removes attrs of existing coord 698263021
651340318 https://github.com/pydata/xarray/pull/3936#issuecomment-651340318 https://api.github.com/repos/pydata/xarray/issues/3936 MDEyOklzc3VlQ29tbWVudDY1MTM0MDMxOA== johnomotani 3958036 2020-06-29T20:22:49Z 2020-06-29T20:22:49Z CONTRIBUTOR

Thanks @dcherian :smile: You're very welcome!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support multiple dimensions in DataArray.argmin() and DataArray.argmax() methods 594594646
650303121 https://github.com/pydata/xarray/pull/3936#issuecomment-650303121 https://api.github.com/repos/pydata/xarray/issues/3936 MDEyOklzc3VlQ29tbWVudDY1MDMwMzEyMQ== johnomotani 3958036 2020-06-26T17:29:15Z 2020-06-26T17:29:15Z CONTRIBUTOR

Thanks @keewis! I think we've addressed all the review comments now.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support multiple dimensions in DataArray.argmin() and DataArray.argmax() methods 594594646
650300879 https://github.com/pydata/xarray/pull/3936#issuecomment-650300879 https://api.github.com/repos/pydata/xarray/issues/3936 MDEyOklzc3VlQ29tbWVudDY1MDMwMDg3OQ== johnomotani 3958036 2020-06-26T17:24:55Z 2020-06-26T17:24:55Z CONTRIBUTOR

calling on a array with missing values raises ValueError: All-NaN slice encountered (try ds = xr.tutorial.open_dataset("rasm"); ds.argmin(dim="y"))

Some missing values should be OK. In your example though, for example:

```
In [27]: ds.Tair.isel(time=0, x=0).values
Out[27]: array([nan, nan, nan, ..., nan, nan, nan])
```

(all 205 entries are nan). So there are no values in the array slice that argmin should be applied over, which is an error.

I guess we could add some special handling for this (not sure what though, because we can't set a variable with type int to nan or None), but I think that is a separate issue that would need a new PR.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support multiple dimensions in DataArray.argmin() and DataArray.argmax() methods 594594646
650272329 https://github.com/pydata/xarray/pull/3936#issuecomment-650272329 https://api.github.com/repos/pydata/xarray/issues/3936 MDEyOklzc3VlQ29tbWVudDY1MDI3MjMyOQ== johnomotani 3958036 2020-06-26T16:29:35Z 2020-06-26T16:29:35Z CONTRIBUTOR

ds.argmin(dim=...) returns a single index (the same result as ds.argmin()) but ds.Tair.argmin(dim=...) returns something different from ds.Tair.argmin(). Is that intentional?

I think this is a bug. dim=... to Dataset.argmin should be an error because dim=... returns a dict of results, and it's not clear how to do that consistently for a Dataset that might have several members with different combinations of dimensions.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support multiple dimensions in DataArray.argmin() and DataArray.argmax() methods 594594646
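The core of a multi-dimension argmin - reduce over the flattened dimensions, then unravel the flat index back into one index per dimension - can be sketched in plain numpy (an illustration only, not xarray's _unravel_argminmax itself):

```python
import numpy as np

a = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# argmin over both dimensions at once: flatten, take the flat argmin,
# then convert back to one index per original axis.
flat_index = a.reshape(-1).argmin()
i, j = np.unravel_index(flat_index, a.shape)

assert a[i, j] == a.min()
```

Returning the per-dimension indices as a dict keyed by dimension name is what makes the DataArray result directly usable with isel(); for a Dataset, as noted above, variables with different dimension sets make that representation ambiguous.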
649059087 https://github.com/pydata/xarray/pull/3936#issuecomment-649059087 https://api.github.com/repos/pydata/xarray/issues/3936 MDEyOklzc3VlQ29tbWVudDY0OTA1OTA4Nw== johnomotani 3958036 2020-06-24T20:39:31Z 2020-06-24T20:39:31Z CONTRIBUTOR

Merge conflicts fixed, this PR should be ready to review/merge.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support multiple dimensions in DataArray.argmin() and DataArray.argmax() methods 594594646
621735196 https://github.com/pydata/xarray/issues/4009#issuecomment-621735196 https://api.github.com/repos/pydata/xarray/issues/4009 MDEyOklzc3VlQ29tbWVudDYyMTczNTE5Ng== johnomotani 3958036 2020-04-30T09:58:50Z 2020-04-30T09:58:50Z CONTRIBUTOR

Yes, should be simple to correct. I have a unit test now that reproduces the error.

@pydata/xarray a question: for combine_by_coords should the default be combine_attrs="no_conflicts" (current code) or combine_attrs="drop" (current docs)?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Incoherencies between docs in open_mfdataset and combine_by_coords and its behaviour. 607616849
617375991 https://github.com/pydata/xarray/pull/3936#issuecomment-617375991 https://api.github.com/repos/pydata/xarray/issues/3936 MDEyOklzc3VlQ29tbWVudDYxNzM3NTk5MQ== johnomotani 3958036 2020-04-21T19:46:09Z 2020-04-21T19:46:09Z CONTRIBUTOR

I guess this would happen if somebody calls np.argmin(xarray_object)? Does this happen from inside xarray somewhere, or in external code?

If it's the former, then we should just fix xarray not to do this. If it's only in external code, I would still consider breaking this behavior.

test_units.py has a couple of tests that apply numpy functions to xr.Variables, xr.DataArrays and xr.Datasets. Those tests break for argmin and argmax without the out argument. Should I just remove the argmin and argmax function calls? The tests would still call them as methods.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support multiple dimensions in DataArray.argmin() and DataArray.argmax() methods 594594646
616100298 https://github.com/pydata/xarray/pull/3936#issuecomment-616100298 https://api.github.com/repos/pydata/xarray/issues/3936 MDEyOklzc3VlQ29tbWVudDYxNjEwMDI5OA== johnomotani 3958036 2020-04-19T10:42:06Z 2020-04-19T10:42:06Z CONTRIBUTOR

@pydata/xarray - I think this PR is ready to be merged, are there any changes I should make? Thanks!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support multiple dimensions in DataArray.argmin() and DataArray.argmax() methods 594594646
612143069 https://github.com/pydata/xarray/pull/3936#issuecomment-612143069 https://api.github.com/repos/pydata/xarray/issues/3936 MDEyOklzc3VlQ29tbWVudDYxMjE0MzA2OQ== johnomotani 3958036 2020-04-10T17:51:13Z 2020-04-10T17:51:13Z CONTRIBUTOR

I eventually found that the cause of the errors I was getting was that the argmin and argmax methods did not have an out argument. In order for the methods to be wrapped by numpy (and I guess dask is the same), the call signature of np.argmin() has to be supported, which means axis and out arguments are needed.

@shoyer I've kept the use of injected functions, because removing the injected argmin and argmax completely meant re-implementing the handling of skipna, etc. only within _unravel_argminmax, which seemed less nice to me. If/when method injection is refactored, it would be nice to include a mechanism to override the core operation with an explicitly implemented, extended version like argmin/argmax.

I think this PR is ready for review now :smile:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support multiple dimensions in DataArray.argmin() and DataArray.argmax() methods 594594646
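The signature requirement described above - that np.argmin(obj) dispatches to obj.argmin(axis=..., out=...), so the method must accept both arguments - can be demonstrated with a toy wrapper (a hypothetical class, for illustration only):

```python
import numpy as np

class Wrapped:
    """Toy duck array: np.argmin(obj) forwards to obj.argmin(axis=..., out=...)."""

    def __init__(self, data):
        self.data = np.asarray(data)

    def argmin(self, axis=None, out=None):
        # Without accepting axis and out here, np.argmin(Wrapped(...)) would
        # raise a TypeError when numpy forwards those keyword arguments.
        return self.data.argmin(axis=axis, out=out)

assert np.argmin(Wrapped([3, 0, 2])) == 1
```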
610418244 https://github.com/pydata/xarray/issues/3949#issuecomment-610418244 https://api.github.com/repos/pydata/xarray/issues/3949 MDEyOklzc3VlQ29tbWVudDYxMDQxODI0NA== johnomotani 3958036 2020-04-07T14:26:02Z 2020-04-07T14:26:02Z CONTRIBUTOR

I think indexed_array = val_arr.isel(z=z_indices) should work, if I'm understanding what you want. I haven't tested though...

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Index 3D array with index of last axis stored in 2D array 595900209
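A plain-numpy analogue of the isel suggestion above (val_arr.isel(z=z_indices), which does vectorized indexing) is np.take_along_axis, which picks one z per (x, y) position. The array shapes here are hypothetical, chosen to match the question:

```python
import numpy as np

val_arr = np.arange(24).reshape(2, 3, 4)       # dims (x, y, z)
z_indices = np.array([[0, 1, 2],
                      [3, 0, 1]])              # shape (x, y): one z index per point

# Select val_arr[x, y, z_indices[x, y]] for every (x, y).
picked = np.take_along_axis(val_arr, z_indices[..., np.newaxis], axis=2)[..., 0]

assert picked.shape == (2, 3)
```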
610414198 https://github.com/pydata/xarray/issues/3948#issuecomment-610414198 https://api.github.com/repos/pydata/xarray/issues/3948 MDEyOklzc3VlQ29tbWVudDYxMDQxNDE5OA== johnomotani 3958036 2020-04-07T14:18:36Z 2020-04-07T14:18:36Z CONTRIBUTOR

Sorry for the noise, but at least I will be glad of something to search for next time I forget how to do something like this!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Releasing memory? 595882590
610413864 https://github.com/pydata/xarray/issues/3948#issuecomment-610413864 https://api.github.com/repos/pydata/xarray/issues/3948 MDEyOklzc3VlQ29tbWVudDYxMDQxMzg2NA== johnomotani 3958036 2020-04-07T14:18:01Z 2020-04-07T14:18:01Z CONTRIBUTOR

Thanks @lanougue, @dcherian I think I see the simple answer now: use a deepcopy first, to leave the Dataset as is, then del the DataArray when finished with it, e.g.

```
da1 = ds["variable1"].copy(deep=True)
# ... do stuff with da1 ...
del da1

da2 = ds["variable2"].copy(deep=True)
# ... do stuff with da2 ...
del da2

# ... etc.
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Releasing memory? 595882590
610404757 https://github.com/pydata/xarray/issues/3948#issuecomment-610404757 https://api.github.com/repos/pydata/xarray/issues/3948 MDEyOklzc3VlQ29tbWVudDYxMDQwNDc1Nw== johnomotani 3958036 2020-04-07T14:01:28Z 2020-04-07T14:01:28Z CONTRIBUTOR

Thanks @lanougue, but what if I might want to re-load da1 again later, ie. if da came from some Dataset da = ds["variable"], I want to leave ds in the same state as just after I did ds = open_dataset(...)? Wouldn't del da remove "variable" from ds? Or maybe not free any memory if ds still has a reference to the DataArray?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Releasing memory? 595882590
610403146 https://github.com/pydata/xarray/issues/3948#issuecomment-610403146 https://api.github.com/repos/pydata/xarray/issues/3948 MDEyOklzc3VlQ29tbWVudDYxMDQwMzE0Ng== johnomotani 3958036 2020-04-07T13:58:44Z 2020-04-07T13:58:44Z CONTRIBUTOR

OK, I think I've answered my own question. Looks like dask can handle this workflow already, something like:

```
# do_some_work does not call .load() or .compute() anywhere

result1a, result1b, result1c = dask.compute(do_some_work(ds["variable1"]))

result2a, result2b, result2c = dask.compute(do_some_work(ds["variable2"]))

# ... etc.
```

I do still wonder if there might be any case where .release() might be useful...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Releasing memory? 595882590
610288219 https://github.com/pydata/xarray/pull/3936#issuecomment-610288219 https://api.github.com/repos/pydata/xarray/issues/3936 MDEyOklzc3VlQ29tbWVudDYxMDI4ODIxOQ== johnomotani 3958036 2020-04-07T09:45:24Z 2020-04-07T09:45:24Z CONTRIBUTOR

These test failures seem to have uncovered a larger issue: overriding a method injected by ops.inject_all_ops_and_reduce_methods(DataArray, priority=60) is tricky. I think there may be a way around it by mangling the name of the injected operation if that method is already defined on the class; I tried this and it nearly works, but I think I need to implement an override for argmin/argmax on Variable, because DataArray.__array_wrap__ uses the Variable version of the method... Work in progress!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support multiple dimensions in DataArray.argmin() and DataArray.argmax() methods 594594646
610081717 https://github.com/pydata/xarray/pull/3936#issuecomment-610081717 https://api.github.com/repos/pydata/xarray/issues/3936 MDEyOklzc3VlQ29tbWVudDYxMDA4MTcxNw== johnomotani 3958036 2020-04-06T23:06:11Z 2020-04-06T23:06:11Z CONTRIBUTOR

The rename to _argmin_base is also causing test_aggregation (which is a test of units with pint) to fail, because the numpy function and the Variable method do not have the same name. Any suggestions how to fix this? It seems tricky because I can't pass either argmin (no Variable method) or _argmin_base (no numpy method) to the test.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support multiple dimensions in DataArray.argmin() and DataArray.argmax() methods 594594646
610079091 https://github.com/pydata/xarray/pull/3936#issuecomment-610079091 https://api.github.com/repos/pydata/xarray/issues/3936 MDEyOklzc3VlQ29tbWVudDYxMDA3OTA5MQ== johnomotani 3958036 2020-04-06T22:57:50Z 2020-04-06T22:57:50Z CONTRIBUTOR

I've updated so the new functionality is provided by argmin() and argmax(), when they are passed multiple dimensions.
  • I've added Dataset.argmin() and Dataset.argmax(), which only work for a single dimension, but give informative exceptions for unsupported cases. If anyone has some behaviour that they would like for Dataset.argmin() with multiple dimensions, I'm happy to implement it, but returning something like a dict of Datasets didn't seem nice, and as far as I understand a Dataset cannot contain something like a dict of DataArrays.
  • I renamed argmin and argmax in ops.py and duck_array_ops.py to _argmin_base and _argmax_base so that I could still use them inside the argmin and argmax methods. Any alternatives, or better suggestions for naming them?
  • Variable no longer has argmin or argmax methods. Is that a problem? I just updated test_variable.py and test_dask.py to use _argmin_base and _argmax_base.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support multiple dimensions in DataArray.argmin() and DataArray.argmax() methods 594594646
609477080 https://github.com/pydata/xarray/pull/3936#issuecomment-609477080 https://api.github.com/repos/pydata/xarray/issues/3936 MDEyOklzc3VlQ29tbWVudDYwOTQ3NzA4MA== johnomotani 3958036 2020-04-05T20:26:12Z 2020-04-05T20:50:03Z CONTRIBUTOR

@shoyer I think your last option sounds good. Questions:
  • What should da.argmin() with no arguments do?
    • Currently returns the flattened index of the global minimum.
    • I think returning a dict of indices would be much more useful, but it does change existing behaviour (more useful because you can then do da.isel(da.argmin())).
    • Could anyway do da.argmin(da.dims) to get the dict result, but that's a bit ugly.
    • Could have something like da.argmin(...) - maybe as a temporary workaround while we deprecate the current behaviour of da.argmin()?
  • If we overload argmin, what's the cleanest way to get the existing argmin() method within the new one? Would something like from .duck_array_ops import argmin as argmin_1d work?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support multiple dimensions in DataArray.argmin() and DataArray.argmax() methods 594594646
609478091 https://github.com/pydata/xarray/pull/3936#issuecomment-609478091 https://api.github.com/repos/pydata/xarray/issues/3936 MDEyOklzc3VlQ29tbWVudDYwOTQ3ODA5MQ== johnomotani 3958036 2020-04-05T20:33:04Z 2020-04-05T20:33:04Z CONTRIBUTOR

Maybe worth noting, at the moment if you try to call `argmin(dim=("x", "y"))` with multiple dimensions, there's a not-very-helpful exception
```
>>> array.argmin(dim=("x", "y"))
Traceback (most recent call last):
  File "/home/john/anaconda3/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 61, in _wrapfunc
    return bound(*args, **kwds)
TypeError: 'tuple' object cannot be interpreted as an integer

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/<...>/xarray/xarray/core/common.py", line 46, in wrapped_func
    return self.reduce(func, dim, axis, skipna=skipna, **kwargs)
  File "/<...>/xarray/xarray/core/dataarray.py", line 2288, in reduce
    var = self.variable.reduce(func, dim, axis, keep_attrs, keepdims, **kwargs)
  File "/<...>/xarray/xarray/core/variable.py", line 1579, in reduce
    data = func(input_data, axis=axis, **kwargs)
  File "/<...>/xarray/xarray/core/duck_array_ops.py", line 304, in f
    return func(values, axis=axis, **kwargs)
  File "/<...>/xarray/xarray/core/duck_array_ops.py", line 47, in f
    return wrapped(*args, **kwargs)
  File "<__array_function__ internals>", line 6, in argmin
  File "/<...>/anaconda3/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 1267, in argmin
    return _wrapfunc(a, 'argmin', axis=axis, out=out)
  File "/<...>/anaconda3/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 70, in _wrapfunc
    return _wrapit(obj, method, *args, **kwds)
  File "/<...>/anaconda3/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 47, in _wrapit
    result = getattr(asarray(obj), method)(*args, **kwds)
TypeError: 'tuple' object cannot be interpreted as an integer
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support multiple dimensions in DataArray.argmin() and DataArray.argmax() methods 594594646
609084241 https://github.com/pydata/xarray/pull/1469#issuecomment-609084241 https://api.github.com/repos/pydata/xarray/issues/1469 MDEyOklzc3VlQ29tbWVudDYwOTA4NDI0MQ== johnomotani 3958036 2020-04-04T20:23:17Z 2020-04-04T20:23:17Z CONTRIBUTOR

Any plans to finish/merge this PR? indexes_max and indexes_min would be very nice to have! (See also #3160.) Although I guess the idxmin/idxmax are superseded by #3871?

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Argmin indexes 239918314
608324523 https://github.com/pydata/xarray/pull/3923#issuecomment-608324523 https://api.github.com/repos/pydata/xarray/issues/3923 MDEyOklzc3VlQ29tbWVudDYwODMyNDUyMw== johnomotani 3958036 2020-04-03T09:10:41Z 2020-04-03T09:10:41Z CONTRIBUTOR

The Linux-py38-upstream-dev test is failing in cftime tests at various places similar to this
```
    with create_tmp_file() as tmp_file:
        original.to_netcdf(tmp_file)
        with pytest.warns(None) as record:
            with open_dataset(tmp_file, use_cftime=False) as ds:
                assert_identical(expected_x, ds.x)
                assert_identical(expected_time, ds.time)

>           assert not record
E           assert not WarningsChecker(record=True)
```
Don't think it's related to this PR...
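The `pytest.warns(None)` pattern in the failing test records warnings and then asserts that none fired; the same check can be sketched with the standard library alone (a rough stand-in for the pattern, not the xarray test itself):

```python
import warnings

def run_and_record_warnings(func):
    # Record every warning raised while func() runs, roughly what
    # "with pytest.warns(None) as record" does in the failing test.
    with warnings.catch_warnings(record=True) as record:
        warnings.simplefilter("always")
        func()
    return record

assert not run_and_record_warnings(lambda: None)            # nothing recorded
assert run_and_record_warnings(lambda: warnings.warn("x"))  # one warning recorded
```

So a non-empty recorder means some code path emitted a warning, which is why `assert not record` fails here even though the test's assertions on the data pass.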

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add missing_dims argument allowing isel() to ignore missing dimensions 591471233
603509086 https://github.com/pydata/xarray/pull/3887#issuecomment-603509086 https://api.github.com/repos/pydata/xarray/issues/3887 MDEyOklzc3VlQ29tbWVudDYwMzUwOTA4Ng== johnomotani 3958036 2020-03-24T21:16:08Z 2020-03-24T21:16:08Z CONTRIBUTOR

Doesn't look like it, but it seems like a thing that could be useful, e.g. for handling combining attrs dicts, so why delete it?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Rename ordered_dict_intersection -> compat_dict_intersection 587280307
603134393 https://github.com/pydata/xarray/pull/3877#issuecomment-603134393 https://api.github.com/repos/pydata/xarray/issues/3877 MDEyOklzc3VlQ29tbWVudDYwMzEzNDM5Mw== johnomotani 3958036 2020-03-24T09:42:11Z 2020-03-24T09:42:11Z CONTRIBUTOR

For specifying which object, one possibility would be to pass an int to combine_attrs in merge(), concat() or combine_by_coords(), or a tuple of int to combine_nested, giving the index of the object to use attributes from. This feature would need new tests writing though, so I'd suggest implementing it in a new PR.
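The index-based selection could be sketched like this (purely illustrative; `attrs_from` and the list-of-dicts layout are made up, not part of xarray's API):

```python
def attrs_from(objs, index):
    # Hypothetical helper: pick the attrs of one of the combined objects,
    # addressed by an int (merge/concat/combine_by_coords) or a tuple of
    # ints (combine_nested's nested list-of-lists). Objects here are plain
    # dicts standing in for Datasets.
    if isinstance(index, int):
        index = (index,)
    for i in index:
        objs = objs[i]
    return dict(objs["attrs"])

nested = [[{"attrs": {"source": "run0"}}, {"attrs": {"source": "run1"}}]]
attrs_from(nested, (0, 1))  # -> {'source': 'run1'}
```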

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Control attrs of result in `merge()`, `concat()`, `combine_by_coords()` and `combine_nested()` 585868107
602806659 https://github.com/pydata/xarray/pull/3877#issuecomment-602806659 https://api.github.com/repos/pydata/xarray/issues/3877 MDEyOklzc3VlQ29tbWVudDYwMjgwNjY1OQ== johnomotani 3958036 2020-03-23T19:23:47Z 2020-03-23T19:23:47Z CONTRIBUTOR

> Should there perhaps be another option to specify which object to get the attrs from? I'm just thinking by analogy to how open_mfdataset now lets you specify which file you want the attrs from.

That would actually be nice to have in `concat()` for a use-case I have. It's not immediately obvious to me how to implement it though. For `merge()` or `concat()` you could give an integer index. I think `open_mfdataset` used the file-name (?), but there's no equivalent for `combine_by_coords` or `combine_nested` is there?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Control attrs of result in `merge()`, `concat()`, `combine_by_coords()` and `combine_nested()` 585868107
582538190 https://github.com/pydata/xarray/pull/3601#issuecomment-582538190 https://api.github.com/repos/pydata/xarray/issues/3601 MDEyOklzc3VlQ29tbWVudDU4MjUzODE5MA== johnomotani 3958036 2020-02-05T18:09:24Z 2020-02-05T18:09:24Z CONTRIBUTOR

I've added an extra test that checks the right thing happens when we do not change the bad, under or over colors. That test needs to set vmin and vmax to get sensible defaults for the under and over colors: without vmin or vmax, mpl.colors.from_levels_and_colors() is called with extend = 'neither'; then it sets _rgba_under and _rgba_over in its result to (0.0, 0.0, 0.0, 0.0) (but these values can never be used, because with no vmin or vmax there can be no data outside of the plotted range).

I think this is ready now. The unit test failures on my local computer appear to be unrelated to this PR (see earlier references to #3673 by @keewis and #3747 by @mathause).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix contourf set under 533996523
577418853 https://github.com/pydata/xarray/pull/3601#issuecomment-577418853 https://api.github.com/repos/pydata/xarray/issues/3601 MDEyOklzc3VlQ29tbWVudDU3NzQxODg1Mw== johnomotani 3958036 2020-01-22T22:37:01Z 2020-01-22T22:37:01Z CONTRIBUTOR

Not sure why some tests are failing... The ones I can see are to do with CFTime. I can't see a relation to the changes in this PR??

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix contourf set under 533996523
562578478 https://github.com/pydata/xarray/issues/3590#issuecomment-562578478 https://api.github.com/repos/pydata/xarray/issues/3590 MDEyOklzc3VlQ29tbWVudDU2MjU3ODQ3OA== johnomotani 3958036 2019-12-06T13:49:36Z 2019-12-06T13:49:36Z CONTRIBUTOR

Looks like that was it, thanks @dcherian!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  cmap.set_under() does not work as expected 532165408
561358453 https://github.com/pydata/xarray/issues/3590#issuecomment-561358453 https://api.github.com/repos/pydata/xarray/issues/3590 MDEyOklzc3VlQ29tbWVudDU2MTM1ODQ1Mw== johnomotani 3958036 2019-12-03T21:10:58Z 2019-12-03T21:10:58Z CONTRIBUTOR

Might be something to do with https://github.com/pydata/xarray/blob/ed05f9862622b00f40f7b9b99ccdb0ab3766ff0f/xarray/plot/dataset_plot.py#L137 ?

The docstring for matplotlib.colors.Normalize says

> If clip is True, masked values are set to 1; otherwise they remain masked. Clipping silently defeats the purpose of setting the over, under, and masked colors in the colormap, so it is likely to lead to surprises; therefore the default is clip = False.
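The effect described in that docstring is easy to see directly (a minimal check of matplotlib's `Normalize` behaviour, assuming matplotlib is installed):

```python
from matplotlib.colors import Normalize

# With clip=True an out-of-range value is silently clipped into [0, 1],
# so the colormap can never see it as "under"; with clip=False it stays
# below 0 and cmap.set_under() can take effect.
assert float(Normalize(vmin=0, vmax=1, clip=True)(-0.5)) == 0.0
assert float(Normalize(vmin=0, vmax=1, clip=False)(-0.5)) == -0.5
```

This is why passing `clip=True` (as in the linked `dataset_plot.py` line) would silently defeat `set_under()`.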

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  cmap.set_under() does not work as expected 532165408

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);