issue_comments

60 rows where user = 6200806 sorted by updated_at descending


issue (>30 values; top 30 shown)

  • GroupBy of stacked dim with strings renames underlying dims 5
  • CFTimeIndex 4
  • Towards a (temporary?) workaround for datetime issues at the xarray-level 3
  • ENH: three argument version of where 3
  • IndexError when calling load() on netCDF file accessed via opendap 3
  • ValueError: Buffer has wrong number of dimensions (expected 1, got 2) 2
  • Accept rename to same name 2
  • Allow empty result of numerical operations between DataArrays 2
  • Expose testing methods 2
  • Time limitation (between years 1678 and 2262) restrictive to climate community 2
  • data_vars option added to open_mfdataset 2
  • CI offline? 2
  • Fix for stack+groupby+apply w/ non-increasing coord 2
  • `xray.open_mfdataset` concatenates also variables without time dimension 1
  • support for units 1
  • Unexpected behavior by diff when applied to coordinate DataArray 1
  • Problems when array of coordinate bounds is 2D 1
  • Complete renaming xray -> xarray 1
  • Behavior of ds.rename when old and new name are the same 1
  • Replacing coord with coord of same name results in NaNs 1
  • add geocolormesh 1
  • Pytest assert functions 1
  • Why ufuncs module not included in top-level namespace 1
  • Clarifying sel/drop behavior for dims with vs. without coords 1
  • BUG: Resample on PeriodIndex not working? 1
  • Current doc builds are broken 1
  • Add trapz to DataArray for mathematical integration 1
  • xarray vs Xarray vs XArray 1
  • Shape preserving `diff` via new keywords 1
  • Move/read coordinate as data variable - xarray.Dataset 1
  • …

user (1 value)

  • spencerahill · 60

author_association (1 value)

  • CONTRIBUTOR 60
Columns: id · html_url · issue_url · node_id · user · created_at · updated_at (sort column, descending) · author_association · body · reactions · performed_via_github_app · issue
1074099378 https://github.com/pydata/xarray/issues/4353#issuecomment-1074099378 https://api.github.com/repos/pydata/xarray/issues/4353 IC_kwDOAMm_X85ABXSy spencerahill 6200806 2022-03-21T16:14:10Z 2022-03-21T16:14:10Z CONTRIBUTOR

@btickell sorry, I basically haven't used this data since that last post, so I don't recall exactly. But I believe I was simply able to download the data and work with it locally rather than going through opendap.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  IndexError when calling load() on netCDF file accessed via opendap 681864788
683461276 https://github.com/pydata/xarray/issues/4353#issuecomment-683461276 https://api.github.com/repos/pydata/xarray/issues/4353 MDEyOklzc3VlQ29tbWVudDY4MzQ2MTI3Ng== spencerahill 6200806 2020-08-30T19:37:33Z 2020-08-30T19:37:33Z CONTRIBUTOR

Thanks @dcherian. Note that I was able to access the subset of this dataset that I need via other means, so this is no longer important for me. So feel free to close if it doesn't seem worth the trouble.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  IndexError when calling load() on netCDF file accessed via opendap 681864788
676414924 https://github.com/pydata/xarray/issues/4353#issuecomment-676414924 https://api.github.com/repos/pydata/xarray/issues/4353 MDEyOklzc3VlQ29tbWVudDY3NjQxNDkyNA== spencerahill 6200806 2020-08-19T14:06:07Z 2020-08-19T14:06:07Z CONTRIBUTOR

I should add: the file in question is one that somebody else (who is on vacation) works with via xarray, or at least has in the past. So that makes me wonder if this behavior is a regression in 0.16.0.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  IndexError when calling load() on netCDF file accessed via opendap 681864788
639789368 https://github.com/pydata/xarray/pull/2729#issuecomment-639789368 https://api.github.com/repos/pydata/xarray/issues/2729 MDEyOklzc3VlQ29tbWVudDYzOTc4OTM2OA== spencerahill 6200806 2020-06-05T20:28:24Z 2020-06-05T20:28:24Z CONTRIBUTOR

Just came across this PR while trying for the first time to create an animation of xarray data. Looks like it got quite far along but then sputtered. Did it get superseded by hvplot?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [WIP] Feature: Animated 1D plots 404945709
606606170 https://github.com/pydata/xarray/pull/3906#issuecomment-606606170 https://api.github.com/repos/pydata/xarray/issues/3906 MDEyOklzc3VlQ29tbWVudDYwNjYwNjE3MA== spencerahill 6200806 2020-03-31T12:50:02Z 2020-03-31T12:50:02Z CONTRIBUTOR

Great, thanks @max-sixty and happy to do it.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix for stack+groupby+apply w/ non-increasing coord 589471115
606291544 https://github.com/pydata/xarray/pull/3906#issuecomment-606291544 https://api.github.com/repos/pydata/xarray/issues/3906 MDEyOklzc3VlQ29tbWVudDYwNjI5MTU0NA== spencerahill 6200806 2020-03-30T22:47:38Z 2020-03-30T22:47:38Z CONTRIBUTOR

OK @max-sixty, I've got logic that works for this use case and doesn't break any existing tests. But I could use a look-over from somebody more familiar with groupby + multi-indexing, as it's entirely possible that this quick fix causes other problems.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix for stack+groupby+apply w/ non-increasing coord 589471115
604430213 https://github.com/pydata/xarray/issues/3287#issuecomment-604430213 https://api.github.com/repos/pydata/xarray/issues/3287 MDEyOklzc3VlQ29tbWVudDYwNDQzMDIxMw== spencerahill 6200806 2020-03-26T13:26:59Z 2020-03-26T13:26:59Z CONTRIBUTOR

Thanks @max-sixty. Contrary to my warning about not doing a PR, I couldn't help myself and dug in a bit. It turns out that string coordinates aren't the problem; the bug appears when the coordinate isn't in sorted order. For example, @chrisroat's original example doesn't error if the coordinate is ["G", "R"] instead of ["R", "G"]. A more concrete WIP test:

```python
import pandas as pd
import xarray as xr


def test_stack_groupby_unsorted_coord():
    data = [[0, 1], [2, 3]]
    data_flat = [0, 1, 2, 3]
    dims = ["y", "x"]
    y_vals = [2, 3]

    # "y" coord is in sorted order, and everything works
    arr = xr.DataArray(data, dims=dims, coords={"y": y_vals})
    actual1 = arr.stack(z=["y", "x"]).groupby("z").first()
    midx = pd.MultiIndex.from_product([[2, 3], [0, 1]], names=dims)
    expected1 = xr.DataArray(data_flat, dims=["z"], coords={"z": midx})
    xr.testing.assert_equal(actual1, expected1)

    # Now "y" coord is NOT in sorted order, and the bug appears
    arr = xr.DataArray(data, dims=dims, coords={"y": y_vals[::-1]})
    actual2 = arr.stack(z=["y", "x"]).groupby("z").first()
    midx = pd.MultiIndex.from_product([[3, 2], [0, 1]], names=dims)
    expected2 = xr.DataArray(data_flat, dims=["z"], coords={"z": midx})
    xr.testing.assert_equal(actual2, expected2)
```

Running `test_stack_groupby_unsorted_coord()` yields

```python
AssertionError                            Traceback (most recent call last)

[...]

AssertionError: Left and right DataArray objects are not equal

Differing values:
L
    array([2, 3, 0, 1])
R
    array([0, 1, 2, 3])
Differing coordinates:
L * z        (z) MultiIndex
  - z_leve...  (z) int64 2 2 3 3
  - z_leve...  (z) int64 0 1 0 1
R * z        (z) MultiIndex
  - y          (z) int64 3 3 2 2
  - x          (z) int64 0 1 0 1
```

I'll return to this tomorrow, in the meantime if this triggers any thoughts about the best path forward, that would be much appreciated!

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  GroupBy of stacked dim with strings renames underlying dims 490476815
603879881 https://github.com/pydata/xarray/issues/3287#issuecomment-603879881 https://api.github.com/repos/pydata/xarray/issues/3287 MDEyOklzc3VlQ29tbWVudDYwMzg3OTg4MQ== spencerahill 6200806 2020-03-25T14:44:15Z 2020-03-25T14:46:14Z CONTRIBUTOR

Notice that the string coordinate also gets reordered alphabetically: in @chrisroat's example above, the coord goes from ['R', 'G'] to ['G', 'R'].

@max-sixty I can't promise a PR anytime soon, but if/when I do manage, where would be a good starting point? Perhaps here where the _level_ names are introduced: https://github.com/pydata/xarray/blob/009aa66620b3437cf0de675013fa7d1ff231963c/xarray/core/dataset.py#L251-L256

Edit: actually maybe here: https://github.com/pydata/xarray/blob/9eec56c833da6dca02c3e6c593586fd201a534a0/xarray/core/variable.py#L2237-L2249

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  GroupBy of stacked dim with strings renames underlying dims 490476815
603490643 https://github.com/pydata/xarray/issues/3287#issuecomment-603490643 https://api.github.com/repos/pydata/xarray/issues/3287 MDEyOklzc3VlQ29tbWVudDYwMzQ5MDY0Mw== spencerahill 6200806 2020-03-24T20:34:58Z 2020-03-24T20:34:58Z CONTRIBUTOR

Here's a quick and dirty workaround that works at least for my use case. `arr_orig` is the original DataArray from which `arr_unstacked_bad` was generated via a stack/groupby/apply/unstack chain yielding the `_level_0` etc. dims, with the stack call having been `arr_orig.stack(**{dim_of_stack: dims_stacked})`. Likely excessively convoluted, and YMMV.

```python
def fix_unstacked_dims(arr_unstacked_bad, arr_orig, dim_of_stack, dims_stacked):
    """Workaround for xarray bug involving stacking str-based coords.

    C.f. https://github.com/pydata/xarray/issues/3287

    """
    dims_not_stacked = [dim for dim in arr_orig.dims if dim not in dims_stacked]
    stacked_dims_after_unstack = [dim for dim in arr_unstacked_bad.dims
                                  if dim not in dims_not_stacked]
    dims_mapping = {d1: d2 for d1, d2 in zip(stacked_dims_after_unstack, dims_stacked)}
    arr_unstacked_bad = arr_unstacked_bad.rename(dims_mapping)

    arr_out = arr_orig.copy(deep=True)
    arr_out.values = arr_unstacked_bad.transpose(*arr_orig.dims).values
    return arr_out.assign_coords(arr_orig.coords)
```
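
For concreteness, a hypothetical call to the workaround might look like the sketch below; the dimension names `"z"`, `"y"`, and `"x"` are illustrative placeholders, not names from the original issue.

```python
# Hypothetical usage of fix_unstacked_dims(); names are illustrative only.
# arr_unstacked_bad came from a stack/groupby/apply/unstack round trip on
# arr_orig, e.g. arr_orig.stack(z=["y", "x"]).groupby("z").apply(func).unstack("z"),
# which left behind the renamed `_level_0`-style dims.
arr_fixed = fix_unstacked_dims(arr_unstacked_bad, arr_orig,
                               dim_of_stack="z", dims_stacked=["y", "x"])
```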

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  GroupBy of stacked dim with strings renames underlying dims 490476815
603469718 https://github.com/pydata/xarray/issues/3287#issuecomment-603469718 https://api.github.com/repos/pydata/xarray/issues/3287 MDEyOklzc3VlQ29tbWVudDYwMzQ2OTcxOA== spencerahill 6200806 2020-03-24T19:48:57Z 2020-03-24T19:48:57Z CONTRIBUTOR

Same or different problem as https://github.com/pydata/xarray/issues/1483?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  GroupBy of stacked dim with strings renames underlying dims 490476815
603468912 https://github.com/pydata/xarray/issues/3287#issuecomment-603468912 https://api.github.com/repos/pydata/xarray/issues/3287 MDEyOklzc3VlQ29tbWVudDYwMzQ2ODkxMg== spencerahill 6200806 2020-03-24T19:47:16Z 2020-03-24T19:47:16Z CONTRIBUTOR

I just bumped into this problem as well. xarray 0.15.0. Expected behavior? Bug?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  GroupBy of stacked dim with strings renames underlying dims 490476815
537570940 https://github.com/pydata/xarray/issues/3366#issuecomment-537570940 https://api.github.com/repos/pydata/xarray/issues/3366 MDEyOklzc3VlQ29tbWVudDUzNzU3MDk0MA== spencerahill 6200806 2019-10-02T16:22:00Z 2019-10-02T16:22:00Z CONTRIBUTOR

Looks like it's back up and running now.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CI offline? 501461397
537539839 https://github.com/pydata/xarray/issues/3366#issuecomment-537539839 https://api.github.com/repos/pydata/xarray/issues/3366 MDEyOklzc3VlQ29tbWVudDUzNzUzOTgzOQ== spencerahill 6200806 2019-10-02T15:11:51Z 2019-10-02T15:12:54Z CONTRIBUTOR

Indeed, it looks like Azure is having some problems: https://status.dev.azure.com/

Pipelines (in the US only) are listed as "degraded", i.e. it's not specific to xarray --- I'm having the same problem in another repo.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CI offline? 501461397
454639949 https://github.com/pydata/xarray/issues/1497#issuecomment-454639949 https://api.github.com/repos/pydata/xarray/issues/1497 MDEyOklzc3VlQ29tbWVudDQ1NDYzOTk0OQ== spencerahill 6200806 2019-01-16T03:36:15Z 2019-01-16T03:36:15Z CONTRIBUTOR

No longer needed; closing

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Best way to perform DataArray.mean while retaining coords defined in the dimension being averaged 246612712
392825767 https://github.com/pydata/xarray/issues/1270#issuecomment-392825767 https://api.github.com/repos/pydata/xarray/issues/1270 MDEyOklzc3VlQ29tbWVudDM5MjgyNTc2Nw== spencerahill 6200806 2018-05-29T15:42:55Z 2018-05-29T15:42:55Z CONTRIBUTOR

@lvankampenhout just FYI #2191 has been opened for further discussion of adding resample to CFTimeIndex. So keep an eye on that for those developments...as well as consider taking a stab at implementing it yourself! I'm sure @spencerkclark and others will be keen to help out once you (or somebody) gets started.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  BUG: Resample on PeriodIndex not working? 207862981
388649930 https://github.com/pydata/xarray/issues/789#issuecomment-388649930 https://api.github.com/repos/pydata/xarray/issues/789 MDEyOklzc3VlQ29tbWVudDM4ODY0OTkzMA== spencerahill 6200806 2018-05-13T19:24:18Z 2018-05-13T19:24:18Z CONTRIBUTOR

Closed by #1252 ?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Time limitation (between years 1678 and 2262) restrictive to climate community 139956689
388649761 https://github.com/pydata/xarray/pull/1252#issuecomment-388649761 https://api.github.com/repos/pydata/xarray/issues/1252 MDEyOklzc3VlQ29tbWVudDM4ODY0OTc2MQ== spencerahill 6200806 2018-05-13T19:21:41Z 2018-05-13T19:21:41Z CONTRIBUTOR

Credit also due to @rabernat for organizing the workshop in late 2016 where this effort got off the ground, and to @shoyer who sketched out an initial roadmap for the implementation at that meeting.

So excited to have this in! In aospy alone, we'll be able to get rid of 100s (1000+?) of lines of code now that CFTime is in place.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex 205473898
374091871 https://github.com/pydata/xarray/pull/1252#issuecomment-374091871 https://api.github.com/repos/pydata/xarray/issues/1252 MDEyOklzc3VlQ29tbWVudDM3NDA5MTg3MQ== spencerahill 6200806 2018-03-19T03:35:05Z 2018-03-19T03:35:05Z CONTRIBUTOR

Just seeing this. I'm tied up the next couple days but would be happy to review it on Wednesday. Although I doubt I'll find anything you or @shoyer wouldn't, so feel free to merge if you'd rather not hold it up another few days.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex 205473898
346536629 https://github.com/pydata/xarray/issues/1738#issuecomment-346536629 https://api.github.com/repos/pydata/xarray/issues/1738 MDEyOklzc3VlQ29tbWVudDM0NjUzNjYyOQ== spencerahill 6200806 2017-11-23T05:54:51Z 2017-11-23T05:54:51Z CONTRIBUTOR

FWIW aospy is having similar failures starting roughly at the same time: https://github.com/spencerahill/aospy/issues/238

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Windows/Python 2.7 tests of dask-distributed failing on master/v0.10.0 276241193
336513488 https://github.com/pydata/xarray/issues/1627#issuecomment-336513488 https://api.github.com/repos/pydata/xarray/issues/1627 MDEyOklzc3VlQ29tbWVudDMzNjUxMzQ4OA== spencerahill 6200806 2017-10-13T17:15:33Z 2017-10-13T17:15:33Z CONTRIBUTOR

> OMG this is so cool!

ditto; wow, I can't wait for this to be in!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  html repr of xarray object (for the notebook) 264747372
330672462 https://github.com/pydata/xarray/pull/1580#issuecomment-330672462 https://api.github.com/repos/pydata/xarray/issues/1580 MDEyOklzc3VlQ29tbWVudDMzMDY3MjQ2Mg== spencerahill 6200806 2017-09-19T21:01:55Z 2017-09-19T21:01:55Z CONTRIBUTOR

Try running them via pytest instead (`pip install pytest` or `conda install pytest` if you don't have it already).

Note that you'll need to add new test(s) that cover the modifications you have made -- not just run the existing tests.
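
For illustration, here is one way to invoke those tests from Python via pytest's own entry point (running `pytest` from the command line is equivalent; the path assumes you are in the xarray repository root):

```python
import pytest

# Run the backends test module, which contains the open_mfdataset tests.
pytest.main(["xarray/tests/test_backends.py"])
```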

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  data_vars option added to open_mfdataset 258884482
330648836 https://github.com/pydata/xarray/pull/1580#issuecomment-330648836 https://api.github.com/repos/pydata/xarray/issues/1580 MDEyOklzc3VlQ29tbWVudDMzMDY0ODgzNg== spencerahill 6200806 2017-09-19T19:31:00Z 2017-09-19T19:32:31Z CONTRIBUTOR

> have not run tests yet

These will definitely be needed. Existing tests for `open_mfdataset` are in `xarray/tests/test_backends.py`. @shoyer and/or @jhamman can hopefully help you out if you need more guidance.

> Not sure how to install flake8

`pip install flake8` should work. Alternatively, if you use conda: `conda install -c conda-forge flake8`.

> Do not think the change is big enough to be added to the files

A brief note in What's New under the Enhancements section would actually be appropriate. Just follow the example of the other entries in that section.

Thanks for taking this on! We actually just bumped across this (here), so your fix will immediately benefit more than just you.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  data_vars option added to open_mfdataset 258884482
330568482 https://github.com/pydata/xarray/issues/438#issuecomment-330568482 https://api.github.com/repos/pydata/xarray/issues/438 MDEyOklzc3VlQ29tbWVudDMzMDU2ODQ4Mg== spencerahill 6200806 2017-09-19T15:01:37Z 2017-09-19T15:01:37Z CONTRIBUTOR

I think you've accidentally used "minimum" instead of "minimal"

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `xray.open_mfdataset` concatenates also variables without time dimension 89268800
326386387 https://github.com/pydata/xarray/issues/1540#issuecomment-326386387 https://api.github.com/repos/pydata/xarray/issues/1540 MDEyOklzc3VlQ29tbWVudDMyNjM4NjM4Nw== spencerahill 6200806 2017-08-31T18:43:23Z 2017-08-31T18:43:23Z CONTRIBUTOR

One bit that may be helpful: this is arising in aospy's tests, e.g. here. But they're only occurring in our 'py27-min' environment (and do so with 100% consistency), never in our full 'py27' environment.

environment-py27-min.yml:

```yaml
name: test_env
channels:
  - conda-forge
dependencies:
  - python=2.7
  - scipy
  - netCDF4
  - xarray
  - dask
  - distributed
  - pip:
    - coveralls
    - pytest-cov
```

environment-py27.yml:

```yaml
name: test_env
channels:
  - conda-forge
dependencies:
  - python=2.7
  - scipy
  - netCDF4
  - xarray
  - dask
  - distributed
  - pytest
  - future
  - matplotlib
  - ipython
  - pip:
    - coveralls
    - pytest-cov
    - pytest-catchlog
    - runipy
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  BUG: Dask distributed integration tests failing on Travis 254217141
320755613 https://github.com/pydata/xarray/pull/1496#issuecomment-320755613 https://api.github.com/repos/pydata/xarray/issues/1496 MDEyOklzc3VlQ29tbWVudDMyMDc1NTYxMw== spencerahill 6200806 2017-08-07T19:19:56Z 2017-08-07T19:19:56Z CONTRIBUTOR

> any final comments

@jhamman none; thanks @shoyer for including the function version of `where`.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ENH: three argument version of where 246502828
318957832 https://github.com/pydata/xarray/pull/1496#issuecomment-318957832 https://api.github.com/repos/pydata/xarray/issues/1496 MDEyOklzc3VlQ29tbWVudDMxODk1NzgzMg== spencerahill 6200806 2017-07-31T03:18:44Z 2017-07-31T03:18:44Z CONTRIBUTOR

Thanks @shoyer. I guess what I had in mind is the case where both `x` and `y` are scalars, while `cond` is still a condition on `a`. In that case you couldn't do `x.where(cond, y)`; it would require either `a.where(cond, x, y)` or `where(cond, x, y)` to be supported. Am I understanding that correctly? (If I'm not being clear, consider a concrete case by plugging in e.g. `x=-2`, `y=0`, and `cond=(a.x + a.y < 5)`.)

`a.where(cond, x, y)` might seem odd, since it doesn't actually retain any of `a`'s values, but it could still retain coordinates and attributes, so it might still be useful. And this differs from `where(cond, x, y)`, which it seems would retain `cond`'s coords and attrs.
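
A minimal sketch of the scalar-`x`/scalar-`y` case using the function form of `where` discussed in this PR; the array, values, and condition below are illustrative, not from the original thread.

```python
import numpy as np
import xarray as xr

# Illustrative DataArray with coordinates on dims "x" and "y".
a = xr.DataArray(np.arange(12.0).reshape(3, 4), dims=("x", "y"),
                 coords={"x": [0, 1, 2], "y": [0, 1, 2, 3]})

cond = (a.x + a.y) < 5  # condition built from a's coordinates

# Function form, analogous to np.where: scalar values for both branches.
result = xr.where(cond, -2, 0)

# Method form: keep a's values where cond holds, fill with 0 elsewhere.
masked = a.where(cond, 0)
```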

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ENH: three argument version of where 246502828
318931493 https://github.com/pydata/xarray/pull/1496#issuecomment-318931493 https://api.github.com/repos/pydata/xarray/issues/1496 MDEyOklzc3VlQ29tbWVudDMxODkzMTQ5Mw== spencerahill 6200806 2017-07-30T21:38:45Z 2017-07-30T21:38:45Z CONTRIBUTOR

How difficult would it be to include np.where's option to provide values for both where the condition is met and where it isn't? From their docstring:

> If both x and y are specified, the output array contains elements of x where condition is True, and elements from y elsewhere.

From your example above (I haven't gone through the code), what you have implemented in this PR is a special case, namely the xarray analog to `np.where(a.x + a.y < 5, a, -1)`.

I recently had a use case where this would be handy.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ENH: three argument version of where 246502828
309461142 https://github.com/pydata/xarray/issues/1447#issuecomment-309461142 https://api.github.com/repos/pydata/xarray/issues/1447 MDEyOklzc3VlQ29tbWVudDMwOTQ2MTE0Mg== spencerahill 6200806 2017-06-19T14:42:35Z 2017-06-19T14:42:35Z CONTRIBUTOR

One other thought for x rather than xr: to me at least, the x-prefixed package names just sound cool 😄

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 1,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Package naming "conventions" for xarray extensions 234658224
307813862 https://github.com/pydata/xarray/pull/1252#issuecomment-307813862 https://api.github.com/repos/pydata/xarray/issues/1252 MDEyOklzc3VlQ29tbWVudDMwNzgxMzg2Mg== spencerahill 6200806 2017-06-12T14:53:26Z 2017-06-12T14:53:26Z CONTRIBUTOR

Pinging folks on this. Summer is upon us and, generally speaking for the academics among us, is a good time for projects like this.

@spencerkclark looks like you're still looking for guidance w/r/t the last batch of comments from May 10.

> From the CI it also looks like I need to accommodate some updates in pandas in NetCDFTimeIndex. I will try and look into that tomorrow.

Did your subsequent commits resolve this?

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex 205473898
296779762 https://github.com/pydata/xarray/issues/1383#issuecomment-296779762 https://api.github.com/repos/pydata/xarray/issues/1383 MDEyOklzc3VlQ29tbWVudDI5Njc3OTc2Mg== spencerahill 6200806 2017-04-24T18:22:14Z 2017-04-24T18:22:14Z CONTRIBUTOR

See also `Dataset.set_coords`.

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset not finding coordinates 223891484
296253682 https://github.com/pydata/xarray/issues/1380#issuecomment-296253682 https://api.github.com/repos/pydata/xarray/issues/1380 MDEyOklzc3VlQ29tbWVudDI5NjI1MzY4Mg== spencerahill 6200806 2017-04-21T17:29:30Z 2017-04-21T17:29:30Z CONTRIBUTOR

`open_mfdataset` has a `concat_dim` optional keyword argument where you can specify the name of a new dimension that you want to concatenate your files over. You can read more about this in the API reference on `open_mfdataset`.

You could then overwrite the coordinate of that new dimension with your desired time coordinate. Does that help?
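
A rough sketch of that suggestion (the file names and time values are made up; newer xarray versions also require `combine="nested"` when `concat_dim` is given):

```python
import pandas as pd
import xarray as xr

# Hypothetical monthly files, none of which has a time dimension yet.
paths = ["jan.nc", "feb.nc", "mar.nc"]

# Concatenate the files along a brand-new "time" dimension.
ds = xr.open_mfdataset(paths, concat_dim="time", combine="nested")

# Overwrite the new dimension's coordinate with the desired time values.
ds = ds.assign_coords(time=pd.date_range("2017-01-01", periods=len(paths), freq="MS"))
```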

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset and add time dimension 223440405
293740415 https://github.com/pydata/xarray/issues/1372#issuecomment-293740415 https://api.github.com/repos/pydata/xarray/issues/1372 MDEyOklzc3VlQ29tbWVudDI5Mzc0MDQxNQ== spencerahill 6200806 2017-04-13T00:02:56Z 2017-04-13T00:02:56Z CONTRIBUTOR

+1, we have also come across this recently in aospy.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  decode_cf() loads chunked arrays 221387277
292979787 https://github.com/pydata/xarray/issues/1369#issuecomment-292979787 https://api.github.com/repos/pydata/xarray/issues/1369 MDEyOklzc3VlQ29tbWVudDI5Mjk3OTc4Nw== spencerahill 6200806 2017-04-10T15:11:25Z 2017-04-10T15:11:47Z CONTRIBUTOR

Sounds like `Dataset.reset_coords` is what you're looking for.
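
A minimal sketch of that suggestion; the dataset, variable, and coordinate names here are made up for illustration.

```python
import xarray as xr

# Toy dataset with a non-dimension coordinate "station_id".
ds = xr.Dataset(
    {"t_surf": ("time", [280.0, 281.5, 279.9])},
    coords={"time": [0, 1, 2], "station_id": 42},
)

# Demote "station_id" from a coordinate to a data variable.
ds_reset = ds.reset_coords("station_id")

# The inverse direction (Dataset.set_coords) promotes a data variable
# back to a coordinate.
ds_again = ds_reset.set_coords("station_id")
```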

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Move/read coordinate as data variable - xarray.Dataset  220568982
289855515 https://github.com/pydata/xarray/issues/1332#issuecomment-289855515 https://api.github.com/repos/pydata/xarray/issues/1332 MDEyOklzc3VlQ29tbWVudDI4OTg1NTUxNQ== spencerahill 6200806 2017-03-28T18:06:41Z 2017-03-28T18:06:41Z CONTRIBUTOR

> I'm not sure we want to wrap np.gradient. It seems like other approaches like @rabernat's xgcm would be more appropriate as a superset of xarray.

Certainly grid-aware differencing and integral operators are preferable when the grid information is known and available, but I'm not sure that makes a more naive version akin to `np.gradient` useless. It's quite likely that there are xarray users (e.g. in fields outside climate/weather/ocean) for whom a 'c' grid is meaningless, yet who would still appreciate being able to easily compute derivatives via xarray operations.

But then we're back to the valid questions raised before re: what is the appropriate scope of xarray functionality, c.f. https://github.com/pydata/xarray/issues/1288#issuecomment-283062107 and the subsequent discussion in that thread.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Shape preserving `diff` via new keywords 217385961
285807058 https://github.com/pydata/xarray/issues/1306#issuecomment-285807058 https://api.github.com/repos/pydata/xarray/issues/1306 MDEyOklzc3VlQ29tbWVudDI4NTgwNzA1OA== spencerahill 6200806 2017-03-10T22:51:53Z 2017-03-10T22:51:53Z CONTRIBUTOR

> I capitalize Xarray when used for titles or at the beginning of a sentence

I agree with this: lower case unless at the beginning of a sentence or in a title. If you capitalize the "A" too, then there's more of a discrepancy between the uncapitalized and capitalized versions.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray vs Xarray vs XArray 213426608
284924732 https://github.com/pydata/xarray/issues/1280#issuecomment-284924732 https://api.github.com/repos/pydata/xarray/issues/1280 MDEyOklzc3VlQ29tbWVudDI4NDkyNDczMg== spencerahill 6200806 2017-03-08T02:12:46Z 2017-03-08T02:12:46Z CONTRIBUTOR

FYI this appears to have just been fixed on the RTD end: https://github.com/rtfd/readthedocs.org/issues/2651#issuecomment-284922193

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Current doc builds are broken 209082646
283143896 https://github.com/pydata/xarray/issues/1288#issuecomment-283143896 https://api.github.com/repos/pydata/xarray/issues/1288 MDEyOklzc3VlQ29tbWVudDI4MzE0Mzg5Ng== spencerahill 6200806 2017-02-28T19:52:23Z 2017-02-28T19:52:23Z CONTRIBUTOR

I like the integrate idea. Nothing further to add not already covered nicely via the above concerns by @rabernat and responses by @shoyer.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add trapz to DataArray for mathematical integration 210704949
280726382 https://github.com/pydata/xarray/pull/1252#issuecomment-280726382 https://api.github.com/repos/pydata/xarray/issues/1252 MDEyOklzc3VlQ29tbWVudDI4MDcyNjM4Mg== spencerahill 6200806 2017-02-17T18:18:48Z 2017-02-17T18:18:48Z CONTRIBUTOR

@spencerkclark @shoyer I got some more concrete information, c.f. my previous comment, on negative and/or 5-digit dates. The TRACE simulation's output netCDF files use units of thousands of years relative to 1950, and therefore don't use 5-digit integers. But they do use negative and positive floats: negative for <1950, positive for >1950.

Re: 5 digits, there is growing research interest in very long climate model integrations, e.g. http://www.longrunmip.org/, but even those for now appear to be <10k yr in duration.

But there are also so-called EMICs (Earth Models of Intermediate Complexity) that are cheap to run for 1000s of years, although I couldn't immediately find any published results using them for >10k-yr durations...

So ultimately I'd say there definitely exists a use case for negative times (albeit in an odd format), and there likely exists a use case for 5-digit years. IMHO these are not must-haves for the initial netcdftime implementation but should at least be kept in mind, i.e., the code design should make them not overly difficult to introduce eventually.

I suspect there are xarray users with more direct experience with these cases...feel free to chime in.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex 205473898
277558059 https://github.com/pydata/xarray/issues/1248#issuecomment-277558059 https://api.github.com/repos/pydata/xarray/issues/1248 MDEyOklzc3VlQ29tbWVudDI3NzU1ODA1OQ== spencerahill 6200806 2017-02-05T23:09:39Z 2017-02-05T23:09:39Z CONTRIBUTOR

Sorry this example wasn't clear.

> Note that v0.9.1 contains the new `drop` argument to `.isel` for exactly these sorts of use cases: `ds.isel(dim_0, drop=True)` should work regardless of whether or not there is a `dim_0` coordinate.

Yes, this looks like the perfect solution for our use-case. My mistake for not reading the docs carefully enough. Thanks!
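
A small sketch of that `drop=True` pattern; the toy dataset and index value here are illustrative.

```python
import xarray as xr

# Toy dataset whose "dim_0" dimension has no coordinate labels.
ds = xr.Dataset({"var": (("dim_0", "x"), [[1, 2], [3, 4], [5, 6]])})

# Select the first element along dim_0; drop=True discards any scalar
# coordinate that the selection would otherwise leave behind.
sub = ds.isel(dim_0=0, drop=True)
```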

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Clarifying sel/drop behavior for dims with vs. without coords 205248365
273688344 https://github.com/pydata/xarray/issues/1216#issuecomment-273688344 https://api.github.com/repos/pydata/xarray/issues/1216 MDEyOklzc3VlQ29tbWVudDI3MzY4ODM0NA== spencerahill 6200806 2017-01-19T05:51:33Z 2017-01-19T05:51:33Z CONTRIBUTOR

Great, see #1219.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Why ufuncs module not included in top-level namespace 201441553
269886866 https://github.com/pydata/xarray/issues/1084#issuecomment-269886866 https://api.github.com/repos/pydata/xarray/issues/1084 MDEyOklzc3VlQ29tbWVudDI2OTg4Njg2Ng== spencerahill 6200806 2017-01-01T00:09:26Z 2017-01-01T00:09:26Z CONTRIBUTOR

> for now I think it makes sense to keep this in xarray proper

I agree.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Towards a (temporary?) workaround for datetime issues at the xarray-level 187591179
268662951 https://github.com/pydata/xarray/pull/1147#issuecomment-268662951 https://api.github.com/repos/pydata/xarray/issues/1147 MDEyOklzc3VlQ29tbWVudDI2ODY2Mjk1MQ== spencerahill 6200806 2016-12-21T22:41:55Z 2016-12-21T22:41:55Z CONTRIBUTOR

This is great. At least in terms of the functionality I was looking for, I'd say this closes #754.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Pytest assert functions 192752746
265197666 https://github.com/pydata/xarray/issues/1084#issuecomment-265197666 https://api.github.com/repos/pydata/xarray/issues/1084 MDEyOklzc3VlQ29tbWVudDI2NTE5NzY2Ng== spencerahill 6200806 2016-12-06T16:30:43Z 2016-12-06T16:30:43Z CONTRIBUTOR

> I think your basic example is probably already enough to be useful.

Just to be sure we're not mixing Spencers, this was all @spencerkclark's great work! I had no hand in it.

> I think if we were to include string-based indexing, it would be best if it were completely consistent with the DatetimeIndex version.

I agree.

> So ultimately this raises the question, would we want to add just the field accessors to enable group-by operations for now and add string-based selection (and other features like resample) later, or should we put our heads down and work out a solution for partial datetime string-based selection using netcdftime datetime objects?

Maybe an interim solution is for NetCDFTimeIndex to only accept full datestrings, issuing an error message for partial strings explaining that this functionality is forthcoming?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Towards a (temporary?) workaround for datetime issues at the xarray-level 187591179
264916322 https://github.com/pydata/xarray/issues/1084#issuecomment-264916322 https://api.github.com/repos/pydata/xarray/issues/1084 MDEyOklzc3VlQ29tbWVudDI2NDkxNjMyMg== spencerahill 6200806 2016-12-05T17:20:13Z 2016-12-05T17:20:13Z CONTRIBUTOR

> This looks pretty sane to me, though of course it's still missing a few nice things you can do with datetime64 (e.g., reindex and partial datetime string selection).

@shoyer and others, is there a well-defined list of required features that the new index object would need to satisfy in order to be considered for inclusion in xarray? Are the two that you mentioned must-haves? Are there others?

I want to make sure we're all on the same page in terms of what the target is.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Towards a (temporary?) workaround for datetime issues at the xarray-level 187591179
225908579 https://github.com/pydata/xarray/pull/882#issuecomment-225908579 https://api.github.com/repos/pydata/xarray/issues/882 MDEyOklzc3VlQ29tbWVudDIyNTkwODU3OQ== spencerahill 6200806 2016-06-14T14:56:30Z 2016-06-14T14:56:30Z CONTRIBUTOR

@shoyer maybe just a name change then? cartopy obviously is inherently geo-specific. Is it possible that other fields would find the utilities in geocolormesh useful? Or could the geocolormesh functionality be refactored into the existing plotting framework? (Just thinking out loud...not familiar with the plotting codebase at all)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  add geocolormesh 160009256
222178311 https://github.com/pydata/xarray/issues/789#issuecomment-222178311 https://api.github.com/repos/pydata/xarray/issues/789 MDEyOklzc3VlQ29tbWVudDIyMjE3ODMxMQ== spencerahill 6200806 2016-05-27T15:32:54Z 2016-05-27T15:32:54Z CONTRIBUTOR

Pandas has created a poll on their mailing list about this issue...I encourage everybody to speak up there: https://groups.google.com/forum/#!topic/pydata/kk04maBGw1U

(Will blast this to xarray mailing list also)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Time limitation (between years 1678 and 2262) restrictive to climate community 139956689
205062093 https://github.com/pydata/xarray/issues/754#issuecomment-205062093 https://api.github.com/repos/pydata/xarray/issues/754 MDEyOklzc3VlQ29tbWVudDIwNTA2MjA5Mw== spencerahill 6200806 2016-04-03T21:59:39Z 2016-04-03T21:59:39Z CONTRIBUTOR

Thanks for the ping. Sorry, I haven't. I'm actually submitting my PhD thesis in ~2 months and then defending it this summer, so I won't be able to help much until August or September. So by all means, go for it.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Expose testing methods 132536288
182508961 https://github.com/pydata/xarray/issues/754#issuecomment-182508961 https://api.github.com/repos/pydata/xarray/issues/754 MDEyOklzc3VlQ29tbWVudDE4MjUwODk2MQ== spencerahill 6200806 2016-02-10T18:12:13Z 2016-02-10T18:12:13Z CONTRIBUTOR

Indeed. I'm happy to do this sometime (eventually...), but if somebody ends up with a pressing need for this in the meantime, go for it. Thanks!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Expose testing methods 132536288
182195886 https://github.com/pydata/xarray/issues/525#issuecomment-182195886 https://api.github.com/repos/pydata/xarray/issues/525 MDEyOklzc3VlQ29tbWVudDE4MjE5NTg4Ng== spencerahill 6200806 2016-02-10T04:45:17Z 2016-02-10T04:45:17Z CONTRIBUTOR

Not to be pedantic, but just one more :+1: on ultimately implementing units support within xarray -- that would be huge.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  support for units 100295585
178906345 https://github.com/pydata/xarray/pull/741#issuecomment-178906345 https://api.github.com/repos/pydata/xarray/issues/741 MDEyOklzc3VlQ29tbWVudDE3ODkwNjM0NQ== spencerahill 6200806 2016-02-03T00:24:04Z 2016-02-03T00:24:04Z CONTRIBUTOR

@shoyer thanks! Good to know.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Allow empty result of numerical operations between DataArrays 130743212
178729433 https://github.com/pydata/xarray/pull/741#issuecomment-178729433 https://api.github.com/repos/pydata/xarray/issues/741 MDEyOklzc3VlQ29tbWVudDE3ODcyOTQzMw== spencerahill 6200806 2016-02-02T18:11:25Z 2016-02-02T18:12:04Z CONTRIBUTOR

@MaximilianR FWIW I also had problems with `.version` recently when I cloned from the main repo...I ended up having to copy `version.py` from another xarray install. Maybe it has something to do with `.gitignore`? `xarray/version.py` is included there.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Allow empty result of numerical operations between DataArrays 130743212
178350445 https://github.com/pydata/xarray/pull/736#issuecomment-178350445 https://api.github.com/repos/pydata/xarray/issues/736 MDEyOklzc3VlQ29tbWVudDE3ODM1MDQ0NQ== spencerahill 6200806 2016-02-02T03:44:33Z 2016-02-02T03:44:33Z CONTRIBUTOR

No problem. Sorry for the extra commit -- I tried to squash them together, but I'm new to rebase and apparently didn't do it right. Thanks for helping me incorporate this!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Accept rename to same name 130064443
177362720 https://github.com/pydata/xarray/pull/736#issuecomment-177362720 https://api.github.com/repos/pydata/xarray/issues/736 MDEyOklzc3VlQ29tbWVudDE3NzM2MjcyMA== spencerahill 6200806 2016-01-31T02:29:35Z 2016-01-31T02:29:35Z CONTRIBUTOR

Sure, will do all of these.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Accept rename to same name 130064443
175174311 https://github.com/pydata/xarray/issues/725#issuecomment-175174311 https://api.github.com/repos/pydata/xarray/issues/725 MDEyOklzc3VlQ29tbWVudDE3NTE3NDMxMQ== spencerahill 6200806 2016-01-26T18:50:11Z 2016-01-26T18:50:11Z CONTRIBUTOR

Thanks, to_dataset is a good workaround.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Replacing coord with coord of same name results in NaNs 128735308
174826027 https://github.com/pydata/xarray/issues/724#issuecomment-174826027 https://api.github.com/repos/pydata/xarray/issues/724 MDEyOklzc3VlQ29tbWVudDE3NDgyNjAyNw== spencerahill 6200806 2016-01-26T04:56:15Z 2016-01-26T04:56:15Z CONTRIBUTOR

Sure, will do in the coming days or over the weekend. Thanks.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Behavior of ds.rename when old and new name are the same 128718628
169217099 https://github.com/pydata/xarray/issues/704#issuecomment-169217099 https://api.github.com/repos/pydata/xarray/issues/704 MDEyOklzc3VlQ29tbWVudDE2OTIxNzA5OQ== spencerahill 6200806 2016-01-06T04:28:30Z 2016-01-06T04:28:30Z CONTRIBUTOR

One more vote for import xarray as xr. It mimics import numpy as np, in terms of retaining the first consonant of each syllable.

Also, it is clearly not pronounceable other than as the individual letters sequentially, i.e. as 'ex-are', which is a good thing. xy, xra, and xry all have ambiguous pronunciations.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Complete renaming xray -> xarray 124867009
159637376 https://github.com/pydata/xarray/issues/665#issuecomment-159637376 https://api.github.com/repos/pydata/xarray/issues/665 MDEyOklzc3VlQ29tbWVudDE1OTYzNzM3Ng== spencerahill 6200806 2015-11-25T15:15:27Z 2015-11-25T20:48:40Z CONTRIBUTOR

~~@spencerkclark and I have come across the same problem in one of the atmospheric models we work with, but in our case in addition to the time bounds array the latitude and longitude bounds are also 2D. I mention this because I suspect this wouldn't be resolved by @shoyer 's previous comment re: fixing decode_cf_timedelta.~~ I was wrong here; see comments in #667

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ValueError: Buffer has wrong number of dimensions (expected 1, got 2) 118525173
159713922 https://github.com/pydata/xarray/issues/665#issuecomment-159713922 https://api.github.com/repos/pydata/xarray/issues/665 MDEyOklzc3VlQ29tbWVudDE1OTcxMzkyMg== spencerahill 6200806 2015-11-25T19:49:00Z 2015-11-25T20:48:16Z CONTRIBUTOR

@shoyer Sure, just did: #667. ~~My suspicion is that it's ultimately the same underlying issue, which is simply that these bound arrays have a 2nd dimension that xray is having a hard time dealing with. Nevertheless, that it occurs in non-time coordinates as well implies that dealing with it solely through time-related methods won't solve everything.~~ I was wrong here; see comments in #667.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ValueError: Buffer has wrong number of dimensions (expected 1, got 2) 118525173
159727745 https://github.com/pydata/xarray/issues/667#issuecomment-159727745 https://api.github.com/repos/pydata/xarray/issues/667 MDEyOklzc3VlQ29tbWVudDE1OTcyNzc0NQ== spencerahill 6200806 2015-11-25T20:45:42Z 2015-11-25T20:46:14Z CONTRIBUTOR

Sorry, @spencerkclark is right, the ValueError issue we had was also due to the 2D time bounds array only. For example: (the netCDF file used below is also at ftp://ftp.gfdl.noaa.gov/pub/s1h/atmos.201001-201012.t_surf.nc)

```python
In [1]: ds = xray.open_dataset('/archive/Spencer.Hill/am3/am3clim_hurrell/gfdl.ncrc2-intel-prod-openmp/pp/atmos/ts/monthly/1yr/atmos.201001-201012.t_surf.nc')

In [2]: print(ds)
ValueError                                Traceback (most recent call last)
<ipython-input-2-4d24098ddece> in <module>()
----> 1 print(ds)

/home/s1h/anaconda/lib/python2.7/site-packages/xray/core/dataset.pyc in __repr__(self)
    885
    886     def __repr__(self):
--> 887         return formatting.dataset_repr(self)
    888
    889     @property

...

/home/s1h/anaconda/lib/python2.7/site-packages/pandas/tseries/timedeltas.pyc in _convert_listlike(arg, box, unit, name)
     47         value = arg.astype('timedelta64[{0}]'.format(unit)).astype('timedelta64[ns]', copy=False)
     48     else:
---> 49         value = tslib.array_to_timedelta64(_ensure_object(arg), unit=unit, errors=errors)
     50     value = value.astype('timedelta64[ns]', copy=False)
     51

pandas/tslib.pyx in pandas.tslib.array_to_timedelta64 (pandas/tslib.c:47046)()

ValueError: Buffer has wrong number of dimensions (expected 1, got 2)

In [3]: ds2 = ds.drop('time_bounds')

In [4]: print(ds2)
<xray.Dataset>
Dimensions:     (bnds: 2, lat: 90, lon: 144, time: 12)
Coordinates:
  * lat         (lat) float64 -89.0 -87.0 -85.0 -83.0 -81.0 -79.0 -77.0 ...
  * lon         (lon) float64 1.25 3.75 6.25 8.75 11.25 13.75 16.25 18.75 ...
  * time        (time) datetime64[ns] 2010-01-16T12:00:00 2010-02-15 ...
  * bnds        (bnds) int64 0 1
Data variables:
    average_DT  (time) timedelta64[ns] 31 days 28 days 31 days 30 days ...
    average_T1  (time) datetime64[ns] 2010-01-01 2010-02-01 2010-03-01 ...
    average_T2  (time) datetime64[ns] 2010-02-01 2010-03-01 2010-04-01 ...
    lat_bnds    (lat, bnds) float64 -90.0 -88.0 -88.0 -86.0 -86.0 -84.0 ...
    lon_bnds    (lon, bnds) float64 0.0 2.5 2.5 5.0 5.0 7.5 7.5 10.0 10.0 ...
    t_surf      (time, lat, lon) float64 245.9 245.9 245.8 245.7 245.7 245.6 ...
    ...
```

The errors I was thinking of relating to these lat- and lon-bounds were ultimately due to errors in my own code...my mistakes appear to be the unifying theme here! Sorry for the confusion.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Problems when array of coordinate bounds is 2D 118910006
149664980 https://github.com/pydata/xarray/issues/634#issuecomment-149664980 https://api.github.com/repos/pydata/xarray/issues/634 MDEyOklzc3VlQ29tbWVudDE0OTY2NDk4MA== spencerahill 6200806 2015-10-20T18:49:08Z 2015-10-20T18:49:08Z CONTRIBUTOR

Cool, thanks. Not familiar enough w/ the xray source code at this time to contribute directly to the refactor, but for the time being here's a simple 2-liner workaround:

```python
dlon = arr['lon'].diff('lon')
dlon.values = np.diff(arr['lon'])
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Unexpected behavior by diff when applied to coordinate DataArray 112430028

Table schema:

```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
```
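
For reference, a short Python sketch of the query behind this page ("60 rows where user = 6200806 sorted by updated_at descending"), assuming a local copy of the SQLite database; the filename `github.db` is a placeholder.

```python
import sqlite3

# Placeholder path to a local copy of the xarray-datasette SQLite file.
conn = sqlite3.connect("github.db")

rows = conn.execute(
    """
    SELECT id, issue, created_at, updated_at, body
    FROM issue_comments
    WHERE user = 6200806
    ORDER BY updated_at DESC
    """
).fetchall()

print(len(rows))  # expected: 60
```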