

issue_comments


115 rows where user = 950575 sorted by updated_at descending




issue >30

  • Add a filter_by_attrs method to Dataset 14
  • Fix datetime decoding when time units are 'days since 0000-01-01 00:00:00' 10
  • Getting netCDF4 to work on RTD 5
  • open_mfdataset parallel=True failing with netcdf4 >= 1.6.1 5
  • Read only support for PyNIO backend 4
  • Fix #665 decode_cf_timedelta 2D 4
  • Don't convert data with time units to timedeltas by default 4
  • Don't convert time data to timedelta by default 4
  • New encoding keyword argument for to_netcdf 3
  • 0.7 missing Python 3.3 conda package 3
  • New infer_intervals keyword for pcolormesh 3
  • Behavior of filter_by_attrs() does not match netCDF4.Dataset.get_variables_by_attributes 3
  • Problems with distributed and opendap netCDF endpoint 3
  • local build of docs fails with OSError: [UT_PARSE] Failed to open UDUNITS-2 XML unit database 3
  • API for multi-dimensional resampling/regridding 2
  • Best way to find data variables by standard_name 2
  • error when using broadcast_arrays with coordinates 2
  • dont infer interval breaks in pcolormesh when ax is a cartopy axis 2
  • Switch py2.7 CI build to use conda-forge 2
  • xarray-cartopy broken? 2
  • Expose options for axis sharing between subplots 2
  • pynio backend broken in python 3 2
  • units = 'days' leads to timedelta64 for data variable 2
  • Deprecate decode timedelta 2
  • Opendap access failure error 2
  • bad conda solve with pandas 2 2
  • Create a way to calculate computed coordinates 1
  • time decoding error with "days since" 1
  • Complete renaming xray -> xarray 1
  • xarray package not found by conda 1
  • …

user 1

  • ocefpaf · 115 ✖

author_association 1

  • CONTRIBUTOR 115
id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
1498200620 https://github.com/pydata/xarray/issues/7573#issuecomment-1498200620 https://api.github.com/repos/pydata/xarray/issues/7573 IC_kwDOAMm_X85ZTLos ocefpaf 950575 2023-04-05T21:47:19Z 2023-04-05T21:47:19Z CONTRIBUTOR

With the current PR we would end up with two different build numbers with differing behaviour, which might confuse folks.

+1

But I'd rely on @ocefpaf's expertise.

The PR is a good idea and we, conda-forge, even thought about doing something like that for all packages. The problem is that optional-package metadata in Python-land is super unreliable. So doing it on a per-package basis, with the original authors involved, is super safe and recommended.
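For reference, a hypothetical sketch of what such per-package constraints could look like in a conda recipe's meta.yaml (the package names and version bounds below are illustrative, not the actual xarray recipe):

```yaml
# Illustrative only: run_constrained entries are not installed by default,
# but if the listed packages are present they must satisfy these pins.
requirements:
  run:
    - python >=3.9
  run_constrained:
    - dask-core >=2022.7
    - netcdf4 >=1.6.0
```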

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add optional min versions to conda-forge recipe (`run_constrained`) 1603957501
1496186966 https://github.com/pydata/xarray/issues/7716#issuecomment-1496186966 https://api.github.com/repos/pydata/xarray/issues/7716 IC_kwDOAMm_X85ZLgBW ocefpaf 950575 2023-04-04T15:30:37Z 2023-04-04T15:30:37Z CONTRIBUTOR

@dcherian do you mind taking a look at https://github.com/conda-forge/conda-forge-repodata-patches-feedstock/pull/426? Please check the versions patched and the applied patch! Thanks!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bad conda solve with pandas 2 1654022522
1496107675 https://github.com/pydata/xarray/issues/7716#issuecomment-1496107675 https://api.github.com/repos/pydata/xarray/issues/7716 IC_kwDOAMm_X85ZLMqb ocefpaf 950575 2023-04-04T14:48:55Z 2023-04-04T14:48:55Z CONTRIBUTOR

We need to do a repodata patch for the current xarray. I'll get to it soon.

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  bad conda solve with pandas 2 1654022522
1494560788 https://github.com/pydata/xarray/issues/7079#issuecomment-1494560788 https://api.github.com/repos/pydata/xarray/issues/7079 IC_kwDOAMm_X85ZFTAU ocefpaf 950575 2023-04-03T15:44:18Z 2023-04-03T15:44:18Z CONTRIBUTOR

@kthyng those files are on a remote server, so that may not be the segfault from the original issue here. It may be a server that is not happy with parallel access. Can you try that with local files?

PS: you can also try with netcdf4<1.6.1 and, if that also fails, it is most likely the server rather than the issue here.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset parallel=True failing with netcdf4 >= 1.6.1 1385031286
1371028352 https://github.com/pydata/xarray/pull/7415#issuecomment-1371028352 https://api.github.com/repos/pydata/xarray/issues/7415 IC_kwDOAMm_X85RuDuA ocefpaf 950575 2023-01-04T14:53:15Z 2023-01-04T14:53:15Z CONTRIBUTOR

that looks like it works, besides the fact that conda reports numbagg=0.2.1 but

It is not a conda problem but a bug upstream. The v0.2.1 tag has v0.2.0 hardcoded in it.

See https://github.com/numbagg/numbagg/blob/v0.2.1/setup.py#L24

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  install `numbagg` from `conda-forge` 1519058102
1276668410 https://github.com/pydata/xarray/issues/7079#issuecomment-1276668410 https://api.github.com/repos/pydata/xarray/issues/7079 IC_kwDOAMm_X85MGGn6 ocefpaf 950575 2022-10-12T19:57:35Z 2022-10-12T20:17:08Z CONTRIBUTOR

Note that this is not a bug per se: netcdf-c was never thread safe and, when the workarounds were removed in netcdf4-python, this issue surfaced. The right fix is to disable threads, as in my example above, or to wait for a netcdf-c release that is thread safe. I don't think the workarounds will be re-added in netcdf4-python.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset parallel=True failing with netcdf4 >= 1.6.1 1385031286
1276685512 https://github.com/pydata/xarray/issues/7079#issuecomment-1276685512 https://api.github.com/repos/pydata/xarray/issues/7079 IC_kwDOAMm_X85MGKzI ocefpaf 950575 2022-10-12T20:16:41Z 2022-10-12T20:16:41Z CONTRIBUTOR

This fix will restrict you to serial compute.

I was waiting for someone who does stuff on clusters to comment on that. Thanks! (My workflow is my own laptop only, so I'm quite limited on that front :smile:)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset parallel=True failing with netcdf4 >= 1.6.1 1385031286
1267477522 https://github.com/pydata/xarray/issues/7079#issuecomment-1267477522 https://api.github.com/repos/pydata/xarray/issues/7079 IC_kwDOAMm_X85LjCwS ocefpaf 950575 2022-10-04T19:24:01Z 2022-10-04T19:34:42Z CONTRIBUTOR

Also, you can try:

```python
import dask

dask.config.set(scheduler="single-threaded")
```

That would ensure you don't use threads when reading with netcdf-c (netcdf4).
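As a slightly fuller sketch (assuming dask is installed; the context-manager form is an alternative that limits the setting to one block instead of changing it globally):

```python
import dask

# Force the single-threaded scheduler globally, as suggested above.
dask.config.set(scheduler="single-threaded")

# Alternatively, scope the setting to a single block:
with dask.config.set(scheduler="single-threaded"):
    scheduler = dask.config.get("scheduler")
```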


Edit: this is not an xarray problem and I recommend closing this issue and following up with the one already opened upstream.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset parallel=True failing with netcdf4 >= 1.6.1 1385031286
1267159210 https://github.com/pydata/xarray/issues/7079#issuecomment-1267159210 https://api.github.com/repos/pydata/xarray/issues/7079 IC_kwDOAMm_X85Lh1Cq ocefpaf 950575 2022-10-04T15:11:17Z 2022-10-04T15:11:17Z CONTRIBUTOR

I believe you are hitting https://github.com/Unidata/netcdf4-python/issues/1192

The jury is still out on that one. Your parallelization may not be thread safe, which makes the 1.6.1 failures expected. For now, if you can, downgrade to 1.6.0 or use an engine that is thread safe. Maybe h5netcdf (not sure!)?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset parallel=True failing with netcdf4 >= 1.6.1 1385031286
664458881 https://github.com/pydata/xarray/issues/4257#issuecomment-664458881 https://api.github.com/repos/pydata/xarray/issues/4257 MDEyOklzc3VlQ29tbWVudDY2NDQ1ODg4MQ== ocefpaf 950575 2020-07-27T15:16:43Z 2020-07-27T15:16:43Z CONTRIBUTOR

Everything looks OK and I cannot reproduce that. Sorry, but I'm at a loss here.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  local build of docs fails with OSError: [UT_PARSE] Failed to open UDUNITS-2 XML unit database 664458864
664450538 https://github.com/pydata/xarray/issues/4257#issuecomment-664450538 https://api.github.com/repos/pydata/xarray/issues/4257 MDEyOklzc3VlQ29tbWVudDY2NDQ1MDUzOA== ocefpaf 950575 2020-07-27T15:02:35Z 2020-07-27T15:02:35Z CONTRIBUTOR

The difference between your environment and mine does not explain it:

```diff
diff -Naur conda-list-mine.txt conda-list.txt
--- conda-list-mine.txt 2020-07-27 11:43:57.271609852 -0300
+++ conda-list.txt 2020-07-27 11:43:38.451218465 -0300
@@ -18,7 +18,7 @@
 cartopy 0.18.0 py38h172510d_0 conda-forge
 certifi 2020.6.20 py38h32f6830_0 conda-forge
 cf-units 2.1.4 py38h8790de6_0 conda-forge
-cffi 1.14.0 py38hd463f26_0 conda-forge
+cffi 1.14.1 py38h5bae8af_0 conda-forge
 cfgrib 0.9.8.3 py_0 conda-forge
 cfitsio 3.470 hce51eda_6 conda-forge
 cftime 1.2.1 py38h8790de6_0 conda-forge
@@ -77,7 +77,7 @@
 kiwisolver 1.2.0 py38hbf85e49_0 conda-forge
 krb5 1.17.1 hfafb76e_1 conda-forge
 lcms2 2.11 hbd6801e_0 conda-forge
-ld_impl_linux-64 2.34 h53a641e_7 conda-forge
+ld_impl_linux-64 2.34 hc38a660_9 conda-forge
 libaec 1.0.4 he1b5a44_1 conda-forge
 libblas 3.8.0 17_openblas conda-forge
 libcblas 3.8.0 17_openblas conda-forge
@@ -87,14 +87,14 @@
 libffi 3.2.1 he1b5a44_1007 conda-forge
 libgcc-ng 9.2.0 h24d8f2e_2 conda-forge
 libgdal 3.0.4 he6a97d6_10 conda-forge
-libgfortran-ng 7.5.0 hdf63c60_6 conda-forge
+libgfortran-ng 7.5.0 hdf63c60_10 conda-forge
 libgomp 9.2.0 h24d8f2e_2 conda-forge
 libiconv 1.15 h516909a_1006 conda-forge
 libkml 1.3.0 hb574062_1011 conda-forge
 liblapack 3.8.0 17_openblas conda-forge
 libllvm9 9.0.1 he513fc3_1 conda-forge
 libnetcdf 4.7.4 nompi_h84807e1_105 conda-forge
-libopenblas 0.3.10 pthreads_hb3c22a3_3 conda-forge
+libopenblas 0.3.10 pthreads_hb3c22a3_4 conda-forge
 libpng 1.6.37 hed695b0_1 conda-forge
 libpq 12.3 h5513abc_0 conda-forge
 libsodium 1.0.17 h516909a_0 conda-forge
@@ -220,4 +220,4 @@
 zict 2.0.0 py_0 conda-forge
 zipp 3.1.0 py_0 conda-forge
 zlib 1.2.11 h516909a_1006 conda-forge
-zstd 1.4.5 h6597ccf_1 conda-forge
+zstd 1.4.5 h6597ccf_2 conda-forge
```

What is your shell? What is the result of `echo $UDUNITS2_XML_PATH`?
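A quick way to check the same thing from Python, for what it is worth (this only inspects the environment variable named in the error above):

```python
import os

# UDUNITS2_XML_PATH should point at the udunits2.xml database inside the
# conda env; an unset value here would explain the UT_PARSE failure.
udunits_path = os.environ.get("UDUNITS2_XML_PATH")
```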

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  local build of docs fails with OSError: [UT_PARSE] Failed to open UDUNITS-2 XML unit database 664458864
663695022 https://github.com/pydata/xarray/issues/4257#issuecomment-663695022 https://api.github.com/repos/pydata/xarray/issues/4257 MDEyOklzc3VlQ29tbWVudDY2MzY5NTAyMg== ocefpaf 950575 2020-07-24T19:23:15Z 2020-07-24T19:23:15Z CONTRIBUTOR

I could not reproduce it locally on my Linux box. Also, I believe the CIs here do the same, right? Maybe you could send me your .condarc and conda list output so we can check if there is something off.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  local build of docs fails with OSError: [UT_PARSE] Failed to open UDUNITS-2 XML unit database 664458864
658735011 https://github.com/pydata/xarray/pull/4227#issuecomment-658735011 https://api.github.com/repos/pydata/xarray/issues/4227 MDEyOklzc3VlQ29tbWVudDY1ODczNTAxMQ== ocefpaf 950575 2020-07-15T12:21:07Z 2020-07-15T12:21:07Z CONTRIBUTOR

This is not needed. The pytest rc on the main channel was a mistake and it is already resolved. (Sorry about that BTW.)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  pin pytest 657299231
627326097 https://github.com/pydata/xarray/issues/4043#issuecomment-627326097 https://api.github.com/repos/pydata/xarray/issues/4043 MDEyOklzc3VlQ29tbWVudDYyNzMyNjA5Nw== ocefpaf 950575 2020-05-12T12:58:16Z 2020-05-12T12:58:16Z CONTRIBUTOR

I installed xarray through the recommended command in the official website in my minicoda env some months-year ago

That is probably it, then. I see you have libnetcdf 4.6.2; if you recreate that env you should get libnetcdf 4.7.4. Can you try it with a new, clean env:

```shell
conda create --name TEST --channel conda-forge xarray dask netCDF4 bottleneck
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Opendap access failure error 614144170
625426383 https://github.com/pydata/xarray/issues/4043#issuecomment-625426383 https://api.github.com/repos/pydata/xarray/issues/4043 MDEyOklzc3VlQ29tbWVudDYyNTQyNjM4Mw== ocefpaf 950575 2020-05-07T18:35:20Z 2020-05-07T18:35:20Z CONTRIBUTOR

How are you installing netcdf4? There was a problem with the underlying libnetcdf some time ago that caused access failures like that. You can try upgrading it or using another backend, like pydap.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Opendap access failure error 614144170
557604201 https://github.com/pydata/xarray/issues/3563#issuecomment-557604201 https://api.github.com/repos/pydata/xarray/issues/3563 MDEyOklzc3VlQ29tbWVudDU1NzYwNDIwMQ== ocefpaf 950575 2019-11-22T16:40:39Z 2019-11-22T16:40:39Z CONTRIBUTOR

@dcherian it is very experimental, but nbrr can probably help you set up the environment.yaml file with a single command.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  environment file for binderized examples 527296094
524323480 https://github.com/pydata/xarray/pull/3247#issuecomment-524323480 https://api.github.com/repos/pydata/xarray/issues/3247 MDEyOklzc3VlQ29tbWVudDUyNDMyMzQ4MA== ocefpaf 950575 2019-08-23T13:51:48Z 2019-08-23T13:51:48Z CONTRIBUTOR

Thanks @cspencerjones. Can you add a test that exercises this?

I guess we can just extend the existing test and check that we can find a coordinate by its attributes. For example, add a standard name for time here and see if we can find time using it.
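A minimal sketch of such a test (assuming a recent xarray where filter_by_attrs searches all variables; the dataset and names below are made up for illustration):

```python
import numpy as np
import xarray as xr

# Tiny dataset with a time coordinate tagged by a CF standard_name.
ds = xr.Dataset(
    {"temp": ("time", np.arange(3.0))},
    coords={"time": ("time", np.arange(3))},
)
ds["time"].attrs["standard_name"] = "time"

# The coordinate should now be discoverable by its attribute.
found = ds.filter_by_attrs(standard_name="time")
```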

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Update filter_by_attrs to use 'variables' instead of 'data_vars' 484243962
513860913 https://github.com/pydata/xarray/issues/3154#issuecomment-513860913 https://api.github.com/repos/pydata/xarray/issues/3154 MDEyOklzc3VlQ29tbWVudDUxMzg2MDkxMw== ocefpaf 950575 2019-07-22T16:41:31Z 2019-07-22T16:41:31Z CONTRIBUTOR

The problem here is that we had to remove some old builds of gdal (a pynio dependency) due to an incompatibility with a newer icu. However, the new builds use hdf5 1.10.5 and h5py is not ready for it yet. (h5py still uses hdf5 1.10.4.)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  pynio causes dependency conflicts in py36 CI build 470714103
487329572 https://github.com/pydata/xarray/pull/2925#issuecomment-487329572 https://api.github.com/repos/pydata/xarray/issues/2925 MDEyOklzc3VlQ29tbWVudDQ4NzMyOTU3Mg== ocefpaf 950575 2019-04-28T00:20:33Z 2019-04-28T00:20:33Z CONTRIBUTOR

@shoyer conda-forge dropped support for Python 3.5 a while back and the old packages may present a problem.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Attempt to fix py35 build on Travis 437996498
435968410 https://github.com/pydata/xarray/issues/2539#issuecomment-435968410 https://api.github.com/repos/pydata/xarray/issues/2539 MDEyOklzc3VlQ29tbWVudDQzNTk2ODQxMA== ocefpaf 950575 2018-11-05T17:48:13Z 2018-12-07T20:09:18Z CONTRIBUTOR

I agree erddapy seems like a good fit for an xarray backend.

Note that erddapy is not really a "client"; it is a URL builder that leverages ERDDAP's RESTful web services and pandas or xarray to download the data.

@rmendels and @jhamman please correct me if I'm wrong below:

The main advantage of ERDDAP, as @rmendels mentioned above, is that one can make requests in coordinate space. However, I don't think there is an OPeNDAP response for a sliced request in ERDDAP, just for the "full data." Also, xarray can already ingest an OPeNDAP URL and lazily slice it using high-level coordinate-space syntax. (At the cost of loading the coordinates, but that is fast enough 99% of the time.)

The alternative would be to use the netCDF response, but that means we would need to download the file and load it with xarray, which does not really justify a new backend IMO. (I do that in a very inelegant way in erddapy.)

The download limitation exists for any of the "file format" responses in ERDDAP, making OPeNDAP the best choice. So, unless we can figure out a way for ERDDAP to serve the "sliced" URL as OPeNDAP, I believe the best we can do already exists in xarray :grimacing:
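For context, the "URL builder" approach boils down to composing a griddap request string; here is a toy sketch (the server, dataset id, variable, and constraints are all made up, and this is not erddapy's actual API):

```python
# Toy illustration only: compose an ERDDAP griddap request URL by hand.
# erddapy wraps this kind of string building behind a friendlier interface.
server = "https://example.org/erddap"
dataset_id = "my_gridded_dataset"
variable = "sea_water_temperature"
# griddap slicing uses [(start):stride:(stop)] per dimension.
constraints = "[(2018-01-01):1:(2018-01-02)][(0.0):1:(0.0)]"
url = f"{server}/griddap/{dataset_id}.nc?{variable}{constraints}"
```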

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Request: Add support for the ERDDAP griddap request 377096851
432796835 https://github.com/pydata/xarray/issues/2503#issuecomment-432796835 https://api.github.com/repos/pydata/xarray/issues/2503 MDEyOklzc3VlQ29tbWVudDQzMjc5NjgzNQ== ocefpaf 950575 2018-10-24T19:29:11Z 2018-10-24T19:29:11Z CONTRIBUTOR

h10edf3e_1 contains the timeout fix and is built against hdf5 1.10.2. The conda-forge h9cd6fdc_11 build is against hdf5 1.10.3; perhaps that makes a difference?

There are many variables at play here. The env that solved it in https://github.com/pydata/xarray/issues/2503#issuecomment-432645477 seems quite different from the env where the problem happened, including an xarray dev version. I'm not sure hdf5 is a good candidate to blame :smile:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Problems with distributed and opendap netCDF endpoint 373121666
432784219 https://github.com/pydata/xarray/issues/2503#issuecomment-432784219 https://api.github.com/repos/pydata/xarray/issues/2503 MDEyOklzc3VlQ29tbWVudDQzMjc4NDIxOQ== ocefpaf 950575 2018-10-24T18:53:22Z 2018-10-24T18:53:22Z CONTRIBUTOR

In defaults libnetcdf4 4.6.1 build 1 and above contain the timeout fix, build 0 has the original timeout.

Thanks @jjhelmus! I guess that info and https://github.com/pydata/xarray/issues/2503#issuecomment-432483817 eliminates the timeout issue from the equation.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Problems with distributed and opendap netCDF endpoint 373121666
432746421 https://github.com/pydata/xarray/issues/2503#issuecomment-432746421 https://api.github.com/repos/pydata/xarray/issues/2503 MDEyOklzc3VlQ29tbWVudDQzMjc0NjQyMQ== ocefpaf 950575 2018-10-24T17:10:44Z 2018-10-24T17:10:44Z CONTRIBUTOR

That version has the fix for the issue.

I know that @jjhelmus ported the fix to defaults but I'm not sure which build number has it, and/or if the previous one was removed, b/c defaults builds are not as transparent as conda-forge's :smile:

He can probably say more about that.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Problems with distributed and opendap netCDF endpoint 373121666
415139088 https://github.com/pydata/xarray/pull/2322#issuecomment-415139088 https://api.github.com/repos/pydata/xarray/issues/2322 MDEyOklzc3VlQ29tbWVudDQxNTEzOTA4OA== ocefpaf 950575 2018-08-22T18:48:39Z 2018-08-22T18:48:39Z CONTRIBUTOR

@shoyer and @jhamman this looks good to go IMO. @DocOtak thanks for fixing my bug!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  BUG: modify behavior of Dataset.filter_by_attrs to match netCDF4.Data… 345322908
413277673 https://github.com/pydata/xarray/issues/2368#issuecomment-413277673 https://api.github.com/repos/pydata/xarray/issues/2368 MDEyOklzc3VlQ29tbWVudDQxMzI3NzY3Mw== ocefpaf 950575 2018-08-15T17:45:40Z 2018-08-15T17:45:40Z CONTRIBUTOR

I believe the last one in the notebook below is already fixed, and the first two are mentioned above, but here is a data point:

http://nbviewer.jupyter.org/gist/ocefpaf/1bf3b86359c459c89d44a81d3129f967

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Let's list all the netCDF files that xarray can't open 350899839
408268331 https://github.com/pydata/xarray/issues/2315#issuecomment-408268331 https://api.github.com/repos/pydata/xarray/issues/2315 MDEyOklzc3VlQ29tbWVudDQwODI2ODMzMQ== ocefpaf 950575 2018-07-26T23:44:49Z 2018-07-26T23:44:49Z CONTRIBUTOR

I can work on a PR tomorrow. Does the benefit of having the same behavior as the netCDF4 library warrant a potentially breaking change for existing code which relies on the current behavior of filter_by_attrs()?

IMO, yes.

This might need adding a new method with the same behavior as netCDF4 and keeping the existing one as is (with appropriate documentation updates).

That is up to xarray devs but I personally don't think it is necessary.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Behavior of filter_by_attrs() does not match netCDF4.Dataset.get_variables_by_attributes 344631360
408262225 https://github.com/pydata/xarray/issues/2315#issuecomment-408262225 https://api.github.com/repos/pydata/xarray/issues/2315 MDEyOklzc3VlQ29tbWVudDQwODI2MjIyNQ== ocefpaf 950575 2018-07-26T23:09:06Z 2018-07-26T23:09:06Z CONTRIBUTOR

Got it. I cannot dig into this at the moment but having both implementations working in a consistent way would be nice. Do you want to send a PR?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Behavior of filter_by_attrs() does not match netCDF4.Dataset.get_variables_by_attributes 344631360
408255439 https://github.com/pydata/xarray/issues/2315#issuecomment-408255439 https://api.github.com/repos/pydata/xarray/issues/2315 MDEyOklzc3VlQ29tbWVudDQwODI1NTQzOQ== ocefpaf 950575 2018-07-26T22:32:29Z 2018-07-26T22:32:29Z CONTRIBUTOR

it appears that this scenario was not directly contemplated.

Correct. I did not foresee that use case. Are you sure that netCDF4.Dataset.get_variables_by_attributes behaves as a logical AND? The code is virtually a copy-n-paste from there.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Behavior of filter_by_attrs() does not match netCDF4.Dataset.get_variables_by_attributes 344631360
407547882 https://github.com/pydata/xarray/issues/2209#issuecomment-407547882 https://api.github.com/repos/pydata/xarray/issues/2209 MDEyOklzc3VlQ29tbWVudDQwNzU0Nzg4Mg== ocefpaf 950575 2018-07-24T20:51:50Z 2018-07-24T20:51:50Z CONTRIBUTOR

Notice that it took 411 seconds to run conda env create!

If you are using conda-forge, bear in mind that our package index is huge and conda is not very smart about it. We are looking into possible solutions. Pinging @pelson, who has some ideas on how to address this problem.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Build timeouts on ReadTheDocs 328572578
397365916 https://github.com/pydata/xarray/issues/2233#issuecomment-397365916 https://api.github.com/repos/pydata/xarray/issues/2233 MDEyOklzc3VlQ29tbWVudDM5NzM2NTkxNg== ocefpaf 950575 2018-06-14T16:58:03Z 2018-06-14T17:01:17Z CONTRIBUTOR

It is not ideal, but you can work around that by dropping the siglay variable.

```python
ds = xr.open_dataset(url, drop_variables='siglay')
```

{
    "total_count": 2,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  Problem opening unstructured grid ocean forecasts with 4D vertical coordinates 332471780
387053966 https://github.com/pydata/xarray/pull/2105#issuecomment-387053966 https://api.github.com/repos/pydata/xarray/issues/2105 MDEyOklzc3VlQ29tbWVudDM4NzA1Mzk2Ng== ocefpaf 950575 2018-05-07T12:49:14Z 2018-05-07T12:49:14Z CONTRIBUTOR

I would like to be able to round-trip something like the following dataset to netCDF.

OK, I thought you meant roundtrip from the netCDF file and back. In my line of work, handling high-level Python object serialization like that is usually not desired, as the user should be responsible for how s/he wants to save the data. (I also never had an application that required timedelta, so it was hard for me to contextualize that.)

I'll be away for the next few weeks, so if someone wants to pick this up from here please feel free to do so. Otherwise I'll try to get back to this once I return to the office.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Deprecate decode timedelta 320283034
386826761 https://github.com/pydata/xarray/pull/2105#issuecomment-386826761 https://api.github.com/repos/pydata/xarray/issues/2105 MDEyOklzc3VlQ29tbWVudDM4NjgyNjc2MQ== ocefpaf 950575 2018-05-05T18:47:11Z 2018-05-05T18:47:11Z CONTRIBUTOR

@shoyer this is ready for a second round of reviews.

The last thing I'd like to see is support for automatically serializing/deserializing timedelta64 by saving an attribute dtype='timedelta64[ns]'. This will preserve the ability to roundtrip this data to netCDF, which I do think is useful for some users -- otherwise users will have xarray objects that they can no longer save to netCDF.

I'm not sure I follow. Roundtripping is easier now since the original time-unit dtype is preserved, no? Ping @rsignell-usgs, who is a netCDF/CF specialist :wink:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Deprecate decode timedelta 320283034
384710080 https://github.com/pydata/xarray/issues/2085#issuecomment-384710080 https://api.github.com/repos/pydata/xarray/issues/2085 MDEyOklzc3VlQ29tbWVudDM4NDcxMDA4MA== ocefpaf 950575 2018-04-26T16:44:25Z 2018-04-26T16:44:25Z CONTRIBUTOR

Thanks! I'll look into those and should have something by next week.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  units = 'days' leads to timedelta64 for data variable 317954266
384704926 https://github.com/pydata/xarray/issues/2085#issuecomment-384704926 https://api.github.com/repos/pydata/xarray/issues/2085 MDEyOklzc3VlQ29tbWVudDM4NDcwNDkyNg== ocefpaf 950575 2018-04-26T16:28:04Z 2018-04-26T16:28:04Z CONTRIBUTOR

@shoyer what is the path forward? In https://github.com/pydata/xarray/pull/940 I implemented a keyword so we could keep both behaviors, which I believe is a bad idea.

Would a PR changing the current behavior to return floats instead of timedelta64 be OK? If so, I can try to put that together over this weekend.
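To make the two options concrete, a small numpy sketch (the values are made up) of decoding data carrying units='days' to timedelta64 versus leaving it as floats:

```python
import numpy as np

# Raw values as stored in the file, with attrs {"units": "days"}.
raw = np.array([1.0, 2.5, 4.0])

# Current behavior: decode to timedelta64 (here via nanoseconds).
as_timedelta = (raw * 86400 * 1_000_000_000).astype("timedelta64[ns]")

# Proposed behavior: keep plain floats and preserve the units attribute.
as_float = raw
```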

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  units = 'days' leads to timedelta64 for data variable 317954266
345317150 https://github.com/pydata/xarray/issues/1721#issuecomment-345317150 https://api.github.com/repos/pydata/xarray/issues/1721 MDEyOklzc3VlQ29tbWVudDM0NTMxNzE1MA== ocefpaf 950575 2017-11-17T17:59:16Z 2017-11-17T17:59:16Z CONTRIBUTOR

@spencerkclark conda-forge has netcdf4 built against both 4.5.0 and 4.4.1.1, so conda can get one or the other depending on the scenario. The only way to ensure you are building an env with libnetcdf 4.5.0 is to add it explicitly to that list.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Potential test failures with libnetcdf 4.5.0 274392275
339398526 https://github.com/pydata/xarray/issues/1655#issuecomment-339398526 https://api.github.com/repos/pydata/xarray/issues/1655 MDEyOklzc3VlQ29tbWVudDMzOTM5ODUyNg== ocefpaf 950575 2017-10-25T16:59:33Z 2017-10-25T16:59:33Z CONTRIBUTOR

I removed the bad package and things should be back to normal. See https://anaconda.org/conda-forge/hypothesis/files?version=3.33.0

(Not sure what went wrong and I cannot look into it right now, but I guess that latest hypothesis is not crucial for you here.)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Test suite is failing on master: No module named 'hypothesis.extra.pytestplugin' 268213436
338405750 https://github.com/pydata/xarray/issues/1621#issuecomment-338405750 https://api.github.com/repos/pydata/xarray/issues/1621 MDEyOklzc3VlQ29tbWVudDMzODQwNTc1MA== ocefpaf 950575 2017-10-21T14:28:47Z 2017-10-21T14:28:47Z CONTRIBUTOR

I have never found timedelta64 indices to be particularly useful.

Same here. :+1: for 1

PS: 2 could be the start of a nice "CF add-on" package for xarray, but I don't think it should be in the xarray code.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Undesired decoding to timedelta64 (was: units of "seconds" translated to time coordinate) 264321376
334508795 https://github.com/pydata/xarray/issues/1611#issuecomment-334508795 https://api.github.com/repos/pydata/xarray/issues/1611 MDEyOklzc3VlQ29tbWVudDMzNDUwODc5NQ== ocefpaf 950575 2017-10-05T15:52:04Z 2017-10-05T15:52:04Z CONTRIBUTOR

Yep, just saw that in https://github.com/conda-forge/pynio-feedstock/pull/30

So we need to wait... Hopefully they will release it soon. Thanks!

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  pynio backend broken in python 3 262966657
334482987 https://github.com/pydata/xarray/issues/1611#issuecomment-334482987 https://api.github.com/repos/pydata/xarray/issues/1611 MDEyOklzc3VlQ29tbWVudDMzNDQ4Mjk4Nw== ocefpaf 950575 2017-10-05T14:31:14Z 2017-10-05T14:31:14Z CONTRIBUTOR

Do you know if the py3k support made it into the pynio 1.5.0 release? If so, I can "fix" the conda package easily; if not, I am unsure what source I should use to enable the Python 3 builds.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  pynio backend broken in python 3 262966657
327023000 https://github.com/pydata/xarray/issues/1452#issuecomment-327023000 https://api.github.com/repos/pydata/xarray/issues/1452 MDEyOklzc3VlQ29tbWVudDMyNzAyMzAwMA== ocefpaf 950575 2017-09-04T20:13:38Z 2017-09-04T20:13:38Z CONTRIBUTOR

Totally missed your answer here @shoyer. Thanks!

The workaround is fine, and the `_FillValue = -1` seems wrong to me. Pinging @rsignell-usgs, who knows more about the conventions and was interested in this in the first place.

Closing this as I don't think anything is broken with xarray.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Expected S1 dtype in datarray but got float64 235687353
325999424 https://github.com/pydata/xarray/issues/486#issuecomment-325999424 https://api.github.com/repos/pydata/xarray/issues/486 MDEyOklzc3VlQ29tbWVudDMyNTk5OTQyNA== ocefpaf 950575 2017-08-30T14:00:26Z 2017-08-30T14:00:26Z CONTRIBUTOR

@JiaweiZhuang let's discuss that in the feedstock issue tracker to avoid cluttering xarray's.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  API for multi-dimensional resampling/regridding 96211612
325973754 https://github.com/pydata/xarray/issues/486#issuecomment-325973754 https://api.github.com/repos/pydata/xarray/issues/486 MDEyOklzc3VlQ29tbWVudDMyNTk3Mzc1NA== ocefpaf 950575 2017-08-30T12:22:13Z 2017-08-30T12:22:13Z CONTRIBUTOR

then some effort needs to be made to build conda recipes and other infrastructure for distributing and building the platform.

Like https://github.com/conda-forge/esmf-feedstock :wink:

(Windows is still a problem b/c of the Fortran compiler.)

{
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  API for multi-dimensional resampling/regridding 96211612
323438668 https://github.com/pydata/xarray/issues/1510#issuecomment-323438668 https://api.github.com/repos/pydata/xarray/issues/1510 MDEyOklzc3VlQ29tbWVudDMyMzQzODY2OA== ocefpaf 950575 2017-08-18T19:16:33Z 2017-08-18T19:19:21Z CONTRIBUTOR

Something is not OK when parsing ocean_time (and some other variables).

If you do `d['ocean_time'][:]` on your example with netCDF4 you'll get the same error as xarray.

Could it be a bad aggregation on the THREDDS service?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_dataset leading to NetCDF: file not found 251332357
264471550 https://github.com/pydata/xarray/pull/1148#issuecomment-264471550 https://api.github.com/repos/pydata/xarray/issues/1148 MDEyOklzc3VlQ29tbWVudDI2NDQ3MTU1MA== ocefpaf 950575 2016-12-02T14:53:28Z 2016-12-06T00:51:39Z CONTRIBUTOR

Ignore what I said. Looking closer, it makes sense to set the default to True.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Expose options for axis sharing between subplots 192816291
264461797 https://github.com/pydata/xarray/pull/1148#issuecomment-264461797 https://api.github.com/repos/pydata/xarray/issues/1148 MDEyOklzc3VlQ29tbWVudDI2NDQ2MTc5Nw== ocefpaf 950575 2016-12-02T14:09:37Z 2016-12-02T14:09:37Z CONTRIBUTOR

Would it make sense to set the default to False? I believe False is the default in mpl.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Expose options for axis sharing between subplots 192816291
263244991 https://github.com/pydata/xarray/pull/1134#issuecomment-263244991 https://api.github.com/repos/pydata/xarray/issues/1134 MDEyOklzc3VlQ29tbWVudDI2MzI0NDk5MQ== ocefpaf 950575 2016-11-28T11:11:24Z 2016-11-28T11:11:24Z CONTRIBUTOR

TL;DR you may consider making this change permanent.

@fmaussion the problem is that the latest conda-forge hdf4 does ship with libmfhdf.so.0, but either defaults' version does not or an old version in conda-forge is broken. (I am traveling, but I'll confirm that as soon as I get back.)

Adding hdf4 at env creation forces the solver to get the latest version/build number and prevents the downgrade in the subsequent conda install call. So you should probably leave hdf4 there, even if it is over-specifying deps, because the issue may resurface.

I say that b/c, if the bad version is in conda-forge we can remove it. But if it is in defaults we have no control.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Further attempt to get netCDF4 working on RTD 191822204
262408119 https://github.com/pydata/xarray/issues/1106#issuecomment-262408119 https://api.github.com/repos/pydata/xarray/issues/1106 MDEyOklzc3VlQ29tbWVudDI2MjQwODExOQ== ocefpaf 950575 2016-11-23T00:38:20Z 2016-11-23T00:38:20Z CONTRIBUTOR

@ocefpaf were you able to make any progress on this?

Sorry but no. I will look at it again, but I do remember seeing some extra conda install commands issued after the env was created, and that may lead to undesirable up/downgrades of packages. If possible, the RTD env should install everything in one go, with a single `conda env create environment.yml`.

PS: Not sure what your timezone is but due to my ignorance on RTD it would be nice if we could touch base on gitter at some point. We can probably figure this out quickly.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Getting netCDF4 to work on RTD 188565022
260033149 https://github.com/pydata/xarray/issues/1106#issuecomment-260033149 https://api.github.com/repos/pydata/xarray/issues/1106 MDEyOklzc3VlQ29tbWVudDI2MDAzMzE0OQ== ocefpaf 950575 2016-11-11T19:16:10Z 2016-11-11T19:16:10Z CONTRIBUTOR

We are talking about two different issues here. I'll open another issue for the rasm file.

Ah OK. Sorry for the noise.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Getting netCDF4 to work on RTD 188565022
260030024 https://github.com/pydata/xarray/issues/1106#issuecomment-260030024 https://api.github.com/repos/pydata/xarray/issues/1106 MDEyOklzc3VlQ29tbWVudDI2MDAzMDAyNA== ocefpaf 950575 2016-11-11T19:02:01Z 2016-11-11T19:10:48Z CONTRIBUTOR

The error is visible here; the latest build logs are available here. Thanks!

Thanks @fmaussion. I can see that there are multiple conda install commands, and the subsequent ones change the env. For example:

```
conda install --yes --name latest sphinx==1.3.5 Pygments==2.1.1 docutils==0.12 mock pillow==3.0.0 sphinx_rtd_theme==0.1.7 alabaster>=0.7,<0.8,!=0.7.5

Fetching package metadata: .... Solving package specifications: .........

Package plan for installation in environment /home/docs/checkouts/readthedocs.org/user_builds/xray/conda/latest:

The following NEW packages will be INSTALLED:

jbig:             2.1-0       
sphinx_rtd_theme: 0.1.7-py27_0

The following packages will be DOWNGRADED:

freetype:         2.6.3-1      --> 2.5.5-1     
jpeg:             9b-0         --> 8d-2        
libtiff:          4.0.6-7      --> 4.0.6-2     
pillow:           3.4.2-py27_0 --> 3.0.0-py27_1
pygments:         2.1.3-py27_1 --> 2.1.1-py27_0
sphinx:           1.4.8-py27_0 --> 1.3.5-py27_0
tk:               8.5.19-0     --> 8.5.18-0

```

That jpeg change is not desirable!

I also cannot see the full log from `conda env create --name latest --file /home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/latest/doc/environment.yml`.

It's the same environment, I'm literally typing "git checkout master" or "git checkout v0.8.2" in my xarray-dev conda environment (on Python 3.5)

I am not sure it is due to the change I mention above. I will make a few experiments and report back.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Getting netCDF4 to work on RTD 188565022
260027475 https://github.com/pydata/xarray/issues/1106#issuecomment-260027475 https://api.github.com/repos/pydata/xarray/issues/1106 MDEyOklzc3VlQ29tbWVudDI2MDAyNzQ3NQ== ocefpaf 950575 2016-11-11T18:50:37Z 2016-11-11T18:50:37Z CONTRIBUTOR

Can you point me to the error and some details on RTD. They may need to update the conda version to get it to work.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Getting netCDF4 to work on RTD 188565022
259823644 https://github.com/pydata/xarray/issues/1106#issuecomment-259823644 https://api.github.com/repos/pydata/xarray/issues/1106 MDEyOklzc3VlQ29tbWVudDI1OTgyMzY0NA== ocefpaf 950575 2016-11-10T22:15:17Z 2016-11-10T22:15:17Z CONTRIBUTOR

@fmaussion I am away from a laptop to test this but the following change should fix it (depending on the conda version you have there) :

```yaml
channels:
  - conda-forge
  - defaults
```

By explicitly adding defaults below conda-forge you will ensure that the right set of packages will be downloaded thanks to the channel preference feature in recent conda version.
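
Spelled out, a hypothetical environment.yml would look like this (everything except the channel ordering is made up):

```yaml
name: latest
channels:
  - conda-forge  # listed first, so it wins with recent conda's channel preference
  - defaults
dependencies:
  - python=2.7
  - numpy
  - netcdf4
```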

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Getting netCDF4 to work on RTD 188565022
258461994 https://github.com/pydata/xarray/pull/1079#issuecomment-258461994 https://api.github.com/repos/pydata/xarray/issues/1079 MDEyOklzc3VlQ29tbWVudDI1ODQ2MTk5NA== ocefpaf 950575 2016-11-04T15:26:25Z 2016-11-04T15:26:25Z CONTRIBUTOR

We definitely ignore cell boundaries -- they don't (yet) have any place in the xarray data model.

Even though I would love to have that functionality, I do not believe it is high priority. Cell boundaries are a corner case, at least in my field of work, and 99% of the time it is OK to infer the intervals. Also, the name of the keyword is quite clear: `infer_intervals`.

Maybe you could only add a note in the docs mentioning that the cell boundaries might exist?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  New infer_intervals keyword for pcolormesh 187208913
258382469 https://github.com/pydata/xarray/pull/1079#issuecomment-258382469 https://api.github.com/repos/pydata/xarray/issues/1079 MDEyOklzc3VlQ29tbWVudDI1ODM4MjQ2OQ== ocefpaf 950575 2016-11-04T09:33:42Z 2016-11-04T09:33:42Z CONTRIBUTOR

Yes, it is a special case for pcolormesh though

A special case that I love. In the old Matlab days we had horrible hacks to get `pcolor` to work properly :smile:

I'm not sure if the additional complexity added by cell boundaries is on the xarray devs priority list...

That is fine. Maybe a warning in the docs would be nice. So people can revert to a "manual" plotting to get the boundaries right.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  New infer_intervals keyword for pcolormesh 187208913
258378187 https://github.com/pydata/xarray/pull/1079#issuecomment-258378187 https://api.github.com/repos/pydata/xarray/issues/1079 MDEyOklzc3VlQ29tbWVudDI1ODM3ODE4Nw== ocefpaf 950575 2016-11-04T09:12:47Z 2016-11-04T09:12:47Z CONTRIBUTOR

Is `infer_intervals` creating coordinate bounds for plotting? It looks like it. What about when we do have cell boundaries in the dataset? (See http://cfconventions.org/cf-conventions/v1.6.0/cf-conventions.html#cell-boundaries). Will those be ignored and the inferred ones used?

(Sorry if I am only making noise here and this does not make sense. In that case just ignore my comment.)
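
For reference, here is a rough sketch of the kind of interval inference I am asking about (not xarray's actual implementation): cell edges guessed as midpoints between centers, extrapolated by half a step at both ends.

```python
import numpy as np

def infer_interval_breaks(coord):
    """Guess cell edges from cell centers: midpoints between neighbours,
    extrapolated by half a step at both ends."""
    coord = np.asarray(coord, dtype=float)
    mid = (coord[:-1] + coord[1:]) / 2
    first = coord[0] - (mid[0] - coord[0])
    last = coord[-1] + (coord[-1] - mid[-1])
    return np.concatenate([[first], mid, [last]])

# Three cell centers produce four edges.
infer_interval_breaks([0.0, 1.0, 2.0])  # → array([-0.5, 0.5, 1.5, 2.5])
```

Actual cell `bounds` stored in a CF file would replace this guess when present.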

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  New infer_intervals keyword for pcolormesh 187208913
257638891 https://github.com/pydata/xarray/issues/1071#issuecomment-257638891 https://api.github.com/repos/pydata/xarray/issues/1071 MDEyOklzc3VlQ29tbWVudDI1NzYzODg5MQ== ocefpaf 950575 2016-11-01T17:49:53Z 2016-11-01T17:49:53Z CONTRIBUTOR

which version couples to who was not obvious

Sorry, this is an old issue that I had in the back of my head and I do not remember the versions :grimacing: but a general update should get you going.

in the end what worked was installing cartopy from conda-forge repository.

:+1:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray-cartopy broken? 186582995
257619916 https://github.com/pydata/xarray/issues/1071#issuecomment-257619916 https://api.github.com/repos/pydata/xarray/issues/1071 MDEyOklzc3VlQ29tbWVudDI1NzYxOTkxNg== ocefpaf 950575 2016-11-01T16:46:38Z 2016-11-01T16:46:38Z CONTRIBUTOR

That is unrelated to xarray. It is a cartopy/shapely bug due to version mismatch. If you update both that should work.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray-cartopy broken? 186582995
239210833 https://github.com/pydata/xarray/pull/940#issuecomment-239210833 https://api.github.com/repos/pydata/xarray/issues/940 MDEyOklzc3VlQ29tbWVudDIzOTIxMDgzMw== ocefpaf 950575 2016-08-11T16:15:05Z 2016-08-11T16:15:05Z CONTRIBUTOR

@shoyer the more I think about this the more I don't like the addition of extra keywords. Even though I would like this behavior to be the default one I really do not like the complexity I added here.

Closing this...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Don't convert time data to timedelta by default 169276671
239210661 https://github.com/pydata/xarray/issues/843#issuecomment-239210661 https://api.github.com/repos/pydata/xarray/issues/843 MDEyOklzc3VlQ29tbWVudDIzOTIxMDY2MQ== ocefpaf 950575 2016-08-11T16:14:28Z 2016-08-11T16:14:28Z CONTRIBUTOR

@shoyer the more I think about this the more I don't like the addition of extra keywords. Even though I would like this behavior to be the default one I really do not like the complexity I added here.

Closing this...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Don't convert data with time units to timedeltas by default 153066635
237659233 https://github.com/pydata/xarray/pull/940#issuecomment-237659233 https://api.github.com/repos/pydata/xarray/issues/940 MDEyOklzc3VlQ29tbWVudDIzNzY1OTIzMw== ocefpaf 950575 2016-08-04T19:33:35Z 2016-08-04T19:33:35Z CONTRIBUTOR

If the main issue is plotting, you could try fixing that upstream, too! pydata/pandas#8711

Indeed! That makes sense no matter what is decided here. Thanks for pointing that out. (Not sure if I am up to the challenge though.)

It might be worth querying the mailing list for more opinions here.

Done!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Don't convert time data to timedelta by default 169276671
237649523 https://github.com/pydata/xarray/pull/940#issuecomment-237649523 https://api.github.com/repos/pydata/xarray/issues/940 MDEyOklzc3VlQ29tbWVudDIzNzY0OTUyMw== ocefpaf 950575 2016-08-04T18:57:14Z 2016-08-04T18:57:14Z CONTRIBUTOR

Decoding time units into timedelta64 is useful if you want to be able to use them for arithmetic with datetimes.

I get that, but my question is how often do users perform such operations? Again, I am biased b/c with my data I never want to do that, as it does not make sense. And, when it does make sense, I believe that the price of post conversion is worth the advantages of converting by default.

Still I'm somewhat reluctant to change the default here.

If you want `decode_timedeltas=True` as the default I am not sure it is worth the extra keyword then. My goal with this PR is to avoid extra steps. If users need to set it to True it is not too different from doing `data.values / 1e9`. If all we get is to change one step for another I prefer to not make `open_dataset` more complex and leave it as is.
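
(For concreteness, a tiny hypothetical example of that conversion step; the made-up `periods` array stands in for decoded data.)

```python
import numpy as np

# Made-up stand-in for data that came back decoded as timedelta64.
periods = np.array([3600, 7200, 86400], dtype="timedelta64[s]")

# `values / 1e9` assumes nanosecond resolution (xarray's default);
# dividing by a unit timedelta is the resolution-independent spelling.
seconds = periods / np.timedelta64(1, "s")
hours = periods / np.timedelta64(1, "h")
```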

Feel free to close this. I don't have strong feelings about what xarray should do by default. I just want to make it more convenient for my use cases.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Don't convert time data to timedelta by default 169276671
237560363 https://github.com/pydata/xarray/pull/940#issuecomment-237560363 https://api.github.com/repos/pydata/xarray/issues/940 MDEyOklzc3VlQ29tbWVudDIzNzU2MDM2Mw== ocefpaf 950575 2016-08-04T13:56:40Z 2016-08-04T13:56:40Z CONTRIBUTOR

This is ready for review.

Here is an example of this PR in action showing how plotting and working with periods gets easier with floats instead of timedeltas.

BTW, in light of https://github.com/pydata/xarray/issues/939#issue-169274464, I wonder if decode_timedeltas and decode_datetimes (or even the original decode_times) are needed at all. (Just defending myself as I don't really want a new keyword argument but a better default for time data in general :grimacing:)

Maybe I am being thick and I don't know enough use cases, but I cannot see why someone might want to convert time data (which most of the time represents periods) to timedeltas. That breaks from the original data units and forces the user to compute it back and convert the dtype too before plotting, etc.

Regarding the time coordinate itself, I understand that xarray, due to the pandas nature of the index, needs to fail to convert time when the calendar is not supported by pandas. Maybe, instead of throwing an error and asking users to use the option decode_(date)times=False in case of failure, xarray could issue a warning and return only the "numbers" as if decode_(date)times were set to False.

I understand, and agree most of the time, that raising errors is better than issuing warnings and creating an ambiguity in the returns. So maybe this one is harder to change than the former.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Don't convert time data to timedelta by default 169276671
237430205 https://github.com/pydata/xarray/issues/843#issuecomment-237430205 https://api.github.com/repos/pydata/xarray/issues/843 MDEyOklzc3VlQ29tbWVudDIzNzQzMDIwNQ== ocefpaf 950575 2016-08-04T01:52:53Z 2016-08-04T01:52:53Z CONTRIBUTOR

Prototype is ready http://nbviewer.jupyter.org/gist/ocefpaf/cabdcb27a5ef7da1b6110327a5f03e17

I will polish this and send a PR soon.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Don't convert data with time units to timedeltas by default 153066635
237416540 https://github.com/pydata/xarray/issues/843#issuecomment-237416540 https://api.github.com/repos/pydata/xarray/issues/843 MDEyOklzc3VlQ29tbWVudDIzNzQxNjU0MA== ocefpaf 950575 2016-08-04T00:31:10Z 2016-08-04T00:31:10Z CONTRIBUTOR

No problem. I want to solve this anyways b/c it is quite awkward to keep converting back from timedeltas to time data. I will have a PR soon.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Don't convert data with time units to timedeltas by default 153066635
237414886 https://github.com/pydata/xarray/issues/843#issuecomment-237414886 https://api.github.com/repos/pydata/xarray/issues/843 MDEyOklzc3VlQ29tbWVudDIzNzQxNDg4Ng== ocefpaf 950575 2016-08-04T00:19:31Z 2016-08-04T00:19:31Z CONTRIBUTOR

@shoyer is this what you have in mind? Replacing `decode_time` with `decode_datetimes` and `decode_timedelta`, and then:

```python
if decode_datetimes and 'units' in attributes and 'since' in attributes['units']:
    units = pop_to(attributes, encoding, 'units')
    calendar = pop_to(attributes, encoding, 'calendar')
    data = DecodedCFDatetimeArray(data, units, calendar)

if decode_timedelta and attributes['units'] in TIME_UNITS:
    units = pop_to(attributes, encoding, 'units')
    data = DecodedCFTimedeltaArray(data, units)
```

To be honest I cannot see why someone might want to convert time data to a timedelta and I would be inclined to remove decode_timedelta instead. However, that may be my very biased view since such transformation does not make sense with my data :smile:

Do you have any advice? (I would like to try this before you release v0.8.0)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Don't convert data with time units to timedeltas by default 153066635
237033108 https://github.com/pydata/xarray/pull/844#issuecomment-237033108 https://api.github.com/repos/pydata/xarray/issues/844 MDEyOklzc3VlQ29tbWVudDIzNzAzMzEwOA== ocefpaf 950575 2016-08-02T20:28:25Z 2016-08-02T20:28:25Z CONTRIBUTOR

Done. Not sure why only AppVeyor started :confused:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add a filter_by_attrs method to Dataset 153126324
237008322 https://github.com/pydata/xarray/pull/844#issuecomment-237008322 https://api.github.com/repos/pydata/xarray/issues/844 MDEyOklzc3VlQ29tbWVudDIzNzAwODMyMg== ocefpaf 950575 2016-08-02T19:02:19Z 2016-08-02T19:02:19Z CONTRIBUTOR

Assuming we do add the filter method, maybe filter_by_attrs?

Makes sense. I will modify this soon.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add a filter_by_attrs method to Dataset 153126324
236998182 https://github.com/pydata/xarray/pull/844#issuecomment-236998182 https://api.github.com/repos/pydata/xarray/issues/844 MDEyOklzc3VlQ29tbWVudDIzNjk5ODE4Mg== ocefpaf 950575 2016-08-02T18:31:43Z 2016-08-02T18:31:43Z CONTRIBUTOR

I do think the name is a mouthful, though :).

I agree that is a mouthful, but then again, I was trying to be consistent with the existing versions. However, I don't really have a strong opinion on the matter and I am fine with whatever you decide. Should I rename it then? `filter_attrs`? (Sounds weird though b/c you are not filtering the attrs, but the variables based on the attrs :expressionless:)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add a filter_by_attrs method to Dataset 153126324
236988451 https://github.com/pydata/xarray/pull/844#issuecomment-236988451 https://api.github.com/repos/pydata/xarray/issues/844 MDEyOklzc3VlQ29tbWVudDIzNjk4ODQ1MQ== ocefpaf 950575 2016-08-02T18:00:19Z 2016-08-02T18:00:19Z CONTRIBUTOR

I am fine with whatever you decide, but here are my two cents:

- `get_variables_by_attributes` is the name of this method in netCDF4 and some Java netCDF libraries, so I'd rather not have a specialized version for attributes at all than add it with a different name.
- I see the elegance in `s.filter(lambda x: x.attrs['standard_name'] == 'convective_precipitation_flux')` and I like it a lot! But the specialized version for attributes is more compact to write and, at least in my field, filtering by attributes is more common, making this version more convenient.

Feel free to close this if you don't think it is worth adding the specialized version. Or let me know if you want to rename it.
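
For the sake of discussion, a minimal sketch of the matching rule I have in mind, with plain dicts standing in for a Dataset's variables (the `callable` branch mimics netCDF4's value-or-callable matching):

```python
def filter_by_attrs(variables, **kwargs):
    """Return the variables whose attrs match every key=value pair;
    a value may also be a callable that receives the attr (or None)."""
    matches = {}
    for name, attrs in variables.items():
        for key, pattern in kwargs.items():
            value = attrs.get(key)
            if callable(pattern):
                if not pattern(value):
                    break
            elif value != pattern:
                break
        else:  # no break: every criterion matched
            matches[name] = attrs
    return matches

variables = {
    "temp": {"standard_name": "sea_water_temperature", "units": "degC"},
    "salt": {"standard_name": "sea_water_salinity", "units": "1"},
}

filter_by_attrs(variables, standard_name="sea_water_salinity")  # → {'salt': ...}
```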

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add a filter_by_attrs method to Dataset 153126324
235785719 https://github.com/pydata/xarray/pull/844#issuecomment-235785719 https://api.github.com/repos/pydata/xarray/issues/844 MDEyOklzc3VlQ29tbWVudDIzNTc4NTcxOQ== ocefpaf 950575 2016-07-28T02:44:46Z 2016-07-28T02:44:46Z CONTRIBUTOR

Is there still interest in this, or should I close it in light of https://github.com/pydata/xarray/issues/883?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add a filter_by_attrs method to Dataset 153126324
234722229 https://github.com/pydata/xarray/pull/844#issuecomment-234722229 https://api.github.com/repos/pydata/xarray/issues/844 MDEyOklzc3VlQ29tbWVudDIzNDcyMjIyOQ== ocefpaf 950575 2016-07-23T14:53:41Z 2016-07-23T14:53:41Z CONTRIBUTOR

Agreed -- this is almost ready. Please add to the API docs (api.rst) and do the docstring fixes.

Not sure if I did this right :grimacing:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add a filter_by_attrs method to Dataset 153126324
225403691 https://github.com/pydata/xarray/pull/844#issuecomment-225403691 https://api.github.com/repos/pydata/xarray/issues/844 MDEyOklzc3VlQ29tbWVudDIyNTQwMzY5MQ== ocefpaf 950575 2016-06-12T01:18:00Z 2016-06-12T01:18:00Z CONTRIBUTOR

Rebased and ready for another round of reviews :wink:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add a filter_by_attrs method to Dataset 153126324
222164014 https://github.com/pydata/xarray/pull/860#issuecomment-222164014 https://api.github.com/repos/pydata/xarray/issues/860 MDEyOklzc3VlQ29tbWVudDIyMjE2NDAxNA== ocefpaf 950575 2016-05-27T14:40:13Z 2016-05-27T14:40:13Z CONTRIBUTOR

@ocefpaf Looks like it's the official Continuum build of SciPy that just started failing here (entirely coincidentally). I only switched to conda forge for our build with all optional dependencies py27-cdat+pynio. So I'm going to merge this for now and fix the minimal build later.

Cool. I will take a look at adding coveralls once I get back in the office. (Heading to PyCon. Will you be there @shoyer?)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Switch py2.7 CI build to use conda-forge 156793282
221734299 https://github.com/pydata/xarray/pull/860#issuecomment-221734299 https://api.github.com/repos/pydata/xarray/issues/860 MDEyOklzc3VlQ29tbWVudDIyMTczNDI5OQ== ocefpaf 950575 2016-05-25T23:07:57Z 2016-05-25T23:09:48Z CONTRIBUTOR

@shoyer I am interested in helping you investigate what is going on here, as that will help us keep conda-forge stable. Can you please add `conda config --set show_channel_urls true` in the .travis.yml so we know where the packages are coming from? We just added scipy to conda-forge and I am afraid that might be the issue here.

Ping @jakirkham who added the scipy recipe to help investigate the issue :grimacing:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Switch py2.7 CI build to use conda-forge 156793282
219189433 https://github.com/pydata/xarray/pull/842#issuecomment-219189433 https://api.github.com/repos/pydata/xarray/issues/842 MDEyOklzc3VlQ29tbWVudDIxOTE4OTQzMw== ocefpaf 950575 2016-05-14T00:36:12Z 2016-05-14T00:36:12Z CONTRIBUTOR

Sorry, I'm going camping this weekend so I won't be able to get this out. Next time give me just a little bit more warning :).

No biggie. As I mentioned above I have a plan B (conda install of the latest dev version).

Enjoy your camping!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix #665 decode_cf_timedelta 2D 152888663
219097316 https://github.com/pydata/xarray/pull/844#issuecomment-219097316 https://api.github.com/repos/pydata/xarray/issues/844 MDEyOklzc3VlQ29tbWVudDIxOTA5NzMxNg== ocefpaf 950575 2016-05-13T16:46:47Z 2016-05-13T17:12:02Z CONTRIBUTOR

The appveyor build failure looks unrelated to this change -- something about conda dependencies.

I am experiencing that in other projects. It is actually a bad download of miniconda, and PowerShell makes the error message unusable. Re-starting should fix it. (BTW, miniconda is pre-installed on AppVeyor and that download is not needed.)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add a filter_by_attrs method to Dataset 153126324
219073332 https://github.com/pydata/xarray/pull/842#issuecomment-219073332 https://api.github.com/repos/pydata/xarray/issues/842 MDEyOklzc3VlQ29tbWVudDIxOTA3MzMzMg== ocefpaf 950575 2016-05-13T15:15:39Z 2016-05-13T15:15:39Z CONTRIBUTOR

@shoyer I will be teaching a tutorial next Monday that will hit this bug. I know it is a lot to ask... But do you think you could cut a bugfix release? (I have a plan B already, so no pressure.)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix #665 decode_cf_timedelta 2D 152888663
219061348 https://github.com/pydata/xarray/pull/844#issuecomment-219061348 https://api.github.com/repos/pydata/xarray/issues/844 MDEyOklzc3VlQ29tbWVudDIxOTA2MTM0OA== ocefpaf 950575 2016-05-13T14:36:01Z 2016-05-13T14:36:01Z CONTRIBUTOR

@jhamman and @shoyer if the tests pass, this is ready for another round of review. (Let me know if I should squash the previous commits to make it easier to review.)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add a filter_by_attrs method to Dataset 153126324
218314843 https://github.com/pydata/xarray/pull/844#issuecomment-218314843 https://api.github.com/repos/pydata/xarray/issues/844 MDEyOklzc3VlQ29tbWVudDIxODMxNDg0Mw== ocefpaf 950575 2016-05-10T22:48:53Z 2016-05-10T22:48:53Z CONTRIBUTOR

@shoyer this is ready for another round of review.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add a filter_by_attrs method to Dataset 153126324
217171905 https://github.com/pydata/xarray/pull/844#issuecomment-217171905 https://api.github.com/repos/pydata/xarray/issues/844 MDEyOklzc3VlQ29tbWVudDIxNzE3MTkwNQ== ocefpaf 950575 2016-05-05T14:40:09Z 2016-05-10T22:47:52Z CONTRIBUTOR

Well, if you guys are mostly excited about using this for coordinate variables, another consistent choice would be to return a list of matching DataArrays. But if we want to return a Dataset, we should only do data variables, because it's weird to lose all the describing coordinates when you match, e.g., standard_name="air_temperature".

We agree with you and I prefer to return a Dataset.

Right now we always go to netCDF4-python to do this low-level CF interpretation stuff. If we start using xarray for that task we should improve xarray, to take advantage of the conventions (CF/SGRID/UGRID), instead of using xarray to just find the coords data. (For example: creating the z coordinates from non-dimension coordinates and add that to the Dataset coords automatically.)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add a filter_by_attrs method to Dataset 153126324
217163055 https://github.com/pydata/xarray/pull/844#issuecomment-217163055 https://api.github.com/repos/pydata/xarray/issues/844 MDEyOklzc3VlQ29tbWVudDIxNzE2MzA1NQ== ocefpaf 950575 2016-05-05T14:03:57Z 2016-05-05T14:03:57Z CONTRIBUTOR

After discussing with my CF guru (@rsignell-usgs) and the original author of the get_variables_by_attributes (@kwilcox) I jumped the fence and I am OK leaving the xarray model pure and filtering only the data variables.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add a filter_by_attrs method to Dataset 153126324
217132574 https://github.com/pydata/xarray/pull/844#issuecomment-217132574 https://api.github.com/repos/pydata/xarray/issues/844 MDEyOklzc3VlQ29tbWVudDIxNzEzMjU3NA== ocefpaf 950575 2016-05-05T11:41:19Z 2016-05-05T11:41:19Z CONTRIBUTOR

should this filter both data variables and coordinates or only data variables?

I thought a little bit more about this and now I am on the fence. The pros of filtering only data variables are a nice and clean Dataset object, and overall consistency with the high level xarray model. The cons are that we cannot do the filtering on the coords (obviously), but most of the time that we need to do that we go to a lower level object like netCDF4.Dataset. However, it would be nice if both xarray and netCDF4 behaved the same way...

I am 51% with filtering both (and the current implementation does that) but I will leave the final decision to you.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add a filter_by_attrs method to Dataset 153126324
217129119 https://github.com/pydata/xarray/pull/844#issuecomment-217129119 https://api.github.com/repos/pydata/xarray/issues/844 MDEyOklzc3VlQ29tbWVudDIxNzEyOTExOQ== ocefpaf 950575 2016-05-05T11:29:01Z 2016-05-05T11:29:01Z CONTRIBUTOR

An important design question: should this filter both data variables and coordinates or only data variables? My thought is that it's only worth filtering data variables -- filtering out unmatched coordinates is not very useful.

I understand that returning coordinates without a data variable associated with them seems weird in the high level model of xarray.Dataset, but I disagree that it is not very useful. In fact that is the most common operation I do: find coordinates based on axis, formula_terms, etc. and construct common grids for plotting and/or build the z coords from non-dimension coords.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add a filter_by_attrs method to Dataset 153126324
216934289 https://github.com/pydata/xarray/pull/842#issuecomment-216934289 https://api.github.com/repos/pydata/xarray/issues/842 MDEyOklzc3VlQ29tbWVudDIxNjkzNDI4OQ== ocefpaf 950575 2016-05-04T17:08:19Z 2016-05-04T17:08:19Z CONTRIBUTOR

possibly we should add an explicit toggle for decoding timedeltas vs datetimes.

:+1:

I am opening a separated issue for that.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix #665 decode_cf_timedelta 2D 152888663
216932304 https://github.com/pydata/xarray/issues/567#issuecomment-216932304 https://api.github.com/repos/pydata/xarray/issues/567 MDEyOklzc3VlQ29tbWVudDIxNjkzMjMwNA== ocefpaf 950575 2016-05-04T17:01:19Z 2016-05-04T17:01:19Z CONTRIBUTOR

@shoyer this made it into netCDF4-python and some people in my group would like to have this in xarray too. If you think it is worth it I can put a PR together for this.
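For context, netCDF4-python's get_variables_by_attributes accepts either plain values (compared for equality) or callables applied to the attribute value. A simplified sketch of that matching rule, in pure Python (the helper name is hypothetical, and real netCDF4 also lets callables see missing attributes as None):

```python
def matches(attrs, **criteria):
    """True if `attrs` satisfies every criterion. A criterion may be a
    plain value (compared with ==) or a callable applied to the value."""
    for name, expected in criteria.items():
        if name not in attrs:
            return False
        value = attrs[name]
        if callable(expected):
            if not expected(value):
                return False
        elif value != expected:
            return False
    return True

print(matches({"standard_name": "air_temperature"},
              standard_name="air_temperature"))       # True
print(matches({"units": "m s-1"}, units=lambda u: "s-1" in u))  # True
```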

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Best way to find data variables by standard_name 105688738
216931822 https://github.com/pydata/xarray/pull/842#issuecomment-216931822 https://api.github.com/repos/pydata/xarray/issues/842 MDEyOklzc3VlQ29tbWVudDIxNjkzMTgyMg== ocefpaf 950575 2016-05-04T16:59:40Z 2016-05-04T16:59:40Z CONTRIBUTOR

Looks great! Please add a brief bug fix note to "What's new", then I will merge.

How about https://github.com/pydata/xarray/pull/842/commits/518ea53284f659edbb31cd98c326b3e78f440fc3?

PS: @shoyer I am still not sure that converting any data that has units of time to timedelta is desirable as the default behavior. I may be biased but in my datasets (waves period data) we usually do not want that.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix #665 decode_cf_timedelta 2D 152888663
192342165 https://github.com/pydata/xarray/issues/784#issuecomment-192342165 https://api.github.com/repos/pydata/xarray/issues/784 MDEyOklzc3VlQ29tbWVudDE5MjM0MjE2NQ== ocefpaf 950575 2016-03-04T16:23:39Z 2016-03-04T16:23:39Z CONTRIBUTOR

second = first.reindex_like(second, method='nearest', tolerance=0.001)

I really like this. Explicit and self-documenting code. I would avoid making this automatic.
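The nearest-with-tolerance lookup behind reindex_like(method='nearest', tolerance=...) can be sketched in pure Python: for each target coordinate, take the nearest index value, but reject matches farther than the tolerance (helper name and inputs are illustrative, not xarray's implementation).

```python
def nearest_indexer(index, targets, tolerance):
    """For each target, the position of the nearest value in `index`,
    or None when even the nearest value is farther than `tolerance`."""
    out = []
    for t in targets:
        pos = min(range(len(index)), key=lambda i: abs(index[i] - t))
        out.append(pos if abs(index[pos] - t) <= tolerance else None)
    return out

index = [0.0, 1.0, 2.0]
print(nearest_indexer(index, [0.0004, 1.0002, 2.5], tolerance=0.001))
# [0, 1, None]
```

The None entries are where xarray would fill NaN, which is why the explicit tolerance keeps almost-equal grids honest.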

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  almost-equal grids 138443211
191840829 https://github.com/pydata/xarray/pull/782#issuecomment-191840829 https://api.github.com/repos/pydata/xarray/issues/782 MDEyOklzc3VlQ29tbWVudDE5MTg0MDgyOQ== ocefpaf 950575 2016-03-03T16:32:20Z 2016-03-03T16:33:54Z CONTRIBUTOR

What if there are bounds in the file and the data is regularly spaced? I consider the current behavior a guess, and guessing should be an active user choice, not the automatic behavior.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  dont infer interval breaks in pcolormesh when ax is a cartopy axis 138086327
191607244 https://github.com/pydata/xarray/pull/782#issuecomment-191607244 https://api.github.com/repos/pydata/xarray/issues/782 MDEyOklzc3VlQ29tbWVudDE5MTYwNzI0NA== ocefpaf 950575 2016-03-03T06:35:02Z 2016-03-03T06:35:02Z CONTRIBUTOR

Bear in mind that some netCDF4 files do have the bounds data and those should be used as "breaks" when available.

BTW I'd rather not have the _infer_interval_breaks at all; I am not sure that assuming linear spacing at the mid points is generic enough.
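The linear mid-point guess being questioned here looks roughly like this (a simplified pure-Python sketch, not xarray's actual _infer_interval_breaks): breaks are the midpoints between neighbouring coordinates, extrapolated by half a step at each end.

```python
def infer_interval_breaks(coord):
    """Cell edges guessed as midpoints between coordinates,
    linearly extrapolated at both ends."""
    mids = [(a + b) / 2 for a, b in zip(coord, coord[1:])]
    first = coord[0] - (mids[0] - coord[0])
    last = coord[-1] + (coord[-1] - mids[-1])
    return [first] + mids + [last]

print(infer_interval_breaks([0.0, 1.0, 2.0]))  # [-0.5, 0.5, 1.5, 2.5]
```

For regularly spaced coordinates this recovers the true cell edges; for irregular grids, or when the file carries explicit bounds variables, it is exactly the kind of guess the comment argues against.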

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  dont infer interval breaks in pcolormesh when ax is a cartopy axis 138086327
176378833 https://github.com/pydata/xarray/issues/732#issuecomment-176378833 https://api.github.com/repos/pydata/xarray/issues/732 MDEyOklzc3VlQ29tbWVudDE3NjM3ODgzMw== ocefpaf 950575 2016-01-28T20:04:55Z 2016-01-28T20:04:55Z CONTRIBUTOR

I don't think it's too much to ask for one more py33 build after the rename.

Yes and no. If you want the full optional dependencies, xarray can be a little bit hard to build (with pynio support, etc). If you want just the basics then it is relatively easy.

You should make the conda package request in https://github.com/ContinuumIO/anaconda-issues/issues/635, and other continuum communication channels (mailing list, gitter, etc).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  0.7 missing Python 3.3 conda package 129525746
176365429 https://github.com/pydata/xarray/issues/732#issuecomment-176365429 https://api.github.com/repos/pydata/xarray/issues/732 MDEyOklzc3VlQ29tbWVudDE3NjM2NTQyOQ== ocefpaf 950575 2016-01-28T19:40:07Z 2016-01-28T19:40:07Z CONTRIBUTOR

The use of "official release" can be confusing in this case. I believe that the latest xarray works OK in Python 3.3, but note that the project has not been tested on Python 3.3 since PR https://github.com/pydata/xarray/pull/583. I believe that the official PyPI source dist will install OK in Python 3.3 BTW.

On the other hand, Continuum does seem to be abandoning Python 3.3; there are hardly any new conda packages being built for it.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  0.7 missing Python 3.3 conda package 129525746
176308535 https://github.com/pydata/xarray/issues/732#issuecomment-176308535 https://api.github.com/repos/pydata/xarray/issues/732 MDEyOklzc3VlQ29tbWVudDE3NjMwODUzNQ== ocefpaf 950575 2016-01-28T18:04:22Z 2016-01-28T18:04:22Z CONTRIBUTOR

@richardotis any special reason for wanting a Python 3.3 version? The migration to >=3.4 is highly recommended. (And I don't think Continuum or the community are building stuff for Python 3.3 lately.)

If you cannot upgrade you can use this recipe to build your own Python 3.3 xarray.

If you clone the whole repo you can then type:

export CONDA_PY=33
export CONDA_NPY=110  # (or any other numpy version series you need: 18, 19, ...)
conda build xarray

If you issue the last command from the repo root directory conda will build dependencies for you.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  0.7 missing Python 3.3 conda package 129525746
174090196 https://github.com/pydata/xarray/issues/721#issuecomment-174090196 https://api.github.com/repos/pydata/xarray/issues/721 MDEyOklzc3VlQ29tbWVudDE3NDA5MDE5Ng== ocefpaf 950575 2016-01-22T23:43:38Z 2016-01-23T04:32:48Z CONTRIBUTOR

conda install -c ioos xarray should work. ;-)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray package not found by conda 128101754
168853121 https://github.com/pydata/xarray/issues/704#issuecomment-168853121 https://api.github.com/repos/pydata/xarray/issues/704 MDEyOklzc3VlQ29tbWVudDE2ODg1MzEyMQ== ocefpaf 950575 2016-01-05T00:26:58Z 2016-01-05T00:26:58Z CONTRIBUTOR

:-1:

But I am just one user... And if the renaming really happens I guess that import xarray as xr is better than xry

(I like the pun: See the data under x rays :smile:)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Complete renaming xray -> xarray 124867009
155118942 https://github.com/pydata/xarray/issues/649#issuecomment-155118942 https://api.github.com/repos/pydata/xarray/issues/649 MDEyOklzc3VlQ29tbWVudDE1NTExODk0Mg== ocefpaf 950575 2015-11-09T16:44:36Z 2015-11-09T16:44:54Z CONTRIBUTOR

Got it. Thanks!

I guess I've been living around numpy.arrays way too long... Time to experiment with this brave new world of labeled arrays :wink:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  error when using broadcast_arrays with coordinates 115897556
155107254 https://github.com/pydata/xarray/issues/649#issuecomment-155107254 https://api.github.com/repos/pydata/xarray/issues/649 MDEyOklzc3VlQ29tbWVudDE1NTEwNzI1NA== ocefpaf 950575 2015-11-09T16:05:29Z 2015-11-09T16:05:29Z CONTRIBUTOR

Hi @rabernat,

Most gsw functions will call np.broadcast_arrays for you internally, so you can pass ds.a.values instead. It is ugly, I know, but consistent when using libraries that expect numpy arrays.
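What np.broadcast_arrays does with the unwrapped .values is plain numpy broadcasting; a minimal sketch (the arrays below stand in for hypothetical ds.a.values-style inputs):

```python
import numpy as np

# Plain ndarrays standing in for the .values of two DataArrays
# with different, broadcast-compatible shapes.
x = np.arange(3).reshape(3, 1)
y = np.arange(4).reshape(1, 4)

# Both outputs share the broadcast shape (3, 4).
xx, yy = np.broadcast_arrays(x, y)
print(xx.shape, yy.shape)  # (3, 4) (3, 4)
```

Dropping to .values loses the labels, which is exactly the trade-off the comment describes.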

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  error when using broadcast_arrays with coordinates 115897556
152253168 https://github.com/pydata/xarray/pull/636#issuecomment-152253168 https://api.github.com/repos/pydata/xarray/issues/636 MDEyOklzc3VlQ29tbWVudDE1MjI1MzE2OA== ocefpaf 950575 2015-10-29T17:14:08Z 2015-10-29T17:14:08Z CONTRIBUTOR

but it would be nice to be able to say that you use a standard license

:+1:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Read only support for PyNIO backend 112677315
152226596 https://github.com/pydata/xarray/pull/636#issuecomment-152226596 https://api.github.com/repos/pydata/xarray/issues/636 MDEyOklzc3VlQ29tbWVudDE1MjIyNjU5Ng== ocefpaf 950575 2015-10-29T16:03:22Z 2015-10-29T16:03:22Z CONTRIBUTOR

The license here: http://www.pyngl.ucar.edu/Licenses/PyNIO_source_license.shtml

@david-ian-brown sorry for my ignorance but is that OSI approved?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Read only support for PyNIO backend 112677315
152002550 https://github.com/pydata/xarray/pull/636#issuecomment-152002550 https://api.github.com/repos/pydata/xarray/issues/636 MDEyOklzc3VlQ29tbWVudDE1MjAwMjU1MA== ocefpaf 950575 2015-10-28T21:40:54Z 2015-10-28T21:40:54Z CONTRIBUTOR

Thanks @david-ian-brown! That is good news!!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Read only support for PyNIO backend 112677315
151783551 https://github.com/pydata/xarray/pull/636#issuecomment-151783551 https://api.github.com/repos/pydata/xarray/issues/636 MDEyOklzc3VlQ29tbWVudDE1MTc4MzU1MQ== ocefpaf 950575 2015-10-28T09:51:47Z 2015-10-28T09:51:47Z CONTRIBUTOR

Maybe we could get ioos to also keep a pynio conda build up to date (cc @ocefpaf)

@david-ian-brown is pynio open source now? If so I can add it to the ioos channel, but at PyPI it is still missing the source.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Read only support for PyNIO backend 112677315
143335054 https://github.com/pydata/xarray/pull/589#issuecomment-143335054 https://api.github.com/repos/pydata/xarray/issues/589 MDEyOklzc3VlQ29tbWVudDE0MzMzNTA1NA== ocefpaf 950575 2015-09-25T19:44:36Z 2015-09-25T19:44:36Z CONTRIBUTOR

single option for open_dataset that can disable all of xray's decoding options

I get that and I really like this option!

Not sure about raw=False

:+1: decode=True

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  New encoding keyword argument for to_netcdf 108271509

Next page

Advanced export

JSON shape: default, array, newline-delimited, object

CSV options:

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · About: xarray-datasette