## #2102: resample DeprecationWarning only on 1-D arrays?

*pydata/xarray · opened 2018-05-03 by a CONTRIBUTOR · 1 comment · closed as completed 2018-05-08*

#### Code Sample, a copy-pastable example if possible

```python
import numpy as np
import pandas as pd
import xarray as xr

da = xr.DataArray(np.array([1, 2, 3, 4], dtype=float).reshape(2, 2),
                  coords=[pd.date_range('1/1/2000', '1/2/2000', freq='D'),
                          np.linspace(0, 1, num=2)],
                  dims=['time', 'latitude'])

da.resample(freq='M', dim='time', how='mean')
# DeprecationWarning: .resample() has been modified to defer calculations.
# Instead of passing 'dim' and how="mean", instead consider using
# .resample(time="M").mean()
#
# array([[2., 3.]])
# Coordinates:
#   * time      (time) datetime64[ns] 2000-01-31
#   * latitude  (latitude) float64 0.0 1.0

da.resample(time="M").mean()
# array([2.5])
# Coordinates:
#   * time     (time) datetime64[ns] 2000-01-31
```

#### Problem description

The replacement suggested by the DeprecationWarning only works for 1-D arrays: since `.mean()` is called without a dimension argument, it averages over *every* dimension, so on the 2-D array above it collapses `latitude` as well and returns `array([2.5])` instead of `array([[2., 3.]])`. A quick fix could be to show the warning only if the DataArray/Dataset is 1-D. A more thorough fix could be to make the suggested call actually reproduce `.resample(freq='M', dim='time', how='mean')`, i.e. reduce over `time` only.

#### Expected Output

Same as `da.resample(freq='M', dim='time', how='mean')`.
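For reference, a minimal sketch of a new-style call that preserves the non-resampled dimension, assuming only the standard `dim` argument that xarray's resample reductions accept:

```python
import numpy as np
import pandas as pd
import xarray as xr

da = xr.DataArray(np.array([1., 2., 3., 4.]).reshape(2, 2),
                  coords=[pd.date_range('1/1/2000', '1/2/2000', freq='D'),
                          np.linspace(0, 1, num=2)],
                  dims=['time', 'latitude'])

# No dimension given: reduces over every dimension, collapsing latitude too.
print(da.resample(time='M').mean().values)             # [2.5]

# Reduce over 'time' only: matches the deprecated
# .resample(freq='M', dim='time', how='mean') result.
print(da.resample(time='M').mean(dim='time').values)   # [[2. 3.]]
```

So a warning message that suggested `.resample(time="M").mean('time')` would be correct for arrays of any dimensionality.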
#### Output of ``xr.show_versions()``

Not sure about the h5py FutureWarning?

```
>>> xr.show_versions()
/Users/Ray/anaconda/envs/rot-eof-dev-env/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters

INSTALLED VERSIONS
------------------
commit: None
python: 3.6.5.final.0
python-bits: 64
OS: Darwin
OS-release: 17.5.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

xarray: 0.10.3
pandas: 0.22.0
numpy: 1.14.2
scipy: 1.0.1
netCDF4: 1.3.1
h5netcdf: 0.5.1
h5py: 2.7.1
Nio: None
zarr: None
bottleneck: 1.2.1
cyordereddict: None
dask: 0.17.2
distributed: 1.21.6
matplotlib: 2.2.2
cartopy: 0.16.0
seaborn: None
setuptools: 39.0.1
pip: 9.0.3
conda: None
pytest: None
IPython: 6.3.1
sphinx: None
```

---

## #1536: Better compression algorithms for NetCDF

*pydata/xarray · opened 2017-08-28 by a MEMBER · 28 comments · closed as completed 2018-05-08*

As of today, ``Dataset.to_netcdf()`` only writes data either uncompressed or compressed with zlib. zlib was absolutely revolutionary when it was released... in 1995. Time has passed, and much better compression algorithms have appeared since.

The good news is that h5py supports LZF out of the box and can be extended with plugins to support, in theory, any other algorithm. h5netcdf exposes this interface through its new (non-legacy) API; however, ``Dataset.to_netcdf(engine='h5netcdf')`` supports only the legacy API. I have already verified that, once the data is written to disk with LZF (using h5netcdf directly, as sketched below), ``open_dataset(engine='h5netcdf')`` transparently opens the compressed store.

Options:

- Write a new engine for ``Dataset.to_netcdf()`` that supports the new h5netcdf API.
- Switch the whole ``engine='h5netcdf'`` to the new API and drop support for the old parameters in ``to_netcdf()``. This is less bad than it sounds, as people can switch to another engine in case of trouble. It is the cleanest solution, but also the most disruptive one.
- Switch the whole ``engine='h5netcdf'`` to the new API; have ``to_netcdf()`` accept both new and legacy parameters, and implement a translation layer from the legacy parameters to the new API. The benefit is that, as long as users stick to the legacy parameters, they can hop between engines transparently. On the other hand, I have a hard time believing anybody would care.
- ?