issues
53 rows where type = "issue" and user = 6815844 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
274797981 | MDU6SXNzdWUyNzQ3OTc5ODE= | 1725 | Switch our lazy array classes to use Dask instead? | fujiisoup 6815844 | open | 0 | 9 | 2017-11-17T09:12:34Z | 2023-09-15T15:51:41Z | MEMBER | Ported from #1724, comment by @shoyer
The subtleties of checking |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1725/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
527237590 | MDU6SXNzdWU1MjcyMzc1OTA= | 3562 | Minimize `.item()` call | fujiisoup 6815844 | open | 0 | 1 | 2019-11-22T14:44:43Z | 2023-06-08T04:48:50Z | MEMBER | MCVE Code Sample: I want to minimize the number of calls
In both cases, I need to call `.item()`. It is not a big issue, but I think it would be nice if xarray became more self-contained. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3562/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
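The `.item()` friction described in the row above can be illustrated with plain NumPy (a hedged sketch of general 0-d/scalar array behavior, not xarray's internals):

```python
import numpy as np

# Selecting a single element yields a NumPy scalar, not a plain
# Python object:
arr = np.array([10, 20, 30])
value = arr[np.array(1)]          # numpy integer scalar, not int

# .item() is the explicit conversion to a native Python scalar --
# the extra call the issue would like to minimize:
assert value.item() == 20
assert isinstance(value.item(), int)
```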
675482176 | MDU6SXNzdWU2NzU0ODIxNzY= | 4325 | Optimize ndrolling nanreduce | fujiisoup 6815844 | open | 0 | 5 | 2020-08-08T07:46:53Z | 2023-04-13T15:56:52Z | MEMBER | In #4219 we added ndrolling.
However, nanreduce, such as We can implement in-house nanreduce methods for the strided array.
For example, our |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
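A minimal sketch of the strided approach the row above alludes to, using NumPy's `sliding_window_view` as a stand-in for xarray's internal striding (an illustrative assumption, not the actual implementation):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Rolling nansum over a window of 3 without materializing copies:
# sliding_window_view returns a strided view, so the NaN-aware
# reduction runs directly over the window axis.
a = np.array([1.0, np.nan, 3.0, 4.0, 5.0])
windows = sliding_window_view(a, window_shape=3)   # shape (3, 3) view
rolling_nansum = np.nansum(windows, axis=-1)

assert rolling_nansum.tolist() == [4.0, 7.0, 12.0]
```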
262642978 | MDU6SXNzdWUyNjI2NDI5Nzg= | 1603 | Explicit indexes in xarray's data-model (Future of MultiIndex) | fujiisoup 6815844 | closed | 0 | 1.0 741199 | 68 | 2017-10-04T01:51:47Z | 2022-09-28T09:24:20Z | 2022-09-28T09:24:20Z | MEMBER | I think we can continue the discussion we have in #1426 about In comment , @shoyer recommended to remove I agree with this, as long as my code works with this improvement. I think if we could have a list of possible Current limitations of |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1603/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
655382009 | MDU6SXNzdWU2NTUzODIwMDk= | 4218 | what is the best way to reset an unintentional direct push to the master | fujiisoup 6815844 | closed | 0 | 16 | 2020-07-12T11:30:45Z | 2022-04-17T20:34:32Z | 2022-04-17T20:34:32Z | MEMBER | I am sorry but I unintentionally pushed my working scripts to xarray.master. (I thought it was not allowed, and I was not careful.) What is the best way to reset this? I'm thinking to do in my local, and force-push again, but I'm afraid of doing another wrong thing... I apologize for my mistake. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
280875330 | MDU6SXNzdWUyODA4NzUzMzA= | 1772 | nonzero method for xr.DataArray | fujiisoup 6815844 | open | 0 | 5 | 2017-12-11T02:25:11Z | 2022-04-01T10:42:20Z | MEMBER |
Problem description: Apparently, the dimensions and the coordinates conflict with each other.
I think we can have our own Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1772/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
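What a DataArray-level `nonzero` might return can be sketched with NumPy's `nonzero`, pairing positional indices with coordinate labels (a hypothetical behavior for illustration, not an implemented xarray API):

```python
import numpy as np

data = np.array([0, 3, 0, 7])
coords_x = np.array([10, 20, 30, 40])   # hypothetical 'x' coordinate

# np.nonzero gives positional indices; an xarray version could map
# them back to coordinate labels so dims and coords do not conflict.
(idx,) = np.nonzero(data)
labels = coords_x[idx]

assert idx.tolist() == [1, 3]
assert labels.tolist() == [20, 40]
```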
898657012 | MDU6SXNzdWU4OTg2NTcwMTI= | 5361 | Inconsistent behavior in groupby depending on the dimension order | fujiisoup 6815844 | open | 0 | 1 | 2021-05-21T23:11:37Z | 2022-03-29T11:45:32Z | MEMBER |
However, The bug has been discussed in #2944 and solved, but I found this is still there. Output of <tt>xr.show_versions()</tt>INSTALLED VERSIONS ------------------ commit: 09d8a4a785fa6521314924fd785740f2d13fb8ee python: 3.7.7 (default, Mar 23 2020, 22:36:06) [GCC 7.3.0] python-bits: 64 OS: Linux OS-release: 5.4.0-72-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.10.4 libnetcdf: 4.6.1 xarray: 0.16.1.dev30+g1d3dee08.d20200808 pandas: 1.1.3 numpy: 1.18.1 scipy: 1.5.2 netCDF4: 1.4.2 pydap: None h5netcdf: 0.8.0 h5py: 2.10.0 Nio: None zarr: None cftime: 1.2.1 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: 2.6.0 distributed: 2.7.0 matplotlib: 3.2.2 cartopy: None seaborn: 0.10.1 numbagg: None pint: None setuptools: 46.1.1.post20200323 pip: 20.0.2 conda: None pytest: 5.2.1 IPython: 7.13.0 sphinx: None |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5361/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
228295383 | MDU6SXNzdWUyMjgyOTUzODM= | 1408 | .sel does not keep selected coordinate value in case with MultiIndex | fujiisoup 6815844 | closed | 0 | 8 | 2017-05-12T13:40:34Z | 2022-03-17T17:11:41Z | 2022-03-17T17:11:41Z | MEMBER |
```python In[4] ds1 = xr.Dataset({'foo': (('x',), [1, 2, 3])}, {'x': [1, 2, 3], 'y': 'a'}) Out[4]: <xarray.Dataset> Dimensions: () Coordinates: y <U1 'a' x int64 1 Data variables: foo int64 1 ``` But in the MultiIndex case, it does not.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1408/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
359240638 | MDU6SXNzdWUzNTkyNDA2Mzg= | 2410 | Updated text for indexing page | fujiisoup 6815844 | open | 0 | 11 | 2018-09-11T22:01:39Z | 2021-11-15T21:17:14Z | MEMBER | We have a bunch of terms to describe the xarray structure, such as dimension, coordinate, dimension coordinate, etc. Although it has been discussed in #1295 and we tried to use consistent terminology in our docs, it still does not seem easy for users to understand our functionality. In #2399, @horta wrote a list of definitions (https://drive.google.com/file/d/1uJ_U6nedkNe916SMViuVKlkGwPX-mGK7/view?usp=sharing). I think it would be nice to have something like this in our docs. Any thoughts? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2410/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
441088452 | MDU6SXNzdWU0NDEwODg0NTI= | 2944 | `groupby` does not correctly handle non-dimensional coordinate | fujiisoup 6815844 | closed | 0 | 3 | 2019-05-07T07:47:17Z | 2021-05-21T23:12:21Z | 2021-05-21T23:12:21Z | MEMBER | Code Sample, a copy-pastable example if possible```python
Problem description
Expected Output
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
254927382 | MDU6SXNzdWUyNTQ5MjczODI= | 1553 | Multidimensional reindex | fujiisoup 6815844 | open | 0 | 2 | 2017-09-04T03:29:39Z | 2020-12-19T16:00:00Z | MEMBER | From a discussion in #1473 comment It would be convenient if we have multi-dimensional
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1553/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
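The multi-dimensional reindex idea in the row above can be sketched with `np.take_along_axis`, where the target index varies along another dimension (an illustrative assumption about the requested semantics, not xarray code):

```python
import numpy as np

a = np.arange(12).reshape(3, 4)          # conceptually dims (x, y)

# A 2-D indexer: for each x, pick a different y position -- the kind
# of per-row lookup a multi-dimensional reindex would perform.
idx = np.array([[0], [2], [3]])          # shape (3, 1)

picked = np.take_along_axis(a, idx, axis=1)

assert picked.ravel().tolist() == [0, 6, 11]
```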
216621142 | MDU6SXNzdWUyMTY2MjExNDI= | 1323 | Image related methods | fujiisoup 6815844 | closed | 0 | 9 | 2017-03-24T01:39:52Z | 2020-10-08T16:00:18Z | 2020-06-21T19:25:18Z | MEMBER | Currently I'm using xarray to handle multiple images (typically, a sequence of images), and I feel it would be convenient if xarray supports image related functions. There may be many possibilities, but particular methods I want to have in xarray are
1. xr.open_image(File)
Images (and possibly also video?) are naturally high-dimensional, and I guess they would fit xarray's concept. Is there sufficiently broad interest in this? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1323/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
338662554 | MDU6SXNzdWUzMzg2NjI1NTQ= | 2269 | A special function for unpickling old xarray object? | fujiisoup 6815844 | closed | 0 | 6 | 2018-07-05T17:27:28Z | 2020-07-11T02:55:38Z | 2020-07-11T02:55:38Z | MEMBER | I noticed that some users experience trouble restoring xarray objects that were created with xarray < 0.8. Is there any possibility to add a function to support unpickling old objects, such as
xref (private repo) gafusion/OMFIT-source#2652 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
619347681 | MDU6SXNzdWU2MTkzNDc2ODE= | 4068 | utility function to save complex values as a netCDF file | fujiisoup 6815844 | closed | 0 | 3 | 2020-05-16T01:19:16Z | 2020-05-25T08:36:59Z | 2020-05-25T08:36:58Z | MEMBER | Currently, we disallow saving complex values to a netCDF file. Maybe netCDF itself does not support complex values, but there may be some workarounds. It would be very handy for me. The most naive workaround may be to split each complex value into a real and an imaginary part, add some flags, and restore them when loading from the file. Maybe we could add a special suffix to the variable name? ```python
I think there may be a better way. Any thoughts are welcome :) p.s.
I just found that |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4068/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
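The naive workaround described in the row above (split each complex variable into real and imaginary parts with a suffix, then recombine on load) can be sketched as follows; the `__re`/`__im` suffix convention is made up for illustration:

```python
import numpy as np

def encode_complex(variables):
    """Split complex arrays into real/imag pairs (hypothetical suffixes)."""
    out = {}
    for name, arr in variables.items():
        if np.iscomplexobj(arr):
            out[name + "__re"] = arr.real
            out[name + "__im"] = arr.imag
        else:
            out[name] = arr
    return out

def decode_complex(variables):
    """Recombine __re/__im pairs back into complex arrays."""
    out = dict(variables)
    for name in list(out):
        if name.endswith("__re") and name[:-4] + "__im" in out:
            base = name[:-4]
            out[base] = out.pop(name) + 1j * out.pop(base + "__im")
    return out

z = np.array([1 + 2j, 3 - 4j])
roundtripped = decode_complex(encode_complex({"z": z}))
assert np.array_equal(roundtripped["z"], z)
```

A real version would also need flag attributes so a decoder knows which pairs to merge; this only shows the round trip.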
611643130 | MDU6SXNzdWU2MTE2NDMxMzA= | 4024 | small contrast of html view in VScode darkmode | fujiisoup 6815844 | closed | 0 | 6 | 2020-05-04T06:53:32Z | 2020-05-07T20:36:32Z | 2020-05-07T20:36:32Z | MEMBER | If using xarray inside VScode with darkmode, the new html repr has a small contrast of the text color and background. Maybe the text color comes from the default setting, but the background color is not. In light mode, it looks nice. VersionsOutput of <tt>xr.show_versions()</tt>INSTALLED VERSIONS ------------------ commit: None python: 3.7.5 (default, Oct 25 2019, 15:51:11) [GCC 7.3.0] python-bits: 64 OS: Linux OS-release: 4.15.0-1080-oem machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 libhdf5: 1.10.4 libnetcdf: 4.6.1 xarray: 0.15.1 pandas: 0.25.3 numpy: 1.17.4 scipy: 1.3.2 netCDF4: 1.4.2 pydap: None h5netcdf: 0.8.0 h5py: 2.9.0 Nio: None zarr: None cftime: 1.0.4.2 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.3.1 dask: 2.9.0 distributed: 2.9.0 matplotlib: 3.1.1 cartopy: None seaborn: 0.9.0 numbagg: None setuptools: 42.0.2.post20191203 pip: 19.3.1 conda: None pytest: 5.3.2 IPython: 7.10.2 sphinx: 2.3.0 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4024/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
207962322 | MDU6SXNzdWUyMDc5NjIzMjI= | 1271 | Attrs are lost in mathematical computation | fujiisoup 6815844 | closed | 0 | 7 | 2017-02-15T23:27:51Z | 2020-04-05T19:00:14Z | 2017-02-18T11:03:42Z | MEMBER | Related to #138 Why is |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
521317260 | MDU6SXNzdWU1MjEzMTcyNjA= | 3512 | selection from MultiIndex does not work properly | fujiisoup 6815844 | closed | 0 | 0 | 2019-11-12T04:12:12Z | 2019-11-14T11:56:18Z | 2019-11-14T11:56:18Z | MEMBER | MCVE Code Sample```python da = xr.DataArray([0, 1], dims=['x'], coords={'x': [0, 1], 'y': 'a'}) db = xr.DataArray([2, 3], dims=['x'], coords={'x': [0, 1], 'y': 'b'}) data = xr.concat([da, db], dim='x').set_index(xy=['x', 'y']) data.sel(y='a')
Expected Output```python
Problem DescriptionShould select the array Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3512/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
345090013 | MDU6SXNzdWUzNDUwOTAwMTM= | 2318 | Failing test by dask==0.18.2 | fujiisoup 6815844 | closed | 0 | 2 | 2018-07-27T04:52:08Z | 2019-11-10T04:37:15Z | 2019-11-10T04:37:15Z | MEMBER | Tests are failing, which is caused by new release of dask==0.18.2. xref: dask/dask#3822 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
280673215 | MDU6SXNzdWUyODA2NzMyMTU= | 1771 | Needs performance check / improvements in value assignment of DataArray | fujiisoup 6815844 | open | 0 | 1 | 2017-12-09T03:42:41Z | 2019-10-28T14:53:24Z | MEMBER | In #1746, we added a validation in We may need to optimize the logic here. Is it reasonable to constantly monitor the performance of basic operations, such as cc @jhamman @shoyer |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1771/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
349855157 | MDU6SXNzdWUzNDk4NTUxNTc= | 2362 | Wrong behavior of DataArray.resample | fujiisoup 6815844 | closed | 0 | 0 | 2018-08-13T00:02:47Z | 2019-10-22T19:42:08Z | 2019-10-22T19:42:08Z | MEMBER | From #2356, I noticed that resample and groupby work nicely for Dataset but not for DataArray. Code Sample, a copy-pastable example if possible```python In [14]: import numpy as np ...: import xarray as xr ...: import pandas as pd ...: ...: time = pd.date_range('2000-01-01', freq='6H', periods=365 * 4) ...: ds = xr.Dataset({'foo': (('time', 'x'), np.random.randn(365 * 4, 5)), 'time': time, ...: 'x': np.arange(5)}) In [15]: ds
Out[15]:
<xarray.Dataset>
Dimensions: (time: 1460, x: 5)
Coordinates:
* time (time) datetime64[ns] 2000-01-01 ... 2000-12-30T18:00:00
* x (x) int64 0 1 2 3 4
Data variables:
foo (time, x) float64 -0.6916 -1.247 0.5376 ... -0.2197 -0.8479 -0.6719
Problem description: resample should work identically for DataArray and Dataset. Expected Output
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2362/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
432019600 | MDU6SXNzdWU0MzIwMTk2MDA= | 2887 | Safely open / close netCDF files without resource locking | fujiisoup 6815844 | closed | 0 | 9 | 2019-04-11T13:19:45Z | 2019-05-16T15:28:30Z | 2019-05-16T15:28:30Z | MEMBER | Code Sample, a copy-pastable example if possible(essentially the same to #1629) Opening netCDF file via ds_read = xr.open_dataset('test.nc')
ds.to_netcdf('test.nc') # -> PermissionError
Problem descriptionAnother program cannot write the same netCDF file that xarray has opened, unless -- EDIT --
It is understandable when we do not want to load the entire file into memory.
However, sometimes I want to read a file that will soon be updated by another program.
Also, I think that many users who are not accustomed to netCDF may expect this behavior (as I think it would be nice to have an option such as Expected OutputNo error Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2887/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
392362056 | MDU6SXNzdWUzOTIzNjIwNTY= | 2619 | Selection of MultiIndex makes following `unstack` wrong | fujiisoup 6815844 | closed | 0 | 2 | 2018-12-18T22:26:31Z | 2018-12-24T15:37:27Z | 2018-12-24T15:37:27Z | MEMBER | Code Sample, a copy-pastable example if possible```python import numpy as np import xarray as xr ds = xr.DataArray(np.arange(40).reshape(8, 5), dims=['x', 'y'], Out[1]: <xarray.DataArray (x: 8, y: 5)> array([[ 0., 1., 2., 3., 4.], [ 5., 6., 7., 8., 9.], [10., 11., 12., 13., 14.], [15., 16., 17., 18., 19.], [nan, nan, nan, nan, nan], [nan, nan, nan, nan, nan], [nan, nan, nan, nan, nan], [nan, nan, nan, nan, nan]]) Coordinates: * x (x) int64 0 1 2 3 4 5 6 7 * y (y) int64 0 1 2 3 4 ``` Problem descriptionAfter unstack, there are still values that are not selected by the previous Expected Output
Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2619/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
364008818 | MDU6SXNzdWUzNjQwMDg4MTg= | 2440 | ddof does not work with 0.10.9 | fujiisoup 6815844 | closed | 0 | 0 | 2018-09-26T12:42:18Z | 2018-09-28T13:44:29Z | 2018-09-28T13:44:29Z | MEMBER | Copied from issue#2236 comments, by @st-bender Hi, just to let you know that .std() does not accept the ddof keyword anymore (it worked in 0.10.8) Should I open a new bug report? Edit: It fails with: ```python ~/Work/miniconda3/envs/stats/lib/python3.6/site-packages/xarray/core/duck_array_ops.py in f(values, axis, skipna, **kwargs) 234 235 try: --> 236 return func(values, axis=axis, **kwargs) 237 except AttributeError: 238 if isinstance(values, dask_array_type): TypeError: nanstd() got an unexpected keyword argument 'ddof' ``` |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2440/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
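For reference, NumPy's own `nanstd` does accept `ddof`, which is what the reduction wrapper in the row above failed to pass through (a minimal demonstration, assuming a current NumPy):

```python
import numpy as np

a = np.array([1.0, 2.0, np.nan, 4.0])

# ddof=1 gives the sample standard deviation over the non-NaN values.
sample_std = np.nanstd(a, ddof=1)
expected = np.std([1.0, 2.0, 4.0], ddof=1)

assert np.isclose(sample_std, expected)
```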
349857086 | MDU6SXNzdWUzNDk4NTcwODY= | 2363 | Reduction APIs for groupby, groupby_bins, resample, rolling | fujiisoup 6815844 | closed | 0 | 1 | 2018-08-13T00:30:10Z | 2018-09-28T06:54:30Z | 2018-09-28T06:54:30Z | MEMBER | From #2356 APIs for ```python import numpy as np import xarray as xr import pandas as pd time = pd.date_range('2000-01-01', freq='6H', periods=365 * 4) ds = xr.Dataset({'foo': (('time', 'x'), np.random.randn(365 * 4, 5)), 'time': time, 'x': [0, 1, 2, 1, 0]}) ds.rolling(time=2).mean() # result dims : ('time', 'x') ds.resample(time='M').mean() # result dims : ('time', 'x') ds['foo'].resample(time='M').mean() # result dims : ('time', ) maybe a bug #2362 ds.groupby('time.month').mean() # result dims : ('month', ) ds.groupby_bins('time', 3).mean() # result dims : ('time_bins', ) ```
I think The possible options would be
1. Change APIs of |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2363/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
348516727 | MDU6SXNzdWUzNDg1MTY3Mjc= | 2352 | Failing test for python=3.6 dask-dev | fujiisoup 6815844 | closed | 0 | 3 | 2018-08-07T23:02:02Z | 2018-08-08T01:45:45Z | 2018-08-08T01:45:45Z | MEMBER | Recently, dask renamed BTW, there is another failing test in python=2.7 dev, claiming that
Is anyone working on this? If not, I think we can temporarily skip these tests for python 2.7. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2352/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
347662610 | MDU6SXNzdWUzNDc2NjI2MTA= | 2341 | apply_ufunc silently neglects arguments if `len(input_core_dims) < args` | fujiisoup 6815844 | closed | 0 | 1 | 2018-08-05T02:16:00Z | 2018-08-06T22:38:53Z | 2018-08-06T22:38:53Z | MEMBER | From SO In the following script, the second argument is silently neglected,
The correct script might be
I think we can raise a more friendly error if the size of EDIT:
Or we can automatically insert an empty tuple or |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2341/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
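The friendlier error proposed in the row above amounts to validating that there is one core-dims entry per argument before dispatch; a standalone sketch of that check (hypothetical helper, not xarray's actual code):

```python
def check_core_dims(args, input_core_dims):
    """Raise early if input_core_dims does not cover every argument."""
    if len(input_core_dims) != len(args):
        raise ValueError(
            f"input_core_dims has {len(input_core_dims)} entries but "
            f"{len(args)} arguments were passed; supply one (possibly "
            f"empty) dims tuple per argument."
        )

# One entry for two args should fail loudly instead of silently
# dropping the second argument:
try:
    check_core_dims(args=("a", "b"), input_core_dims=[("x",)])
    failed = False
except ValueError:
    failed = True
assert failed
```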
333480301 | MDU6SXNzdWUzMzM0ODAzMDE= | 2238 | Failing test with dask_distributed | fujiisoup 6815844 | closed | 0 | jhamman 2443309 | 5 | 2018-06-19T00:34:45Z | 2018-07-14T16:19:53Z | 2018-07-14T16:19:53Z | MEMBER | Some tests related to dask/distributed are failing in travis.
They are raising a Could anyone help look into this? See the Travis log for the current master: https://travis-ci.org/pydata/xarray/builds/392530577 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2238/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
333510121 | MDU6SXNzdWUzMzM1MTAxMjE= | 2239 | Error in docs/plottings | fujiisoup 6815844 | closed | 0 | 1 | 2018-06-19T03:50:51Z | 2018-06-20T16:26:37Z | 2018-06-20T16:26:37Z | MEMBER | There is an error on rtd. http://xarray.pydata.org/en/stable/plotting.html#id4 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
330469406 | MDU6SXNzdWUzMzA0Njk0MDY= | 2218 | interp_like | fujiisoup 6815844 | closed | 0 | 0 | 2018-06-07T23:24:48Z | 2018-06-20T01:39:24Z | 2018-06-20T01:39:24Z | MEMBER | Just as a reminder of the remaining extension of #2104 . We might add |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
326352018 | MDU6SXNzdWUzMjYzNTIwMTg= | 2184 | Alignment is not working in Dataset.__setitem__ and Dataset.update | fujiisoup 6815844 | closed | 0 | 1 | 2018-05-25T01:38:25Z | 2018-05-26T09:32:50Z | 2018-05-26T09:32:50Z | MEMBER | Code Sample, a copy-pastable example if possible. From #2180 , comment
In the above, with anything but an outer join you're destroying d2 - which doesn't even exist in the rhs dataset! A sane, desirable outcome should be Problem description: Alignment should work. Expected Output
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2184/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
321796423 | MDU6SXNzdWUzMjE3OTY0MjM= | 2112 | Sanity check when assigning a coordinate to DataArray | fujiisoup 6815844 | closed | 0 | 0 | 2018-05-10T03:22:18Z | 2018-05-15T16:39:22Z | 2018-05-15T16:39:22Z | MEMBER | Code Sample, a copy-pastable example if possibleI think we can raise an Error if the newly assigned coordinate to a DataArray has an invalid shape.
Problem description: It is more user-friendly if we make some sanity checks when a new coordinate is assigned to an xr.DataArray. Dataset raises an appropriate error,
Expected OutputValueError |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2112/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
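The sanity check proposed in the row above boils down to comparing the new coordinate's length against the dimension size before assignment; a standalone sketch (hypothetical helper, not the actual xarray validation):

```python
import numpy as np

def assign_coord(dim_sizes, name, dim, values):
    """Hypothetical check: coordinate length must match its dimension."""
    values = np.asarray(values)
    if values.shape[0] != dim_sizes[dim]:
        raise ValueError(
            f"coordinate {name!r} has length {values.shape[0]} but "
            f"dimension {dim!r} has size {dim_sizes[dim]}"
        )
    return values

# A length-2 coordinate for a size-3 dimension should raise,
# instead of being silently accepted:
try:
    assign_coord({'x': 3}, 'x2', 'x', [0, 1])
    raised = False
except ValueError:
    raised = True
assert raised
```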
321928898 | MDU6SXNzdWUzMjE5Mjg4OTg= | 2114 | keep_attrs=True does not work in `apply_ufunc` with xr.Variable | fujiisoup 6815844 | closed | 0 | 2 | 2018-05-10T13:21:07Z | 2018-05-11T22:54:44Z | 2018-05-11T22:54:44Z | MEMBER | Code Sample, a copy-pastable example if possible
```python In [2]: import numpy as np In [3]: import xarray as xr In [4]: da = xr.DataArray([0, 1, 2], dims='x', attrs={'foo': 'var'}) In [5]: func = lambda x: x*2 In [6]: xr.apply_ufunc(func, da, keep_attrs=True, input_core_dims=[['x']], outpu ...: t_core_dims=[['z']]) Out[6]: <xarray.DataArray (z: 3)> # attrs are tracked for xr.DataArray array([0, 2, 4]) Dimensions without coordinates: z Attributes: foo: var In [7]: xr.apply_ufunc(func, da.variable, keep_attrs=True, input_core_dims=[['x' ...: ]], output_core_dims=[['z']]) Out[7]: <xarray.Variable (z: 3)> # attrs are dropped array([0, 2, 4]) ``` Problem description
Expected Output
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
319419699 | MDU6SXNzdWUzMTk0MTk2OTk= | 2099 | Dataset.update wrongly handles the coordinate | fujiisoup 6815844 | closed | 0 | 0 | 2018-05-02T06:04:02Z | 2018-05-02T21:59:34Z | 2018-05-02T21:59:34Z | MEMBER | Code Sample, a copy-pastable example if possible: I noticed a bug introduced by #2087 (my PR) ```python import xarray as xr ds = xr.Dataset({'var': ('x', [1, 2, 3])}, coords={'x': [0, 1, 2], 'z1': ('x', [1, 2, 3]), 'z2': ('x', [1, 2, 3])}) ds['var'] = ds['var'] * 2 ``` It raises a ValueError. Problem description: Here should be
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2099/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
316660970 | MDU6SXNzdWUzMTY2NjA5NzA= | 2075 | apply_ufunc can generate an invalid object. | fujiisoup 6815844 | closed | 0 | 2 | 2018-04-23T04:52:25Z | 2018-04-23T05:08:02Z | 2018-04-23T05:08:02Z | MEMBER | Code Sample, a copy-pastable example if possible
In the above example, Problem description: None of our functions should generate invalid xarray objects. Expected Output
or raise an Error. Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2075/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
314653502 | MDU6SXNzdWUzMTQ2NTM1MDI= | 2062 | __contains__ does not work with DataArray | fujiisoup 6815844 | closed | 0 | 2 | 2018-04-16T13:34:30Z | 2018-04-16T15:51:30Z | 2018-04-16T15:51:29Z | MEMBER | Code Sample, a copy-pastable example if possible```python
Problem description
Expected Output```python
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2062/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
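For comparison with the row above, NumPy arrays already support value membership via `__contains__`, which is the behavior the issue wanted for DataArray (NumPy shown as the reference semantics):

```python
import numpy as np

a = np.arange(10)

# Python's `in` operator delegates to __contains__ and checks values:
assert 5 in a
assert 100 not in a
```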
300486064 | MDU6SXNzdWUzMDA0ODYwNjQ= | 1944 | building doc is failing for the release 0.10.1 | fujiisoup 6815844 | closed | 0 | 9 | 2018-02-27T04:01:28Z | 2018-03-12T20:36:58Z | 2018-03-12T20:35:31Z | MEMBER | I found the following page fails http://xarray.pydata.org/en/stable/examples/weather-data.html |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
301657312 | MDU6SXNzdWUzMDE2NTczMTI= | 1951 | einsum for xarray | fujiisoup 6815844 | closed | 0 | 1 | 2018-03-02T05:25:23Z | 2018-03-12T06:42:08Z | 2018-03-12T06:42:08Z | MEMBER | Code Sample, a copy-pastable example if possible: I sometimes want to compute a more flexible dot product of two data arrays, where we sum up along a part of the common dimensions. ```python # Your code here da_vals = np.arange(6 * 5 * 4).reshape((6, 5, 4)) da = DataArray(da_vals, dims=['x', 'y', 'z']) dm_vals = np.arange(6 * 4).reshape((6, 4)) dm = DataArray(dm_vals, dims=['x', 'z']) # I want something like this da.dot(dm, 'z') # -> dimensions of the output array: ['x', 'y'] ``` It's an intermediate path of Is this feature sufficiently universal? EDIT:
I just noticed dask does not have |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1951/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
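The partial dot product wished for in the row above maps directly onto `np.einsum`, summing only over the shared `z` dimension (shown with NumPy; the `da.dot(dm, 'z')` form is the issue's hypothetical API):

```python
import numpy as np

da_vals = np.arange(6 * 5 * 4).reshape(6, 5, 4)   # dims: x, y, z
dm_vals = np.arange(6 * 4).reshape(6, 4)          # dims: x, z

# Sum over z only; keep the common dim x -> output dims (x, y).
result = np.einsum('xyz,xz->xy', da_vals, dm_vals)

assert result.shape == (6, 5)
assert result[0, 0] == np.dot(da_vals[0, 0], dm_vals[0])
```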
304042598 | MDU6SXNzdWUzMDQwNDI1OTg= | 1979 | Tests are failing caused by zarr 2.2.0 | fujiisoup 6815844 | closed | 0 | 2 | 2018-03-10T05:02:39Z | 2018-03-12T05:37:02Z | 2018-03-12T05:37:02Z | MEMBER | Problem descriptionTests are failing due to the release of zarr 2.2.0 Travis's log https://travis-ci.org/pydata/xarray/jobs/351566529 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1979/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
302001772 | MDU6SXNzdWUzMDIwMDE3NzI= | 1956 | numpy 1.11 support for apply_ufunc | fujiisoup 6815844 | closed | 0 | 1 | 2018-03-03T14:23:40Z | 2018-03-07T16:41:54Z | 2018-03-07T16:41:54Z | MEMBER | I noticed that the failing page on rtd
http://xarray.pydata.org/en/stable/computation.html#missing-values
is because it still uses numpy=1.11 which does not support This can be easily fixed (just bumping up numpy's version on rtd),
but as our minimum requirement is numpy==1.11, we may need to take care of this in |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1956/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
288567090 | MDU6SXNzdWUyODg1NjcwOTA= | 1831 | Slow performance of rolling.reduce | fujiisoup 6815844 | closed | 0 | 4 | 2018-01-15T11:44:47Z | 2018-03-01T03:39:19Z | 2018-03-01T03:39:19Z | MEMBER | Code Sample, a copy-pastable example if possible```python In [1]: import numpy as np ...: import xarray as xr ...: ...: da = xr.DataArray(np.random.randn(1000, 100), dims=['x', 'y'], ...: coords={'x': np.arange(1000)}) ...: In [2]: %%timeit ...: da.rolling(x=10).reduce(np.sum) ...: 2.04 s ± 8.25 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ``` Problem description: In Of course, we can use bottleneck methods if available, but these provide only limited functions. (This also limits possible extensions of rolling, such as ND-rolling (#819), window type (#1142), strides (#819).) I am wondering if we could skip any sanity checks in our |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1831/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
299606951 | MDU6SXNzdWUyOTk2MDY5NTE= | 1937 | `isnull` loads dask array | fujiisoup 6815844 | closed | 0 | 0 | 2018-02-23T05:54:58Z | 2018-02-25T20:52:16Z | 2018-02-25T20:52:16Z | MEMBER | From gitter cc. @davidh-ssec
Problem description
Expected Output
CauseHere, |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1937/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
298012981 | MDU6SXNzdWUyOTgwMTI5ODE= | 1921 | BUG: Indexing by 0-dimensional array | fujiisoup 6815844 | closed | 0 | 0 | 2018-02-17T15:36:31Z | 2018-02-18T07:26:30Z | 2018-02-18T07:26:30Z | MEMBER | ```python In [1]: import xarray as xr ...: import numpy as np ...: ...: a = np.arange(10) ...: a[np.array(0)] ...: Out[1]: 0 In [2]: da = xr.DataArray(a, dims='x') ...: da[np.array(0)] ...: TypeError Traceback (most recent call last) <ipython-input-2-d30fdfc612ec> in <module>() 1 da = xr.DataArray(a, dims='x') ----> 2 da[np.array(0)] /home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/dataarray.pyc in getitem(self, key) 478 else: 479 # xarray-style array indexing --> 480 return self.isel(**self._item_key_to_dict(key)) 481 482 def setitem(self, key, value): /home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/dataarray.pyc in isel(self, drop, indexers) 759 DataArray.sel 760 """ --> 761 ds = self._to_temp_dataset().isel(drop=drop, indexers) 762 return self._from_temp_dataset(ds) 763 /home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/dataset.py in isel(self, drop, indexers) 1390 for name, var in iteritems(self._variables): 1391 var_indexers = {k: v for k, v in indexers_list if k in var.dims} -> 1392 new_var = var.isel(var_indexers) 1393 if not (drop and name in var_indexers): 1394 variables[name] = new_var /home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/variable.pyc in isel(self, **indexers) 851 if dim in indexers: 852 key[i] = indexers[dim] --> 853 return self[tuple(key)] 854 855 def squeeze(self, dim=None): /home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/variable.pyc in getitem(self, key)
619 array /home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/variable.pyc in _broadcast_indexes(self, key) 477 # key can be mapped as an OuterIndexer. 478 if all(not isinstance(k, Variable) for k in key): --> 479 return self._broadcast_indexes_outer(key) 480 481 # If all key is 1-dimensional and there are no duplicate labels, /home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/variable.pyc in _broadcast_indexes_outer(self, key) 542 new_key.append(k) 543 --> 544 return dims, OuterIndexer(tuple(new_key)), None 545 546 def _nonzero(self): /home/keisukefujii/Dropbox/projects/xarray.git/xarray/core/indexing.py in init(self, key) 368 raise TypeError('invalid indexer array for {}, must have ' 369 'exactly 1 dimension: ' --> 370 .format(type(self).name, k)) 371 k = np.asarray(k, dtype=np.int64) 372 else: TypeError: invalid indexer array for OuterIndexer, must have exactly 1 dimension: ``` Indexing by a 0d-array should be identical to the indexing by a scalar. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1921/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
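NumPy itself treats a 0-dimensional integer array exactly like a scalar index, so one way to restore that equivalence is to unwrap such keys before the outer-indexer checks run. A hypothetical sketch (`normalize_indexer` is illustrative, not xarray's actual fix for #1921):

```python
import numpy as np

def normalize_indexer(k):
    # A 0-d integer array carries a single value; unwrap it to a plain
    # Python scalar so it indexes the same way `da[0]` would.
    if isinstance(k, np.ndarray) and k.ndim == 0:
        return k.item()
    return k

a = np.arange(10)
scalar_key = normalize_indexer(np.array(3))  # -> 3
```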
292633789 | MDU6SXNzdWUyOTI2MzM3ODk= | 1866 | aggregation ops for object-dtype are missing | fujiisoup 6815844 | closed | 0 | 0 | 2018-01-30T02:40:27Z | 2018-02-15T22:03:01Z | 2018-02-15T22:03:01Z | MEMBER | This issue arises in a #1837 comment, where we need to make a summation of an object-dtype array, such as
pandas supports this by having its own nan-aggregation methods. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
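A nan-skipping aggregation for object dtype can be sketched by masking with `pd.isnull`, which (unlike `np.isnan`) handles `None` and mixed object arrays. This is an illustrative sketch, not the helper pandas or xarray actually ships:

```python
import numpy as np
import pandas as pd

def object_nansum(a, axis=None):
    # pd.isnull flags both np.nan and None in object arrays, where
    # np.isnan would raise a TypeError.
    mask = pd.isnull(a)
    # Replace missing entries with the additive identity, then sum.
    return np.where(mask, 0, a).sum(axis=axis)

arr = np.array([1, np.nan, 2, None], dtype=object)
total = object_nansum(arr)  # -> 3
```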
294182366 | MDU6SXNzdWUyOTQxODIzNjY= | 1886 | Whether should we follow pandas or numpy if they have different API? | fujiisoup 6815844 | closed | 0 | 4 | 2018-02-04T09:05:30Z | 2018-02-07T00:23:09Z | 2018-02-07T00:23:09Z | MEMBER | In working with #1883, I noticed that our (and numpy's) ```python In [1]: import numpy as np ...: import xarray as xr ...: da = xr.DataArray([0, 1, 2], dims='x', name='da') In [2]: da.std() Out[2]: <xarray.DataArray 'da' ()> array(0.816496580927726) In [3]: da.to_dataframe().std() Out[3]: da 1.0 dtype: float64 In [4]: da.std(ddof=1) Out[4]: <xarray.DataArray 'da' ()> array(1.0) ``` |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
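The divergence discussed in #1886 is just the default `ddof`: numpy (and xarray) compute the population standard deviation (`ddof=0`), while pandas defaults to the sample standard deviation (`ddof=1`). Passing `ddof` explicitly reconciles the two conventions:

```python
import numpy as np
import pandas as pd

data = [0, 1, 2]
np_default = np.std(data)            # ddof=0: sqrt(2/3), ~0.816
pd_default = pd.Series(data).std()   # ddof=1: 1.0
reconciled = np.std(data, ddof=1)    # matches the pandas result
```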
270159774 | MDU6SXNzdWUyNzAxNTk3NzQ= | 1678 | Coverage badge on README | fujiisoup 6815844 | closed | 0 | 4 | 2017-11-01T00:34:40Z | 2017-11-21T05:31:13Z | 2017-11-21T05:31:12Z | MEMBER | I saw the coverage badge on our README with only 75% (true value 95%), which may be caused by github page caching. Similar issues seem frequently reported, e.g. https://github.com/codecov/support/issues/218 Maybe we can improve this, though it would be difficult to test... |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1678/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
269967350 | MDU6SXNzdWUyNjk5NjczNTA= | 1675 | Ipython autocomplete raises a deprecation warning introduced in #1643. | fujiisoup 6815844 | closed | 0 | 0.10 2415632 | 2 | 2017-10-31T13:56:32Z | 2017-11-01T00:48:42Z | 2017-11-01T00:48:42Z | MEMBER | Code Sample, a copy-pastable example if possible: ```python # Your code here import xarray as xr ds = xr.Dataset({'a': ('x', [0, 1, 2])}) ds. -> press 'Tab' ``` Problem description: IPython autocomplete raises a deprecation warning, introduced in #1643.
Expected Output: None. Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1675/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
254368462 | MDU6SXNzdWUyNTQzNjg0NjI= | 1541 | Need small updates of docs | fujiisoup 6815844 | closed | 0 | 2 | 2017-08-31T15:09:17Z | 2017-10-25T03:47:18Z | 2017-10-25T03:47:18Z | MEMBER | found some outdated parts in docs
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1541/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
260611548 | MDU6SXNzdWUyNjA2MTE1NDg= | 1593 | Some tests still check pandas version | fujiisoup 6815844 | closed | 0 | 0 | 2017-09-26T12:51:15Z | 2017-09-27T02:10:58Z | 2017-09-27T02:10:58Z | MEMBER | Although we updated pandas minimum version to 0.18 in #1530 ,
we still check its version like (I forgot to remove these statements in #1530.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1593/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
251666172 | MDU6SXNzdWUyNTE2NjYxNzI= | 1512 | rolling requires pandas >= 0.18 | fujiisoup 6815844 | closed | 0 | 0.10 2415632 | 5 | 2017-08-21T13:58:59Z | 2017-08-31T17:25:10Z | 2017-08-31T17:25:10Z | MEMBER | We need pandas >= 0.18 because dataframe.rolling is only supported from 0.18 onwards.
But Additionally, I noticed that in travis's CONDA_ENV=py27-min setup, our unit tests run with pandas == 0.20, though it might be intended to run with pandas == 0.15. By Package plan for package removal in environment /home/travis/miniconda/envs/test_env: The following packages will be REMOVED:
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1512/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
253975242 | MDU6SXNzdWUyNTM5NzUyNDI= | 1537 | Boolean indexing 'by' xr.DataArray | fujiisoup 6815844 | closed | 0 | 5 | 2017-08-30T12:08:08Z | 2017-08-31T01:39:00Z | 2017-08-31T01:39:00Z | MEMBER | Boolean indexing for np.ndarray 'by' a boolean xr.DataArray behaves strange, ```python In [1]: import numpy as np ...: import xarray as xr ...: ...: ind = xr.DataArray([True, True, False], dims=['x']) ...: ind Out[1]: <xarray.DataArray (x: 3)> array([ True, True, False], dtype=bool) Dimensions without coordinates: x In [2]: np.arange(3)[ind.values] Out[2]: array([0, 1]) In [3]: np.arange(3)[ind] Out[3]: array([], shape=(0, 3), dtype=int64) ``` (This is numpy==1.13) numpy==1.11 behaves differently, ```python
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1537/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
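Until the wrapper-aware behaviour in #1537 is settled, a defensive pattern is to coerce any array-like mask to a plain boolean ndarray before indexing; an `xr.DataArray` mask would go through its underlying values the same way. A minimal sketch (numpy-only so it runs without xarray; `boolean_index` is a hypothetical helper):

```python
import numpy as np

def boolean_index(a, mask):
    # Coercing first guarantees ordinary boolean-mask semantics,
    # whatever array-like wrapper (list, DataArray, ...) the mask is.
    mask = np.asarray(mask, dtype=bool)
    return a[mask]

a = np.arange(3)
picked = boolean_index(a, [True, True, False])  # -> array([0, 1])
```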
220520783 | MDU6SXNzdWUyMjA1MjA3ODM= | 1363 | Typo in reshaping.rst | fujiisoup 6815844 | closed | 0 | 0 | 2017-04-10T02:13:49Z | 2017-04-10T02:24:00Z | 2017-04-10T02:24:00Z | MEMBER | There are some typos in reshaping.rst, that is newly added in #1347. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1363/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
216799807 | MDU6SXNzdWUyMTY3OTk4MDc= | 1326 | Expand dimensions in xarray | fujiisoup 6815844 | closed | 0 | 8 | 2017-03-24T14:20:40Z | 2017-04-10T01:01:54Z | 2017-04-10T01:01:54Z | MEMBER | Based on the post http://stackoverflow.com/questions/34987972/expand-dimensions-xray and issue #1323, I think it would be great if xarray had a method to expand dimensions, such as |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
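For reference, the numpy behaviour the issue asks to mirror: `np.expand_dims` inserts a length-1 axis at a given position, equivalently spelled with `np.newaxis` indexing (xarray later gained `DataArray.expand_dims` along these lines):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.expand_dims(a, axis=0)   # insert a new leading axis
c = a[np.newaxis, :, :]         # the same result via indexing
```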
205084767 | MDU6SXNzdWUyMDUwODQ3Njc= | 1246 | Positional indexing with a large float32 coordinate. | fujiisoup 6815844 | closed | 0 | 2 | 2017-02-03T06:49:42Z | 2017-02-04T02:44:17Z | 2017-02-04T02:44:17Z | MEMBER | The positional indexing fails if the coordinate is large np.float32 array. The minimum working example is
With a smaller-sized DataArray, both work. |
also freezes the kernel. Are these pandas issues? I used Python 3.5 and xarray 0.8.2. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
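One plausible culprit (an assumption, not confirmed in the issue) is float32 precision: above 2**24, not every integer is representable, so neighbouring labels in a large float32 coordinate collapse to the same value and exact label lookup behaves unexpectedly. A small demonstration:

```python
import numpy as np

big = np.float32(2 ** 24)  # 16777216.0, the last exactly-spaced integer run
# 2**24 + 1 is not representable in float32 and rounds back down,
# so two nominally different labels compare equal.
collides = (big + np.float32(1)) == big
# float64 has a 53-bit mantissa, so the same labels stay distinct.
distinct = (np.float64(2 ** 24) + 1.0) != np.float64(2 ** 24)
```

Casting such a coordinate to float64 before building the index would sidestep the collision.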
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);