issues

7 rows where milestone = 2415632, state = "closed" and type = "issue" sorted by updated_at descending

id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
271998358 MDU6SXNzdWUyNzE5OTgzNTg= 1697 apply_ufunc(dask='parallelized') won't accept scalar *args crusaderky 6213168 closed 0   0.10 2415632 1 2017-11-07T21:56:11Z 2017-11-10T16:46:26Z 2017-11-10T16:46:26Z MEMBER      

As of xarray-0.10-rc1:

Works:

```
import xarray
import scipy.stats

a = xarray.DataArray([1, 2], dims=['x'])

xarray.apply_ufunc(scipy.stats.norm.cdf, a, 0, 1)

<xarray.DataArray (x: 2)>
array([ 0.841345, 0.97725 ])
Dimensions without coordinates: x
```

Broken:

```
xarray.apply_ufunc(
    scipy.stats.norm.cdf, a.chunk(), 0, 1,
    dask='parallelized', output_dtypes=[a.dtype]
).compute()

IndexError                                Traceback (most recent call last)
<ipython-input-35-1d4025e1ebdb> in <module>()
----> 1 xarray.apply_ufunc(scipy.stats.norm.cdf, a.chunk(), 0, 1, dask='parallelized', output_dtypes=[a.dtype]).compute()

~/anaconda3/lib/python3.6/site-packages/xarray/core/computation.py in apply_ufunc(func, *args, **kwargs)
    913                     join=join,
    914                     exclude_dims=exclude_dims,
--> 915                     keep_attrs=keep_attrs)
    916     elif any(isinstance(a, Variable) for a in args):
    917         return variables_ufunc(*args)

~/anaconda3/lib/python3.6/site-packages/xarray/core/computation.py in apply_dataarray_ufunc(func, *args, **kwargs)
    210
    211     data_vars = [getattr(a, 'variable', a) for a in args]
--> 212     result_var = func(*data_vars)
    213
    214     if signature.num_outputs > 1:

~/anaconda3/lib/python3.6/site-packages/xarray/core/computation.py in apply_variable_ufunc(func, *args, **kwargs)
    561         raise ValueError('unknown setting for dask array handling in '
    562                          'apply_ufunc: {}'.format(dask))
--> 563     result_data = func(*input_data)
    564
    565     if signature.num_outputs > 1:

~/anaconda3/lib/python3.6/site-packages/xarray/core/computation.py in <lambda>(*arrays)
    555         func = lambda *arrays: _apply_with_dask_atop(
    556             numpy_func, arrays, input_dims, output_dims, signature,
--> 557             output_dtypes, output_sizes)
    558     elif dask == 'allowed':
    559         pass

~/anaconda3/lib/python3.6/site-packages/xarray/core/computation.py in _apply_with_dask_atop(func, args, input_dims, output_dims, signature, output_dtypes, output_sizes)
    624                  for element in (arg, dims[-getattr(arg, 'ndim', 0):])]
    625     return da.atop(func, out_ind, *atop_args, dtype=dtype, concatenate=True,
--> 626                    new_axes=output_sizes)
    627
    628

~/anaconda3/lib/python3.6/site-packages/dask/array/core.py in atop(func, out_ind, *args, **kwargs)
   2231         raise ValueError("Must specify dtype of output array")
   2232
-> 2233     chunkss, arrays = unify_chunks(*args)
   2234     for k, v in new_axes.items():
   2235         chunkss[k] = (v,)

~/anaconda3/lib/python3.6/site-packages/dask/array/core.py in unify_chunks(*args, **kwargs)
   2117         chunks = tuple(chunkss[j] if a.shape[n] > 1 else a.shape[n]
   2118                        if not np.isnan(sum(chunkss[j])) else None
-> 2119                        for n, j in enumerate(i))
   2120         if chunks != a.chunks and all(a.chunks):
   2121             arrays.append(a.rechunk(chunks))

~/anaconda3/lib/python3.6/site-packages/dask/array/core.py in <genexpr>(.0)
   2117         chunks = tuple(chunkss[j] if a.shape[n] > 1 else a.shape[n]
   2118                        if not np.isnan(sum(chunkss[j])) else None
-> 2119                        for n, j in enumerate(i))
   2120         if chunks != a.chunks and all(a.chunks):
   2121             arrays.append(a.rechunk(chunks))

IndexError: tuple index out of range
```

Workaround:

```
xarray.apply_ufunc(
    scipy.stats.norm.cdf, a, kwargs={'loc': 0, 'scale': 1},
    dask='parallelized', output_dtypes=[a.dtype]).compute()

<xarray.DataArray (x: 2)>
array([ 0.841345, 0.97725 ])
Dimensions without coordinates: x
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1697/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
271599372 MDU6SXNzdWUyNzE1OTkzNzI= 1694 Regression: dropna() on lazy variable fmaussion 10050469 closed 0   0.10 2415632 10 2017-11-06T19:53:18Z 2017-11-08T13:49:01Z 2017-11-08T13:36:09Z MEMBER      

Code Sample, a copy-pastable example if possible

```python
import numpy as np
import xarray as xr

a = np.random.randn(4, 3)
a[1, 1] = np.NaN
da = xr.DataArray(a, dims=('y', 'x'),
                  coords={'y': np.arange(4), 'x': np.arange(3)})
da.to_netcdf('test.nc')

with xr.open_dataarray('test.nc') as da:
    da.dropna(dim='x', how='any')


ValueError                                Traceback (most recent call last)
<ipython-input-37-8d137cf3a813> in <module>()
      8
      9 with xr.open_dataarray('test.nc') as da:
---> 10     da.dropna(dim='x', how='any')

~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/dataarray.py in dropna(self, dim, how, thresh)
   1158         DataArray
   1159         """
-> 1160         ds = self._to_temp_dataset().dropna(dim, how=how, thresh=thresh)
   1161         return self._from_temp_dataset(ds)
   1162

~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/dataset.py in dropna(self, dim, how, thresh, subset)
   2292             raise TypeError('must specify how or thresh')
   2293
-> 2294         return self.isel(**{dim: mask})
   2295
   2296     def fillna(self, value):

~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/dataset.py in isel(self, drop, **indexers)
   1291         coord_names = set(variables).intersection(self._coord_names)
   1292         selected = self._replace_vars_and_dims(variables,
-> 1293                                                coord_names=coord_names)
   1294
   1295         # Extract coordinates from indexers

~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/dataset.py in _replace_vars_and_dims(self, variables, coord_names, dims, attrs, inplace)
    598         """
    599         if dims is None:
--> 600             dims = calculate_dimensions(variables)
    601         if inplace:
    602             self._dims = dims

~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/dataset.py in calculate_dimensions(variables)
    111                 raise ValueError('conflicting sizes for dimension %r: '
    112                                  'length %s on %r and length %s on %r' %
--> 113                                  (dim, size, k, dims[dim], last_used[dim]))
    114     return dims
    115

ValueError: conflicting sizes for dimension 'y': length 2 on <this-array> and length 4 on 'y'
```

Problem description

See above. Note that the code runs when (see the sketch after this list):
- the data is previously read into memory with load()
- the DataArray is stored without coordinates (this is strange)
- dropna is applied to 'y' instead of 'x'
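A minimal sketch of the first workaround, assuming the `test.nc` file written by the code sample above (not verified against 0.10.0rc1 specifically):

```python
import xarray as xr

with xr.open_dataarray('test.nc') as da:
    # Force the lazy variable into memory first; dropna() then works.
    da = da.load()
    da.dropna(dim='x', how='any')
```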

Expected Output

This used to work in v0.9.6

Output of xr.show_versions()

INSTALLED VERSIONS ------------------ commit: None python: 3.5.2.final.0 python-bits: 64 OS: Linux OS-release: 4.10.0-38-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 xarray: 0.10.0rc1-5-g2a1d392 pandas: 0.21.0 numpy: 1.13.3 scipy: 1.0.0 netCDF4: 1.3.0 h5netcdf: None Nio: None bottleneck: 1.2.1 cyordereddict: None dask: 0.15.4 matplotlib: 2.1.0 cartopy: 0.15.1 seaborn: 0.8.1 setuptools: 36.6.0 pip: 9.0.1 conda: None pytest: 3.2.3 IPython: 6.2.1 sphinx: 1.6.5
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1694/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
271036342 MDU6SXNzdWUyNzEwMzYzNDI= 1688 NotImplementedError: Vectorized indexing for <class 'xarray.core.indexing.LazilyIndexedArray'> is not implemented. fmaussion 10050469 closed 0   0.10 2415632 1 2017-11-03T16:21:26Z 2017-11-07T20:41:44Z 2017-11-07T20:41:44Z MEMBER      

I think this is a regression in the current 0.10.0rc1:

Code Sample

```python
import xarray as xr
ds = xr.open_dataset('cesm_data.nc', decode_cf=False)
ds.temp.isel(time=ds.time < 274383)  # throws an error


NotImplementedError                       Traceback (most recent call last)
<ipython-input-18-a5c4179cd02d> in <module>()
----> 1 ds.temp.isel(time=ds.time < 274383)

~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/dataarray.py in isel(self, drop, **indexers)
    717         DataArray.sel
    718         """
--> 719         ds = self._to_temp_dataset().isel(drop=drop, **indexers)
    720         return self._from_temp_dataset(ds)
    721

~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/dataset.py in isel(self, drop, **indexers)
   1278         for name, var in iteritems(self._variables):
   1279             var_indexers = {k: v for k, v in indexers_list if k in var.dims}
-> 1280             new_var = var.isel(**var_indexers)
   1281             if not (drop and name in var_indexers):
   1282                 variables[name] = new_var

~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/variable.py in isel(self, **indexers)
    771             if dim in indexers:
    772                 key[i] = indexers[dim]
--> 773         return self[tuple(key)]
    774
    775     def squeeze(self, dim=None):

~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/variable.py in __getitem__(self, key)
    595         """
    596         dims, index_tuple, new_order = self._broadcast_indexes(key)
--> 597         data = self._indexable_data[index_tuple]
    598         if new_order:
    599             data = np.moveaxis(data, range(len(new_order)), new_order)

~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/indexing.py in __getitem__(self, key)
    414
    415     def __getitem__(self, key):
--> 416         return type(self)(_wrap_numpy_scalars(self.array[key]))
    417
    418     def __setitem__(self, key, value):

~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/indexing.py in __getitem__(self, key)
    394
    395     def __getitem__(self, key):
--> 396         return type(self)(_wrap_numpy_scalars(self.array[key]))
    397
    398     def __setitem__(self, key, value):

~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/indexing.py in __getitem__(self, key)
    361
    362     def __getitem__(self, key):
--> 363         return type(self)(self.array, self._updated_key(key))
    364
    365     def __setitem__(self, key, value):

~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/indexing.py in _updated_key(self, new_key)
    336             raise NotImplementedError(
    337                 'Vectorized indexing for {} is not implemented. Load your '
--> 338                 'data first with .load() or .compute().'.format(type(self)))
    339         new_key = iter(expanded_indexer(new_key, self.ndim))
    340         key = []

NotImplementedError: Vectorized indexing for <class 'xarray.core.indexing.LazilyIndexedArray'> is not implemented. Load your data first with .load() or .compute().
```

Here is the file: cesm_data.nc.zip
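For reference, a minimal sketch of the workaround the error message suggests, using the attached file (behaviour on 0.10.0rc1 not verified here):

```python
import xarray as xr

ds = xr.open_dataset('cesm_data.nc', decode_cf=False)
# Load the variable into memory first, then apply the boolean indexer.
ds.temp.load().isel(time=ds.time < 274383)
```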

Expected Output

This used to work in v0.9

Output of xr.show_versions()

INSTALLED VERSIONS ------------------ commit: None python: 3.5.2.final.0 python-bits: 64 OS: Linux OS-release: 4.10.0-38-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 xarray: 0.10.0rc1 pandas: 0.21.0 numpy: 1.13.3 scipy: 1.0.0 netCDF4: 1.3.0 h5netcdf: None Nio: None bottleneck: 1.2.1 cyordereddict: None dask: 0.15.4 matplotlib: 2.1.0 cartopy: 0.15.1 seaborn: 0.8.1 setuptools: 36.6.0 pip: 9.0.1 conda: None pytest: 3.2.3 IPython: 6.2.1 sphinx: 1.6.5
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1688/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
269967350 MDU6SXNzdWUyNjk5NjczNTA= 1675 Ipython autocomplete raises a deprecation warning introduced in #1643. fujiisoup 6815844 closed 0   0.10 2415632 2 2017-10-31T13:56:32Z 2017-11-01T00:48:42Z 2017-11-01T00:48:42Z MEMBER      

Code Sample, a copy-pastable example if possible

```python
# Your code here
import xarray as xr
ds = xr.Dataset({'a': ('x', [0, 1, 2])})
ds.  # -> press 'Tab'
```

Problem description

IPython autocomplete raises a deprecation warning, introduced in #1643:

```
/home/keisukefujii/anaconda3/envs/tensorflow/lib/python3.5/site-packages/jedi/e.  getattr(obj, name)
/home/keisukefujii/anaconda3/envs/tensorflow/lib/python3.5/site-packages/jedi/e.  obj = getattr(obj, name)

In [3]: ds.
```
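The mechanism, roughly: as the warning output above shows, jedi's completion machinery calls getattr() on every candidate attribute name, which touches the deprecated attribute. A minimal sketch that reproduces this outside IPython (it only reports a non-zero count on versions that still carry the deprecated attribute):

```python
import warnings
import xarray as xr

ds = xr.Dataset({'a': ('x', [0, 1, 2])})

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    # Mimic what jedi/IPython tab completion does: getattr() every name.
    for name in dir(ds):
        try:
            getattr(ds, name)
        except Exception:
            pass

deprecations = [w for w in caught if issubclass(w.category, DeprecationWarning)]
print(len(deprecations), 'deprecation warning(s) triggered')
```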

Expected Output

None

Output of xr.show_versions()

# Paste the output here xr.show_versions() here INSTALLED VERSIONS ------------------ commit: 2ef63bf0b199bacc310f448bf0d070a57b7fc043 python: 3.5.2.final.0 python-bits: 64 OS: Linux OS-release: 4.4.0-97-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 xarray: 0.9.6-32-g5122ee4 pandas: 0.20.3 numpy: 1.13.1 scipy: 0.19.0 netCDF4: None h5netcdf: None Nio: None bottleneck: None cyordereddict: None dask: 0.15.4 matplotlib: 2.0.2 cartopy: None seaborn: 0.7.1 setuptools: 36.2.7 pip: 9.0.1 conda: None pytest: 3.0.7 IPython: 6.1.0 sphinx: 1.5.2
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1675/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
217457264 MDU6SXNzdWUyMTc0NTcyNjQ= 1333 Deprecate indexing with non-aligned DataArray objects shoyer 1217238 closed 0   0.10 2415632 2 2017-03-28T06:08:31Z 2017-10-20T00:16:54Z 2017-10-20T00:16:54Z MEMBER      

Currently, we strip labels from DataArray arguments to [] / .loc[] / .sel() / .isel(). But this will break when we finally add indexing with alignment (https://github.com/pydata/xarray/issues/974).

We could start raising deprecation warnings now so users can stop relying on this functionality that will change.
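For context, a minimal sketch of the behaviour in question (the names and values are illustrative, and the exact semantics depend on the xarray version):

```python
import xarray as xr

da = xr.DataArray([10, 20, 30], dims=['x'], coords={'x': [0, 1, 2]})

# An integer indexer that carries its own, unrelated labels.
idx = xr.DataArray([2, 0], dims=['points'], coords={'points': ['a', 'b']})

# Under the behaviour discussed here, the indexer is reduced to its values and
# used positionally (equivalent to da.isel(x=[2, 0])); its labels are never
# consulted, which is what label-aware indexing (GH974) would change.
print(da[idx].values)  # [30 10]
```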

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1333/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
254430377 MDU6SXNzdWUyNTQ0MzAzNzc= 1542 Testing: Failing tests on py36-pandas-dev jhamman 2443309 closed 0   0.10 2415632 4 2017-08-31T18:40:47Z 2017-09-05T22:22:32Z 2017-09-05T22:22:32Z MEMBER      

We currently have 7 failing tests when running against the pandas development code (Travis).

Question for @shoyer: can you take a look at these and see if we should try to get a fix in place prior to v0.10.0? It looks like pandas 0.21 is slated for release on Sept. 30.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1542/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
251666172 MDU6SXNzdWUyNTE2NjYxNzI= 1512 rolling requires pandas >= 0.18 fujiisoup 6815844 closed 0   0.10 2415632 5 2017-08-21T13:58:59Z 2017-08-31T17:25:10Z 2017-08-31T17:25:10Z MEMBER      

We need pandas >= 0.18, because DataFrame.rolling is only supported from 0.18 onward. But the requirements in our setup.py say we need pandas >= 0.15.
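A hypothetical sketch of the corresponding setup.py change (the actual install_requires list in xarray's setup.py differs; only the raised pandas pin is the point here):

```python
# In setup.py: raise the minimum pandas version so DataFrame.rolling exists.
INSTALL_REQUIRES = [
    'numpy >= 1.7',       # illustrative placeholder
    'pandas >= 0.18.0',   # was 'pandas >= 0.15.0'
]
```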

Additionally, I noticed that in Travis's CONDA_ENV=py27-min setup, our unit tests run with pandas == 0.20, though they are presumably intended to run with pandas == 0.15.

Running `conda remove scipy` also removes pandas 0.15 (here is the Travis log):

```
if [[ "$CONDA_ENV" == "py27-min" ]]; then conda remove scipy; fi
Fetching package metadata .........
Solving package specifications: .

Package plan for package removal in environment /home/travis/miniconda/envs/test_env:

The following packages will be REMOVED:

    pandas: 0.15.0-np19py27_0 defaults
    scipy:  0.17.1-np19py27_1 defaults
```

Then in `python setup.py install`, pandas==0.20.3 is installed:

```
Searching for pandas>=0.15.0
Reading https://pypi.python.org/simple/pandas/
Downloading https://pypi.python.org/packages/ee/aa/90c06f249cf4408fa75135ad0df7d64c09cf74c9870733862491ed5f3a50/pandas-0.20.3.tar.gz#md5=4df858f28b4bf4fa07d9fbb7f2568173
Best match: pandas 0.20.3
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1512/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
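To tie the schema back to the query described at the top of the page, a hypothetical reconstruction using Python's sqlite3 module (assuming the Datasette instance serves a SQLite file named github.db):

```python
import sqlite3

conn = sqlite3.connect('github.db')
rows = conn.execute(
    """
    SELECT id, number, title, user, comments, created_at, updated_at, closed_at
    FROM issues
    WHERE milestone = 2415632 AND state = 'closed' AND [type] = 'issue'
    ORDER BY updated_at DESC
    """
).fetchall()
for row in rows:
    print(row)
```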