issues
27 rows where milestone = 2415632 and repo = 13221727 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
241578773 | MDExOlB1bGxSZXF1ZXN0MTI5NTg4NDgx | 1473 | WIP: indexing with broadcasting | shoyer 1217238 | closed | 0 | 0.10 2415632 | 60 | 2017-07-10T01:49:32Z | 2018-02-05T09:42:24Z | 2017-10-19T16:52:44Z | MEMBER | 0 | pydata/xarray/pulls/1473 |
xref https://github.com/pydata/xarray/issues/974#issuecomment-313977794 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1473/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
236347050 | MDExOlB1bGxSZXF1ZXN0MTI1OTM2ODM1 | 1457 | Feature/benchmark | jhamman 2443309 | closed | 0 | 0.10 2415632 | 16 | 2017-06-16T00:11:52Z | 2017-11-13T04:09:53Z | 2017-07-26T16:17:34Z | MEMBER | 0 | pydata/xarray/pulls/1457 |
This is a very bare-bones addition of the asv benchmarking tool to xarray (a minimal benchmark sketch follows this row). I have added four very rudimentary benchmarks in the [...]. Usage of [...]. Before I go any further, I want to get some input from @pydata/xarray on what we want to see in this PR. In previous projects, I have found that designing tests after the fact can end up being fairly arbitrary, and I want to avoid that if at all possible. I'm guessing that we will want to focus our efforts for now on I/O- and dask-related performance, but how we do that is up for discussion. cc @shoyer, @rabernat, @MaximilianR, @Zac-HD |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1457/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
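Since the PR body above only sketches the intent, here is a minimal example of what an asv benchmark for xarray could look like; the module path, class name, and dataset shape are assumptions for illustration, not the four benchmarks actually added in this PR.

```python
# A minimal asv benchmark sketch, assuming the conventional asv layout
# (e.g. asv_bench/benchmarks/dataset_io.py). File name, class name, and
# dataset shape are hypothetical.
import numpy as np
import xarray as xr


class IOWriteNetCDF:
    """Time writing a small Dataset to netCDF, using asv's setup/time_* convention."""

    def setup(self):
        self.ds = xr.Dataset(
            {"foo": (("x", "y"), np.random.randn(100, 100))},
            coords={"x": np.arange(100), "y": np.arange(100)},
        )

    def time_to_netcdf(self):
        # asv times any method whose name starts with time_
        self.ds.to_netcdf("benchmark_io_write.nc")
```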
271998358 | MDU6SXNzdWUyNzE5OTgzNTg= | 1697 | apply_ufunc(dask='parallelized') won't accept scalar *args | crusaderky 6213168 | closed | 0 | 0.10 2415632 | 1 | 2017-11-07T21:56:11Z | 2017-11-10T16:46:26Z | 2017-11-10T16:46:26Z | MEMBER | As of xarray-0.10-rc1:
Works:
```
import xarray
import scipy.stats

a = xarray.DataArray([1, 2], dims=['x'])
xarray.apply_ufunc(scipy.stats.norm.cdf, a, 0, 1)

<xarray.DataArray (x: 2)>
array([ 0.841345, 0.97725 ])
Dimensions without coordinates: x
```
Broken:
```
xarray.apply_ufunc(
    scipy.stats.norm.cdf, a.chunk(), 0, 1,
    dask='parallelized', output_dtypes=[a.dtype]
).compute()

IndexError                                Traceback (most recent call last)
<ipython-input-35-1d4025e1ebdb> in <module>()
----> 1 xarray.apply_ufunc(scipy.stats.norm.cdf, a.chunk(), 0, 1, dask='parallelized', output_dtypes=[a.dtype]).compute()
~/anaconda3/lib/python3.6/site-packages/xarray/core/computation.py in apply_ufunc(func, args, kwargs) 913 join=join, 914 exclude_dims=exclude_dims, --> 915 keep_attrs=keep_attrs) 916 elif any(isinstance(a, Variable) for a in args): 917 return variables_ufunc(args)
~/anaconda3/lib/python3.6/site-packages/xarray/core/computation.py in apply_dataarray_ufunc(func, args, kwargs) 210 211 data_vars = [getattr(a, 'variable', a) for a in args] --> 212 result_var = func(data_vars) 213 214 if signature.num_outputs > 1:
~/anaconda3/lib/python3.6/site-packages/xarray/core/computation.py in apply_variable_ufunc(func, args, kwargs) 561 raise ValueError('unknown setting for dask array handling in ' 562 'apply_ufunc: {}'.format(dask)) --> 563 result_data = func(input_data) 564 565 if signature.num_outputs > 1:
~/anaconda3/lib/python3.6/site-packages/xarray/core/computation.py in <lambda>(arrays) 555 func = lambda arrays: _apply_with_dask_atop( 556 numpy_func, arrays, input_dims, output_dims, signature, --> 557 output_dtypes, output_sizes) 558 elif dask == 'allowed': 559 pass
~/anaconda3/lib/python3.6/site-packages/xarray/core/computation.py in _apply_with_dask_atop(func, args, input_dims, output_dims, signature, output_dtypes, output_sizes) 624 for element in (arg, dims[-getattr(arg, 'ndim', 0):])] 625 return da.atop(func, out_ind, *atop_args, dtype=dtype, concatenate=True, --> 626 new_axes=output_sizes) 627 628
~/anaconda3/lib/python3.6/site-packages/dask/array/core.py in atop(func, out_ind, args, kwargs) 2231 raise ValueError("Must specify dtype of output array") 2232 -> 2233 chunkss, arrays = unify_chunks(args) 2234 for k, v in new_axes.items(): 2235 chunkss[k] = (v,)
~/anaconda3/lib/python3.6/site-packages/dask/array/core.py in unify_chunks(args, *kwargs) 2117 chunks = tuple(chunkss[j] if a.shape[n] > 1 else a.shape[n] 2118 if not np.isnan(sum(chunkss[j])) else None -> 2119 for n, j in enumerate(i)) 2120 if chunks != a.chunks and all(a.chunks): 2121 arrays.append(a.rechunk(chunks))
~/anaconda3/lib/python3.6/site-packages/dask/array/core.py in <genexpr>(.0) 2117 chunks = tuple(chunkss[j] if a.shape[n] > 1 else a.shape[n] 2118 if not np.isnan(sum(chunkss[j])) else None -> 2119 for n, j in enumerate(i)) 2120 if chunks != a.chunks and all(a.chunks): 2121 arrays.append(a.rechunk(chunks))

IndexError: tuple index out of range
```
Workaround:
```
xarray.apply_ufunc(
    scipy.stats.norm.cdf, a, kwargs={'loc': 0, 'scale': 1},
    dask='parallelized', output_dtypes=[a.dtype]).compute()

<xarray.DataArray (x: 2)>
array([ 0.841345, 0.97725 ])
Dimensions without coordinates: x
```
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1697/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
271599372 | MDU6SXNzdWUyNzE1OTkzNzI= | 1694 | Regression: dropna() on lazy variable | fmaussion 10050469 | closed | 0 | 0.10 2415632 | 10 | 2017-11-06T19:53:18Z | 2017-11-08T13:49:01Z | 2017-11-08T13:36:09Z | MEMBER | Code Sample, a copy-pastable example if possible
```python
import numpy as np
import xarray as xr

a = np.random.randn(4, 3)
a[1, 1] = np.NaN
da = xr.DataArray(a, dims=('y', 'x'), coords={'y': np.arange(4), 'x': np.arange(3)})
da.to_netcdf('test.nc')

with xr.open_dataarray('test.nc') as da:
    da.dropna(dim='x', how='any')

ValueError                                Traceback (most recent call last)
<ipython-input-37-8d137cf3a813> in <module>()
      8
      9 with xr.open_dataarray('test.nc') as da:
---> 10     da.dropna(dim='x', how='any')
~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/dataarray.py in dropna(self, dim, how, thresh) 1158 DataArray 1159 """ -> 1160 ds = self._to_temp_dataset().dropna(dim, how=how, thresh=thresh) 1161 return self._from_temp_dataset(ds) 1162
~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/dataset.py in dropna(self, dim, how, thresh, subset) 2292 raise TypeError('must specify how or thresh') 2293 -> 2294 return self.isel(**{dim: mask}) 2295 2296 def fillna(self, value):
~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/dataset.py in isel(self, drop, **indexers) 1291 coord_names = set(variables).intersection(self._coord_names) 1292 selected = self._replace_vars_and_dims(variables, -> 1293 coord_names=coord_names) 1294 1295 # Extract coordinates from indexers
~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/dataset.py in _replace_vars_and_dims(self, variables, coord_names, dims, attrs, inplace) 598 """ 599 if dims is None: --> 600 dims = calculate_dimensions(variables) 601 if inplace: 602 self._dims = dims
~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/dataset.py in calculate_dimensions(variables) 111 raise ValueError('conflicting sizes for dimension %r: ' 112 'length %s on %r and length %s on %r' % --> 113 (dim, size, k, dims[dim], last_used[dim])) 114 return dims 115

ValueError: conflicting sizes for dimension 'y': length 2 on <this-array> and length 4 on 'y'
```
Problem description
See above. Note that the code runs when:
- data is previously read into memory with [...] (a possible workaround sketch follows this row)
Expected Output
This used to work in v0.9.6.
Output of xr.show_versions() [...] |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1694/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
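The note about when the code runs is cut off in this export; a plausible reading, shown here purely as an assumption rather than the reporter's exact words, is that loading the data into memory first avoids the lazy-variable code path that raises the ValueError above.

```python
import xarray as xr

# Assumption: calling .load() before dropna() sidesteps the regression,
# reusing the 'test.nc' file created in the example above.
with xr.open_dataarray('test.nc') as da:
    da = da.load()                         # read the data into memory first
    cleaned = da.dropna(dim='x', how='any')
```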
271036342 | MDU6SXNzdWUyNzEwMzYzNDI= | 1688 | NotImplementedError: Vectorized indexing for <class 'xarray.core.indexing.LazilyIndexedArray'> is not implemented. | fmaussion 10050469 | closed | 0 | 0.10 2415632 | 1 | 2017-11-03T16:21:26Z | 2017-11-07T20:41:44Z | 2017-11-07T20:41:44Z | MEMBER | I think this is a regression in the current 0.10.0rc1:
Code Sample
```python
import xarray as xr

ds = xr.open_dataset('cesm_data.nc', decode_cf=False)
ds.temp.isel(time=ds.time < 274383)  # throws an error

NotImplementedError                       Traceback (most recent call last)
<ipython-input-18-a5c4179cd02d> in <module>()
----> 1 ds.temp.isel(time=ds.time < 274383)
~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/dataarray.py in isel(self, drop, indexers) 717 DataArray.sel 718 """ --> 719 ds = self._to_temp_dataset().isel(drop=drop, indexers) 720 return self._from_temp_dataset(ds) 721
~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/dataset.py in isel(self, drop, indexers) 1278 for name, var in iteritems(self._variables): 1279 var_indexers = {k: v for k, v in indexers_list if k in var.dims} -> 1280 new_var = var.isel(var_indexers) 1281 if not (drop and name in var_indexers): 1282 variables[name] = new_var
~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/variable.py in isel(self, **indexers) 771 if dim in indexers: 772 key[i] = indexers[dim] --> 773 return self[tuple(key)] 774 775 def squeeze(self, dim=None):
~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/variable.py in getitem(self, key) 595 """ 596 dims, index_tuple, new_order = self._broadcast_indexes(key) --> 597 data = self._indexable_data[index_tuple] 598 if new_order: 599 data = np.moveaxis(data, range(len(new_order)), new_order)
~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/indexing.py in getitem(self, key) 414 415 def getitem(self, key): --> 416 return type(self)(_wrap_numpy_scalars(self.array[key])) 417 418 def setitem(self, key, value):
~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/indexing.py in getitem(self, key) 394 395 def getitem(self, key): --> 396 return type(self)(_wrap_numpy_scalars(self.array[key])) 397 398 def setitem(self, key, value):
~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/indexing.py in getitem(self, key) 361 362 def getitem(self, key): --> 363 return type(self)(self.array, self._updated_key(key)) 364 365 def setitem(self, key, value):
~/.pyvirtualenvs/py3/lib/python3.5/site-packages/xarray/core/indexing.py in _updated_key(self, new_key) 336 raise NotImplementedError( 337 'Vectorized indexing for {} is not implemented. Load your ' --> 338 'data first with .load() or .compute().'.format(type(self))) 339 new_key = iter(expanded_indexer(new_key, self.ndim)) 340 key = []

NotImplementedError: Vectorized indexing for <class 'xarray.core.indexing.LazilyIndexedArray'> is not implemented. Load your data first with .load() or .compute().
```
Here is the file: cesm_data.nc.zip
Expected Output
This used to work in v0.9.
Output of xr.show_versions() [...] |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1688/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
253277979 | MDExOlB1bGxSZXF1ZXN0MTM3OTA3NDIx | 1530 | Deprecate old pandas support | fujiisoup 6815844 | closed | 0 | 0.10 2415632 | 1 | 2017-08-28T09:40:02Z | 2017-11-04T09:51:51Z | 2017-08-31T17:25:10Z | MEMBER | 0 | pydata/xarray/pulls/1530 |
Explicitly deprecated support for old pandas (< 0.18) and old numpy (< 1.11).
Some backported functions in [...] |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1530/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
269967350 | MDU6SXNzdWUyNjk5NjczNTA= | 1675 | Ipython autocomplete raises a deprecation warning introduced in #1643. | fujiisoup 6815844 | closed | 0 | 0.10 2415632 | 2 | 2017-10-31T13:56:32Z | 2017-11-01T00:48:42Z | 2017-11-01T00:48:42Z | MEMBER | Code Sample, a copy-pastable example if possible
```python
# Your code here
import xarray as xr
ds = xr.Dataset({'a': ('x', [0, 1, 2])})
ds.   # -> press 'Tab'
```
Problem description
IPython autocomplete raises a deprecation warning, introduced in #1643.
Expected Output
None
Output of xr.show_versions() [...] |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1675/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
244726457 | MDExOlB1bGxSZXF1ZXN0MTMxODE2MTk1 | 1485 | add ISSUE_TEMPLATE for github and xr.show_versions() | jhamman 2443309 | closed | 0 | 0.10 2415632 | 3 | 2017-07-21T16:54:29Z | 2017-10-28T01:24:08Z | 2017-10-28T01:24:02Z | MEMBER | 0 | pydata/xarray/pulls/1485 |
This PR adds a new module-level function, xr.show_versions() [...] (a usage sketch follows this row). |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1485/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
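The body above is truncated; for reference, a minimal call to the module-level function named in the title. This is the standard public API, not code quoted from the PR itself.

```python
import xarray as xr

# Prints the versions of xarray and its dependencies, which the new
# ISSUE_TEMPLATE asks reporters to paste into bug reports.
xr.show_versions()
```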
245624267 | MDExOlB1bGxSZXF1ZXN0MTMyNDQzMjk4 | 1489 | lazily load dask arrays to dask data frames by calling to_dask_dataframe | jmunroe 6181563 | closed | 0 | 0.10 2415632 | 16 | 2017-07-26T06:58:41Z | 2017-10-28T00:46:58Z | 2017-10-28T00:21:52Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1489 |
Working towards a solution for #1462. Just some stub code for the moment. Dask dataframes don't appear to support MultiIndex, so I'm not sure what to do about that. (A minimal usage sketch of the proposed method follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1489/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
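A minimal sketch of the lazy conversion this PR works toward; the Dataset contents and chunking are illustrative, and the final method signature in released xarray may differ from this early stub.

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"temperature": ("time", np.random.randn(1000))},
    coords={"time": np.arange(1000)},
).chunk({"time": 100})

# Each dask-backed variable becomes a column of a dask DataFrame without
# computing the values up front; work happens only when partitions are pulled.
ddf = ds.to_dask_dataframe()
print(ddf.head())   # computes just the first partition
```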
264098911 | MDExOlB1bGxSZXF1ZXN0MTQ1NTk5MzEw | 1619 | Expose apply_ufunc as public API and add documentation | shoyer 1217238 | closed | 0 | 0.10 2415632 | 2 | 2017-10-10T04:54:11Z | 2017-10-20T16:44:51Z | 2017-10-20T16:44:47Z | MEMBER | 0 | pydata/xarray/pulls/1619 |
@MaximilianR @rabernat @jhamman Review from any of you would be appreciated here! |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1619/reactions", "total_count": 4, "+1": 2, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
217457264 | MDU6SXNzdWUyMTc0NTcyNjQ= | 1333 | Deprecate indexing with non-aligned DataArray objects | shoyer 1217238 | closed | 0 | 0.10 2415632 | 2 | 2017-03-28T06:08:31Z | 2017-10-20T00:16:54Z | 2017-10-20T00:16:54Z | MEMBER | Currently, we strip labels from DataArray arguments to [...]. We could start raising deprecation warnings now so users can stop relying on this functionality, which will change. (A hedged illustration follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1333/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
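The body above is cut off before naming the indexing methods affected; the snippet below is a hedged illustration of indexing with a DataArray that carries its own labels, the kind of argument whose labels the issue says are currently stripped. The data and dimension names are made up for illustration.

```python
import xarray as xr

da = xr.DataArray([10, 20, 30], dims="x", coords={"x": [0, 1, 2]})

# A DataArray indexer carrying its own labels along a new dimension.
indexer = xr.DataArray([2, 0], dims="points", coords={"points": ["a", "b"]})

# At the time of this issue, the labels attached to a DataArray indexer were
# simply stripped before indexing; the proposal is to emit a deprecation
# warning before that behavior changes.
print(da[indexer])
```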
254841785 | MDExOlB1bGxSZXF1ZXN0MTM5MDI5NzMx | 1551 | Load nonindex coords ahead of concat() | crusaderky 6213168 | closed | 0 | 0.10 2415632 | 7 | 2017-09-02T23:19:03Z | 2017-10-09T23:32:50Z | 2017-10-09T21:15:31Z | MEMBER | 0 | pydata/xarray/pulls/1551 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1551/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
208215185 | MDExOlB1bGxSZXF1ZXN0MTA2NTkyMjUx | 1272 | Groupby-like API for resampling | darothen 4992424 | closed | 0 | 0.10 2415632 | 27 | 2017-02-16T19:04:07Z | 2017-09-22T16:27:36Z | 2017-09-22T16:27:35Z | NONE | 0 | pydata/xarray/pulls/1272 | This is a work-in-progress to resolve #1269.
Openly welcome feedback/critiques on how I approached this. Subclassing [...] (a hedged sketch of the groupby-like resampling API follows this row). |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
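A hedged sketch of the groupby-like resampling API this PR works toward; the data and the monthly frequency string are illustrative, and the exact deprecation path of the older keyword form is not taken from this PR.

```python
import numpy as np
import pandas as pd
import xarray as xr

ds = xr.Dataset(
    {"tmax": ("time", 15 + 8 * np.random.randn(365))},
    coords={"time": pd.date_range("2017-01-01", periods=365, freq="D")},
)

# Groupby-like form: .resample(...) returns an intermediate object and the
# reduction is applied as a separate method call, mirroring .groupby().
monthly_mean = ds.resample(time="1M").mean()
monthly_count = ds.resample(time="1M").count()

# The older keyword form this supersedes looked roughly like
# ds.resample("1M", dim="time", how="mean").
```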
253349435 | MDExOlB1bGxSZXF1ZXN0MTM3OTYwNDEw | 1532 | Avoid computing dask variables on __repr__ and __getattr__ | crusaderky 6213168 | closed | 0 | 0.10 2415632 | 8 | 2017-08-28T14:37:20Z | 2017-09-21T22:30:02Z | 2017-09-21T20:55:43Z | MEMBER | 0 | pydata/xarray/pulls/1532 |
Stop dataset data vars and non-index dataset/dataarray coords from being loaded by repr() and getattr(). The getattr() case is particularly acute when working in Jupyter, which performs a dozen or so getattr() calls when printing an object. (A small sketch of the intended behavior follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1532/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
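A small sketch of the behavior being protected, assuming a dask-backed Dataset whose values should not be computed just to display it; the array sizes and names are illustrative.

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"a": ("x", np.arange(1_000_000))},
    coords={"c": ("x", np.arange(1_000_000))},   # a non-index coordinate
).chunk({"x": 100_000})

# After this change, repr() (e.g. printing in Jupyter) and attribute lookups
# should not trigger computation of the dask-backed variables.
print(ds)
print(type(ds.a.data))   # still a dask array, not a materialized numpy array
```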
254149471 | MDExOlB1bGxSZXF1ZXN0MTM4NTM5MDAx | 1538 | Fix/1120 | jhamman 2443309 | closed | 0 | 0.10 2415632 | 3 | 2017-08-30T22:02:52Z | 2017-09-06T00:07:11Z | 2017-09-06T00:07:08Z | MEMBER | 0 | pydata/xarray/pulls/1538 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1538/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
254430377 | MDU6SXNzdWUyNTQ0MzAzNzc= | 1542 | Testing: Failing tests on py36-pandas-dev | jhamman 2443309 | closed | 0 | 0.10 2415632 | 4 | 2017-08-31T18:40:47Z | 2017-09-05T22:22:32Z | 2017-09-05T22:22:32Z | MEMBER | We currently have 7 failing tests when run against the pandas development code (travis). Question for @shoyer - can you take a look at these and see if we should try to get a fix in place prior to v0.10.0? It looks like pandas 0.21 is slated for release on Sept. 30. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1542/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
250747314 | MDExOlB1bGxSZXF1ZXN0MTM2MTEzMjA2 | 1508 | ENH: Support using opened netCDF4.Dataset (Fixes #1459) | dopplershift 221526 | closed | 0 | 0.10 2415632 | 5 | 2017-08-16T20:19:01Z | 2017-08-31T22:24:36Z | 2017-08-31T17:18:51Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1508 | Make the filename argument to [...].
#1459 discussed adding an alternate constructor (i.e. a class method) to [...] (a hedged usage sketch follows this row). |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1508/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
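A hedged sketch of working with an already-open netCDF4.Dataset. The store-wrapper entry point shown here (xr.backends.NetCDF4DataStore) is how this capability is exposed in later xarray releases and is an assumption about this PR's final form, as is the file name.

```python
import netCDF4
import xarray as xr

# An already-open low-level handle, e.g. handed over by another library.
nc = netCDF4.Dataset("example.nc", mode="r")   # hypothetical file

# Wrap the open handle in a data store and let xarray build a Dataset on top,
# instead of passing a filename and having xarray open the file itself.
store = xr.backends.NetCDF4DataStore(nc)
ds = xr.open_dataset(store)
```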
251666172 | MDU6SXNzdWUyNTE2NjYxNzI= | 1512 | rolling requires pandas >= 0.18 | fujiisoup 6815844 | closed | 0 | 0.10 2415632 | 5 | 2017-08-21T13:58:59Z | 2017-08-31T17:25:10Z | 2017-08-31T17:25:10Z | MEMBER | We need pandas >= 0.18 because dataframe.rolling is supported after 0.18.
But [...]
Additionally, I noticed that in travis's CONDA_ENV=py27-min setup, our unit tests run with pandas == 0.20, though it might be intended to run with pandas == 0.15. By [...]:
Package plan for package removal in environment /home/travis/miniconda/envs/test_env:
The following packages will be REMOVED: [...] |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1512/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | |||||
246502828 | MDExOlB1bGxSZXF1ZXN0MTMzMDgyMjU0 | 1496 | ENH: three argument version of where | shoyer 1217238 | closed | 0 | 0.10 2415632 | 11 | 2017-07-29T06:15:39Z | 2017-08-08T17:00:34Z | 2017-08-08T17:00:30Z | MEMBER | 0 | pydata/xarray/pulls/1496 | Example usage: [...]
CC @MaximilianR (a hedged sketch of the three-argument where follows this row). |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1496/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
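The example code in this PR body was lost in the export; the snippet below is a hedged reconstruction of what a three-argument where enables, based on the title and on the public API that later shipped, not on the original example.

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"a": (("x", "y"), np.arange(12).reshape(4, 3))})

masked = ds.a.where(ds.a > 5)             # two arguments: False entries become NaN
clipped = ds.a.where(ds.a > 5, other=-1)  # three arguments: supply a fill value instead
flagged = xr.where(ds.a > 5, 1, 0)        # module-level condition/then/else form
```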
242827817 | MDExOlB1bGxSZXF1ZXN0MTMwNDY1MzUx | 1478 | Fixes dataset rename bug (GH1477) | newt0311 24376349 | closed | 0 | 0.10 2415632 | 1 | 2017-07-13T20:55:04Z | 2017-08-04T20:43:23Z | 2017-07-16T04:12:47Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1478 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1478/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
243038175 | MDExOlB1bGxSZXF1ZXN0MTMwNjE5NjE5 | 1479 | Fix test suite failure in TestDataset.test_sel | shoyer 1217238 | closed | 0 | 0.10 2415632 | 2 | 2017-07-14T15:55:33Z | 2017-08-04T20:43:23Z | 2017-07-14T16:31:26Z | MEMBER | 0 | pydata/xarray/pulls/1479 | This is a temporary workaround for https://github.com/pandas-dev/pandas/issues/16896, which was introduced by pandas 0.20.3. We can safely revert it after the next pandas release. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1479/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
245529872 | MDExOlB1bGxSZXF1ZXN0MTMyMzc1NTQz | 1488 | Fix a bug in assert_allclose where rtol and atol were ignored | shoyer 1217238 | closed | 0 | 0.10 2415632 | 0 | 2017-07-25T20:45:11Z | 2017-08-04T20:43:23Z | 2017-07-27T19:57:29Z | MEMBER | 0 | pydata/xarray/pulls/1488 |
~~This still probably should have a regression test.~~ Done. (A usage sketch of assert_allclose with explicit tolerances follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1488/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
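A minimal sketch of the function whose rtol/atol handling this PR fixes, showing the tolerances being passed explicitly; the arrays are illustrative.

```python
import xarray as xr
from xarray.testing import assert_allclose

a = xr.DataArray([1.0, 2.0, 3.0], dims="x")
b = xr.DataArray([1.0, 2.0, 3.0 + 1e-6], dims="x")

# With the bug, rtol/atol were silently ignored; after the fix they control
# how close the values must be for the assertion to pass.
assert_allclose(a, b, rtol=1e-3)               # passes: difference within tolerance
try:
    assert_allclose(a, b, rtol=1e-12, atol=0)  # fails: tolerance too tight
except AssertionError:
    print("values differ by more than the requested tolerance")
```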
235761029 | MDExOlB1bGxSZXF1ZXN0MTI1NTEwMTU3 | 1453 | Automate interpretation of _Unsigned attribute | deeplycloudy 1325771 | closed | 0 | 0.10 2415632 | 7 | 2017-06-14T04:43:02Z | 2017-08-04T20:43:22Z | 2017-07-28T17:39:04Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1453 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1453/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
235950670 | MDExOlB1bGxSZXF1ZXN0MTI1NjQ4ODMx | 1454 | change NotImplemented to NotImplementedError for h5netcdf autoclose=True | werenike 1388357 | closed | 0 | 0.10 2415632 | 1 | 2017-06-14T17:17:32Z | 2017-08-04T20:43:22Z | 2017-06-15T00:33:00Z | NONE | 0 | pydata/xarray/pulls/1454 | Solves this error:
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1454/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
236259796 | MDExOlB1bGxSZXF1ZXN0MTI1ODcxNzQ4 | 1455 | Add attributes to rasterio backend | gbrener 2840348 | closed | 0 | 0.10 2415632 | 2 | 2017-06-15T17:21:45Z | 2017-08-04T20:43:22Z | 2017-07-01T09:55:31Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1455 | Adds the 'res', 'is_tiled', and 'transform' attributes to xarray's rasterio backend.
EDIT: fixed typo; the 'tiled' attribute name was updated to 'is_tiled'. (A hedged usage sketch follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1455/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
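A hedged sketch of reading those attributes back. The xr.open_rasterio entry point (since deprecated in favour of rioxarray) and the example file path are assumptions about typical usage, not text from this PR.

```python
import xarray as xr

# Hypothetical GeoTIFF path; any rasterio-readable raster would do.
da = xr.open_rasterio("example.tif")

# Attributes added by this PR, alongside whatever else the backend attaches.
print(da.attrs["res"])        # (x resolution, y resolution)
print(da.attrs["is_tiled"])   # whether the source raster is internally tiled
print(da.attrs["transform"])  # affine transform coefficients
```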
239636285 | MDExOlB1bGxSZXF1ZXN0MTI4MjY2NDY0 | 1468 | Center the coordinates to pixels for rasterio backend | gbrener 2840348 | closed | 0 | 0.10 2415632 | 8 | 2017-06-29T23:13:13Z | 2017-08-04T20:43:22Z | 2017-07-05T21:30:46Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1468 | Rasterio uses edge-based coordinates, which is a different convention from how xarray treats coordinates (based on description here: http://xarray.pydata.org/en/stable/plotting.html#coordinates). This PR centers them, offsetting by half of the resolution.
CCing @fmaussion since he may be interested. (A small sketch of the half-pixel offset follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1468/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||
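A small sketch of the half-resolution offset described above, assuming a simple north-up raster; the origin, resolution, and grid size are illustrative values, not taken from the PR.

```python
import numpy as np

# Illustrative raster geometry: upper-left corner and pixel size.
x_origin, y_origin = 100000.0, 200000.0   # edge coordinates of the first pixel
dx, dy = 30.0, -30.0                      # pixel width and (negative) height
nx, ny = 512, 256

# Edge-based coordinates (rasterio convention) start at the pixel corner;
# xarray wants the coordinate of the pixel centre, so shift by half a pixel.
x_centres = x_origin + dx / 2 + dx * np.arange(nx)
y_centres = y_origin + dy / 2 + dy * np.arange(ny)
```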
242401142 | MDExOlB1bGxSZXF1ZXN0MTMwMTUzNzg5 | 1476 | Fix text in error message, A leftover from #993 | mzuehlke 204523 | closed | 0 | 0.10 2415632 | 3 | 2017-07-12T14:31:33Z | 2017-08-04T20:43:22Z | 2017-07-12T15:56:50Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1476 | I've spotted this wrong error message.
In my opinion this is too small to justify an entry in [...] |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1476/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);