issues


15 rows where user = 7441788 sorted by updated_at descending


type: issue 13 · pull 2
state: closed 8 · open 7
repo: xarray 15
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
879033384 MDU6SXNzdWU4NzkwMzMzODQ= 5278 DataArray.clip() no longer supports the out argument seth-p 7441788 closed 0     12 2021-05-07T13:45:01Z 2023-12-02T05:52:07Z 2023-12-02T05:52:07Z CONTRIBUTOR      

As of xarray 0.18.0, DataArray.clip() no longer supports the out argument. This is due to #5184. Could you please restore out support?
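For context, a minimal sketch of the in-place pattern that an out argument enables at the NumPy level (the exact pre-0.18.0 xarray call signature is assumed here, not quoted from the report):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(5.0), dims="x")

# NumPy's clip can write its result into a preallocated buffer:
buf = np.empty_like(da.values)
np.clip(da.values, 1.0, 3.0, out=buf)

# The request is for DataArray.clip() to accept the same keyword again,
# e.g. (assumed pre-0.18.0 usage): da.clip(1.0, 3.0, out=...)
```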

@max-sixty

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5278/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  not_planned xarray 13221727 issue
266133430 MDU6SXNzdWUyNjYxMzM0MzA= 1635 DataArray.argsort should be deleted seth-p 7441788 open 0     7 2017-10-17T13:52:54Z 2023-03-10T02:31:27Z   CONTRIBUTOR      

Originally posted to https://groups.google.com/forum/#!topic/xarray/wsxeiIPLhgM

DataArray.argsort() appears to simply wrap the result of DataArray.values.argsort() in a same-shape DataArray. This is semantically nonsensical. If anything, the index on the resulting argsort() values should be range(len(da)), but that adds little to the underlying numpy structure. And there's not much reason for da.argsort() to return the raw result of da.values.argsort(). So DataArray.argsort() should simply be deleted.

On the other hand a new function DataArray.rank() that wraps da.values.argsort().argsort() (note repeated call to ndarray.argsort()) in the structure of the original DataArray would make sense, and perhaps even be useful... (Note that I'm not claiming that .argsort().argsort() is the fastest way to calculate this, but it's probably good enough, at least for an initial implementation.)
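A minimal sketch of the rank-like behaviour suggested above (the helper name rank_1d is hypothetical, and this assumes a 1-D array with distinct values):

```python
import numpy as np
import xarray as xr

def rank_1d(da: xr.DataArray) -> xr.DataArray:
    # Double argsort gives zero-based ranks; wrap them back in the
    # dims/coords of the original DataArray, as suggested above.
    ranks = da.values.argsort().argsort()
    return da.copy(data=ranks)

da = xr.DataArray([30.0, 10.0, 20.0], dims="x")
print(rank_1d(da).values)  # [2 0 1]
```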

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1635/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
572875480 MDU6SXNzdWU1NzI4NzU0ODA= 3810 {DataArray,Dataset}.rank() should support an optional list of dimensions seth-p 7441788 open 0     10 2020-02-28T16:57:08Z 2021-11-19T15:09:10Z   CONTRIBUTOR      

{DataArray,Dataset}.rank() requires a single dim. Why not support an optional list of dimensions (defaulting to all)?

```
In [1]: import numpy as np, xarray as xr

In [2]: d = xr.DataArray(np.arange(12).reshape((4,3)), dims=('abc', 'xyz'))

In [3]: d
Out[3]:
<xarray.DataArray (abc: 4, xyz: 3)>
array([[ 0,  1,  2],
       [ 3,  4,  5],
       [ 6,  7,  8],
       [ 9, 10, 11]])
Dimensions without coordinates: abc, xyz

In [4]: d.rank()
TypeError                                 Traceback (most recent call last)
<ipython-input-4-585571c1eca8> in <module>
----> 1 d.rank()

TypeError: rank() missing 1 required positional argument: 'dim'

In [5]: d.rank(dim=('xyz', 'abc'))
TypeError                                 Traceback (most recent call last)
<ipython-input-5-006c73551ff8> in <module>
----> 1 d.rank(dim=('xyz', 'abc'))

~/.conda/envs/build/lib/python3.7/site-packages/xarray/core/dataarray.py in rank(self, dim, pct, keep_attrs)
   3054         """
   3055
-> 3056         ds = self._to_temp_dataset().rank(dim, pct=pct, keep_attrs=keep_attrs)
   3057         return self._from_temp_dataset(ds)
   3058

~/.conda/envs/build/lib/python3.7/site-packages/xarray/core/dataset.py in rank(self, dim, pct, keep_attrs)
   5295         """
   5296         if dim not in self.dims:
-> 5297             raise ValueError("Dataset does not contain the dimension: %s" % dim)
   5298
   5299         variables = {}

TypeError: not all arguments converted during string formatting

In [6]: xr.show_versions()

INSTALLED VERSIONS
------------------
commit: None python: 3.7.6 | packaged by conda-forge | (default, Jan 7 2020, 22:33:48) [GCC 7.3.0] python-bits: 64 OS: Linux OS-release: 3.10.0-693.el7.x86_64 machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 libhdf5: 1.10.5 libnetcdf: 4.7.3

xarray: 0.15.0 pandas: 1.0.1 numpy: 1.18.1 scipy: 1.4.1 netCDF4: 1.5.3 pydap: None h5netcdf: 0.8.0 h5py: 2.10.0 Nio: None zarr: None cftime: 1.0.4.2 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.3.2 dask: 2.11.0 distributed: 2.11.0 matplotlib: 3.1.3 cartopy: None seaborn: 0.10.0 numbagg: installed setuptools: 45.2.0.post20200209 pip: 20.0.2 conda: 4.8.2 pytest: None IPython: 7.12.0 sphinx: None
```
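Until rank() accepts multiple dimensions, one possible workaround (an untested sketch, assuming bottleneck is installed so that rank() works at all) is to stack the dimensions into a temporary one, rank along it, and unstack:

```python
import numpy as np
import xarray as xr

d = xr.DataArray(np.arange(12).reshape((4, 3)), dims=('abc', 'xyz'))

# Rank over both dimensions at once by collapsing them into a temporary "_all" dim.
ranked = (
    d.stack(_all=('abc', 'xyz'))
     .rank(dim='_all')
     .unstack('_all')
)
```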

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3810/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
483028482 MDU6SXNzdWU0ODMwMjg0ODI= 3236 ENH: apply_ufunc logging or callback seth-p 7441788 open 0     3 2019-08-20T18:59:36Z 2021-07-21T10:00:51Z   CONTRIBUTOR      

With long-running (think hours) apply_ufunc(..., vectorize=True) calls, it would be nice to have logging of the non-core input dims currently being evaluated, perhaps via a callback.
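In the absence of built-in support, a rough sketch of one way to approximate this today: wrap the core function so that each vectorized invocation is logged (the wrapper only sees the raw block, not the non-core labels, so this falls short of the requested feature; names like with_progress are hypothetical):

```python
import logging
import numpy as np
import xarray as xr

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("apply_ufunc_progress")

def with_progress(func):
    # Log each call made by the vectorized loop before delegating to func.
    def wrapper(block):
        log.info("processing block of shape %s", block.shape)
        return func(block)
    return wrapper

da = xr.DataArray(np.random.rand(3, 5), dims=("case", "time"))
result = xr.apply_ufunc(
    with_progress(np.mean),
    da,
    input_core_dims=[["time"]],
    vectorize=True,
)
```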

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3236/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
309098246 MDU6SXNzdWUzMDkwOTgyNDY= 2017 np.minimum.accumulate(da) doesn't work seth-p 7441788 open 0     7 2018-03-27T19:15:06Z 2021-07-04T03:20:18Z   CONTRIBUTOR      

Code Sample, a copy-pastable example if possible

```
In [1]: import numpy as np

In [2]: import xarray as xr

In [3]: np.minimum.accumulate(np.array([3,2,4,1]))
Out[3]: array([3, 2, 2, 1], dtype=int32)

In [4]: np.minimum.accumulate(xr.DataArray([3,2,4,1]))
NotImplementedError                       Traceback (most recent call last)
<ipython-input-7-7205433fb365> in <module>()
----> 1 np.minimum.accumulate(xr.DataArray([3,2,4,1]))

~\Anaconda3\lib\site-packages\xarray\core\arithmetic.py in __array_ufunc__(self, ufunc, method, *inputs, **kwargs)
     49             'alternative, consider explicitly converting xarray objects '
     50             'to NumPy arrays (e.g., with .values).'
---> 51             .format(method, ufunc))
     52
     53         if any(isinstance(o, SupportsArithmetic) for o in out):

NotImplementedError: accumulate method for ufunc <ufunc 'minimum'> is not implemented on xarray
objects, which currently only support the __call__ method. As an alternative, consider explicitly
converting xarray objects to NumPy arrays (e.g., with .values).
```

Problem description

I would expect this to work, just as xr.apply_ufunc(np.minimum.accumulate, xr.DataArray([3,2,4,1])) does.

Expected Output

Out[4]: <xarray.DataArray (dim_0: 4)> array([3, 2, 2, 1]) Dimensions without coordinates: dim_0
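The workaround mentioned in the problem description, spelled out as a runnable snippet:

```python
import numpy as np
import xarray as xr

da = xr.DataArray([3, 2, 4, 1])

# Route the ufunc's accumulate method through apply_ufunc explicitly.
result = xr.apply_ufunc(np.minimum.accumulate, da)
print(result.values)  # [3 2 2 1]
```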

Output of xr.show_versions()

```
commit: None python: 3.6.4.final.0 python-bits: 64 OS: Windows OS-release: 10 machine: AMD64 processor: Intel64 Family 6 Model 62 Stepping 4, GenuineIntel byteorder: little LC_ALL: None LANG: None LOCALE: None.None

xarray: 0.10.2 pandas: 0.22.0 numpy: 1.14.2 scipy: 1.0.0 netCDF4: None h5netcdf: 0.5.0 h5py: 2.7.1 Nio: None zarr: None bottleneck: 1.2.1 cyordereddict: None dask: 0.17.1 distributed: 1.21.3 matplotlib: 2.2.2 cartopy: None seaborn: 0.8.1 setuptools: 39.0.1 pip: 9.0.2 conda: 4.3.34 pytest: 3.4.2 IPython: 6.2.1 sphinx: 1.7.1
```

Originally posted to https://groups.google.com/forum/#!topic/xarray/LiwxrJcJBwY.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2017/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
207317762 MDU6SXNzdWUyMDczMTc3NjI= 1266 Coordinate type changing from string to object seth-p 7441788 open 0     2 2017-02-13T19:38:16Z 2020-10-27T16:33:46Z   CONTRIBUTOR      

Originally posted on https://groups.google.com/forum/#!topic/xarray/4k8ZAx998UU. I would expect [2], [3], and [4] to produce identical results, with coordinate xyz being of type |S1 (in general |Sn where n is minimal to accommodate the coordinate values), not object. This behavior is observed with xarray versions 0.8.2 and 0.9.1.

```
Python 2.7.12 (v2.7.12:d33e0cf91556, Jun 27 2016, 15:24:40) [MSC v.1500 64 bit (AMD64)]
Type "copyright", "credits" or "license" for more information.

IPython 5.1.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: from xarray import DataArray

In [2]: DataArray([1.], dims=('xyz',), coords={'xyz': ['a']}) + \
   ...: DataArray([5.], dims=('xyz',), coords={'xyz': ['a']})
Out[2]:
<xarray.DataArray (xyz: 1)>
array([ 6.])
Coordinates:
  * xyz      (xyz) |S1 'a'

In [3]: DataArray([1., 2.], dims=('xyz',), coords={'xyz': ['a', 'b']}) + \
   ...: DataArray([5.], dims=('xyz',), coords={'xyz': ['a']})
Out[3]:
<xarray.DataArray (xyz: 1)>
array([ 6.])
Coordinates:
  * xyz      (xyz) object 'a'

In [4]: DataArray([1.], dims=('xyz',), coords={'xyz': ['a']}) + \
   ...: DataArray([5., 6.], dims=('xyz',), coords={'xyz': ['a', 'b']})
Out[4]:
<xarray.DataArray (xyz: 1)>
array([ 6.])
Coordinates:
  * xyz      (xyz) object 'a'
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1266/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
683657289 MDU6SXNzdWU2ODM2NTcyODk= 4363 Indexing a datetime64[ns] coordinate with a scalar datetime.date produces a KeyError seth-p 7441788 open 0     2 2020-08-21T15:54:16Z 2020-09-19T19:16:37Z   CONTRIBUTOR      

Indexing a datetime64[ns] coordinate with a scalar datetime.date produces a KeyError ([6]). Curiously, indexing with a datetime.date slice does work ([5]). I would expect [6] to work just like [4].

This may well be related to (or a duplicate of) #3736, #4283, #4292, #4306, #4319, or #4370, but none of those actually mentions datetime.date objects, so I can't tell.

```python
In [1]: import xarray as xr, pandas as pd, datetime as dt

In [2]: x = xr.DataArray([1., 2., 3.], [('foo', pd.date_range('2010-01-01', periods=3))])

In [3]: x
Out[3]:
<xarray.DataArray (foo: 3)>
array([1., 2., 3.])
Coordinates:
  * foo      (foo) datetime64[ns] 2010-01-01 2010-01-02 2010-01-03

In [4]: x.loc[dt.datetime(2010, 1, 1)]
Out[4]:
<xarray.DataArray ()>
array(1.)
Coordinates:
    foo      datetime64[ns] 2010-01-01

In [5]: x.loc[dt.date(2010, 1, 1):dt.date(2010, 1, 3)]
Out[5]:
<xarray.DataArray (foo: 3)>
array([1., 2., 3.])
Coordinates:
  * foo      (foo) datetime64[ns] 2010-01-01 2010-01-02 2010-01-03

In [6]: x.loc[dt.date(2010, 1, 1)]
KeyError                                  Traceback (most recent call last)
<ipython-input-5-8ef314626f7d> in <module>
----> 1 x.loc[dt.date(2010, 1, 1)]

~/.conda/envs/build/lib/python3.7/site-packages/xarray/core/dataarray.py in getitem(self, key) 196 labels = indexing.expanded_indexer(key, self.data_array.ndim) 197 key = dict(zip(self.data_array.dims, labels)) --> 198 return self.data_array.sel(**key) 199 200 def setitem(self, key, value) -> None:

~/.conda/envs/build/lib/python3.7/site-packages/xarray/core/dataarray.py in sel(self, indexers, method, tolerance, drop, indexers_kwargs) 1152 method=method, 1153 tolerance=tolerance, -> 1154 indexers_kwargs, 1155 ) 1156 return self._from_temp_dataset(ds)

~/.conda/envs/build/lib/python3.7/site-packages/xarray/core/dataset.py in sel(self, indexers, method, tolerance, drop, **indexers_kwargs) 2100 indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "sel") 2101 pos_indexers, new_indexes = remap_label_indexers( -> 2102 self, indexers=indexers, method=method, tolerance=tolerance 2103 ) 2104 result = self.isel(indexers=pos_indexers, drop=drop)

~/.conda/envs/build/lib/python3.7/site-packages/xarray/core/coordinates.py in remap_label_indexers(obj, indexers, method, tolerance, **indexers_kwargs) 395 396 pos_indexers, new_indexes = indexing.remap_label_indexers( --> 397 obj, v_indexers, method=method, tolerance=tolerance 398 ) 399 # attach indexer's coordinate to pos_indexers

~/.conda/envs/build/lib/python3.7/site-packages/xarray/core/indexing.py in remap_label_indexers(data_obj, indexers, method, tolerance) 268 coords_dtype = data_obj.coords[dim].dtype 269 label = maybe_cast_to_coords_dtype(label, coords_dtype) --> 270 idxr, new_idx = convert_label_indexer(index, label, dim, method, tolerance) 271 pos_indexers[dim] = idxr 272 if new_idx is not None:

~/.conda/envs/build/lib/python3.7/site-packages/xarray/core/indexing.py in convert_label_indexer(index, label, index_name, method, tolerance) 188 else: 189 indexer = index.get_loc( --> 190 label.item(), method=method, tolerance=tolerance 191 ) 192 elif label.dtype.kind == "b":

~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/datetimes.py in get_loc(self, key, method, tolerance) 620 else: 621 # unrecognized type --> 622 raise KeyError(key) 623 624 try:

KeyError: datetime.date(2010, 1, 1)
```
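A possible workaround (my assumption, not part of the report): convert the scalar datetime.date to a pandas Timestamp before indexing, which makes the lookup behave like In [4] above.

```python
import datetime as dt
import pandas as pd

# x is the DataArray constructed in In [2] above.
value = x.loc[pd.Timestamp(dt.date(2010, 1, 1))]
```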

Environment:

Output of xr.show_versions()

INSTALLED VERSIONS
------------------
commit: None python: 3.7.8 | packaged by conda-forge | (default, Jul 31 2020, 02:25:08) [GCC 7.5.0] python-bits: 64 OS: Linux OS-release: 3.10.0-693.el7.x86_64 machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 libhdf5: 1.10.6 libnetcdf: 4.7.4 xarray: 0.16.0 pandas: 1.1.0 numpy: 1.19.1 scipy: 1.5.2 netCDF4: 1.5.4 pydap: None h5netcdf: 0.8.1 h5py: 2.10.0 Nio: None zarr: None cftime: 1.2.1 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.3.2 dask: 2.23.0 distributed: 2.23.0 matplotlib: 3.3.1 cartopy: None seaborn: 0.10.1 numbagg: installed pint: None setuptools: 49.6.0.post20200814 pip: 20.2.2 conda: 4.8.4 pytest: None IPython: 7.17.0 sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4363/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
614149170 MDU6SXNzdWU2MTQxNDkxNzA= 4044 open_mfdataset(paths, combine='nested') with and without concat_dim=None seth-p 7441788 closed 0     2 2020-05-07T15:31:07Z 2020-05-07T22:34:43Z 2020-05-07T22:34:43Z CONTRIBUTOR      

Is there a good reason open_mfdataset(paths, combine='nested') produces an error rather than working like open_mfdataset(paths, combine='nested', concat_dim=None)?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4044/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
575564170 MDU6SXNzdWU1NzU1NjQxNzA= 3829 {DataArray,Dataset} accessors with parameters seth-p 7441788 open 0     4 2020-03-04T16:41:55Z 2020-04-02T12:11:47Z   CONTRIBUTOR      

I would like to be able to create a DataArray accessor that takes parameters, e.g. obj.weighted(w).sum(dim). This appears to be impossible using the existing @register_{dataarray,dataset}_accessor, which supports only accessors of the form obj.weighted.sum(w, dim).

To support the desired syntax, one could simply change https://github.com/pydata/xarray/blob/master/xarray/core/extensions.py#L36 from accessor_obj = self._accessor(obj) to accessor_obj = partial(self._accessor, obj). But that would break the current syntax (i.e. it would require obj.accessor().foo()), so is clearly not acceptable.

So any suggestions (short of simply creating slightly modified copies of register_{dataarray,dataset}_accessor) for supporting both the existing obj.accessor.foo() syntax as well as my desired obj.accessor(*args, **kwargs).foo() syntax?
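One way to get both syntaxes with the existing decorator is to make the accessor object itself callable. A minimal sketch (the accessor name weighted_demo is hypothetical, and this is not an endorsed design, just an illustration):

```python
import xarray as xr

@xr.register_dataarray_accessor("weighted_demo")  # hypothetical name
class WeightedDemoAccessor:
    def __init__(self, obj):
        self._obj = obj
        self._weights = None

    def __call__(self, weights):
        # obj.weighted_demo(w) returns a configured copy, so both
        # obj.weighted_demo.sum(dim) and obj.weighted_demo(w).sum(dim) work.
        configured = WeightedDemoAccessor(self._obj)
        configured._weights = weights
        return configured

    def sum(self, dim=None):
        if self._weights is None:
            return self._obj.sum(dim)
        return (self._obj * self._weights).sum(dim)
```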

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3829/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
558204984 MDU6SXNzdWU1NTgyMDQ5ODQ= 3736 BUG: datetime.date slicing doesn't work with Pandas 1.0.0 seth-p 7441788 closed 0     3 2020-01-31T15:39:44Z 2020-02-05T21:09:43Z 2020-02-05T21:09:43Z CONTRIBUTOR      

The following code used to work before I upgraded Pandas to 1.0.0. I would expect [5] to produce the same result as [4]. I don't know if the failure of datetime.date slicing is (a) expected behavior; (b) a Pandas bug; or (c) an xarray bug due to not being updated to reflect an intended change in Pandas.

```python
Python 3.7.6 | packaged by conda-forge | (default, Jan 7 2020, 22:33:48)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.11.1 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import xarray as xr, pandas as pd, datetime as dt

In [2]: x = xr.DataArray([1., 2., 3.], [('foo', pd.date_range('2010-01-01', periods=3))])

In [3]: x
Out[3]:
<xarray.DataArray (foo: 3)>
array([1., 2., 3.])
Coordinates:
  * foo      (foo) datetime64[ns] 2010-01-01 2010-01-02 2010-01-03

In [4]: x.loc['2010-01-02':'2010-01-04']
Out[4]:
<xarray.DataArray (foo: 2)>
array([2., 3.])
Coordinates:
  * foo      (foo) datetime64[ns] 2010-01-02 2010-01-03

In [5]: x.loc[dt.date(2010, 1, 2):dt.date(2010, 1, 4)]

TypeError Traceback (most recent call last) pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.Int64HashTable.get_item()

TypeError: an integer is required

During handling of the above exception, another exception occurred:

KeyError Traceback (most recent call last) ~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance) 2645 try: -> 2646 return self._engine.get_loc(key) 2647 except KeyError:

pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine._date_check_type()

KeyError: datetime.date(2010, 1, 4)

During handling of the above exception, another exception occurred:

TypeError Traceback (most recent call last) pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.Int64HashTable.get_item()

TypeError: an integer is required

During handling of the above exception, another exception occurred:

KeyError Traceback (most recent call last) ~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/datetimes.py in get_loc(self, key, method, tolerance) 714 try: --> 715 return Index.get_loc(self, key, method, tolerance) 716 except (KeyError, ValueError, TypeError):

~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance) 2647 except KeyError: -> 2648 return self._engine.get_loc(self._maybe_cast_indexer(key)) 2649 indexer = self.get_indexer([key], method=method, tolerance=tolerance)

pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine._date_check_type()

KeyError: datetime.date(2010, 1, 4)

During handling of the above exception, another exception occurred:

KeyError Traceback (most recent call last) pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.Int64HashTable.get_item()

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.Int64HashTable.get_item()

KeyError: 1262563200000000000

During handling of the above exception, another exception occurred:

KeyError Traceback (most recent call last) ~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance) 2645 try: -> 2646 return self._engine.get_loc(key) 2647 except KeyError:

pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

KeyError: Timestamp('2010-01-04 00:00:00')

During handling of the above exception, another exception occurred:

KeyError Traceback (most recent call last) pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.Int64HashTable.get_item()

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.Int64HashTable.get_item()

KeyError: 1262563200000000000

During handling of the above exception, another exception occurred:

KeyError Traceback (most recent call last) ~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/datetimes.py in get_loc(self, key, method, tolerance) 727 stamp = stamp.tz_localize(self.tz) --> 728 return Index.get_loc(self, stamp, method, tolerance) 729 except KeyError:

~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance) 2647 except KeyError: -> 2648 return self._engine.get_loc(self._maybe_cast_indexer(key)) 2649 indexer = self.get_indexer([key], method=method, tolerance=tolerance)

pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

KeyError: Timestamp('2010-01-04 00:00:00')

During handling of the above exception, another exception occurred:

KeyError Traceback (most recent call last) ~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/base.py in get_slice_bound(self, label, side, kind) 4841 try: -> 4842 slc = self.get_loc(label) 4843 except KeyError as err:

~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/datetimes.py in get_loc(self, key, method, tolerance) 729 except KeyError: --> 730 raise KeyError(key) 731 except ValueError as e:

KeyError: datetime.date(2010, 1, 4)

During handling of the above exception, another exception occurred:

TypeError Traceback (most recent call last) <ipython-input-7-d06631d74971> in <module> ----> 1 x.loc[dt.date(2010, 1, 2): dt.date(2010, 1, 4)]

~/.conda/envs/build/lib/python3.7/site-packages/xarray/core/dataarray.py in getitem(self, key) 194 labels = indexing.expanded_indexer(key, self.data_array.ndim) 195 key = dict(zip(self.data_array.dims, labels)) --> 196 return self.data_array.sel(**key) 197 198 def setitem(self, key, value) -> None:

~/.conda/envs/build/lib/python3.7/site-packages/xarray/core/dataarray.py in sel(self, indexers, method, tolerance, drop, indexers_kwargs) 1049 method=method, 1050 tolerance=tolerance, -> 1051 indexers_kwargs, 1052 ) 1053 return self._from_temp_dataset(ds)

~/.conda/envs/build/lib/python3.7/site-packages/xarray/core/dataset.py in sel(self, indexers, method, tolerance, drop, **indexers_kwargs) 2012 indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "sel") 2013 pos_indexers, new_indexes = remap_label_indexers( -> 2014 self, indexers=indexers, method=method, tolerance=tolerance 2015 ) 2016 result = self.isel(indexers=pos_indexers, drop=drop)

~/.conda/envs/build/lib/python3.7/site-packages/xarray/core/coordinates.py in remap_label_indexers(obj, indexers, method, tolerance, **indexers_kwargs) 390 391 pos_indexers, new_indexes = indexing.remap_label_indexers( --> 392 obj, v_indexers, method=method, tolerance=tolerance 393 ) 394 # attach indexer's coordinate to pos_indexers

~/.conda/envs/build/lib/python3.7/site-packages/xarray/core/indexing.py in remap_label_indexers(data_obj, indexers, method, tolerance) 258 coords_dtype = data_obj.coords[dim].dtype 259 label = maybe_cast_to_coords_dtype(label, coords_dtype) --> 260 idxr, new_idx = convert_label_indexer(index, label, dim, method, tolerance) 261 pos_indexers[dim] = idxr 262 if new_idx is not None:

~/.conda/envs/build/lib/python3.7/site-packages/xarray/core/indexing.py in convert_label_indexer(index, label, index_name, method, tolerance) 122 _sanitize_slice_element(label.start), 123 _sanitize_slice_element(label.stop), --> 124 _sanitize_slice_element(label.step), 125 ) 126 if not isinstance(indexer, slice):

~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/datetimes.py in slice_indexer(self, start, end, step, kind) 806 807 try: --> 808 return Index.slice_indexer(self, start, end, step, kind=kind) 809 except KeyError: 810 # For historical reasons DatetimeIndex by default supports

~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/base.py in slice_indexer(self, start, end, step, kind) 4711 slice(1, 3) 4712 """ -> 4713 start_slice, end_slice = self.slice_locs(start, end, step=step, kind=kind) 4714 4715 # return a slice

~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/base.py in slice_locs(self, start, end, step, kind) 4930 end_slice = None 4931 if end is not None: -> 4932 end_slice = self.get_slice_bound(end, "right", kind) 4933 if end_slice is None: 4934 end_slice = len(self)

~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/base.py in get_slice_bound(self, label, side, kind) 4843 except KeyError as err: 4844 try: -> 4845 return self._searchsorted_monotonic(label, side) 4846 except ValueError: 4847 # raise the original KeyError

~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/base.py in _searchsorted_monotonic(self, label, side) 4794 def _searchsorted_monotonic(self, label, side="left"): 4795 if self.is_monotonic_increasing: -> 4796 return self.searchsorted(label, side=side) 4797 elif self.is_monotonic_decreasing: 4798 # np.searchsorted expects ascending sort order, have to reverse

~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/datetimes.py in searchsorted(self, value, side, sorter) 851 elif not isinstance(value, DatetimeArray): 852 raise TypeError( --> 853 "searchsorted requires compatible dtype or scalar, " 854 f"not {type(value).name}" 855 )

TypeError: searchsorted requires compatible dtype or scalar, not date
```

```
commit: None python: 3.7.6 | packaged by conda-forge | (default, Jan 7 2020, 22:33:48) [GCC 7.3.0] python-bits: 64 OS: Linux OS-release: 3.10.0-693.el7.x86_64 machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 libhdf5: 1.10.5 libnetcdf: 4.7.3

xarray: 0.14.1 pandas: 1.0.0 numpy: 1.17.5 scipy: 1.4.1 netCDF4: 1.5.3 pydap: None h5netcdf: 0.7.4 h5py: 2.10.0 Nio: None zarr: None cftime: 1.0.4.2 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.3.1 dask: 2.10.1 distributed: 2.10.0 matplotlib: 3.1.2 cartopy: None seaborn: 0.9.0 numbagg: installed setuptools: 45.1.0.post20200119 pip: 20.0.2 conda: 4.8.2 pytest: None IPython: 7.11.1 sphinx: None
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3736/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
468157549 MDU6SXNzdWU0NjgxNTc1NDk= 3133 Add StringAccessor.format() seth-p 7441788 closed 0     0 2019-07-15T14:23:10Z 2019-07-15T19:49:09Z 2019-07-15T19:49:09Z CONTRIBUTOR      

Actually, on further thought I'm not sure how this should work, so I'm going to close this.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3133/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
341664808 MDExOlB1bGxSZXF1ZXN0MjAxNzQ5NDg2 2293 ENH: format_array_flat() always displays first and last items. seth-p 7441788 closed 0     7 2018-07-16T20:21:47Z 2018-07-20T16:05:05Z 2018-07-20T16:04:51Z CONTRIBUTOR   0 pydata/xarray/pulls/2293
  • [x] Closes #1186 (remove if there is no corresponding issue, which should only be the case for minor changes)
  • [x] Tests added (for all bug fixes or enhancements)
  • [x] Tests passed (for all non-documentation changes)
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2293/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
197709208 MDU6SXNzdWUxOTc3MDkyMDg= 1186 Including last coordinate values when displaying coordinates seth-p 7441788 closed 0     2 2016-12-27T14:18:19Z 2018-07-20T16:04:51Z 2018-07-20T16:04:51Z CONTRIBUTOR      

I first posted this to https://groups.google.com/forum/#!topic/xarray/wjPtXMr91sg .

I'm afraid I'm not set up to submit a PR, but I think something like the following should work once one has created a last_n_items() similar to the existing first_n_items().

Also, I wonder if it would make sense to change the minimum number of items displayed from 1 to min(2, items_ndarray.size).

```
def format_array_flat(items_ndarray, max_width):
    """Return a formatted string for as many items in the flattened version of
    items_ndarray that will fit within max_width characters
    """
    # every item will take up at least two characters, but we always want to
    # print at least one item
    max_possibly_relevant = max(int(np.ceil(max_width / 2.0)), 1)
    relevant_items = sum(
        ([x, y] for (x, y) in
         zip(first_n_items(items_ndarray, (max_possibly_relevant + 1) // 2),
             reversed(last_n_items(items_ndarray, max_possibly_relevant // 2)))),
        [])
    pprint_items = format_items(relevant_items)

    cum_len = np.cumsum([len(s) + 1 for s in pprint_items]) - 1
    if (max_possibly_relevant < items_ndarray.size or
            (cum_len > max_width).any()):
        padding = u' ... '
        count = max(np.argmax((cum_len + len(padding)) > max_width), 1)
    else:
        count = items_ndarray.size
        padding = u'' if (count <= 1) else u' '

    pprint_str = u' '.join(np.take(pprint_items, range(0, count, 2))) + padding + \
                 u' '.join(np.take(pprint_items, range(count - (count % 2) - 1, 0, -2)))
    return pprint_str
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1186/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
341149017 MDExOlB1bGxSZXF1ZXN0MjAxMzkyMDMx 2285 ENH: format_array_flat() always displays first and last items. seth-p 7441788 closed 0     4 2018-07-13T20:25:03Z 2018-07-16T20:23:50Z 2018-07-16T20:20:39Z CONTRIBUTOR   0 pydata/xarray/pulls/2285
  • [ ] Closes #1186 (remove if there is no corresponding issue, which should only be the case for minor changes)
  • [ ] Tests added (for all bug fixes or enhancements)
  • [ ] Tests passed (for all non-documentation changes)
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2285/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
292550002 MDU6SXNzdWUyOTI1NTAwMDI= 1864 BUG: ds['...'].sel(...).values throws exception for a ds loaded from file seth-p 7441788 closed 0     1 2018-01-29T20:31:52Z 2018-01-30T09:19:09Z 2018-01-30T09:19:09Z CONTRIBUTOR      

```python
import xarray as xr

ds = xr.Dataset(data_vars={'foo': xr.DataArray([1.], dims='abc', coords={'abc': ['a']})})
ds.to_netcdf(path='ds.nc', engine='h5netcdf')
ds1 = xr.open_dataset('ds.nc', engine='h5netcdf')

# ds1['foo'].values  # uncomment to eliminate exception

ds1['foo'].sel(abc=['a']).values  # throws "TypeError: PointSelection __getitem__ only works with bool arrays"
```

Problem description

Calling .sel() on the DataArray before accessing .values leads to TypeError: PointSelection __getitem__ only works with bool arrays. Accessing the DataArray's .values first, before .sel(...).values, eliminates the error.

Expected Output

array([ 1.])

Output of xr.show_versions()

INSTALLED VERSIONS

commit: None python: 3.6.4.final.0 python-bits: 64 OS: Windows OS-release: 10 machine: AMD64 processor: Intel64 Family 6 Model 62 Stepping 4, GenuineIntel byteorder: little LC_ALL: None LANG: None LOCALE: None.None

xarray: 0.10.0 pandas: 0.22.0 numpy: 1.13.3 scipy: 1.0.0 netCDF4: None h5netcdf: 0.5.0 Nio: None bottleneck: 1.2.1 cyordereddict: None dask: 0.16.1 matplotlib: 2.1.2 cartopy: None seaborn: 0.8.1 setuptools: 38.4.0 pip: 9.0.1 conda: 4.3.33 pytest: 3.3.2 IPython: 6.2.1 sphinx: 1.6.6

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1864/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);