issues

6 rows where state = "closed", type = "issue" and user = 7441788 sorted by updated_at descending

Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at, closed_at, author_association, active_lock_reason, draft, pull_request, body, reactions, performed_via_github_app, state_reason, repo, type
id: 879033384 · node_id: MDU6SXNzdWU4NzkwMzMzODQ= · number: 5278
title: DataArray.clip() no longer supports the out argument
user: seth-p (7441788) · state: closed · locked: 0 · comments: 12
created_at: 2021-05-07T13:45:01Z · updated_at: 2023-12-02T05:52:07Z · closed_at: 2023-12-02T05:52:07Z
author_association: CONTRIBUTOR

As of xarray 0.18.0, DataArray.clip() no longer supports the out argument. This is due to #5184. Could you please restore out support?

@max-sixty
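
A minimal sketch of the reported regression (not taken from the issue; assumes any xarray release at or after 0.18.0):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(5.0), dims="x")

# Before xarray 0.18.0 the NumPy-style `out` argument was accepted;
# on 0.18.0+ this raises TypeError because clip() no longer takes `out`
# (the change the issue attributes to #5184).
da.clip(1.0, 3.0, out=da)
```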

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5278/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: not_planned · repo: xarray (13221727) · type: issue
id: 614149170 · node_id: MDU6SXNzdWU2MTQxNDkxNzA= · number: 4044
title: open_mfdataset(paths, combine='nested') with and without concat_dim=None
user: seth-p (7441788) · state: closed · locked: 0 · comments: 2
created_at: 2020-05-07T15:31:07Z · updated_at: 2020-05-07T22:34:43Z · closed_at: 2020-05-07T22:34:43Z
author_association: CONTRIBUTOR

Is there a good reason that open_mfdataset(paths, combine='nested') produces an error rather than behaving like open_mfdataset(paths, combine='nested', concat_dim=None)?
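
For reference, the two calls being compared (a sketch with hypothetical file paths):

```python
import xarray as xr

paths = ["part1.nc", "part2.nc"]  # hypothetical input files

# The call the issue asks about (reported to produce an error):
ds_a = xr.open_mfdataset(paths, combine="nested")

# The explicit form the issue compares it against:
ds_b = xr.open_mfdataset(paths, combine="nested", concat_dim=None)
```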

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4044/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
id: 558204984 · node_id: MDU6SXNzdWU1NTgyMDQ5ODQ= · number: 3736
title: BUG: datetime.date slicing doesn't work with Pandas 1.0.0
user: seth-p (7441788) · state: closed · locked: 0 · comments: 3
created_at: 2020-01-31T15:39:44Z · updated_at: 2020-02-05T21:09:43Z · closed_at: 2020-02-05T21:09:43Z
author_association: CONTRIBUTOR

The following code used to work before I upgraded Pandas to 1.0.0. I would expect [5] to produce the same result as [4]. I don't know if the failure of datetime.date slicing is (a) expected behavior; (b) a Pandas bug; or (c) an xarray bug due to not being updated to reflect an intended change in Pandas.

```python
Python 3.7.6 | packaged by conda-forge | (default, Jan 7 2020, 22:33:48)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.11.1 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import xarray as xr, pandas as pd, datetime as dt

In [2]: x = xr.DataArray([1., 2., 3.], [('foo', pd.date_range('2010-01-01', periods=3))])

In [3]: x
Out[3]:
<xarray.DataArray (foo: 3)>
array([1., 2., 3.])
Coordinates:
  * foo      (foo) datetime64[ns] 2010-01-01 2010-01-02 2010-01-03

In [4]: x.loc['2010-01-02':'2010-01-04']
Out[4]:
<xarray.DataArray (foo: 2)>
array([2., 3.])
Coordinates:
  * foo      (foo) datetime64[ns] 2010-01-02 2010-01-03

In [5]: x.loc[dt.date(2010, 1, 2):dt.date(2010, 1, 4)]

TypeError                                 Traceback (most recent call last)
pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.Int64HashTable.get_item()

TypeError: an integer is required

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)
~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
   2645             try:
-> 2646                 return self._engine.get_loc(key)
   2647             except KeyError:

pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine._date_check_type()

KeyError: datetime.date(2010, 1, 4)

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.Int64HashTable.get_item()

TypeError: an integer is required

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)
~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/datetimes.py in get_loc(self, key, method, tolerance)
    714         try:
--> 715             return Index.get_loc(self, key, method, tolerance)
    716         except (KeyError, ValueError, TypeError):

~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
   2647             except KeyError:
-> 2648                 return self._engine.get_loc(self._maybe_cast_indexer(key))
   2649         indexer = self.get_indexer([key], method=method, tolerance=tolerance)

pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine._date_check_type()

KeyError: datetime.date(2010, 1, 4)

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)
pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.Int64HashTable.get_item()

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.Int64HashTable.get_item()

KeyError: 1262563200000000000

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)
~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
   2645             try:
-> 2646                 return self._engine.get_loc(key)
   2647             except KeyError:

pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

KeyError: Timestamp('2010-01-04 00:00:00')

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)
pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.Int64HashTable.get_item()

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.Int64HashTable.get_item()

KeyError: 1262563200000000000

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)
~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/datetimes.py in get_loc(self, key, method, tolerance)
    727                 stamp = stamp.tz_localize(self.tz)
--> 728             return Index.get_loc(self, stamp, method, tolerance)
    729         except KeyError:

~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
   2647             except KeyError:
-> 2648                 return self._engine.get_loc(self._maybe_cast_indexer(key))
   2649         indexer = self.get_indexer([key], method=method, tolerance=tolerance)

pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

pandas/_libs/index.pyx in pandas._libs.index.DatetimeEngine.get_loc()

KeyError: Timestamp('2010-01-04 00:00:00')

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)
~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/base.py in get_slice_bound(self, label, side, kind)
   4841         try:
-> 4842             slc = self.get_loc(label)
   4843         except KeyError as err:

~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/datetimes.py in get_loc(self, key, method, tolerance)
    729         except KeyError:
--> 730             raise KeyError(key)
    731         except ValueError as e:

KeyError: datetime.date(2010, 1, 4)

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
<ipython-input-7-d06631d74971> in <module>
----> 1 x.loc[dt.date(2010, 1, 2): dt.date(2010, 1, 4)]

~/.conda/envs/build/lib/python3.7/site-packages/xarray/core/dataarray.py in __getitem__(self, key)
    194             labels = indexing.expanded_indexer(key, self.data_array.ndim)
    195             key = dict(zip(self.data_array.dims, labels))
--> 196             return self.data_array.sel(**key)
    197
    198     def __setitem__(self, key, value) -> None:

~/.conda/envs/build/lib/python3.7/site-packages/xarray/core/dataarray.py in sel(self, indexers, method, tolerance, drop, **indexers_kwargs)
   1049             method=method,
   1050             tolerance=tolerance,
-> 1051             **indexers_kwargs,
   1052         )
   1053         return self._from_temp_dataset(ds)

~/.conda/envs/build/lib/python3.7/site-packages/xarray/core/dataset.py in sel(self, indexers, method, tolerance, drop, **indexers_kwargs)
   2012         indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "sel")
   2013         pos_indexers, new_indexes = remap_label_indexers(
-> 2014             self, indexers=indexers, method=method, tolerance=tolerance
   2015         )
   2016         result = self.isel(indexers=pos_indexers, drop=drop)

~/.conda/envs/build/lib/python3.7/site-packages/xarray/core/coordinates.py in remap_label_indexers(obj, indexers, method, tolerance, **indexers_kwargs)
    390
    391     pos_indexers, new_indexes = indexing.remap_label_indexers(
--> 392         obj, v_indexers, method=method, tolerance=tolerance
    393     )
    394     # attach indexer's coordinate to pos_indexers

~/.conda/envs/build/lib/python3.7/site-packages/xarray/core/indexing.py in remap_label_indexers(data_obj, indexers, method, tolerance)
    258         coords_dtype = data_obj.coords[dim].dtype
    259         label = maybe_cast_to_coords_dtype(label, coords_dtype)
--> 260         idxr, new_idx = convert_label_indexer(index, label, dim, method, tolerance)
    261         pos_indexers[dim] = idxr
    262         if new_idx is not None:

~/.conda/envs/build/lib/python3.7/site-packages/xarray/core/indexing.py in convert_label_indexer(index, label, index_name, method, tolerance)
    122             _sanitize_slice_element(label.start),
    123             _sanitize_slice_element(label.stop),
--> 124             _sanitize_slice_element(label.step),
    125         )
    126         if not isinstance(indexer, slice):

~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/datetimes.py in slice_indexer(self, start, end, step, kind)
    806
    807         try:
--> 808             return Index.slice_indexer(self, start, end, step, kind=kind)
    809         except KeyError:
    810             # For historical reasons DatetimeIndex by default supports

~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/base.py in slice_indexer(self, start, end, step, kind)
   4711         slice(1, 3)
   4712         """
-> 4713         start_slice, end_slice = self.slice_locs(start, end, step=step, kind=kind)
   4714
   4715         # return a slice

~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/base.py in slice_locs(self, start, end, step, kind)
   4930         end_slice = None
   4931         if end is not None:
-> 4932             end_slice = self.get_slice_bound(end, "right", kind)
   4933         if end_slice is None:
   4934             end_slice = len(self)

~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/base.py in get_slice_bound(self, label, side, kind)
   4843         except KeyError as err:
   4844             try:
-> 4845                 return self._searchsorted_monotonic(label, side)
   4846             except ValueError:
   4847                 # raise the original KeyError

~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/base.py in _searchsorted_monotonic(self, label, side)
   4794     def _searchsorted_monotonic(self, label, side="left"):
   4795         if self.is_monotonic_increasing:
-> 4796             return self.searchsorted(label, side=side)
   4797         elif self.is_monotonic_decreasing:
   4798             # np.searchsorted expects ascending sort order, have to reverse

~/.conda/envs/build/lib/python3.7/site-packages/pandas/core/indexes/datetimes.py in searchsorted(self, value, side, sorter)
    851         elif not isinstance(value, DatetimeArray):
    852             raise TypeError(
--> 853                 "searchsorted requires compatible dtype or scalar, "
    854                 f"not {type(value).__name__}"
    855             )

TypeError: searchsorted requires compatible dtype or scalar, not date
```
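
A possible workaround, not taken from the issue thread, is to convert the datetime.date bounds to pandas Timestamps before slicing:

```python
import datetime as dt
import pandas as pd
import xarray as xr

x = xr.DataArray([1., 2., 3.], [('foo', pd.date_range('2010-01-01', periods=3))])

# Casting the date objects to Timestamps sidesteps the searchsorted TypeError.
x.loc[pd.Timestamp(dt.date(2010, 1, 2)):pd.Timestamp(dt.date(2010, 1, 4))]
```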

```
commit: None
python: 3.7.6 | packaged by conda-forge | (default, Jan 7 2020, 22:33:48) [GCC 7.3.0]
python-bits: 64
OS: Linux
OS-release: 3.10.0-693.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
libhdf5: 1.10.5
libnetcdf: 4.7.3

xarray: 0.14.1
pandas: 1.0.0
numpy: 1.17.5
scipy: 1.4.1
netCDF4: 1.5.3
pydap: None
h5netcdf: 0.7.4
h5py: 2.10.0
Nio: None
zarr: None
cftime: 1.0.4.2
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.1
dask: 2.10.1
distributed: 2.10.0
matplotlib: 3.1.2
cartopy: None
seaborn: 0.9.0
numbagg: installed
setuptools: 45.1.0.post20200119
pip: 20.0.2
conda: 4.8.2
pytest: None
IPython: 7.11.1
sphinx: None
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3736/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
id: 468157549 · node_id: MDU6SXNzdWU0NjgxNTc1NDk= · number: 3133
title: Add StringAccessor.format()
user: seth-p (7441788) · state: closed · locked: 0 · comments: 0
created_at: 2019-07-15T14:23:10Z · updated_at: 2019-07-15T19:49:09Z · closed_at: 2019-07-15T19:49:09Z
author_association: CONTRIBUTOR

Actually, on further thought I'm not sure how this should work, so I'm going to close this.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3133/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
id: 197709208 · node_id: MDU6SXNzdWUxOTc3MDkyMDg= · number: 1186
title: Including last coordinate values when displaying coordinates
user: seth-p (7441788) · state: closed · locked: 0 · comments: 2
created_at: 2016-12-27T14:18:19Z · updated_at: 2018-07-20T16:04:51Z · closed_at: 2018-07-20T16:04:51Z
author_association: CONTRIBUTOR

I first posted this to https://groups.google.com/forum/#!topic/xarray/wjPtXMr91sg .

I'm afraid I'm not set up to submit a PR, but I think something like the following should work once one has created a last_n_items() similar to the existing first_n_items().

Also, I wonder if it would make sense to change the minimum number of items displayed from 1 to min(2, items_ndarray.size).

```python
def format_array_flat(items_ndarray, max_width):
    """Return a formatted string for as many items in the flattened version of
    items_ndarray that will fit within max_width characters
    """
    # every item will take up at least two characters, but we always want to
    # print at least one item
    max_possibly_relevant = max(int(np.ceil(max_width / 2.0)), 1)
    relevant_items = sum(
        ([x, y] for (x, y) in
         zip(first_n_items(items_ndarray, (max_possibly_relevant + 1) // 2),
             reversed(last_n_items(items_ndarray, max_possibly_relevant // 2)))),
        [])
    pprint_items = format_items(relevant_items)

    cum_len = np.cumsum([len(s) + 1 for s in pprint_items]) - 1
    if (max_possibly_relevant < items_ndarray.size or
            (cum_len > max_width).any()):
        padding = u' ... '
        count = max(np.argmax((cum_len + len(padding)) > max_width), 1)
    else:
        count = items_ndarray.size
        padding = u'' if (count <= 1) else u' '

    pprint_str = u' '.join(np.take(pprint_items, range(0, count, 2))) + padding + \
                 u' '.join(np.take(pprint_items, range(count - (count % 2) - 1, 0, -2)))
    return pprint_str
```
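
The snippet assumes a last_n_items() helper mirroring the existing first_n_items(); a minimal sketch of such a helper (hypothetical, not xarray's actual implementation) might look like:

```python
import numpy as np

def last_n_items(array, n_desired):
    """Return the last n_desired items of array, flattened (sketch only)."""
    if n_desired <= 0:
        return np.asarray([], dtype=np.asarray(array).dtype)
    # Flatten, then take the trailing n_desired elements.
    return np.ravel(np.asarray(array))[-n_desired:]
```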

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1186/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
id: 292550002 · node_id: MDU6SXNzdWUyOTI1NTAwMDI= · number: 1864
title: BUG: ds['...'].sel(...).values throws exception for a ds loaded from file
user: seth-p (7441788) · state: closed · locked: 0 · comments: 1
created_at: 2018-01-29T20:31:52Z · updated_at: 2018-01-30T09:19:09Z · closed_at: 2018-01-30T09:19:09Z
author_association: CONTRIBUTOR

```python
import xarray as xr

ds = xr.Dataset(data_vars={'foo': xr.DataArray([1.], dims='abc', coords={'abc': ['a']})})
ds.to_netcdf(path='ds.nc', engine='h5netcdf')
ds1 = xr.open_dataset('ds.nc', engine='h5netcdf')

# ds1['foo'].values  # uncomment to eliminate exception

ds1['foo'].sel(abc=['a']).values  # throws "TypeError: PointSelection __getitem__ only works with bool arrays"
```

Problem description

Calling .sel(abc=['a']).values on the freshly loaded DataArray raises TypeError: PointSelection __getitem__ only works with bool arrays. Accessing the DataArray's .values first, before .sel(...).values, eliminates the error.

Expected Output

array([ 1.])

Output of xr.show_versions()

INSTALLED VERSIONS

commit: None
python: 3.6.4.final.0
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 62 Stepping 4, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None

xarray: 0.10.0
pandas: 0.22.0
numpy: 1.13.3
scipy: 1.0.0
netCDF4: None
h5netcdf: 0.5.0
Nio: None
bottleneck: 1.2.1
cyordereddict: None
dask: 0.16.1
matplotlib: 2.1.2
cartopy: None
seaborn: 0.8.1
setuptools: 38.4.0
pip: 9.0.1
conda: 4.3.33
pytest: 3.3.2
IPython: 6.2.1
sphinx: 1.6.6

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1864/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
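
For illustration, the filtered view above (closed issues of type "issue" by user 7441788, newest update first) could be reproduced against a local copy of the underlying SQLite database; the file name github.db below is hypothetical:

```python
import sqlite3

# Hypothetical local copy of the database backing this table.
conn = sqlite3.connect("github.db")

rows = conn.execute(
    """
    select id, number, title, state, updated_at
    from issues
    where state = 'closed'
      and type = 'issue'
      and [user] = 7441788
    order by updated_at desc
    """
).fetchall()

for issue_id, number, title, state, updated_at in rows:
    print(number, title, updated_at)
```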