issues
4 rows where "closed_at" is on date 2020-08-23 and repo = 13221727 sorted by updated_at descending
| id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 684070245 | MDExOlB1bGxSZXF1ZXN0NDcyMDQxNjA2 | 4368 | pyupgrade | max-sixty 5635139 | closed | 0 | | | 0 | 2020-08-22T21:29:03Z | 2020-08-23T22:29:52Z | 2020-08-23T21:09:52Z | MEMBER | | 0 | pydata/xarray/pulls/4368 | | {"url": "https://api.github.com/repos/pydata/xarray/issues/4368/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | pull |
| 684248425 | MDU6SXNzdWU2ODQyNDg0MjU= | 4370 | Not able to slice dataset using its own coordinate value, after upgrade to pandas 1.1.0 | russ-schumacher 18426375 | closed | 0 | | | 4 | 2020-08-23T20:10:56Z | 2020-08-23T20:26:58Z | 2020-08-23T20:26:57Z | NONE | | | | I seem to be having the same issue that was reported here: https://github.com/pydata/xarray/issues/1932, after upgrading pandas to 1.1.0. The error does not arise with pandas 1.0.3. Example: … Result is: `KeyError Traceback (most recent call last) <ipython-input-12-31afccd16cef> in <module> ----> 1 da.sel(time=da.time[0]) 2 #da.time ~/miniconda3/envs/ats641/lib/python3.7/site-packages/xarray/core/dataarray.py in sel(self, indexers, method, tolerance, drop, **indexers_kwargs) 1152 method=method, 1153 tolerance=tolerance, -> 1154 **indexers_kwargs, 1155 ) 1156 return self._from_temp_dataset(ds) ~/miniconda3/envs/ats641/lib/python3.7/site-packages/xarray/core/dataset.py in sel(self, indexers, method, tolerance, drop, **indexers_kwargs) 2100 indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "sel") 2101 pos_indexers, new_indexes = remap_label_indexers( -> 2102 self, indexers=indexers, method=method, tolerance=tolerance 2103 ) 2104 result = self.isel(indexers=pos_indexers, drop=drop) ~/miniconda3/envs/ats641/lib/python3.7/site-packages/xarray/core/coordinates.py in remap_label_indexers(obj, indexers, method, tolerance, **indexers_kwargs) 395 396 pos_indexers, new_indexes = indexing.remap_label_indexers( --> 397 obj, v_indexers, method=method, tolerance=tolerance 398 ) 399 # attach indexer's coordinate to pos_indexers ~/miniconda3/envs/ats641/lib/python3.7/site-packages/xarray/core/indexing.py in remap_label_indexers(data_obj, indexers, method, tolerance) 268 coords_dtype = data_obj.coords[dim].dtype 269 label = maybe_cast_to_coords_dtype(label, coords_dtype) --> 270 idxr, new_idx = convert_label_indexer(index, label, dim, method, tolerance) 271 pos_indexers[dim] = idxr 272 if new_idx is not None: ~/miniconda3/envs/ats641/lib/python3.7/site-packages/xarray/core/indexing.py in convert_label_indexer(index, label, index_name, method, tolerance) 188 else: 189 indexer = index.get_loc( --> 190 label.item(), method=method, tolerance=tolerance 191 ) 192 elif label.dtype.kind == "b": ~/miniconda3/envs/ats641/lib/python3.7/site-packages/pandas/core/indexes/datetimes.py in get_loc(self, key, method, tolerance) 620 else: 621 # unrecognized type --> 622 raise KeyError(key) 623 624 try: KeyError: 1356998400000000000` Output of xr.show_versions() is: `INSTALLED VERSIONS commit: None python: 3.7.6 \| packaged by conda-forge \| (default, Jan 7 2020, 22:05:27) [Clang 9.0.1] python-bits: 64 OS: Darwin OS-release: 19.6.0 machine: x86_64 processor: i386 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 libhdf5: 1.10.5 libnetcdf: 4.7.3 xarray: 0.16.0 pandas: 1.1.0 numpy: 1.18.1 scipy: 1.5.2 netCDF4: 1.5.3 pydap: None h5netcdf: None h5py: None Nio: 1.5.5 zarr: None cftime: 1.2.1 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: 0.9.8.3 iris: None bottleneck: None dask: 2.20.0 distributed: None matplotlib: 3.1.3 cartopy: 0.17.0 seaborn: 0.10.1 numbagg: None pint: 0.15 setuptools: 49.6.0.post20200814 pip: 20.0.2 conda: None pytest: None IPython: 7.17.0 sphinx: None` With pandas 1.0.3, `da.sel(time=da.time[0])` works correctly and returns a slice. Thanks for any help you can offer! *(see the reproduction sketch after the table)* | {"url": "https://api.github.com/repos/pydata/xarray/issues/4370/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | completed | xarray 13221727 | issue |
| 406971782 | MDU6SXNzdWU0MDY5NzE3ODI= | 2747 | 1000x performance regression in transpose from disk between libnetcdf 4.6.1 and 4.6.2 | coroa 2552981 | closed | 0 | | | 3 | 2019-02-05T20:59:53Z | 2020-08-23T18:53:41Z | 2020-08-23T18:53:41Z | CONTRIBUTOR | | | | Having generated `test.nc` via `a = np.random.random((1000, 100)); ds = xr.Dataset({'foo': xr.DataArray(a, [('x', np.arange(1000)), ('y', np.arange(100))])}); ds.to_netcdf('test.nc')`, I am seeing a huge performance regression from libnetcdf=4.6.1 to 4.6.2. Loading into memory mitigates the regression (on …). Output of … *(the rest of the report is truncated; see the workaround sketch after the table)* | {"url": "https://api.github.com/repos/pydata/xarray/issues/2747/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | completed | xarray 13221727 | issue |
| 538809911 | MDU6SXNzdWU1Mzg4MDk5MTE= | 3632 | applying ufunc over lon and lat | mada0007 44284270 | closed | 0 | | | 5 | 2019-12-17T03:25:44Z | 2020-08-23T17:37:29Z | 2020-08-23T17:37:29Z | NONE | | | | I am trying to use `apply_ufunc` to speed up my code. Currently I loop over lon and lat and it takes ages. After long reading I came up with the following, though I am still not sure where I am going wrong; suggestions and ideas to get me going would be very helpful, thanks. `def get_grps(s, thresh=-1, Nmin=3): """Nmin : int > 0. Min number of consecutive values below threshold.""" s = pd.Series(s) m = np.logical_and.reduce([s.shift(-i).le(thresh) for i in range(Nmin)]) if Nmin > 1: m = pd.Series(m, index=s.index).replace({False: np.NaN}).ffill(limit=Nmin-1).fillna(False) else: m = pd.Series(m, index=s.index)` … `def consec_events(obj): …` … `results = consec_events(spi)` (ERROR IS HERE). I am relatively new to this and have been reading quite a lot; I would appreciate being corrected on how to properly apply this over the time dimension of my data and get a resulting 2D array for each lon and lat. *(see the apply_ufunc sketch after the table)* | {"url": "https://api.github.com/repos/pydata/xarray/issues/3632/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | completed | xarray 13221727 | issue |
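A minimal reproduction sketch for #4370, following the table above. The reporter's dataset is not included, so the array contents here are assumed; the time axis starts at 2013-01-01 because the KeyError value in the traceback, 1356998400000000000, is that date's epoch timestamp in nanoseconds.

```python
# Hypothetical minimal reproduction of issue 4370. The data values are
# assumed; only the datetime index matters for triggering the error.
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range("2013-01-01", periods=10)
da = xr.DataArray(np.arange(10), coords=[("time", times)], dims="time")

# Works on pandas 1.0.3; reported to raise
# KeyError: 1356998400000000000 on pandas 1.1.0.
print(da.sel(time=da.time[0]))
```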
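For #2747, the body notes that loading into memory mitigates the regression, which points at transposed (strided) reads from disk as the slow path. A sketch of that workaround, reusing the file-generation snippet from the report; the original timing harness is truncated above, so none is shown here.

```python
# Sketch of the mitigation described in issue 2747: read the variable
# into memory first, then transpose, so libnetcdf never has to serve a
# strided read pattern from disk.
import numpy as np
import xarray as xr

a = np.random.random((1000, 100))
ds = xr.Dataset({"foo": xr.DataArray(a, [("x", np.arange(1000)), ("y", np.arange(100))])})
ds.to_netcdf("test.nc")

with xr.open_dataset("test.nc") as ds_disk:
    # Slow path on libnetcdf 4.6.2: ds_disk.foo.transpose().values
    # (a lazy, strided read from disk). Workaround: load contiguously,
    # then transpose in memory.
    foo_t = ds_disk.foo.load().transpose()
```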
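For #3632, the question is how to map a 1-D time-series function over every (lat, lon) point and get a 2-D result. A minimal sketch using `xr.apply_ufunc`; `count_events` and the array shape are hypothetical stand-ins, since the reporter's `consec_events` body is truncated above.

```python
# Sketch: map a 1-D function over the time dimension at every grid cell.
import numpy as np
import xarray as xr

def count_events(arr_1d, thresh=-1):
    # Placeholder for the reporter's consecutive-event logic: reduces a
    # 1-D time series to a single value.
    return (arr_1d <= thresh).sum()

spi = xr.DataArray(np.random.randn(120, 4, 5), dims=("time", "lat", "lon"))

# input_core_dims moves "time" to the last axis; vectorize=True loops the
# 1-D function over the remaining (lat, lon) points.
result = xr.apply_ufunc(
    count_events,
    spi,
    input_core_dims=[["time"]],
    vectorize=True,
)
print(result.dims)  # ('lat', 'lon') -- one value per grid cell
```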
CREATE TABLE [issues] (
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[number] INTEGER,
[title] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[state] TEXT,
[locked] INTEGER,
[assignee] INTEGER REFERENCES [users]([id]),
[milestone] INTEGER REFERENCES [milestones]([id]),
[comments] INTEGER,
[created_at] TEXT,
[updated_at] TEXT,
[closed_at] TEXT,
[author_association] TEXT,
[active_lock_reason] TEXT,
[draft] INTEGER,
[pull_request] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[state_reason] TEXT,
[repo] INTEGER REFERENCES [repos]([id]),
[type] TEXT
);
CREATE INDEX [idx_issues_repo]
ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
ON [issues] ([user]);
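The filter at the top of this page maps directly onto this schema. A sketch of the equivalent query from Python's sqlite3; the filename github.db is an assumption, as the page does not name the database file.

```python
# Reproduce this page's view: issues closed on 2020-08-23 in repo
# 13221727 (xarray), newest update first. "github.db" is hypothetical.
import sqlite3

conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, number, title, type, updated_at
    FROM issues
    WHERE date(closed_at) = '2020-08-23'
      AND repo = 13221727
    ORDER BY updated_at DESC
    """
).fetchall()
for r in rows:
    print(r)
```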