issues
4 rows where repo = 13221727, state = "closed" and user = 3169620, sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
281897468 | MDU6SXNzdWUyODE4OTc0Njg= | 1778 | ValueError on empty selection with dask based DataArrays | duncanwp 3169620 | closed | 0 |  | 0.11.1 3801867 | 2 | 2017-12-13T21:09:42Z | 2019-07-12T13:41:08Z | 2019-07-12T13:41:08Z | CONTRIBUTOR |  |  |  | Code Sample, a copy-pastable example if possible: ```python import xarray as xr import numpy as np da = xr.DataArray(np.random.rand(15), dims=['latitude'], coords={'latitude':np.linspace(90, -90, 15)}) # This gives an empty latitude slice print(da.sel(latitude=slice(20, 60))) # After converting the DataArray to dask... da=da.chunk() # ...this throws a ValueError due to 'conflicting sizes' print(da.sel(latitude=slice(20, 60))) ``` Problem description: I would expect the dask based DataArray to return an empty slice just as the numpy one does. Although arguably it would be nicer if both returned the latitude values between 20 and 60 - regardless of the direction of the coordinate. Perhaps the sel method could check whether the coordinate is increasing or decreasing? Output of | { "url": "https://api.github.com/repos/pydata/xarray/issues/1778/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | completed | xarray 13221727 | issue |
293445250 | MDU6SXNzdWUyOTM0NDUyNTA= | 1877 | TypeError for NetCDF float16 output | duncanwp 3169620 | closed | 0 |  |  | 4 | 2018-02-01T08:39:03Z | 2018-02-26T10:46:47Z | 2018-02-26T10:46:47Z | CONTRIBUTOR |  |  |  | ```python ds = xr.Dataset({"test": np.arange(0, 5, dtype='float16')}) ds.to_netcdf('test.nc') ``` This fails because the float16 type doesn't exist for NetCDF files, throwing an It might be nice if the xarray netCDF engine promoted this to float32 instead. Output of | { "url": "https://api.github.com/repos/pydata/xarray/issues/1877/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | completed | xarray 13221727 | issue |
278286073 | MDExOlB1bGxSZXF1ZXN0MTU1NzMxMjI3 | 1750 | xarray to and from Iris | duncanwp 3169620 | closed | 0 |  |  | 16 | 2017-11-30T22:04:46Z | 2018-01-03T10:19:16Z | 2017-12-20T15:14:17Z | CONTRIBUTOR |  | 0 | pydata/xarray/pulls/1750 |  | { "url": "https://api.github.com/repos/pydata/xarray/issues/1750/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  |  | xarray 13221727 | pull |
273459295 | MDU6SXNzdWUyNzM0NTkyOTU= | 1715 | Error accessing isnull() and notnull() on Dataset | duncanwp 3169620 | closed | 0 |  |  | 3 | 2017-11-13T14:59:30Z | 2017-11-13T16:07:01Z | 2017-11-13T16:07:01Z | CONTRIBUTOR |  |  |  | Code Sample, a copy-pastable example if possible: ```python # Create basic Dataset import numpy as np import pandas as pd import xarray as xr temp = 15 + 8 * np.random.randn(2, 2, 3) precip = 10 * np.random.rand(2, 2, 3) lon = [[-99.83, -99.32], [-99.79, -99.23]] lat = [[42.25, 42.21], [42.63, 42.59]] # for real use cases, its good practice to supply array attributes such as units, but we won't bother here for the sake of brevity ds = xr.Dataset({'temperature': (['x', 'y', 'time'], temp), 'precipitation': (['x', 'y', 'time'], precip)}, coords={'lon': (['x', 'y'], lon), 'lat': (['x', 'y'], lat), 'time': pd.date_range('2014-09-06', periods=3), 'reference_time': pd.Timestamp('2014-09-05')}) # Both of these throw attribute errors ds.notnull() ds.isnull() ``` Problem description: The above methods throw Expected Output: Dataset of boolean DataArrays indicating null-ness. Output of | { "url": "https://api.github.com/repos/pydata/xarray/issues/1715/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | completed | xarray 13221727 | issue |
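The code samples embedded in the body cells above are hard to read inline, so runnable versions follow. This first sketch reassembles the snippet from issue 1778 (empty selection on a dask-backed DataArray), keeping the body's own comments; it assumes dask is installed so that `.chunk()` works.

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.rand(15), dims=['latitude'],
                  coords={'latitude': np.linspace(90, -90, 15)})

# This gives an empty latitude slice
print(da.sel(latitude=slice(20, 60)))

# After converting the DataArray to dask...
da = da.chunk()

# ...this threw a ValueError due to 'conflicting sizes' at the time of the report
print(da.sel(latitude=slice(20, 60)))
```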
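Next, the snippet from issue 1877 (writing a float16 variable to NetCDF). The `try`/`except` wrapper and the final `astype('float32')` line are not in the issue body; they are a hypothetical illustration of the float32 promotion the reporter suggests.

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"test": np.arange(0, 5, dtype='float16')})

try:
    # The NetCDF format has no float16 type, so this failed at the time of the
    # report (a TypeError, per the issue title); newer releases may raise a
    # different error or handle it outright.
    ds.to_netcdf('test.nc')
except Exception as err:
    print(err)

# Hypothetical workaround, not taken from the issue body:
# promote to float32 before writing.
ds.astype('float32').to_netcdf('test_float32.nc')
```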
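Finally, the snippet from issue 1715 (`isnull()`/`notnull()` on a Dataset), reassembled from the body text with its inline comments restored.

```python
import numpy as np
import pandas as pd
import xarray as xr

# Create basic Dataset
temp = 15 + 8 * np.random.randn(2, 2, 3)
precip = 10 * np.random.rand(2, 2, 3)
lon = [[-99.83, -99.32], [-99.79, -99.23]]
lat = [[42.25, 42.21], [42.63, 42.59]]

# For real use cases it's good practice to supply array attributes such as
# units, but we won't bother here for the sake of brevity.
ds = xr.Dataset({'temperature': (['x', 'y', 'time'], temp),
                 'precipitation': (['x', 'y', 'time'], precip)},
                coords={'lon': (['x', 'y'], lon),
                        'lat': (['x', 'y'], lat),
                        'time': pd.date_range('2014-09-06', periods=3),
                        'reference_time': pd.Timestamp('2014-09-05')})

# Both of these threw attribute errors at the time of the report; the expected
# output is a Dataset of boolean DataArrays indicating null-ness.
ds.notnull()
ds.isnull()
```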
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
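For context, a minimal sketch of how the query described at the top of this page could be run against the schema above using Python's sqlite3 module; the database filename `github.db` is an assumption, not something stated on this page.

```python
import sqlite3

# Hypothetical database file; this page does not name the SQLite file.
conn = sqlite3.connect("github.db")

# The filter and ordering described at the top of the page:
# repo = 13221727, state = 'closed', user = 3169620, newest update first.
rows = conn.execute(
    """
    SELECT number, title, state, comments, updated_at
    FROM issues
    WHERE repo = ? AND state = ? AND user = ?
    ORDER BY updated_at DESC
    """,
    (13221727, "closed", 3169620),
).fetchall()

for number, title, state, comments, updated_at in rows:
    print(f"#{number} {title} ({state}, {comments} comments, updated {updated_at})")

conn.close()
```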