issues
11 rows where user = 13837821 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
689384366 | MDU6SXNzdWU2ODkzODQzNjY= | 4393 | Dimension attrs lost when creating new variable with that dimension | dnowacki-usgs 13837821 | closed | 0 | 2 | 2020-08-31T17:52:01Z | 2020-09-10T17:37:07Z | 2020-09-10T17:37:07Z | CONTRIBUTOR |

**What happened:** When creating a new variable based on an existing dimension, the attrs of that dimension coordinate are lost.

**What you expected to happen:** The attrs should be preserved.

**Minimal Complete Verifiable Example:**

```python
import xarray as xr

ds = xr.Dataset()
ds['x'] = xr.DataArray(range(10), dims='x')
ds['y'] = xr.DataArray(range(len(ds['x'])), dims='x')
ds['x'].attrs['foo'] = 'bar'
print(ds['x'])  # attrs of ds['x'] are preserved

print('\n****\n')

ds = xr.Dataset()
ds['x'] = xr.DataArray(range(10), dims='x')
ds['x'].attrs['foo'] = 'bar'
ds['y'] = xr.DataArray(range(len(ds['x'])), dims='x')
print(ds['x'])  # attrs of ds['x'] are lost
```

Output of the above code:

```
<xarray.DataArray 'x' (x: 10)>
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
Coordinates:
  * x        (x) int64 0 1 2 3 4 5 6 7 8 9
Attributes:
    foo:     bar

<xarray.DataArray 'x' (x: 10)>
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
Coordinates:
  * x        (x) int64 0 1 2 3 4 5 6 7 8 9
```

**Environment:** Output of `xr.show_versions()`: commit: None; libhdf5: 1.10.5; libnetcdf: 4.7.4; xarray: 0.16.0; pandas: 1.0.3; numpy: 1.19.1; scipy: 1.3.1; netCDF4: 1.5.3; pydap: None; h5netcdf: 0.8.0; h5py: 2.10.0; Nio: None; zarr: None; cftime: 1.2.1; nc_time_axis: None; PseudoNetCDF: None; rasterio: 1.1.3; cfgrib: None; iris: None; bottleneck: 1.3.2; dask: 2.18.1; distributed: 2.25.0; matplotlib: 3.2.1; cartopy: 0.18.0; seaborn: 0.10.0; numbagg: None; pint: 0.15; setuptools: 49.6.0.post20200814; pip: 19.2.2; conda: 4.8.4; pytest: 5.4.1; IPython: 7.14.0; sphinx: None |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4393/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
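The ordering sensitivity in the MCVE suggests a workaround on affected versions: snapshot the coordinate's attrs and re-attach them after adding the new variable. A minimal sketch; the re-assignment pattern is an assumption, not taken from the issue thread:

```python
import xarray as xr

ds = xr.Dataset()
ds['x'] = xr.DataArray(range(10), dims='x')
ds['x'].attrs['foo'] = 'bar'

# Snapshot the coordinate attrs, add the variable that would clobber
# them (per the bug), then re-attach the snapshot.
saved = dict(ds['x'].attrs)
ds['y'] = xr.DataArray(range(len(ds['x'])), dims='x')
ds['x'].attrs = saved

print(ds['x'].attrs)  # {'foo': 'bar'}
```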
673757961 | MDExOlB1bGxSZXF1ZXN0NDYzNTY3ODM2 | 4314 | Recreate @gajomi's #2070 to keep attrs when calling astype() | dnowacki-usgs 13837821 | closed | 0 | 33 | 2020-08-05T18:30:58Z | 2020-08-19T20:34:35Z | 2020-08-19T20:34:35Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/4314 |

This is a do-over of @gajomi's #2070, with minimal additional tests, to preserve attrs on Datasets as well as DataArrays. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
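For context, a sketch of the behavior this PR targets. Whether attrs survive by default or only with an explicit `keep_attrs=True` depends on the xarray version, so treat the exact keyword handling as an assumption:

```python
import xarray as xr

da = xr.DataArray([1, 2, 3], dims='x', attrs={'units': 'm'})

# Before this change, casting dtypes silently dropped attrs;
# with it, they survive the cast.
cast = da.astype('float64')
print(cast.dtype)  # float64
print(cast.attrs)  # expected after this PR: {'units': 'm'}
```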
355698213 | MDU6SXNzdWUzNTU2OTgyMTM= | 2392 | Improving interpolate_na()'s limit argument | dnowacki-usgs 13837821 | closed | 0 | 0 | 2018-08-30T18:16:07Z | 2019-11-15T14:53:17Z | 2019-11-15T14:53:17Z | CONTRIBUTOR |

I've been working with some time-series data with occasional nans peppered throughout. I want to interpolate small gaps of nans (say, when there is a single isolated nan or perhaps a block of two) but leave larger blocks as nans. That is, it's not appropriate to fill large gaps, but it is acceptable to do so for small gaps. I was hoping `interpolate_na()`'s `limit` argument would do this, but it fills up to N consecutive nans in every gap, regardless of the gap's total size. I'm not able to attempt tackling this right now, but I guess I wanted to put in a feature request for an additional argument to `interpolate_na()` that would only fill gaps up to a given size. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2392/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
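The requested behavior later appeared in xarray as `interpolate_na`'s `max_gap` argument. A sketch contrasting it with `limit`; availability depends on the xarray version, and gap length is measured along the coordinate, so treat the exact numbers as an assumption:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    [0.0, np.nan, 2.0, np.nan, np.nan, np.nan, 6.0],
    dims='time',
    coords={'time': np.arange(7)},
)

# limit=1 fills one NaN in every gap, including the large one.
print(da.interpolate_na(dim='time', limit=1).values)

# max_gap=2 fills only the isolated NaN (gap length 2 measured along
# the coordinate) and leaves the three-NaN block untouched.
print(da.interpolate_na(dim='time', max_gap=2).values)
```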
507540721 | MDExOlB1bGxSZXF1ZXN0MzI4NTEzMDQw | 3405 | Fix and add test for groupby_bins() isnan TypeError. | dnowacki-usgs 13837821 | closed | 0 | 1 | 2019-10-15T23:59:57Z | 2019-10-17T21:34:40Z | 2019-10-17T21:13:45Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/3405 |

Testing could be improved, but as-is it would prevent the same issue from happening again. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3405/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
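The crash comes from calling `np.isnan` on datetime64 bin edges; one dtype-agnostic way to write the guard is `pandas.isnull`, which accepts datetime-likes and numerics alike. A sketch of the idea, not necessarily the exact code merged:

```python
import pandas as pd

bins = pd.date_range('2010-08-01', '2010-08-15', freq='24.8H')

# np.isnan(bins) raises TypeError for datetime64 input (the crash in
# #3404); pd.isnull works for both numeric and datetime bin edges.
if pd.isnull(bins).all():
    raise ValueError('All bin edges are NaN.')
print('bin edges OK:', len(bins), 'edges')
```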
507524966 | MDU6SXNzdWU1MDc1MjQ5NjY= | 3404 | groupby_bins raises ufunc 'isnan' error on 0.14.0 | dnowacki-usgs 13837821 | closed | 0 | 1 | 2019-10-15T23:02:34Z | 2019-10-17T21:13:45Z | 2019-10-17T21:13:45Z | CONTRIBUTOR |

I recently upgraded to xarray 0.14.0. When running code that used to work in 0.13, I get a `TypeError` from `groupby_bins`.

**MCVE Code Sample**

```python
import xarray as xr
import pandas as pd
import numpy as np

ts = pd.date_range(start='2010-08-01', end='2010-08-15', freq='24.8H')
ds = xr.Dataset()
ds['time'] = xr.DataArray(pd.date_range('2010-08-01', '2010-08-15', freq='15min'), dims='time')
ds['val'] = xr.DataArray(np.random.rand(*ds['time'].shape), dims='time')
ds.groupby_bins('time', ts)  # error thrown here
```

Full error details below.

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-35-43742bae2c94> in <module>
      9 ds['val'] = xr.DataArray(np.random.rand(*ds['time'].shape), dims='time')
     10
---> 11 ds.groupby_bins('time', ts)

~/miniconda3/lib/python3.7/site-packages/xarray/core/common.py in groupby_bins(self, group, bins, right, labels, precision, include_lowest, squeeze, restore_coord_dims)
    727                 "labels": labels,
    728                 "precision": precision,
--> 729                 "include_lowest": include_lowest,
    730             },
    731         )

~/miniconda3/lib/python3.7/site-packages/xarray/core/groupby.py in __init__(self, obj, group, squeeze, grouper, bins, restore_coord_dims, cut_kwargs)
    322
    323         if bins is not None:
--> 324             if np.isnan(bins).all():
    325                 raise ValueError("All bin edges are NaN.")
    326             binned = pd.cut(group.values, bins, **cut_kwargs)

TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
```

Output of `xr.show_versions()` |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3404/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
436999500 | MDExOlB1bGxSZXF1ZXN0MjczMzY4ODcy | 2917 | Implement load_dataset() and load_dataarray() | dnowacki-usgs 13837821 | closed | 0 | 10 | 2019-04-25T03:59:19Z | 2019-06-06T12:43:37Z | 2019-05-16T15:28:30Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/2917 |

Implement `load_dataset()` and `load_dataarray()`.

- BUG: Fixes #2887 by adding @shoyer's solution for `load_dataset` and `load_dataarray`, wrappers around `open_dataset` and `open_dataarray` that open, load, and close the file and return the Dataset/DataArray.
- TST: Add tests for sequentially opening and writing to files using the new functions.
- DOC: Add to whats-new.rst; also a tiny change to the `open_dataset` docstring. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2917/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
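The description pins down the semantics: open, eagerly load, close, return. A minimal sketch of that wrapper pattern; the real implementations live in xarray itself, so this is illustrative only:

```python
import xarray as xr

def load_dataset(filename, **kwargs):
    """Open a netCDF file, load it fully into memory, and close it.

    Mirrors the open/load/close contract described in this PR;
    xarray ships the real xr.load_dataset / xr.load_dataarray.
    """
    with xr.open_dataset(filename, **kwargs) as ds:
        # .load() realizes any lazy arrays before the file handle
        # closes, so the returned Dataset is safe to use afterwards.
        return ds.load()
```

Because nothing stays lazily tied to the file handle, a script can immediately write back to the same path, which is the sequential open-then-write case the tests cover.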
434599855 | MDExOlB1bGxSZXF1ZXN0MjcxNTU3MDE5 | 2906 | Partial fix for #2841 to improve formatting. | dnowacki-usgs 13837821 | closed | 0 | 7 | 2019-04-18T05:45:56Z | 2019-04-19T16:53:50Z | 2019-04-19T16:53:50Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/2906 |

Updates formatting to use `.format()` instead of the `%` operator. Changed all instances of `%` to `.format()` and added a test for using a tuple as a key, which errored with the `%` operator. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2906/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
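Why a tuple key errored: the `%` operator unpacks a tuple right-hand side as multiple arguments, while `str.format` treats it as a single value. The message text below is illustrative, not quoted from xarray:

```python
key = (1, 2)

# '%'-formatting unpacks the tuple, so a single '%s' placeholder raises
# TypeError: not all arguments converted during string formatting.
try:
    msg = 'no index found for key %s' % key
except TypeError as e:
    print('%-formatting failed:', e)

# .format() treats the tuple as one value.
print('no index found for key {}'.format(key))  # no index found for key (1, 2)
```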
432908313 | MDExOlB1bGxSZXF1ZXN0MjcwMjU4NjM3 | 2892 | Return correct count for scalar datetime64 arrays | dnowacki-usgs 13837821 | closed | 0 | 1 | 2019-04-13T22:38:53Z | 2019-04-14T00:52:04Z | 2019-04-14T00:51:59Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/2892 |

- BUG: Fix #2770 by changing `~` to `np.logical_not()`.
- TST: Add test for a scalar datetime64 value.
- DOC: Add to whats-new.rst. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2892/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
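The one-line rationale: for a plain Python `bool`, `~` is integer bitwise inversion, not logical negation, so a count derived from `~isnull(...)` goes wrong exactly in the 0-d case where a Python bool leaks through. An illustration of the hazard, not the xarray code path itself:

```python
import numpy as np

print(~False)                 # -1: bitwise inversion of int(False)
print(~True)                  # -2
print(np.logical_not(False))  # True: the intended logical negation
```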
323357664 | MDU6SXNzdWUzMjMzNTc2NjQ= | 2134 | unlimited_dims generates 0-length dimensions named as letters of unlimited dimension | dnowacki-usgs 13837821 | closed | 0 | 5 | 2018-05-15T19:47:10Z | 2018-05-18T14:48:11Z | 2018-05-18T14:48:11Z | CONTRIBUTOR |

I'm not sure I understand how the `unlimited_dims` option to `to_netcdf()` is supposed to work: writing with it produces extra 0-length dimensions named after the letters of the unlimited dimension. The tail of the resulting file's `ncdump` output:

```
 time = 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ;
}
```

I thought it could be related to the variable and dimension having the same name, but this also happens when they are different.

**Expected Output**

There shouldn't be extra 0-length dimensions.

Output of `xr.show_versions()` |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2134/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
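Dimensions named `t`, `i`, `m`, `e` are the classic symptom of a bare string being iterated character by character where a collection of names is expected. A hedged sketch; whether a given xarray version normalizes the string is an assumption, and passing a list sidesteps the question entirely:

```python
import xarray as xr

ds = xr.Dataset({'time': ('time', list(range(10)))})

# Unambiguous: a list of dimension names.
ds.to_netcdf('ok.nc', unlimited_dims=['time'])

# Risky on affected versions: a bare string may be iterated as
# 't', 'i', 'm', 'e', yielding the 0-length dimensions in the title.
# ds.to_netcdf('bad.nc', unlimited_dims='time')
```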
312693611 | MDExOlB1bGxSZXF1ZXN0MTgwNDMyMTA0 | 2045 | Implement setncattr_string for attributes that are lists of strings | dnowacki-usgs 13837821 | closed | 0 | 4 | 2018-04-09T21:26:13Z | 2018-04-17T15:39:36Z | 2018-04-17T15:39:34Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/2045 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2045/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
312633077 | MDU6SXNzdWUzMTI2MzMwNzc= | 2044 | Feature request: writing xarray list-type attributes to netCDF | dnowacki-usgs 13837821 | closed | 0 | 2 | 2018-04-09T18:14:33Z | 2018-04-17T15:39:34Z | 2018-04-17T15:39:34Z | CONTRIBUTOR |

Migrated from Stack Overflow. NetCDF supports the NC_STRING type, which can store arrays of strings in attributes. Xarray already supports reading arrays of strings from attributes in netCDF files, and it would be great if it also supported writing the same.

**Reading already works**

```python
import xarray as xr
import netCDF4 as nc

rg = nc.Dataset('test_string.nc', 'w', format='NETCDF4')
rg.setncattr_string('testing', ['a', 'b'])
rg.close()

ds = xr.open_dataset('test_string.nc')
print(ds)
```

This works because I used the `setncattr_string` method when creating the file.

**Writing doesn't work**

Note the list elements have been concatenated when the attribute is written back out through xarray (see the sketch after this record). So this is a request for xarray to implement something like netCDF4's `setncattr_string`. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2044/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
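A hedged reconstruction of the failing write the issue describes, assuming a list-valued attribute written via `to_netcdf`; the filename and attribute name are illustrative:

```python
import xarray as xr

ds = xr.Dataset()
ds.attrs['testing'] = ['a', 'b']
ds.to_netcdf('test_write.nc')

roundtrip = xr.open_dataset('test_write.nc')
# Per the report, this printed the concatenated string 'ab' before
# xarray gained setncattr_string support.
print(roundtrip.attrs['testing'])
```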
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);