issues
14 rows where state = "closed" and user = 6200806, sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
589471115 | MDExOlB1bGxSZXF1ZXN0Mzk1MDE3NjEz | 3906 | Fix for stack+groupby+apply w/ non-increasing coord | spencerahill 6200806 | closed | 0 | 4 | 2020-03-28T00:15:20Z | 2020-03-31T18:29:35Z | 2020-03-31T16:10:10Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/3906 |
I've added a check within |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3906/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
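The body of this PR is cut off in this export, so as context for the title, here is a hypothetical sketch of the stack + groupby + apply pattern with a non-increasing coordinate. The array and names are illustrative, not the PR's own test, and grouping over a stacked coordinate has varied in detail across xarray versions.

```python
import numpy as np
import xarray as xr

# Deliberately non-increasing 'y' coordinate, per the PR title.
arr = xr.DataArray(
    np.arange(6.0).reshape(2, 3),
    dims=["x", "y"],
    coords={"x": [0, 1], "y": [2, 1, 0]},
)

# Stack two dims into one, group over the stacked coord, and apply an
# identity function; the result should round-trip the original values.
stacked = arr.stack(z=("x", "y"))
result = stacked.groupby("z").map(lambda v: v)
```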
274392275 | MDU6SXNzdWUyNzQzOTIyNzU= | 1721 | Potential test failures with libnetcdf 4.5.0 | spencerahill 6200806 | closed | 0 | 7 | 2017-11-16T04:36:55Z | 2019-11-17T23:57:54Z | 2019-11-17T23:57:54Z | CONTRIBUTOR | A heads up: @spencerkclark unearthed problems with libnetcdf 4.5.0 that will cause xarray test failures. See https://github.com/Unidata/netcdf4-python/issues/742 and link therein for more info. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1721/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
327089588 | MDU6SXNzdWUzMjcwODk1ODg= | 2191 | Adding resample functionality to CFTimeIndex | spencerahill 6200806 | closed | 0 | 22 | 2018-05-28T18:01:57Z | 2019-02-19T20:22:28Z | 2019-02-03T12:16:21Z | CONTRIBUTOR | Now that CFTimeIndex has been implemented (#1252), one thing that remains to implement is resampling. @shoyer provided a sketch of how to implement it: https://github.com/pydata/xarray/pull/1252#issuecomment-380593243. In the interim, @spencerkclark provided a sketch of a workaround for some use-cases using groupby: https://github.com/pydata/xarray/issues/1270#issuecomment-390973986. I thought it would be useful to have a new Issue specifically on this topic from which future conversation can continue. @shoyer, does that sketch you provided still seem like a good starting point? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2191/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
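As a point of reference for the workaround linked above, here is a minimal groupby-based stand-in for monthly resampling on a cftime axis. It uses present-day helpers (`xr.cftime_range` postdates this issue) and is not the exact snippet from the linked comments.

```python
import numpy as np
import xarray as xr
import cftime

# Daily data on a 'noleap' calendar, which pandas' DatetimeIndex cannot hold.
times = xr.cftime_range("0001-01-01", periods=120, freq="D", calendar="noleap")
da = xr.DataArray(np.arange(120.0), coords={"time": times}, dims=["time"])

# Label every timestamp with the first day of its month, then group on
# those labels -- roughly what resample(time="MS").mean() now does.
month_start = xr.DataArray(
    [cftime.DatetimeNoLeap(t.year, t.month, 1) for t in da.time.values],
    coords={"time": da.time}, dims=["time"], name="month_start",
)
monthly_mean = da.groupby(month_start).mean()
```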
246612712 | MDU6SXNzdWUyNDY2MTI3MTI= | 1497 | Best way to perform DataArray.mean while retaining coords defined in the dimension being averaged | spencerahill 6200806 | closed | 0 | 2 | 2017-07-30T22:17:48Z | 2019-01-16T03:36:16Z | 2019-01-16T03:36:15Z | CONTRIBUTOR |
```python
In [39]: x, y = range(2), range(3)

In [40]: arr = xr.DataArray(np.random.random((2,3)), dims=['x', 'y'], coords=dict(x=x, y=y))

In [41]: coord = xr.DataArray(np.random.random((2,3)), dims=['x', 'y'], coords=dict(x=x, y=y))

In [42]: arr = arr.assign_coords(z=coord)

In [43]: arr
Out[43]:
<xarray.DataArray (x: 2, y: 3)>
array([[ 0.132368,  0.746242,  0.48783 ],
       [ 0.12747 ,  0.751283,  0.033713]])
Coordinates:
  * y        (y) int64 0 1 2
  * x        (x) int64 0 1
    z        (x, y) float64 0.993 0.1031 0.1808 0.2769 0.7237 0.2891

In [44]: arr.mean('x')
Out[44]:
<xarray.DataArray (y: 3)>
array([ 0.129919,  0.748763,  0.260772])
Coordinates:
  * y        (y) int64 0 1 2
```

We have a use case where we'd like to preserve the coordinates. @spencerkclark came up with the following workaround, which entails converting to a dataset, promoting the coord to a variable, performing the mean, and demoting the original coord back from a variable to a coord:
This works fine, but I just feel like maybe there's an easier way to do this that we're missing. Any ideas? Thanks in advance. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1497/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
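The workaround's code block was lost in this export; the following is a reconstruction sketch of the described steps (promote the coord to a variable, take the mean, demote it back), not necessarily the original snippet.

```python
import numpy as np
import xarray as xr

x, y = range(2), range(3)
arr = xr.DataArray(np.random.random((2, 3)), dims=["x", "y"], coords=dict(x=x, y=y))
arr = arr.assign_coords(z=(("x", "y"), np.random.random((2, 3))))

# Promote the non-index coord 'z' to a data variable, take the mean,
# then demote it back to a coordinate on the result.
ds = arr.to_dataset(name="data").reset_coords("z")
mean = ds.mean("x")
result = mean.set_coords("z")["data"]  # retains the (averaged) 'z' coord
```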
187591179 | MDU6SXNzdWUxODc1OTExNzk= | 1084 | Towards a (temporary?) workaround for datetime issues at the xarray-level | spencerahill 6200806 | closed | 0 | 29 | 2016-11-06T21:40:36Z | 2018-05-13T05:19:10Z | 2018-05-13T05:19:10Z | CONTRIBUTOR | Re: #789. The consensus is that upstream fixes in Pandas are not coming anytime soon, and there is an acute need amongst many xarray users for a workaround in the meantime. There are two separate issues: (1) date-range limitations due to nanosecond precision, and (2) support for non-standard calendars. @shoyer, @jhamman , @spencerkclark, @darothen, and I briefly discussed offline a potential workaround that I am (poorly) summarizing here, with hope that others will correct/extend my snippet. The idea is to extend either PeriodIndex or (more involved but potentially more robust) Int64Index, either through subclassing or composition, to implement all of the desired functionality: slicing, resampling, groupby, and serialization. For reference, @spencerkclark nicely summarized the limitations of PeriodIndex and the netCDF4.datetime objects, which are often used as workarounds currently: https://github.com/spencerahill/aospy/issues/98#issuecomment-256043833 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1084/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
205248365 | MDU6SXNzdWUyMDUyNDgzNjU= | 1248 | Clarifying sel/drop behavior for dims with vs. without coords | spencerahill 6200806 | closed | 0 | 2 | 2017-02-03T19:31:51Z | 2017-02-05T23:09:39Z | 2017-02-05T23:09:39Z | CONTRIBUTOR | Just want to clarify if below is the intended behavior re: selecting on dims w/ vs. w/out coords. I'm on v0.9.1.

In short, selecting a single value of a dimension causes the dimension to be dropped only if it doesn't have a coordinate. Is that intentional? It's counter-intuitive to me but not a huge deal. Simple example:

```python
In [156]: ds = xr.DataArray([1,2], name='test').to_dataset().assign_coords(bounds=[0,1])

In [161]: ds
Out[161]:
<xarray.Dataset>
Dimensions:  (bounds: 2, dim_0: 2)
Coordinates:
  * bounds   (bounds) int64 0 1
Dimensions without coordinates: dim_0
Data variables:
    test     (dim_0) int64 1 2

In [163]: ds.isel(bounds=0).drop('bounds')
Out[163]:
<xarray.Dataset>
Dimensions:  (dim_0: 2)
Dimensions without coordinates: dim_0
Data variables:
    test     (dim_0) int64 1 2

In [164]: ds.isel(dim_0=0).drop('dim_0')
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-164-902de222762c> in <module>()
----> 1 ds.isel(dim_0=0).drop('dim_0')

/Users/shill/Dropbox/miniconda3/lib/python3.5/site-packages/xarray/core/dataset.py in drop(self, labels, dim)
   1888             labels = [labels]
   1889         if dim is None:
-> 1890             return self._drop_vars(labels)
   1891         else:
   1892             try:

/Users/shill/Dropbox/miniconda3/lib/python3.5/site-packages/xarray/core/dataset.py in _drop_vars(self, names)
   1899
   1900     def _drop_vars(self, names):
-> 1901         self._assert_all_in_dataset(names)
   1902         drop = set(names)
   1903         variables = OrderedDict((k, v) for k, v in iteritems(self._variables)

/Users/shill/Dropbox/miniconda3/lib/python3.5/site-packages/xarray/core/dataset.py in _assert_all_in_dataset(self, names, virtual_okay)
   1867                 bad_names -= self.virtual_variables
   1868             if bad_names:
-> 1869                 raise ValueError('One or more of the specified variables '
   1870                                  'cannot be found in this dataset')
   1871

ValueError: One or more of the specified variables cannot be found in this dataset
```

cc @spencerkclark xref https://github.com/spencerahill/aospy/issues/137#issuecomment-277326319 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1248/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
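For completeness, a uniform alternative in current xarray is to request the drop at selection time with `isel(..., drop=True)`, which sidesteps the asymmetry shown above; a minimal sketch:

```python
import xarray as xr

ds = xr.DataArray([1, 2], name="test").to_dataset().assign_coords(bounds=[0, 1])

# drop=True discards the scalar coord produced by the selection, so the
# coord-backed and coord-less cases come out the same way.
with_coord = ds.isel(bounds=0, drop=True)
without_coord = ds.isel(dim_0=0, drop=True)
```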
201441553 | MDU6SXNzdWUyMDE0NDE1NTM= | 1216 | Why ufuncs module not included in top-level namespace | spencerahill 6200806 | closed | 0 | 2 | 2017-01-18T00:00:08Z | 2017-01-19T05:59:26Z | 2017-01-19T05:59:26Z | CONTRIBUTOR | I.e. there is no `xr.ufuncs`. If it was included, one could access the ufuncs after `import xarray as xr`. Sorry if I'm missing something obvious here. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1216/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
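For context, in xarray versions contemporary with this issue the module had to be imported explicitly (it has since been deprecated in favor of calling NumPy functions directly on xarray objects); a minimal sketch:

```python
import xarray as xr
import xarray.ufuncs as xu  # explicit submodule import; not re-exported at top level

da = xr.DataArray([0.0, 1.0, 4.0])
xu.sqrt(da)  # element-wise sqrt that preserves dims and coords
```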
201770526 | MDExOlB1bGxSZXF1ZXN0MTAyMjA1NDM3 | 1219 | ENH import ufuncs module in toplevel namespace | spencerahill 6200806 | closed | 0 | 1 | 2017-01-19T05:51:02Z | 2017-01-19T05:59:26Z | 2017-01-19T05:59:26Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1219 |
Let me know if you want tests and/or what's new; wasn't sure since this is so small. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
132536288 | MDU6SXNzdWUxMzI1MzYyODg= | 754 | Expose testing methods | spencerahill 6200806 | closed | 0 | 6 | 2016-02-09T21:16:25Z | 2016-12-23T18:48:12Z | 2016-12-23T18:48:12Z | CONTRIBUTOR | Similar to numpy's numpy.testing module, I would find it useful for xarray to expose some of its testing methods, particularly those in its base TestCase. I imagine other folks whose packages and/or modules derive heavily from xarray would find this useful too. For now I am just copy-pasting xarray's code for these methods into my testing modules. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/754/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
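xarray did later expose public testing helpers in `xarray.testing`; a minimal usage sketch against the current API, shown for context rather than quoted from the issue:

```python
import xarray as xr
from xarray.testing import assert_allclose, assert_identical

a = xr.DataArray([1.0, 2.0], dims=["x"])
b = a + 1e-12

assert_allclose(a, b)   # equal within floating-point tolerances
assert_identical(a, a)  # exact match of values, dims, coords, and names
```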
128735308 | MDU6SXNzdWUxMjg3MzUzMDg= | 725 | Replacing coord with coord of same name results in NaNs | spencerahill 6200806 | closed | 0 | 2 | 2016-01-26T06:12:42Z | 2016-02-16T05:19:07Z | 2016-02-16T05:19:07Z | CONTRIBUTOR | I think this is another unintended consequence of #648. Consider the following case:
Note the resulting NaNs.

The use case: we have some data defined on the edges of pressure levels in an atmospheric model, and other data defined at the center of the pressure levels. In order to perform calculations involving both kinds of data, we average the edge-defined data (i.e. 0.5*(value at top edge + value at bottom edge)) to get the value at the level centers. But the resulting DataArray still has as its coord (from xarray's perspective, that is) the level edges, and so we replace that coord with the DataArray of the level centers.

A workaround would be … Somewhat involved and not sure I described it clearly, so let me know if clarification is needed. Also I vaguely suspect there's a cleaner way of doing this in the first place. Thanks! |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/725/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
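The example code above was lost in this export; below is a hypothetical reconstruction of the edge-to-center averaging described, using `.variable` arithmetic and raw coordinate values so that no alignment against the old edge-valued index occurs. All values and names are illustrative.

```python
import numpy as np
import xarray as xr

# Data defined on pressure-level *edges*.
p_edges = np.array([0.0, 500.0, 1000.0])
on_edges = xr.DataArray([10.0, 20.0, 30.0], dims=["plev"],
                        coords={"plev": p_edges})

# Average adjacent edges; operating on .variable avoids index alignment,
# which would otherwise inner-join the offset 'plev' labels.
centers_var = 0.5 * (on_edges.isel(plev=slice(None, -1)).variable +
                     on_edges.isel(plev=slice(1, None)).variable)
p_centers = 0.5 * (p_edges[:-1] + p_edges[1:])

# Build a fresh DataArray carrying the *center* coordinate values.
on_centers = xr.DataArray(centers_var, coords={"plev": p_centers})
```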
130064443 | MDExOlB1bGxSZXF1ZXN0NTc3NTczMzg= | 736 | Accept rename to same name | spencerahill 6200806 | closed | 0 | 4 | 2016-01-31T02:17:01Z | 2016-02-02T03:44:33Z | 2016-02-02T01:33:03Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/736 | Closes #724 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/736/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
128718628 | MDU6SXNzdWUxMjg3MTg2Mjg= | 724 | Behavior of ds.rename when old and new name are the same | spencerahill 6200806 | closed | 0 | 2 | 2016-01-26T04:12:24Z | 2016-02-02T01:33:03Z | 2016-02-02T01:33:03Z | CONTRIBUTOR | Before #648, passing `rename` a mapping in which an old name and its new name were identical worked fine. Now it raises:

```python
arr = xr.DataArray(range(2), name='arrname')
ds = arr.to_dataset()
ds.rename({'dim_0': 'dim_0'})
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-15-a5b851f9fb10> in <module>()
----> 1 ds.rename({'dim_0':'dim_0'})

/Users/spencerahill/anaconda/lib/python2.7/site-packages/xarray/core/dataset.pyc in rename(self, name_dict, inplace)
   1245                                  "variable in this dataset" % k)
   1246             if v in self:
-> 1247                 raise ValueError('the new name %r already exists' % v)
   1248
   1249         variables = OrderedDict()

ValueError: the new name 'dim_0' already exists
```

This is easy enough to handle with a try/except clause in my own code, but it would be nice (for me at least) for `rename` to accept this case. The use case is that we have data coming from multiple sources, often with differing internal names for coordinates and variables, and we're programmatically forcing them to have consistent names via `rename`. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/724/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
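A small sketch of the programmatic-normalization use case described above, filtering out identity renames before calling `rename`; the dataset and name map here are illustrative.

```python
import xarray as xr

ds = xr.Dataset({"t_surf": (("latitude",), [1.0, 2.0])},
                coords={"latitude": [0.0, 1.0]})

# Per-source mapping to canonical names; some entries may already match.
name_map = {"latitude": "lat", "t_surf": "t_surf"}

# Drop identity pairs so rename never sees old == new.
safe = {old: new for old, new in name_map.items() if old != new}
ds = ds.rename(safe)
```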
112430028 | MDU6SXNzdWUxMTI0MzAwMjg= | 634 | Unexpected behavior by diff when applied to coordinate DataArray | spencerahill 6200806 | closed | 0 | 3 | 2015-10-20T18:26:58Z | 2015-12-04T20:40:31Z | 2015-12-04T20:40:31Z | CONTRIBUTOR |
```
In [5]: arr = xray.DataArray(range(0, 20, 2), dims=['lon'], coords=[range(10)])

In [6]: arr.diff('lon')
<xray.DataArray (lon: 9)>
array([2, 2, 2, 2, 2, 2, 2, 2, 2])
Coordinates:
  * lon      (lon) int64 1 2 3 4 5 6 7 8 9

In [7]: arr['lon'].diff('lon')
<xray.DataArray 'lon' (lon: 9)>
array([1, 2, 3, 4, 5, 6, 7, 8, 9])
Coordinates:
  * lon      (lon) int64 1 2 3 4 5 6 7 8 9
```

Is this the intended behavior? The documentation doesn't mention anything about this, and it's counter-intuitive, so I'm wondering if it's a bug instead. Even if it is intended, I personally would like to be able to use a coordinate array's diff on itself to get its spacing, e.g. for use as the denominator in finite differencing approximations to derivatives. Thanks! |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/634/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
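For the finite-differencing use case mentioned, a version-agnostic way to get the coordinate spacing is to diff the raw values with NumPy rather than relying on DataArray.diff's label handling; a sketch:

```python
import numpy as np
import xarray as xr

arr = xr.DataArray(range(0, 20, 2), dims=["lon"], coords=[range(10)])

# Spacing between adjacent 'lon' values, bypassing label handling entirely.
dlon = np.diff(arr["lon"].values)   # array of 1s here
ddata = arr.diff("lon") / dlon      # finite-difference approximation
```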
118910006 | MDU6SXNzdWUxMTg5MTAwMDY= | 667 | Problems when array of coordinate bounds is 2D | spencerahill 6200806 | closed | 0 | 4 | 2015-11-25T19:46:12Z | 2015-11-25T20:46:14Z | 2015-11-25T20:45:42Z | CONTRIBUTOR | Most of the netCDF data I work with stores, in addition to the coordinates themselves, the bounds of each coordinate value. Often these bounds are stored as arrays with shape Nx2, where N is the number of points for that coordinate. For example:

```
$ ncdump -c /archive/Spencer.Hill/am3/am3clim_hurrell/gfdl.ncrc2-intel-prod-openmp/pp/atmos/ts/monthly/1yr/atmos.201001-201012.t_surf.nc
netcdf atmos.201001-201012.t_surf {
dimensions:
    time = UNLIMITED ; // (12 currently)
    lat = 90 ;
    bnds = 2 ;
    lon = 144 ;
variables:
    double average_DT(time) ;
        average_DT:long_name = "Length of average period" ;
        average_DT:units = "days" ;
        average_DT:missing_value = 1.e+20 ;
        average_DT:_FillValue = 1.e+20 ;
    double average_T1(time) ;
        average_T1:long_name = "Start time for average period" ;
        average_T1:units = "days since 1980-01-01 00:00:00" ;
        average_T1:missing_value = 1.e+20 ;
        average_T1:_FillValue = 1.e+20 ;
    double average_T2(time) ;
        average_T2:long_name = "End time for average period" ;
        average_T2:units = "days since 1980-01-01 00:00:00" ;
        average_T2:missing_value = 1.e+20 ;
        average_T2:_FillValue = 1.e+20 ;
    double lat(lat) ;
        lat:long_name = "latitude" ;
        lat:units = "degrees_N" ;
        lat:cartesian_axis = "Y" ;
        lat:bounds = "lat_bnds" ;
    double lat_bnds(lat, bnds) ;
        lat_bnds:long_name = "latitude bounds" ;
        lat_bnds:units = "degrees_N" ;
        lat_bnds:cartesian_axis = "Y" ;
    double lon(lon) ;
        lon:long_name = "longitude" ;
        lon:units = "degrees_E" ;
        lon:cartesian_axis = "X" ;
        lon:bounds = "lon_bnds" ;
    double lon_bnds(lon, bnds) ;
        lon_bnds:long_name = "longitude bounds" ;
        lon_bnds:units = "degrees_E" ;
        lon_bnds:cartesian_axis = "X" ;
    float t_surf(time, lat, lon) ;
        t_surf:long_name = "surface temperature" ;
        t_surf:units = "deg_k" ;
        t_surf:valid_range = 100.f, 400.f ;
        t_surf:missing_value = 1.e+20f ;
        t_surf:_FillValue = 1.e+20f ;
        t_surf:cell_methods = "time: mean" ;
        t_surf:time_avg_info = "average_T1,average_T2,average_DT" ;
        t_surf:interp_method = "conserve_order2" ;
    double time(time) ;
        time:long_name = "time" ;
        time:units = "days since 1980-01-01 00:00:00" ;
        time:cartesian_axis = "T" ;
        time:calendar_type = "JULIAN" ;
        time:calendar = "JULIAN" ;
        time:bounds = "time_bounds" ;
    double time_bounds(time, bnds) ;
        time_bounds:long_name = "time axis boundaries" ;
        time_bounds:units = "days" ;
        time_bounds:missing_value = 1.e+20 ;
        time_bounds:_FillValue = 1.e+20 ;

// global attributes:
        :filename = "atmos.201001-201012.t_surf.nc" ;
        :title = "am3clim_hurrell" ;
        :grid_type = "mosaic" ;
        :grid_tile = "1" ;
        :comment = "pressure level interpolator, version 3.0, precision=double" ;
        :history = "fregrid --input_mosaic atmos_mosaic.nc --input_file 20100101.atmos_month --interp_method conserve_order2 --remap_file .fregrid_remap_file_144_by_90 --nlon 144 --nlat 90 --scalar_field (please see the field list in this file)" ;
        :code_version = "$Name: fre-nctools-bronx-7 $" ;
data:

 lat = -89, -87, -85, -83, -81, -79, -77, -75, -73, -71, -69, -67, -65, -63,
    -61, -59, -57, -55, -53, -51, -49, -47, -45, -43, -41, -39, -37, -35,
    -33, -31, -29, -27, -25, -23, -21, -19, -17, -15, -13, -11, -9, -7, -5,
    -3, -1, 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33,
    35, 37, 39, 41, 43, 45, 47, 49, 51, 53, 55, 57, 59, 61, 63, 65, 67, 69,
    71, 73, 75, 77, 79, 81, 83, 85, 87, 89 ;

 lon = 1.25, 3.75, 6.25, 8.75, 11.25, 13.75, 16.25, 18.75, 21.25, 23.75,
    26.25, 28.75, 31.25, 33.75, 36.25, 38.75, 41.25, 43.75, 46.25, 48.75,
    51.25, 53.75, 56.25, 58.75, 61.25, 63.75, 66.25, 68.75, 71.25, 73.75,
    76.25, 78.75, 81.25, 83.75, 86.25, 88.75, 91.25, 93.75, 96.25, 98.75,
    101.25, 103.75, 106.25, 108.75, 111.25, 113.75, 116.25, 118.75, 121.25,
    123.75, 126.25, 128.75, 131.25, 133.75, 136.25, 138.75, 141.25, 143.75,
    146.25, 148.75, 151.25, 153.75, 156.25, 158.75, 161.25, 163.75, 166.25,
    168.75, 171.25, 173.75, 176.25, 178.75, 181.25, 183.75, 186.25, 188.75,
    191.25, 193.75, 196.25, 198.75, 201.25, 203.75, 206.25, 208.75, 211.25,
    213.75, 216.25, 218.75, 221.25, 223.75, 226.25, 228.75, 231.25, 233.75,
    236.25, 238.75, 241.25, 243.75, 246.25, 248.75, 251.25, 253.75, 256.25,
    258.75, 261.25, 263.75, 266.25, 268.75, 271.25, 273.75, 276.25, 278.75,
    281.25, 283.75, 286.25, 288.75, 291.25, 293.75, 296.25, 298.75, 301.25,
    303.75, 306.25, 308.75, 311.25, 313.75, 316.25, 318.75, 321.25, 323.75,
    326.25, 328.75, 331.25, 333.75, 336.25, 338.75, 341.25, 343.75, 346.25,
    348.75, 351.25, 353.75, 356.25, 358.75 ;

 time = 10973.5, 11003, 11032.5, 11063, 11093.5, 11124, 11154.5, 11185.5,
    11216, 11246.5, 11277, 11307.5 ;
}
```

These 2-D bounding arrays lead to the "Buffer has wrong number of dimensions" error in #665. In the case of #665, only the time coordinate has this 2-D bounds array; here other coordinates (namely lat and lon) have it as well. Conceptually, these bound arrays represent coordinates, but when read in as a |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/667/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
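One common way to sidestep such 2-D bounds variables at read time is `open_dataset`'s `drop_variables` option; this is a general workaround, not necessarily how this issue was resolved, and the filename is the one from the ncdump above.

```python
import xarray as xr

# Skip the Nx2 bounds variables entirely when decoding the file.
ds = xr.open_dataset(
    "atmos.201001-201012.t_surf.nc",
    drop_variables=["lat_bnds", "lon_bnds", "time_bounds"],
)
```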
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
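For reference, the filter described at the top of this page corresponds to a query along these lines (Datasette constructs it from URL parameters, so the literal SQL here is illustrative):

```sql
select *
from issues
where state = 'closed'
  and user = 6200806
order by updated_at desc;
```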