issues
8 rows where assignee = 1217238 sorted by updated_at descending
Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at (sort column), closed_at, author_association, active_lock_reason, draft, pull_request, body, reactions, performed_via_github_app, state_reason, repo, type

number: 6504 | type: issue | state: closed | state_reason: completed
id: 1210147360 | node_id: I_kwDOAMm_X85IIWIg | repo: xarray 13221727
title: test_weighted.test_weighted_operations_nonequal_coords should avoid depending on random number seed
user: shoyer 1217238 | assignee: shoyer 1217238 | author_association: MEMBER
comments: 0 | locked: 0
created_at: 2022-04-20T19:56:19Z | updated_at: 2022-08-29T20:42:30Z | closed_at: 2022-08-29T20:42:30Z
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/6504/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body:

What happened?

In testing an upgrade to the latest version of xarray in our systems, I noticed this test failing:

```
def test_weighted_operations_nonequal_coords():
    # There are no weights for a == 4, so that data point is ignored.
    weights = DataArray(np.random.randn(4), dims=("a",), coords=dict(a=[0, 1, 2, 3]))
    data = DataArray(np.random.randn(4), dims=("a",), coords=dict(a=[1, 2, 3, 4]))
    check_weighted_operations(data, weights, dim="a", skipna=None)
```

It appears that this test is hard-coded to match a particular random number seed, which in turn would fix the results of …

What did you expect to happen?

Whenever possible, Xarray's own tests should avoid relying on particular random number generators; in this case we could specify fixed (non-random) numbers instead. A back-up option would be to explicitly set the random seed locally inside the test, e.g. by creating a …

Minimal Complete Verifiable Example: No response
Relevant log output: No response
Anything else we need to know?: No response
Environment: …

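A minimal sketch of the back-up option described above — seeding a generator locally inside the test instead of relying on the global NumPy state. The use of numpy.random.default_rng and the seed value are illustrative assumptions, not the fix that was actually merged:

```python
import numpy as np
from xarray import DataArray

def test_weighted_operations_nonequal_coords():
    # Seed a local generator so the result does not depend on whatever
    # global random state other tests leave behind.
    rng = np.random.default_rng(seed=0)
    # There are no weights for a == 4, so that data point is ignored.
    weights = DataArray(rng.standard_normal(4), dims=("a",), coords=dict(a=[0, 1, 2, 3]))
    data = DataArray(rng.standard_normal(4), dims=("a",), coords=dict(a=[1, 2, 3, 4]))
    # check_weighted_operations is the helper used by the original test
    # in xarray's test suite.
    check_weighted_operations(data, weights, dim="a", skipna=None)
```
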
number: 1039 | type: issue | state: closed | state_reason: completed
id: 181099871 | node_id: MDU6SXNzdWUxODEwOTk4NzE= | repo: xarray 13221727
title: NetCDF4 backend fails with single-value dimension
user: mrksr 5184063 | assignee: shoyer 1217238 | author_association: NONE
comments: 1 | locked: 0
created_at: 2016-10-05T09:03:52Z | updated_at: 2017-06-17T00:14:32Z | closed_at: 2017-06-17T00:14:32Z
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/1039/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body:

Exporting an xarray object with a string dimension that consists of only a single value fails under Python 3 and xarray 0.8.2. Here is an example that reproduces the error using the toy weather data from the tutorials:

```python
import xarray as xr
import numpy as np
import pandas as pd

times = pd.date_range('2000-01-01', '2001-12-31', name='time')
annual_cycle = np.sin(2 * np.pi * (times.dayofyear / 365.25 - 0.28))

base = 10 + 15 * annual_cycle.reshape(-1, 1)
tmin_values = base + 3 * np.random.randn(annual_cycle.size, 3)
tmax_values = base + 10 + 3 * np.random.randn(annual_cycle.size, 3)

ds = xr.Dataset({'tmin': (('time', 'location'), tmin_values),
                 'tmax': (('time', 'location'), tmax_values)},
                {'time': times, 'location': ['IA', 'IN', 'IL']})
```

The following function call fails for me with a …

The lower part of the traceback:

```pytb
D:\Anaconda3\lib\site-packages\xarray\backends\netCDF4_.py in prepare_variable(self, name, variable, check_encoding)
    250
    251         if self.format == 'NETCDF4':
--> 252             variable, datatype = _nc4_values_and_dtype(variable)
    253         else:
    254             variable = encode_nc3_variable(variable)

D:\Anaconda3\lib\site-packages\xarray\backends\netCDF4_.py in _nc4_values_and_dtype(var)
     75     if var.dtype.kind == 'U':
     76         # this entire clause should not be necessary with netCDF4>=1.0.9
---> 77         if len(var) > 0:
     78             var = var.astype('O')
     79         dtype = str

D:\Anaconda3\lib\site-packages\xarray\core\utils.py in __len__(self)
    371             return self.shape[0]
    372         except IndexError:
--> 373             raise TypeError('len() of unsized object')
    374
    375
```

The export works fine both without the selection: …

And with dropping the dimension after: …

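The failing call and the two working variants were cut off above. As a hedged illustration (not the reporter's exact code), the pattern that triggers the `len() of unsized object` error is exporting after selecting a single string label, which leaves `location` as a 0-d unicode coordinate:

```python
# Illustration only, reusing the `ds` built above.
ds.sel(location='IA').to_netcdf('single_location.nc')         # failed in xarray 0.8.2
ds.to_netcdf('all_locations.nc')                               # fine: no selection
ds.sel(location='IA', drop=True).to_netcdf('no_location.nc')   # fine: scalar coord dropped
```
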
number: 1320 | type: issue | state: closed | state_reason: completed
id: 216535655 | node_id: MDU6SXNzdWUyMTY1MzU2NTU= | repo: xarray 13221727
title: BUG: to_netcdf no longer works with file objects when engine='scipy'
user: shoyer 1217238 | assignee: shoyer 1217238 | author_association: MEMBER
milestone: v0.9.3 2444330 | comments: 0 | locked: 0
created_at: 2017-03-23T18:53:18Z | updated_at: 2017-04-13T05:32:59Z | closed_at: 2017-04-13T05:32:59Z
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/1320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body:

This worked in xarray v0.8.2, but no longer works in v0.9.1: …

The traceback looks like: …

The problem is that …

For now, it's easy enough to work around this by creating a byte-string with …

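The workaround sentence above is truncated; it presumably refers to serializing to an in-memory byte string first (calling to_netcdf with no target returns the file contents as bytes) and writing those bytes to the file object yourself. A rough sketch:

```python
# Serialize to bytes, then write them to the file-like object manually
# instead of passing the open file to to_netcdf(..., engine='scipy').
contents = ds.to_netcdf()   # no path given -> returns the netCDF file as bytes
with open('out.nc', 'wb') as f:
    f.write(contents)
```
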
number: 1321 | type: issue | state: closed | state_reason: completed
id: 216537677 | node_id: MDU6SXNzdWUyMTY1Mzc2Nzc= | repo: xarray 13221727
title: BUG: to_netcdf(engine='scipy') raises an error when it shouldn't
user: shoyer 1217238 | assignee: shoyer 1217238 | author_association: MEMBER
milestone: v0.9.3 2444330 | comments: 0 | locked: 0
created_at: 2017-03-23T19:00:18Z | updated_at: 2017-04-13T05:32:59Z | closed_at: 2017-04-13T05:32:59Z
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/1321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body:

With xarray v0.9.1, I get the confusing error message: …

This check should be …

number: 602 | type: issue | state: closed | state_reason: completed
id: 109434899 | node_id: MDU6SXNzdWUxMDk0MzQ4OTk= | repo: xarray 13221727
title: latest docs are broken
user: jhamman 2443309 | assignee: shoyer 1217238 | author_association: MEMBER
milestone: 0.7.0 1368762 | comments: 4 | locked: 0
created_at: 2015-10-02T05:48:21Z | updated_at: 2016-01-02T01:31:17Z | closed_at: 2016-01-02T01:31:17Z
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/602/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body:

Looking at the doc build from tonight, something happened and netCDF4 isn't getting picked up. All the docs depending on the netCDF4 package are broken (e.g. plotting, IO, etc.).

@shoyer - You may be able to just resubmit the doc build, or maybe we need to fix something.

number: 310 | type: pull | state: closed | draft: 0 | pull_request: pydata/xarray/pulls/310
id: 54391570 | node_id: MDExOlB1bGxSZXF1ZXN0MjczOTI5OTU= | repo: xarray 13221727
title: More robust CF datetime unit parsing
user: akleeman 514053 | assignee: shoyer 1217238 | author_association: CONTRIBUTOR
milestone: 0.4 799013 | comments: 1 | locked: 0
created_at: 2015-01-14T23:19:07Z | updated_at: 2015-01-14T23:36:34Z | closed_at: 2015-01-14T23:35:27Z
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body:

This makes it possible to read datasets that don't follow CF datetime conventions perfectly, such as the following example which (surprisingly) comes from NCEP/NCAR (you'd think they would follow CF!):

```
ds = xray.open_dataset('http://thredds.ucar.edu/thredds/dodsC/grib/NCEP/GEFS/Global_1p0deg_Ensemble/members/GEFS_Global_1p0deg_Ensemble_20150114_1200.grib2/GC')
print ds['time'].encoding['units']
u'Hour since 2015-01-14T12:00:00Z'
```

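As a concrete but purely hypothetical illustration of what "more robust" parsing means here — this is not the code in the pull request — a normalizer could rewrite the non-standard 'Hour since 2015-01-14T12:00:00Z' into the canonical lower-case, pluralized CF form before decoding:

```python
import re

def normalize_cf_time_units(units):
    # Hypothetical helper, not this PR's implementation: tolerate unit
    # strings that deviate from CF in capitalization or a missing plural,
    # e.g. "Hour since 2015-01-14T12:00:00Z" -> "hours since 2015-01-14T12:00:00Z".
    match = re.match(r"\s*(\w+)\s+since\s+(.+?)\s*$", units, flags=re.IGNORECASE)
    if match is None:
        raise ValueError(f"unrecognized time units: {units!r}")
    delta, reference = match.groups()
    delta = delta.lower()
    if not delta.endswith("s"):
        delta += "s"
    return f"{delta} since {reference}"
```
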
number: 183 | type: issue | state: closed | state_reason: completed
id: 37841310 | node_id: MDU6SXNzdWUzNzg0MTMxMA== | repo: xarray 13221727
title: Checklist for v0.2 release
user: shoyer 1217238 | assignee: shoyer 1217238 | author_association: MEMBER
milestone: 0.2 650893 | comments: 1 | locked: 0
created_at: 2014-07-15T00:25:27Z | updated_at: 2014-08-14T20:01:17Z | closed_at: 2014-08-14T20:01:17Z
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body:

Requirements:

- [x] Better documentation:
  - [x] Tutorial introduces …

Nice to have:

- [x] Support modifying DataArray dimensions/coordinates in place (#180)
- [ ] Automatic alignment in mathematical operations (#184)
- [ ] Revised interface for CF encoding/decoding (#155, #175)

number: 167 | type: issue | state: closed | state_reason: completed
id: 36211623 | node_id: MDU6SXNzdWUzNjIxMTYyMw== | repo: xarray 13221727
title: Unable to load pickle Dataset that was pickled with cPickle
user: rzlee 2382049 | assignee: shoyer 1217238 | author_association: NONE
milestone: 0.2 650893 | comments: 1 | locked: 0
created_at: 2014-06-21T00:02:43Z | updated_at: 2014-06-22T01:40:58Z | closed_at: 2014-06-22T01:40:58Z
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/167/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body:

```
import cPickle as pickle
import xray
import numpy as np
import pandas as pd

foo_values = np.random.RandomState(0).rand(3, 4)
times = pd.date_range('2001-02-03', periods=3)
ds = xray.Dataset({'time': ('time', times),
                   'foo': (['time', 'space'], foo_values)})

with open('mypickle.pkl', 'w') as f:
    pickle.dump(ds, f)

with open('mypickle.pkl') as f:
    myds = pickle.load(f)

myds
```

This code results in: …

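For contrast, a round-trip that works today — assuming a current xarray (the project was later renamed from xray) and Python 3, where pickle files must be opened in binary mode:

```python
import pickle
import numpy as np
import pandas as pd
import xarray as xr

foo_values = np.random.RandomState(0).rand(3, 4)
times = pd.date_range('2001-02-03', periods=3)
ds = xr.Dataset({'foo': (['time', 'space'], foo_values)},
                coords={'time': times})

# Pickle files must be opened in binary mode ('wb' / 'rb').
with open('mypickle.pkl', 'wb') as f:
    pickle.dump(ds, f)
with open('mypickle.pkl', 'rb') as f:
    myds = pickle.load(f)

assert ds.identical(myds)
```
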
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
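The page's filter ("assignee = 1217238 sorted by updated_at descending") corresponds to a straightforward query against this schema. A sketch using Python's sqlite3 module; the filename github.db is an assumption (for example, a database built with github-to-sqlite), not something stated on this page:

```python
import sqlite3

# Assumed filename; point this at the actual SQLite file behind the table.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT number, title, state, updated_at
    FROM issues
    WHERE assignee = ?
    ORDER BY updated_at DESC
    """,
    (1217238,),
).fetchall()
for number, title, state, updated_at in rows:
    print(number, state, updated_at, title)
```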