issue_comments
2 rows where author_association = "MEMBER" and issue = 417281904, sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 469893969 | https://github.com/pydata/xarray/pull/2800#issuecomment-469893969 | https://api.github.com/repos/pydata/xarray/issues/2800 | MDEyOklzc3VlQ29tbWVudDQ2OTg5Mzk2OQ== | shoyer 1217238 | 2019-03-05T23:00:25Z | 2019-03-05T23:00:25Z | MEMBER | Yes, the test failure is definitely unrelated. Thanks @TimoRoth! | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | Don't use deprecated np.asscalar() 417281904 |
| 469845973 | https://github.com/pydata/xarray/pull/2800#issuecomment-469845973 | https://api.github.com/repos/pydata/xarray/issues/2800 | MDEyOklzc3VlQ29tbWVudDQ2OTg0NTk3Mw== | max-sixty 5635139 | 2019-03-05T20:32:01Z | 2019-03-05T20:32:01Z | MEMBER | Thanks a lot @TimoRoth - this looks good! I think the failures are unrelated. I'll merge shortly unless anyone knows something I don't. Output pasted below for reference: | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | Don't use deprecated np.asscalar() 417281904 |

The test output quoted in the second comment's body:
```
self = <xarray.tests.test_backends.TestValidateAttrs object at 0x000000B367D524E0>

    def test_validating_attrs(self):
        def new_dataset():
            return Dataset({'data': ('y', np.arange(10.0))},
                           {'y': np.arange(10)})
        def new_dataset_and_dataset_attrs():
            ds = new_dataset()
            return ds, ds.attrs
        def new_dataset_and_data_attrs():
            ds = new_dataset()
            return ds, ds.data.attrs
        def new_dataset_and_coord_attrs():
            ds = new_dataset()
            return ds, ds.coords['y'].attrs
        for new_dataset_and_attrs in [new_dataset_and_dataset_attrs,
                                      new_dataset_and_data_attrs,
                                      new_dataset_and_coord_attrs]:
            ds, attrs = new_dataset_and_attrs()
            attrs[123] = 'test'
            with raises_regex(TypeError, 'Invalid name for attr'):
                ds.to_netcdf('test.nc')
            ds, attrs = new_dataset_and_attrs()
            attrs[MiscObject()] = 'test'
            with raises_regex(TypeError, 'Invalid name for attr'):
                ds.to_netcdf('test.nc')
            ds, attrs = new_dataset_and_attrs()
            attrs[''] = 'test'
            with raises_regex(ValueError, 'Invalid name for attr'):
                ds.to_netcdf('test.nc')
            # This one should work
            ds, attrs = new_dataset_and_attrs()
            attrs['test'] = 'test'
            with create_tmp_file() as tmp_file:
                ds.to_netcdf(tmp_file)
            ds, attrs = new_dataset_and_attrs()
            attrs['test'] = {'a': 5}
            with raises_regex(TypeError, 'Invalid value for attr'):
                ds.to_netcdf('test.nc')
            ds, attrs = new_dataset_and_attrs()
            attrs['test'] = MiscObject()
            with raises_regex(TypeError, 'Invalid value for attr'):
                ds.to_netcdf('test.nc')
            ds, attrs = new_dataset_and_attrs()
            attrs['test'] = 5
            with create_tmp_file() as tmp_file:
                ds.to_netcdf(tmp_file)
            ds, attrs = new_dataset_and_attrs()
            attrs['test'] = 3.14
            with create_tmp_file() as tmp_file:
                ds.to_netcdf(tmp_file)
            ds, attrs = new_dataset_and_attrs()
            attrs['test'] = [1, 2, 3, 4]
            with create_tmp_file() as tmp_file:
                ds.to_netcdf(tmp_file)
            ds, attrs = new_dataset_and_attrs()
            attrs['test'] = (1.9, 2.5)
            with create_tmp_file() as tmp_file:
                ds.to_netcdf(tmp_file)
            ds, attrs = new_dataset_and_attrs()
            attrs['test'] = np.arange(5)
            with create_tmp_file() as tmp_file:
                ds.to_netcdf(tmp_file)
            ds, attrs = new_dataset_and_attrs()
            attrs['test'] = np.arange(12).reshape(3, 4)
            with create_tmp_file() as tmp_file:
>               ds.to_netcdf(tmp_file)

xarray\tests\test_backends.py:3450:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
xarray\core\dataset.py:1323: in to_netcdf
    compute=compute)
xarray\backends\api.py:767: in to_netcdf
    unlimited_dims=unlimited_dims)
xarray\backends\api.py:810: in dump_to_store
    unlimited_dims=unlimited_dims)
xarray\backends\common.py:262: in store
    self.set_attributes(attributes)
xarray\backends\common.py:278: in set_attributes
    self.set_attribute(k, v)
xarray\backends\netCDF4_.py:418: in set_attribute
    _set_nc_attribute(self.ds, key, value)
xarray\backends\netCDF4_.py:294: in _set_nc_attribute
    obj.setncattr(key, value)
netCDF4\_netCDF4.pyx:2781: in netCDF4._netCDF4.Dataset.setncattr
    ???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>   ???
E   ValueError: multi-dimensional array attributes not supported
netCDF4\_netCDF4.pyx:1514: ValueError
```
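The ValueError at the end of the traceback comes from the netCDF4 library itself: netCDF attribute values may be scalars, strings, or 1-D arrays, but not multi-dimensional arrays. A minimal sketch of the same failure mode, mirroring the last step of the test above (file names are illustrative, and the exact behaviour depends on the installed netCDF4 version):

```python
import numpy as np
import xarray as xr

# Same shape of dataset the test builds.
ds = xr.Dataset({'data': ('y', np.arange(10.0))}, {'y': np.arange(10)})

# A 1-D array attribute is a valid netCDF attribute and serializes fine.
ds.attrs['vector'] = np.arange(5)
ds.to_netcdf('ok.nc')

# A 2-D array attribute is rejected by netCDF4 when the file is written,
# which is the ValueError shown in the traceback above.
ds.attrs['matrix'] = np.arange(12).reshape(3, 4)
ds.to_netcdf('fails.nc')  # ValueError: multi-dimensional array attributes not supported
```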
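For context on the change the comments are approving: np.asscalar() was deprecated in NumPy 1.16 and later removed entirely, and the idiomatic replacement is the .item() method on NumPy scalars and 0-d arrays. A rough sketch of the kind of substitution involved (not the PR's actual diff):

```python
import numpy as np

x = np.float64(3.14)

# Deprecated (and eventually removed) helper:
#   value = np.asscalar(x)

# Replacement: NumPy scalars and 0-d arrays have an .item() method
# that returns the equivalent native Python object.
value = x.item()
print(value, type(value))  # 3.14 <class 'float'>
```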
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
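The two rows above come from filtering this table on author_association and issue and sorting by updated_at. A minimal sketch of running the equivalent query directly with Python's sqlite3 module (the database filename github.db is an assumption, not something stated on this page):

```python
import sqlite3

# "github.db" is an assumed filename for the SQLite database behind this page.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE author_association = 'MEMBER' AND issue = 417281904
    ORDER BY updated_at DESC
    """
).fetchall()

for row in rows:
    print(row)
```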