issue_comments
3 rows where issue = 874695249 sorted by updated_at descending
Comment 832454582
- html_url: https://github.com/pydata/xarray/issues/5254#issuecomment-832454582
- issue_url: https://api.github.com/repos/pydata/xarray/issues/5254
- node_id: MDEyOklzc3VlQ29tbWVudDgzMjQ1NDU4Mg==
- user: DerWeh 22542812
- created_at: 2021-05-05T06:48:46Z
- updated_at: 2021-05-05T06:48:46Z
- author_association: NONE
- reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
- performed_via_github_app: (empty)
- issue: Boolean confusion 874695249
- body: @mathause Indeed, I am using […]. I would also agree on the point that expanding the […].
Comment 832253672
- html_url: https://github.com/pydata/xarray/issues/5254#issuecomment-832253672
- issue_url: https://api.github.com/repos/pydata/xarray/issues/5254
- node_id: MDEyOklzc3VlQ29tbWVudDgzMjI1MzY3Mg==
- user: shoyer 1217238
- created_at: 2021-05-04T21:14:48Z
- updated_at: 2021-05-04T21:14:48Z
- author_association: MEMBER
- reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
- performed_via_github_app: (empty)
- issue: Boolean confusion 874695249
- body: :( I really wish NumPy didn't have […]. Rather than expanding […].
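For context: the validation that fails in the traceback below checks attribute values against a fixed set of accepted types, listed in the error message. The sketch below reconstructs that check from the error message rather than from xarray's source, and shows why a numpy bool scalar is rejected while a plain Python bool passes:

```python
import numbers
import numpy as np

# Types accepted for netCDF attrs, as listed in the error message below
# ("str, Number, ndarray, number, list, tuple") -- a sketch, not xarray's code.
valid_types = (str, numbers.Number, np.ndarray, np.number, list, tuple)

# np.True_ is a np.bool_ scalar: not a np.number subclass and not registered
# with the numbers ABCs, so the isinstance check rejects it.
print(isinstance(np.True_, valid_types))  # False
# A plain Python bool is an int, hence a numbers.Number, and passes.
print(isinstance(True, valid_types))      # True
```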
Comment 832239085
- html_url: https://github.com/pydata/xarray/issues/5254#issuecomment-832239085
- issue_url: https://api.github.com/repos/pydata/xarray/issues/5254
- node_id: MDEyOklzc3VlQ29tbWVudDgzMjIzOTA4NQ==
- user: mathause 10194086
- created_at: 2021-05-04T20:55:19Z
- updated_at: 2021-05-04T20:55:19Z
- author_association: MEMBER
- reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
- performed_via_github_app: (empty)
- issue: Boolean confusion 874695249
- body: For completeness, here is the error of the second case:
```python-traceback
TypeError Traceback (most recent call last)
<ipython-input-5-41afac9c294c> in <module>
1 data = xr.Dataset()
2 data.attrs['bool_type'] = np.True_
----> 3 data.to_netcdf()
~/code/xarray/xarray/core/dataset.py in to_netcdf(self, path, mode, format, group, engine, encoding, unlimited_dims, compute, invalid_netcdf)
1782 from ..backends.api import to_netcdf
1783
-> 1784 return to_netcdf(
1785 self,
1786 path,
~/code/xarray/xarray/backends/api.py in to_netcdf(dataset, path_or_file, mode, format, group, engine, encoding, unlimited_dims, compute, multifile, invalid_netcdf)
1033 # validate Dataset keys, DataArray names, and attr keys/values
1034 _validate_dataset_names(dataset)
-> 1035 _validate_attrs(dataset, invalid_netcdf=invalid_netcdf and engine == "h5netcdf")
1036
1037 try:
~/code/xarray/xarray/backends/api.py in _validate_attrs(dataset, invalid_netcdf)
170 # Check attrs on the dataset itself
171 for k, v in dataset.attrs.items():
--> 172 check_attr(k, v, valid_types)
173
174 # Check attrs on each variable within the dataset
~/code/xarray/xarray/backends/api.py in check_attr(name, value, valid_types)
162
163 if not isinstance(value, valid_types):
--> 164 raise TypeError(
165 f"Invalid value for attr {name!r}: {value!r}. For serialization to "
166 "netCDF files, its value must be of one of the following types: "
TypeError: Invalid value for attr 'bool_type': True. For serialization to netCDF files, its value must be of one of the following types: str, Number, ndarray, number, list, tuple
```
Can you give some more details? The only way I managed to see the round-trip effect you describe is using […].
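The three input lines shown in the traceback above amount to the following self-contained reproduction (a sketch; with no path argument, to_netcdf() would serialize to in-memory bytes, but it raises during attr validation before getting that far):

```python
import numpy as np
import xarray as xr

# Reproduction of the failing case from the traceback above: a numpy bool
# scalar stored as a Dataset attribute is rejected during attr validation.
data = xr.Dataset()
data.attrs["bool_type"] = np.True_
data.to_netcdf()  # raises TypeError: Invalid value for attr 'bool_type': True ...
```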
```sql
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
```
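As a self-contained illustration of how the listing above relates to this schema, the following sketch rebuilds the table in an in-memory SQLite database (foreign-key REFERENCES clauses omitted since only this table is created) and runs the query implied by the page header, "3 rows where issue = 874695249 sorted by updated_at descending". Against the real database it returns the three comments shown above; here the table is empty.

```python
import sqlite3

# Rebuild the issue_comments schema in memory and run the page's query:
# comments for issue 874695249, newest update first.
conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE [issue_comments] (
        [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
        [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
        [author_association] TEXT, [body] TEXT, [reactions] TEXT,
        [performed_via_github_app] TEXT, [issue] INTEGER
    );
    CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
    CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
    """
)
rows = conn.execute(
    "SELECT id, user, updated_at FROM issue_comments "
    "WHERE issue = ? ORDER BY updated_at DESC",
    (874695249,),
).fetchall()
print(rows)  # [] here; three rows against the real data
```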