html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/1352#issuecomment-881941374,https://api.github.com/repos/pydata/xarray/issues/1352,881941374,IC_kwDOAMm_X840kVt-,14808389,2021-07-17T18:38:15Z,2021-07-17T18:38:29Z,MEMBER,closing as I believe this to be fixed by #1648,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,219321876
https://github.com/pydata/xarray/issues/1352#issuecomment-813098350,https://api.github.com/repos/pydata/xarray/issues/1352,813098350,MDEyOklzc3VlQ29tbWVudDgxMzA5ODM1MA==,14808389,2021-04-04T20:59:43Z,2021-04-04T20:59:43Z,MEMBER,"bisecting tells me that this seems to have been fixed by 8632cd7b3f468865db4dc9e5c09075f941dc70f2 (#1648):
```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.linspace(0, 1, 2 * 3 * 4).reshape(2, 3, 4),
    coords={""x"": [""a"", ""b""], ""y"": [""x"", ""y"", ""z""], ""z"": np.arange(4)},
    dims=(""x"", ""y"", ""z""),
    name=""abc"",
)
da.loc[""a"", ""x""].to_dataset().to_netcdf(""test.nc"")
xr.open_dataset(""test.nc"").load()  # to verify that the netCDF file is valid
```
only fails before that commit.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,219321876
https://github.com/pydata/xarray/issues/1352#issuecomment-493632647,https://api.github.com/repos/pydata/xarray/issues/1352,493632647,MDEyOklzc3VlQ29tbWVudDQ5MzYzMjY0Nw==,26384082,2019-05-18T00:21:36Z,2019-05-18T00:21:36Z,NONE,"In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity.
If this issue remains relevant, please comment here or remove the `stale` label; otherwise it will be closed automatically.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,219321876
https://github.com/pydata/xarray/issues/1352#issuecomment-293016305,https://api.github.com/repos/pydata/xarray/issues/1352,293016305,MDEyOklzc3VlQ29tbWVudDI5MzAxNjMwNQ==,1217238,2017-04-10T17:13:13Z,2017-04-10T17:13:13Z,MEMBER,"It looks like we have some sort of special logic that calls `len()` when it shouldn't, thereby precluding saving scalar string variables in netCDF files.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,219321876
https://github.com/pydata/xarray/issues/1352#issuecomment-292930254,https://api.github.com/repos/pydata/xarray/issues/1352,292930254,MDEyOklzc3VlQ29tbWVudDI5MjkzMDI1NA==,4992424,2017-04-10T12:06:52Z,2017-04-10T12:07:03Z,NONE,"Yeah, I tend to agree; there should be some sort of auto-magic happening. But I can think of at least two options:
1. Coerce to array-like, like you do manually in your first comment here. That makes sense if the dimension is important, i.e. it carries useful metadata or encodes something important.
2. Coerce to an attribute on the Dataset.
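A minimal sketch of both options (the variable names and data here are made up for illustration; `expand_dims`/`drop_vars` are the current-API spellings of the `drop()` pattern discussed in this thread):

```python
import numpy as np
import xarray as xr

# a 2D field left with a scalar 'species' coordinate after selection
da = xr.DataArray(
    np.zeros((2, 3)),
    dims=('lat', 'lon'),
    coords={'species': 'NOX'},
    name='emis',
)
ds = da.to_dataset()

# option 1: promote the scalar coordinate to a length-1 dimension,
# keeping its value (and any metadata) in the file
ds_dim = ds.expand_dims('species')

# option 2: move the value into the attributes instead
ds_attr = ds.drop_vars('species')
ds_attr.attrs['species'] = 'NOX'
```

Both variants can then be written with `to_netcdf()` without hitting the 0D string coordinate.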
I use workflows where I concatenate things like multiple ensemble members into a single file, and I wind up with this pattern all the time. I usually just `drop()` the offending coordinate and save it as part of the output filename. This is because tools like `cdo` really, really don't like non-lat-lon-time dimensions, so that can interrupt my workflow sometimes. Saving as an attribute bypasses this issue, but then you lose the ability to retain any metadata that was associated with that coordinate.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,219321876
https://github.com/pydata/xarray/issues/1352#issuecomment-292929325,https://api.github.com/repos/pydata/xarray/issues/1352,292929325,MDEyOklzc3VlQ29tbWVudDI5MjkyOTMyNQ==,358378,2017-04-10T12:02:09Z,2017-04-10T12:02:09Z,CONTRIBUTOR,"Thanks, @darothen! However, I believe I shouldn't *have to*; xarray should do the right thing automatically.
Maybe I want to use the dataset after saving for something else, which would mean I'd have to do something like `d_.drop(['category', 'species']).to_netcdf(...)`, which I don't like too much ...","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,219321876
https://github.com/pydata/xarray/issues/1352#issuecomment-292926691,https://api.github.com/repos/pydata/xarray/issues/1352,292926691,MDEyOklzc3VlQ29tbWVudDI5MjkyNjY5MQ==,4992424,2017-04-10T11:48:37Z,2017-04-10T11:48:37Z,NONE,"@andreas-h you can drop the 0D coordinates:
```python
d_ = d_.drop(['category', 'species'])
d_.to_netcdf(...)
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,219321876
https://github.com/pydata/xarray/issues/1352#issuecomment-292873315,https://api.github.com/repos/pydata/xarray/issues/1352,292873315,MDEyOklzc3VlQ29tbWVudDI5Mjg3MzMxNQ==,358378,2017-04-10T07:42:02Z,2017-04-10T07:42:02Z,CONTRIBUTOR,"Sure :smile:
I first open a DataArray:
```python
grid_eu_coarse_noborder = xr.open_dataarray('tno-macc3_eu-no-nld-no-border_area.nc')
grid_eu_coarse_noborder
```
```
[87091200 values with dtype=float64]
Coordinates:
  * lat       (lat) float64 30.03 30.09 30.16 30.22 30.28 30.34 30.41 30.47 ...
  * category  (category) object 'pow' 'res' 'inc' 'pei' 'exf' 'sol' 'tra1' ...
  * species   (species) object 'CH4' 'CO' 'NH3' 'NMVOC' 'NOX' 'BC_1' ...
  * lon       (lon) float64 -29.94 -29.81 -29.69 -29.56 -29.44 -29.31 -29.19 ...
```
from which I select using `.loc[]` and convert to Dataset:
```python
grid_eu_coarse_noborder.loc['NOX', 'pow'].to_dataset()
```
```
Dimensions:  (lat: 672, lon: 720)
Coordinates:
  * lat       (lat) float64 30.03 30.09 30.16 30.22 30.28 30.34 30.41 ...
    category  ...
    species   ...
```
Calling `.to_netcdf()` on this Dataset fails:
```
...
    396             return self.shape[0]
    397         except IndexError:

IndexError: tuple index out of range

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
<ipython-input-...> in <module>()
----> 1 grid_eu_coarse_noborder.loc['NOX', 'pow'].to_dataset().to_netcdf('test.nc')

/home/eb/software/LAMOSpy/0.2-intel-2016a-Python-3.5.1/lib/python3.5/site-packages/xarray-0.9.1-py3.5.egg/xarray/core/dataset.py in to_netcdf(self, path, mode, format, group, engine, encoding, unlimited_dims)
    951         return to_netcdf(self, path, mode, format=format, group=group,
    952                          engine=engine, encoding=encoding,
--> 953                          unlimited_dims=unlimited_dims)
    954
    955     def __unicode__(self):

/home/eb/software/LAMOSpy/0.2-intel-2016a-Python-3.5.1/lib/python3.5/site-packages/xarray-0.9.1-py3.5.egg/xarray/backends/api.py in to_netcdf(dataset, path, mode, format, group, engine, writer, encoding, unlimited_dims)
    567     try:
    568         dataset.dump_to_store(store, sync=sync, encoding=encoding,
--> 569                               unlimited_dims=unlimited_dims)
    570         if isinstance(path, BytesIO):
    571             return path.getvalue()

/home/eb/software/LAMOSpy/0.2-intel-2016a-Python-3.5.1/lib/python3.5/site-packages/xarray-0.9.1-py3.5.egg/xarray/core/dataset.py in dump_to_store(self, store, encoder, sync, encoding, unlimited_dims)
    891
    892         store.store(variables, attrs, check_encoding,
--> 893                     unlimited_dims=unlimited_dims)
    894         if sync:
    895             store.sync()

/home/eb/software/LAMOSpy/0.2-intel-2016a-Python-3.5.1/lib/python3.5/site-packages/xarray-0.9.1-py3.5.egg/xarray/backends/common.py in store(self, variables, attributes, *args, **kwargs)
    233         cf_variables, cf_attrs = cf_encoder(variables, attributes)
    234         AbstractWritableDataStore.store(self, cf_variables, cf_attrs,
--> 235                                         *args, **kwargs)
    236
    237

/home/eb/software/LAMOSpy/0.2-intel-2016a-Python-3.5.1/lib/python3.5/site-packages/xarray-0.9.1-py3.5.egg/xarray/backends/common.py in store(self, variables, attributes, check_encoding_set, unlimited_dims)
    202         self.set_attributes(attributes)
    203         self.set_variables(variables, check_encoding_set,
--> 204                            unlimited_dims=unlimited_dims)
    205
    206     def set_attributes(self, attributes):

/home/eb/software/LAMOSpy/0.2-intel-2016a-Python-3.5.1/lib/python3.5/site-packages/xarray-0.9.1-py3.5.egg/xarray/backends/common.py in set_variables(self, variables, check_encoding_set, unlimited_dims)
    214             check = vn in check_encoding_set
    215             target, source = self.prepare_variable(
--> 216                 name, v, check, unlimited_dims=unlimited_dims)
    217             self.writer.add(source, target)
    218

/home/eb/software/LAMOSpy/0.2-intel-2016a-Python-3.5.1/lib/python3.5/site-packages/xarray-0.9.1-py3.5.egg/xarray/backends/netCDF4_.py in prepare_variable(self, name, variable, check_encoding, unlimited_dims)
    272
    273         if self.format == 'NETCDF4':
--> 274             variable, datatype = _nc4_values_and_dtype(variable)
    275         else:
    276             variable = encode_nc3_variable(variable)

/home/eb/software/LAMOSpy/0.2-intel-2016a-Python-3.5.1/lib/python3.5/site-packages/xarray-0.9.1-py3.5.egg/xarray/backends/netCDF4_.py in _nc4_values_and_dtype(var)
     78     if var.dtype.kind == 'U':
     79         # this entire clause should not be necessary with netCDF4>=1.0.9
---> 80         if len(var) > 0:
     81             var = var.astype('O')
     82         dtype = str

/home/eb/software/LAMOSpy/0.2-intel-2016a-Python-3.5.1/lib/python3.5/site-packages/xarray-0.9.1-py3.5.egg/xarray/core/utils.py in __len__(self)
    396             return self.shape[0]
    397         except IndexError:
--> 398             raise TypeError('len() of unsized object')
    399
    400

TypeError: len() of unsized object
```
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,219321876
https://github.com/pydata/xarray/issues/1352#issuecomment-292843480,https://api.github.com/repos/pydata/xarray/issues/1352,292843480,MDEyOklzc3VlQ29tbWVudDI5Mjg0MzQ4MA==,1217238,2017-04-10T03:49:16Z,2017-04-10T03:49:16Z,MEMBER,"Sorry I missed this earlier.
Do you know how you created this Dataset? Xarray should not let you make a dataset with a 0D dimension variable: that is a violation of its data model.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,219321876