issues
12 rows where state = "closed" and user = 7799184, sorted by updated_at descending
Columns (empty fields omitted in the records below): id · node_id · number · title · user · state · locked · assignee · milestone · comments · created_at · updated_at · closed_at · author_association · active_lock_reason · draft · pull_request · body · reactions · performed_via_github_app · state_reason · repo · type
**#1324 · Choose time units in output netcdf**
id 216626776 · node_id MDU6SXNzdWUyMTY2MjY3NzY= · user rafa-guedes (7799184) · state closed (completed) · comments 10 · author_association CONTRIBUTOR · repo xarray (13221727) · type issue
created 2017-03-24T02:25:22Z · updated 2023-08-09T08:01:43Z · closed 2019-12-04T14:25:59Z

Is there any way to define the units in the output netcdf created from […]? Thanks

Reactions: none (https://api.github.com/repos/pydata/xarray/issues/1324/reactions)
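The question maps onto the `encoding` argument of `to_netcdf`; a minimal sketch (the filename, units string and calendar are illustrative choices, not from the issue):

```python
import numpy as np
import pandas as pd
import xarray as xr

ds = xr.Dataset(
    {"hs": ("time", np.random.rand(4))},
    coords={"time": pd.date_range("2000-01-01", periods=4)},
)

# Per-variable encoding controls serialisation; for a datetime coordinate,
# "units" and "calendar" set the CF time encoding written to the file.
ds.to_netcdf(
    "out.nc",
    encoding={"time": {"units": "days since 2000-01-01",
                       "calendar": "proleptic_gregorian"}},
)
```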
**#1366 · Setting attributes to multi-index coordinate**
id 220533356 · node_id MDU6SXNzdWUyMjA1MzMzNTY= · user rafa-guedes (7799184) · state closed (completed) · comments 5 · author_association CONTRIBUTOR · repo xarray (13221727) · type issue
created 2017-04-10T04:11:12Z · updated 2022-03-17T17:11:40Z · closed 2022-03-17T17:11:40Z

I can't seem to be able to define attributes on "virtual" coordinates derived from multi-index coordinates. Taking the example from the docs:

```python
In [1]: import numpy as np

In [2]: import pandas as pd

In [3]: import xarray as xr

In [4]: midx = pd.MultiIndex.from_arrays([['R','R','V','V'], [.1,.2,.7,.9]], names=('band','wn'))

In [5]: mda = xr.DataArray(np.random.rand(4), coords={'spec': midx}, dims='spec')

# Setting attrs on the full coordinate works:
In [6]: mda['spec'].attrs
Out[6]: OrderedDict()

In [7]: mda['spec'].attrs = {'spec_attr': 'some_attr'}

In [8]: mda['spec'].attrs
Out[8]: OrderedDict([('spec_attr', 'some_attr')])

# Setting attrs on the virtual coordinate does not produce any effect:
In [9]: mda['band'].attrs
Out[9]: OrderedDict()

In [10]: mda['band'].attrs = {'band_attr': 'another_attr'}

In [11]: mda['band'].attrs
Out[11]: OrderedDict()
```

Reactions: none (https://api.github.com/repos/pydata/xarray/issues/1366/reactions)
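For context, a sketch of why the assignment is silently lost, plus a hedged workaround: level ("virtual") coordinates are rebuilt from the MultiIndex on every access, so attrs set on the returned object never reach `mda`. Materialising the levels as real coordinates first (via `reset_index`) gives them their own variables; whether the in-place mutation then sticks may depend on the xarray version.

```python
import numpy as np
import pandas as pd
import xarray as xr

midx = pd.MultiIndex.from_arrays(
    [["R", "R", "V", "V"], [0.1, 0.2, 0.7, 0.9]], names=("band", "wn")
)
mda = xr.DataArray(np.random.rand(4), coords={"spec": midx}, dims="spec")

# 'band' is synthesised from the index on each access, so this is a no-op:
mda["band"].attrs = {"band_attr": "another_attr"}

# Workaround sketch: turn the levels into real coordinates, then set attrs
# on the materialised variable (behaviour may vary across xarray versions).
flat = mda.reset_index("spec")
flat.coords["band"].attrs["band_attr"] = "another_attr"
print(flat["band"].attrs)
```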
**#3490 · Dataset global attributes dropped when performing operations against numpy data type**
id 518966560 · node_id MDU6SXNzdWU1MTg5NjY1NjA= · user rafa-guedes (7799184) · state closed (completed) · comments 2 · author_association CONTRIBUTOR · repo xarray (13221727) · type issue
created 2019-11-07T00:22:04Z · updated 2020-10-14T16:29:51Z · closed 2020-10-14T16:29:51Z

Operations against numpy data types seem to cause global attributes in the dataset to be dropped; example below. I also noticed in a real dataset with multiple dimensions that the order of […]

```python
In [1]: import numpy as np

In [2]: import pandas as pd

In [3]: import xarray as xr

In [4]: dset = xr.DataArray(
   ...:     np.random.rand(4, 3),
   ...:     [("time", pd.date_range("2000-01-01", periods=4)), ("space", ["IA", "IL", "IN"])],
   ...:     name="test",
   ...: ).to_dataset()
   ...: dset.attrs = {"attr1": "val1", "attr2": "val2"}

In [5]: 1.0 * dset             # attrs kept, per the report

In [6]: np.float64(1.0) * dset  # attrs dropped, per the report

In [7]: xr.__version__
```

Reactions: none (https://api.github.com/repos/pydata/xarray/issues/3490/reactions)
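The behaviour was later addressed via the `keep_attrs` option; a minimal sketch of the opt-in (assuming an xarray version where `set_options(keep_attrs=True)` is supported):

```python
import numpy as np
import pandas as pd
import xarray as xr

dset = xr.DataArray(
    np.random.rand(4, 3),
    [("time", pd.date_range("2000-01-01", periods=4)),
     ("space", ["IA", "IL", "IN"])],
    name="test",
).to_dataset()
dset.attrs = {"attr1": "val1", "attr2": "val2"}

# Preserve attrs through arithmetic regardless of the operand type:
with xr.set_options(keep_attrs=True):
    print((np.float64(1.0) * dset).attrs)
```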
**#658 · are there methods to abstract coordinate variables?**
id 117018372 · node_id MDU6SXNzdWUxMTcwMTgzNzI= · user rafa-guedes (7799184) · state closed (completed) · comments 2 · author_association CONTRIBUTOR · repo xarray (13221727) · type issue
created 2015-11-15T21:15:30Z · updated 2019-01-30T02:21:03Z · closed 2019-01-30T02:21:03Z

Hi guys, just wondering if there are, or if you plan to implement, methods similar to cdms2's getLongitude(), getLatitude(), getTime(), getLevel(), which allow reading these coordinate variables without knowing a priori how they are named in the netcdf files? Thanks

Reactions: none (https://api.github.com/repos/pydata/xarray/issues/658/reactions)
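xarray core stayed name-agnostic, but the lookup can be written against CF metadata; a hypothetical helper in that spirit (the attribute names checked are common CF conventions, not an xarray API — nowadays the cf-xarray accessor covers this kind of lookup):

```python
import xarray as xr

def get_latitude(ds: xr.Dataset):
    """Hypothetical cdms2-style getLatitude(): find the latitude
    coordinate by CF metadata instead of by name."""
    for name, coord in ds.coords.items():
        if (coord.attrs.get("standard_name") == "latitude"
                or coord.attrs.get("units") in ("degrees_north", "degree_north")
                or coord.attrs.get("axis") == "Y"):
            return coord
    return None
```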
**#1932 · Not able to slice dataset using its own coordinate value**
id 298839307 · node_id MDU6SXNzdWUyOTg4MzkzMDc= · user rafa-guedes (7799184) · state closed (completed) · comments 2 · author_association CONTRIBUTOR · repo xarray (13221727) · type issue
created 2018-02-21T04:35:01Z · updated 2018-02-27T01:13:45Z · closed 2018-02-27T01:13:45Z

**Code Sample, a copy-pastable example if possible**

```python
In [6]: ds.time[0]
Out[6]:
<xarray.DataArray 'time' ()>
array('2018-02-12T06:00:00.000000000', dtype='datetime64[ns]')
Coordinates:
    time     datetime64[ns] 2018-02-12T06:00:00
    site     float64 ...
Attributes:
    standard_name: time

In [7]: ds.time[1]
Out[7]:
<xarray.DataArray 'time' ()>
array('2018-02-12T06:59:59.999986000', dtype='datetime64[ns]')
Coordinates:
    time     datetime64[ns] 2018-02-12T06:59:59.999986
    site     float64 ...
Attributes:
    standard_name: time
```

**Problem description**

xarray sometimes fails to slice using its own coordinate values. It looks like it may have to do with precision. Traceback below, test file attached.

```python
In [7]: ds.sel(time=ds.time[1])
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-7-371d2f896b4a> in <module>()
----> 1 ds.sel(time=ds.time[1])

/usr/lib/python2.7/site-packages/xarray/core/dataset.pyc in sel(self, method, tolerance, drop, **indexers)
   1444
   1445         pos_indexers, new_indexes = indexing.remap_label_indexers(
-> 1446             self, v_indexers, method=method, tolerance=tolerance
   1447         )
   1448         # attach indexer's coordinate to pos_indexers

/usr/lib/python2.7/site-packages/xarray/core/indexing.pyc in remap_label_indexers(data_obj, indexers, method, tolerance)
    234         else:
    235             idxr, new_idx = convert_label_indexer(index, label,
--> 236                                                   dim, method, tolerance)
    237             pos_indexers[dim] = idxr
    238             if new_idx is not None:

/usr/lib/python2.7/site-packages/xarray/core/indexing.pyc in convert_label_indexer(index, label, index_name, method, tolerance)
    163             indexer, new_index = index.get_loc_level(label.item(), level=0)
    164         else:
--> 165             indexer = get_loc(index, label.item(), method, tolerance)
    166     elif label.dtype.kind == 'b':
    167         indexer = label

/usr/lib/python2.7/site-packages/xarray/core/indexing.pyc in get_loc(index, label, method, tolerance)
     93 def get_loc(index, label, method=None, tolerance=None):
     94     kwargs = _index_method_kwargs(method, tolerance)
---> 95     return index.get_loc(label, **kwargs)
     96
     97

/usr/lib/python2.7/site-packages/pandas/core/indexes/datetimes.pyc in get_loc(self, key, method, tolerance)
   1444                 return Index.get_loc(self, stamp, method, tolerance)
   1445             except KeyError:
-> 1446                 raise KeyError(key)
   1447         except ValueError as e:
   1448             # list-like tolerance size must match target index size

KeyError: 1518418799999986000L
```

**Expected Output**

Output of […]

Reactions: none (https://api.github.com/repos/pydata/xarray/issues/1932/reactions)
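A hedged workaround sketch for the precision trap (the timestamps mirror the report; the tolerance value is an illustrative choice): a nearest-neighbour label lookup avoids the exact integer-nanosecond match that fails above.

```python
import numpy as np
import xarray as xr

times = np.array(
    ["2018-02-12T06:00:00.000000000", "2018-02-12T06:59:59.999986000"],
    dtype="datetime64[ns]",
)
ds = xr.Dataset({"v": ("time", [1.0, 2.0])}, coords={"time": times})

# Nearest lookup with a small tolerance sidesteps exact-nanosecond matching:
ds.sel(time=ds.time[1].values, method="nearest",
       tolerance=np.timedelta64(1, "ms"))
```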
**#728 · Cannot inherit DataArray anymore in 0.7 release**
id 128980804 · node_id MDU6SXNzdWUxMjg5ODA4MDQ= · user rafa-guedes (7799184) · state closed (completed) · comments 15 · author_association CONTRIBUTOR · repo xarray (13221727) · type issue
created 2016-01-26T23:57:03Z · updated 2017-05-24T17:30:35Z · closed 2016-01-29T02:48:57Z

I understand from @shoyer that inheriting from DataArray may not be the best approach to extend DataArray with other specific methods, but this was working before the latest release and is not working anymore. Just wondering if this would be some issue caused by the new internal structure of DataArray, or maybe something I'm doing wrong? For example, the code below works using xray 0.6.1:

```python
import numpy as np
import xarray as xr  # xarray 0.7.0
# import xray as xr  # xray 0.6.1

class NewArray(xr.DataArray):
    def __init__(self, darray):
        super(NewArray, self).__init__(darray, name='spec')

data = np.random.randint(0, 10, 12).reshape(4, 3)
x = [10, 20, 30]
y = [1, 2, 3, 4]
darray = xr.DataArray(data, coords={'y': y, 'x': x}, dims=['y', 'x'])
narray = NewArray(darray)

print 'xr version: %s\n' % xr.__version__
print 'DataArray object:\n%s\n' % darray
print 'NewArray object:\n%s' % narray
```

but it does not work anymore when using the new xarray release. The NewArray instance is actually created, but if I try to access this object, or its `narray.coords` attribute, I get the traceback below. I can however access some other attributes from narray such as `narray.values` or `narray.dims`.

```
TypeError                                 Traceback (most recent call last)
/source/pymsl/pymsl/core/tests/inherit_test.py in <module>()
     16 print 'xr version: %s\n' % xr.__version__
     17 print 'DataArray object:\n%s\n' % darray
---> 18 print 'NewArray object:\n%s' % narray

/usr/local/lib/python2.7/site-packages/xarray/core/common.pyc in __repr__(self)
     76
     77     def __repr__(self):
---> 78         return formatting.array_repr(self)
     79
     80     def _iter(self):

/usr/local/lib/python2.7/site-packages/xarray/core/formatting.pyc in array_repr(arr)
    254     if hasattr(arr, 'coords'):
    255         if arr.coords:
--> 256             summary.append(repr(arr.coords))
    257
    258     if arr.attrs:

/usr/local/lib/python2.7/site-packages/xarray/core/coordinates.pyc in __repr__(self)
     64
     65     def __repr__(self):
---> 66         return formatting.coords_repr(self)
     67
     68     @property

/usr/local/lib/python2.7/site-packages/xarray/core/formatting.pyc in _mapping_repr(mapping, title, summarizer, col_width)
    208     summary = ['%s:' % title]
    209     if mapping:
--> 210         summary += [summarizer(k, v, col_width) for k, v in mapping.items()]
    211     else:
    212         summary += [EMPTY_REPR]

/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/_abcoll.pyc in items(self)
    412     def items(self):
    413         "D.items() -> list of D's (key, value) pairs, as 2-tuples"
--> 414         return [(key, self[key]) for key in self]
    415
    416     def values(self):

/usr/local/lib/python2.7/site-packages/xarray/core/coordinates.pyc in __getitem__(self, key)
     44                 key.split('.')[0] in self._names)):
     45             # allow indexing current coordinates or components
---> 46             return self._data[key]
     47         else:
     48             raise KeyError(key)

/usr/local/lib/python2.7/site-packages/xarray/core/dataarray.pyc in __getitem__(self, key)
    395                 _, key, var = _get_virtual_variable(self._coords, key)
    396
--> 397             return self._replace_maybe_drop_dims(var, name=key)
    398         else:
    399             # orthogonal array indexing

/usr/local/lib/python2.7/site-packages/xarray/core/dataarray.pyc in _replace_maybe_drop_dims(self, variable, name)
    234         coords = OrderedDict((k, v) for k, v in self._coords.items()
    235                              if set(v.dims) <= allowed_dims)
--> 236         return self._replace(variable, coords, name)
    237
    238     __this_array = _ThisArray()

/usr/local/lib/python2.7/site-packages/xarray/core/dataarray.pyc in _replace(self, variable, coords, name)
    225         if name is self.__default:
    226             name = self.name
--> 227         return type(self)(variable, coords, name=name, fastpath=True)
    228
    229     def _replace_maybe_drop_dims(self, variable, name=__default):

TypeError: __init__() takes exactly 2 arguments (5 given)
```

Reactions: none (https://api.github.com/repos/pydata/xarray/issues/728/reactions)
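The traceback pinpoints the break: `_replace` re-creates the array via `type(self)(variable, coords, name=name, fastpath=True)`, so a subclass whose `__init__` takes only `(self, darray)` cannot be re-instantiated. A sketch of a signature-compatible subclass (subclassing remains discouraged; see #706 below):

```python
import xarray as xr

class NewArray(xr.DataArray):
    __slots__ = ()  # required by newer xarray versions; harmless earlier

    # Forward the parent's full signature so internal re-construction
    # via type(self)(variable, coords, name=..., fastpath=True) works.
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
```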
**#706 · Subclassing Dataset and DataArray**
id 124915222 · node_id MDU6SXNzdWUxMjQ5MTUyMjI= · user rafa-guedes (7799184) · state closed (completed) · comments 8 · author_association CONTRIBUTOR · repo xarray (13221727) · type issue
created 2016-01-05T07:55:03Z · updated 2016-05-24T22:14:30Z · closed 2016-05-13T16:48:37Z

Hi guys, I have started writing a SpecArray class which inherits from DataArray and defines some methods useful for dealing with wave spectra, such as calculating spectral wave statistics like significant wave height and peak wave period, interpolating, splitting, and performing some other tasks. I'd like to ask please:

- Is this something you guys would maybe be interested to add to your library?
- Is there a simple way to ensure the methods I am defining are preserved when creating a Dataset out of this SpecArray object? Currently I can create / add to a Dataset using this new object, but all new methods get lost by doing that.

Thanks, Rafael

Reactions: none (https://api.github.com/repos/pydata/xarray/issues/706/reactions)
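The direction the project eventually took for this use case is registered accessors, whose methods survive round-trips through a Dataset; a sketch (the accessor name, the `dim` default and the `hs` formula are illustrative, not the actual wavespectra API):

```python
import numpy as np
import xarray as xr

@xr.register_dataarray_accessor("spec")
class SpecAccessor:
    def __init__(self, da):
        self._da = da

    def hs(self, dim="freq"):
        # Significant wave height as 4 * sqrt(m0), with m0 the zeroth
        # spectral moment integrated along `dim`.
        m0 = self._da.integrate(dim)
        return 4.0 * np.sqrt(m0)

# Usage: any DataArray with a "freq" coordinate now exposes darr.spec.hs()
```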
**#726 · Make import error of tokenize more explicit**
id 128749355 · node_id MDExOlB1bGxSZXF1ZXN0NTcxNzg1MzU= · user rafa-guedes (7799184) · state closed · comments 2 · author_association CONTRIBUTOR · draft 0 · pull_request pydata/xarray/pulls/726 · repo xarray (13221727) · type pull
created 2016-01-26T07:46:21Z · updated 2016-01-27T16:27:05Z · closed 2016-01-27T16:26:55Z

This ImportError is raised when using open_mfdataset, even though my version of dask is > 0.6:

```
ImportError                               Traceback (most recent call last)
<ipython-input-2-6d05f9a40585> in <module>()
----> 1 dset = xarray.open_mfdataset('/Users/rafaguedes/work/campos20150709_0*.nc')

/source/xarray/xarray/backends/api.pyc in open_mfdataset(paths, chunks, concat_dim, preprocess, engine, lock, **kwargs)
    297         lock = _default_lock(paths[0], engine)
    298     datasets = [open_dataset(p, engine=engine, chunks=chunks or {}, lock=lock,
--> 299                              **kwargs) for p in paths]
    300     file_objs = [ds._file_obj for ds in datasets]
    301

/source/xarray/xarray/backends/api.pyc in open_dataset(filename_or_obj, group, decode_cf, mask_and_scale, decode_times, concat_characters, decode_coords, engine, chunks, lock, drop_variables)
    222             lock = _default_lock(filename_or_obj, engine)
    223         with close_on_error(store):
--> 224             return maybe_decode_store(store, lock)
    225     else:
    226         if engine is not None and engine != 'scipy':

/source/xarray/xarray/backends/api.pyc in maybe_decode_store(store, lock)
    163             except ImportError:
    164                 import dask  # raise the usual error if dask is entirely missing
--> 165                 raise ImportError('xarray requires dask version 0.6 or newer')
    166
    167     if (isinstance(filename_or_obj, basestring) and

ImportError: xarray requires dask version 0.6 or newer
```

This change ensures the actual error caused by the missing library is displayed:

```
ImportError                               Traceback (most recent call last)
<ipython-input-2-6d05f9a40585> in <module>()
----> 1 dset = xarray.open_mfdataset('/Users/rafaguedes/work/campos20150709_0*.nc')

/source/xarray/xarray/backends/api.py in open_mfdataset(paths, chunks, concat_dim, preprocess, engine, lock, **kwargs)
    300         lock = _default_lock(paths[0], engine)
    301     datasets = [open_dataset(p, engine=engine, chunks=chunks or {}, lock=lock,
--> 302                              **kwargs) for p in paths]
    303     file_objs = [ds._file_obj for ds in datasets]
    304

/source/xarray/xarray/backends/api.py in open_dataset(filename_or_obj, group, decode_cf, mask_and_scale, decode_times, concat_characters, decode_coords, engine, chunks, lock, drop_variables)
    225             lock = _default_lock(filename_or_obj, engine)
    226         with close_on_error(store):
--> 227             return maybe_decode_store(store, lock)
    228     else:
    229         if engine is not None and engine != 'scipy':

/source/xarray/xarray/backends/api.py in maybe_decode_store(store, lock)
    166                 raise ImportError('xarray requires dask version 0.6 or newer')
    167             else:
--> 168                 raise ImportError(err)
    169
    170     if (isinstance(filename_or_obj, basestring) and

ImportError: No module named toolz
```

Reactions: none (https://api.github.com/repos/pydata/xarray/issues/726/reactions)
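A sketch of the error-handling pattern the patch moves to (module path per the traceback; the string version comparison is a simplification of whatever check the real code used):

```python
try:
    from dask.base import tokenize  # noqa: F401
except ImportError as err:
    import dask  # re-raises the usual error if dask is missing entirely
    if dask.__version__ < "0.6":  # simplified; a proper version parse is safer
        raise ImportError("xarray requires dask version 0.6 or newer")
    else:
        # surface the real cause, e.g. "No module named toolz"
        raise err
```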
**#682 · to_netcdf: not able to set dtype encoding with netCDF4 backend**
id 123384529 · node_id MDU6SXNzdWUxMjMzODQ1Mjk= · user rafa-guedes (7799184) · state closed (completed) · comments 1 · author_association CONTRIBUTOR · repo xarray (13221727) · type issue
created 2015-12-21T23:57:56Z · updated 2016-01-08T01:27:58Z · closed 2016-01-08T01:27:58Z

I'm trying to set […]

When […]:

```
ValueError                                Traceback (most recent call last)
<ipython-input-3-72122e207569> in <module>()
----> 1 dset_nearest.to_netcdf('/home/rafael/tmp/test3.nc', format='netcdf3_CLASSIC', encoding={'specden': {'dtype': 'float32'}})

/source/xray/xray/core/dataset.pyc in to_netcdf(self, path, mode, format, group, engine, encoding)
    880         from ..backends.api import to_netcdf
    881         return to_netcdf(self, path, mode, format=format, group=group,
--> 882                          engine=engine, encoding=encoding)
    883
    884     dump = utils.function_alias(to_netcdf, 'dump')

/source/xray/xray/backends/api.pyc in to_netcdf(dataset, path, mode, format, group, engine, writer, encoding)
    352     store = store_cls(path, mode, format, group, writer)
    353     try:
--> 354         dataset.dump_to_store(store, sync=sync, encoding=encoding)
    355         if isinstance(path, BytesIO):
    356             return path.getvalue()

/source/xray/xray/core/dataset.pyc in dump_to_store(self, store, encoder, sync, encoding)
    826             variables, attrs = encoder(variables, attrs)
    827
--> 828         store.store(variables, attrs, check_encoding)
    829         if sync:
    830             store.sync()

/source/xray/xray/backends/common.pyc in store(self, variables, attributes, check_encoding_set)
    226         cf_variables, cf_attrs = cf_encoder(variables, attributes)
    227         AbstractWritableDataStore.store(self, cf_variables, cf_attrs,
--> 228                                         check_encoding_set)

/source/xray/xray/backends/common.pyc in store(self, variables, attributes, check_encoding_set)
    201                          if not (k in neccesary_dims and
    202                                  is_trivial_index(v)))
--> 203         self.set_variables(variables, check_encoding_set)
    204
    205     def set_attributes(self, attributes):

/source/xray/xray/backends/common.pyc in set_variables(self, variables, check_encoding_set)
    211             name = _encode_variable_name(vn)
    212             check = vn in check_encoding_set
--> 213             target, source = self.prepare_variable(name, v, check)
    214             self.writer.add(source, target)
    215

/source/xray/xray/backends/netCDF4_.py in prepare_variable(self, name, variable, check_encoding)
    260
    261         encoding = _extract_nc4_encoding(variable,
--> 262                                          raise_on_invalid=check_encoding)
    263         nc4_var = self.ds.createVariable(
    264             varname=name,

/source/xray/xray/backends/netCDF4_.py in _extract_nc4_encoding(variable, raise_on_invalid, lsd_okay, backend)
    157     if raise_on_invalid:
    158         import pdb; pdb.set_trace()
--> 159         invalid = [k for enc in encoding for k in enc
    160                    if k not in valid_encodings]
    161         if invalid:

ValueError: unexpected encoding parameters for 'netCDF4' backend: ['d', 't', 'y', 'p', 'e']
```

Reactions: none (https://api.github.com/repos/pydata/xarray/issues/682/reactions)
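The garbled `['d', 't', 'y', 'p', 'e']` gives the bug away: the validation loop iterated the characters of each encoding key rather than the keys themselves. The intended call, as a self-contained sketch (filename and variable name are illustrative):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"specden": ("x", np.random.rand(5))})

# Per-variable dtype conversion at write time (works once the check is fixed):
ds.to_netcdf("test3.nc", format="NETCDF3_CLASSIC",
             encoding={"specden": {"dtype": "float32"}})
```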
**#660 · time slice cannot be list**
id 117262604 · node_id MDU6SXNzdWUxMTcyNjI2MDQ= · user rafa-guedes (7799184) · state closed (completed) · comments 3 · author_association CONTRIBUTOR · repo xarray (13221727) · type issue
created 2015-11-17T01:53:15Z · updated 2015-11-18T02:25:36Z · closed 2015-11-18T02:25:29Z

Not sure whether this is a problem or expected behaviour. When slicing a variable from a dataset using the sel() method, if I only want one time, the time slice cannot be in a list (in my case it failed when the level slice had more than one value); with a scalar float, however, it works. Please see the example below where I try to slice from a CFSR currents file.

This does not work: […]

```
*** IndexError: The indexing operation you are attempting to perform is not valid on netCDF4.Variable object. Try loading your data into memory first by calling .load().

Original traceback:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/xray-0.6.1_15_g5109f4f-py2.7.egg/xray/backends/netCDF4_.py", line 47, in __getitem__
    data = getitem(self.array, key)
  File "netCDF4.pyx", line 2991, in netCDF4.Variable.__getitem__ (netCDF4.c:36676)
  File "/usr/lib64/python2.7/site-packages/netCDF4_utils.py", line 245, in _StartCountStride
    raise IndexError("Indice mismatch. Indices must have the same length.")
IndexError: Indice mismatch. Indices must have the same length.
```

This works: […]

```
<xray.DataArray 'uo' (lev: 3)>
array([ 0.024,  0.012, -0.008])
Coordinates:
    latitude   float64 -48.75
    lev        (lev) float64 5.0 105.0 949.0
    longitude  float64 162.8
    time       float64 1.417e+09
Attributes:
    short_name: uo
    long_name: U-Component of Current
    level: Depth below sea surface
    units: m/s
```

Reactions: none (https://api.github.com/repos/pydata/xarray/issues/660/reactions)
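A hedged sketch of the two behaviours with the lazy netCDF4 backend (the filename is hypothetical and the labels are taken loosely from the output above):

```python
import xarray as xr

# Hypothetical file standing in for the CFSR currents dataset in the report.
ds = xr.open_dataset("cfsr_currents.nc")

# Reported to fail lazily: a list-valued time label combined with a
# multi-valued level slice, e.g.
#   ds["uo"].sel(time=[1.417e9], lev=[5.0, 105.0, 949.0])

# Reported workarounds: a scalar time label, or loading into memory first.
sub = ds["uo"].sel(time=1.417e9, lev=[5.0, 105.0, 949.0], method="nearest")
sub_loaded = ds["uo"].load().sel(time=[1.417e9], lev=[5.0, 105.0, 949.0],
                                 method="nearest")
```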
**#662 · Problem with checking in Variable._parse_dimensions() (xray.core.variable)**
id 117478779 · node_id MDU6SXNzdWUxMTc0Nzg3Nzk= · user rafa-guedes (7799184) · state closed (completed) · comments 12 · author_association CONTRIBUTOR · repo xarray (13221727) · type issue
created 2015-11-17T23:53:26Z · updated 2015-11-18T02:18:43Z · closed 2015-11-18T02:18:43Z

I have had a problem with a dataset I created by slicing from an existing dataset. Some operations I'm trying to perform on the new dataset fail because it doesn't pass a check in xray.core.variable.py (Variable._parse_dimensions()).

This is the original dataset: […]

This is how I have sliced it to create my new dataset: […]

And this is what the new dataset looks like: […]

This one only has one variable, but in my case I also add some others. I could not identify anything obviously wrong with this new dataset. I was trying to concatenate similar datasets sliced from multiple files, but the same error happens if, for example, I try to dump it as a netcdf using to_netcdf(). This is the most recent call of the traceback:

```
/usr/lib/python2.7/site-packages/xray-0.6.1_15_g5109f4f-py2.7.egg/xray/core/variable.py in _parse_dimensions(self, dims)
    302             raise ValueError('dimensions %s must have the same length as the '
    303                              'number of data dimensions, ndim=%s'
--> 304                              % (dims, self.ndim))
    305         return dims
    306

ValueError: dimensions (u'time', u'depth', u'lat', u'lon') must have the same length as the number of data dimensions, ndim=2
```

I'm not sure what the "number of data dimensions" ndim represents, but my new dataset is not passing that check (len(dims)==4 but self.ndim==2). However, if I comment out that check everything works: I can concatenate datasets and dump them to netcdf files. Thanks, Rafael

Reactions: none (https://api.github.com/repos/pydata/xarray/issues/662/reactions)
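The failing check itself is easy to reproduce directly: `ndim` is simply the number of axes of the underlying array, and the guard insists that `dims` names exactly one dimension per axis.

```python
import numpy as np
import xarray as xr

# Four dimension names against a 2-D array trips the same guard:
xr.Variable(dims=("time", "depth", "lat", "lon"), data=np.zeros((4, 5)))
# ValueError: dimensions ('time', 'depth', 'lat', 'lon') must have the same
# length as the number of data dimensions, ndim=2
```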
**#479 · Define order of coordinates / variables of netcdf created from dset**
id 95788263 · node_id MDU6SXNzdWU5NTc4ODI2Mw== · user rafa-guedes (7799184) · state closed (completed) · comments 1 · author_association CONTRIBUTOR · repo xarray (13221727) · type issue
created 2015-07-18T04:49:56Z · updated 2015-07-20T03:18:25Z · closed 2015-07-20T03:18:25Z

Hi guys, I'm saving this dataset as a netcdf file: […]

However, I'm not sure how to preserve the order of the coordinates in the output netcdf:

```
netcdf test {
dimensions:
        lat = 41 ;
        month = 12 ;
        lon = 41 ;
variables:
        float lat(lat) ;
        double wdir_mean(month, lat, lon) ;
        double wspd_mean(month, lat, lon) ;
        float lon(lon) ;
        int64 month(month) ;

// global attributes:
                :date_created = "2015-07-18 16:20:02.603378" ;
}
```

Is there a way to make sure coordinates / variables are written to the netcdf in some specific order please? Thanks

Reactions: none (https://api.github.com/repos/pydata/xarray/issues/479/reactions)
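One hedged way to influence the on-disk order (the filename is hypothetical; variable names follow the ncdump above): selecting variables rebuilds the Dataset's ordered variable mapping, which the writer walks in order. Coordinate placement can still vary by backend and xarray version, so this is a sketch, not a guarantee.

```python
import xarray as xr

ds = xr.open_dataset("test.nc")  # hypothetical dataset from the report

# Select data variables in the desired order before writing:
ordered = ds[["wdir_mean", "wspd_mean"]]
ordered.to_netcdf("test_ordered.nc")
```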
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```