html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/1301#issuecomment-291516997,https://api.github.com/repos/pydata/xarray/issues/1301,291516997,MDEyOklzc3VlQ29tbWVudDI5MTUxNjk5Nw==,1197350,2017-04-04T14:27:18Z,2017-04-04T14:27:18Z,MEMBER,"My understanding is that you are concatenating along the dimension `obs`, so no, it wouldn't make sense for `obs` to be the same in all the datasets.
My tests showed that it's not necessarily the concat step that is slowing this down. Your profiling suggests that it's a netCDF datetime decoding issue.
I wonder if @shoyer or @jhamman have any ideas about how to improve performance here.
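A quick way to check whether datetime decoding is the bottleneck (a minimal sketch, not from the original thread, assuming the files are local netCDF) is to reopen one file with `decode_times=False` — a standard `xr.open_dataset` keyword — and compare timings:
```python
from glob import glob
import xarray as xr

fname = glob('*.nc')[0]

# Default: CF datetime decoding enabled.
%time ds = xr.open_dataset(fname)
%time print(ds)

# Skip datetime decoding; if this repr is fast, the slowdown
# is in converting the raw time values to datetime64.
%time ds_raw = xr.open_dataset(fname, decode_times=False)
%time print(ds_raw)
```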
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,212561278
https://github.com/pydata/xarray/issues/1301#issuecomment-286220317,https://api.github.com/repos/pydata/xarray/issues/1301,286220317,MDEyOklzc3VlQ29tbWVudDI4NjIyMDMxNw==,1197350,2017-03-13T19:40:50Z,2017-03-13T19:40:50Z,MEMBER,"And the length of `obs` is different in each dataset.
```python
>>> for myds in dsets:
...     print(myds.dims)
Frozen(SortedKeysDict({u'obs': 7537613}))
Frozen(SortedKeysDict({u'obs': 7247697}))
Frozen(SortedKeysDict({u'obs': 7497680}))
Frozen(SortedKeysDict({u'obs': 7661468}))
Frozen(SortedKeysDict({u'obs': 5750197}))
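>>> # Illustrative sketch (not in the original comment): concatenating
>>> # along obs simply stacks the datasets, so the combined length
>>> # should be the sum of the five lengths above (35694655).
>>> combined = xr.concat(dsets, dim='obs')
>>> print(combined.dims)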
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,212561278
https://github.com/pydata/xarray/issues/1301#issuecomment-286219858,https://api.github.com/repos/pydata/xarray/issues/1301,286219858,MDEyOklzc3VlQ29tbWVudDI4NjIxOTg1OA==,1197350,2017-03-13T19:39:15Z,2017-03-13T19:39:15Z,MEMBER,"There is definitely something funky with these datasets that is making xarray very slow.
This is fast:
```python
>>> from glob import glob
>>> import xarray as xr
>>> %time dsets = [xr.open_dataset(fname) for fname in glob('*.nc')]
CPU times: user 1.1 s, sys: 664 ms, total: 1.76 s
Wall time: 1.78 s
```
But even just printing the repr is slow:
```python
>>> %time print(dsets[0])
CPU times: user 3.66 s, sys: 3.49 s, total: 7.15 s
Wall time: 7.28 s
```
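One way to see where the repr time goes (an illustrative sketch, assuming the `dsets` list from the snippet above): force each coordinate to load on its own and time it. If `time` dominates, datetime decoding is the likely culprit.
```python
import time as timer  # aliased to avoid clashing with the 'time' coordinate

ds = dsets[0]
for name in ds.coords:
    t0 = timer.time()
    ds[name].load()  # read and decode the lazy coordinate values
    print('%s: %.2f s' % (name, timer.time() - t0))
```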
Maybe some of this has to do with the change in 0.9.0 that allows index-less dimensions (i.e. coordinates are now optional). All of these datasets have such a dimension, e.g.
```
Dimensions:                                         (obs: 7247697)
Coordinates:
    lon                                             (obs) float64 -124.3 -124.3 ...
    lat                                             (obs) float64 44.64 44.64 ...
    time                                            (obs) datetime64[ns] 2014-11-10T00:00:00.011253 ...
Dimensions without coordinates: obs
Data variables:
    oxy_calphase                                    (obs) float64 3.293e+04 ...
    quality_flag                                    (obs) |S2 'ok' 'ok' 'ok' ...
    ctdbp_no_seawater_conductivity_qc_executed     (obs) uint8 29 29 29 29 29 ...
    ...
``` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,212561278
https://github.com/pydata/xarray/issues/1301#issuecomment-285149350,https://api.github.com/repos/pydata/xarray/issues/1301,285149350,MDEyOklzc3VlQ29tbWVudDI4NTE0OTM1MA==,1197350,2017-03-08T19:52:11Z,2017-03-08T19:52:11Z,MEMBER,"I just tried this on a few different datasets, comparing python 2.7, xarray 0.7.2, dask 0.7.1 (an old environment I had on hand) against python 2.7, xarray 0.9.1-28-g1cad803, dask 0.13.0 (my current ""production"" environment). I could not reproduce the slowdown; the up-to-date stack was actually faster, by a factor of less than 2.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,212561278