issue_comments

39 rows where issue = 224553135 sorted by updated_at descending


user 10

  • rabernat 10
  • shoyer 7
  • chuaxr 7
  • jtomfarrar 4
  • dcherian 3
  • jameshalgren 3
  • sbiner 2
  • keltonhalbert 1
  • dksasaki 1
  • TomNicholas 1

author_association 2

  • MEMBER 21
  • NONE 18

issue 1

  • slow performance with open_mfdataset · 39
id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
1043038150 https://github.com/pydata/xarray/issues/1385#issuecomment-1043038150 https://api.github.com/repos/pydata/xarray/issues/1385 IC_kwDOAMm_X84-K3_G rabernat 1197350 2022-02-17T14:57:03Z 2022-02-17T14:57:03Z MEMBER

See deeper dive in https://github.com/pydata/xarray/discussions/6284

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
1043022273 https://github.com/pydata/xarray/issues/1385#issuecomment-1043022273 https://api.github.com/repos/pydata/xarray/issues/1385 IC_kwDOAMm_X84-K0HB jtomfarrar 44488331 2022-02-17T14:42:41Z 2022-02-17T14:42:41Z NONE

Thank you. A member of my research group made the netcdf file, so we will make a second file with the time encoding fixed.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
1043016100 https://github.com/pydata/xarray/issues/1385#issuecomment-1043016100 https://api.github.com/repos/pydata/xarray/issues/1385 IC_kwDOAMm_X84-Kymk rabernat 1197350 2022-02-17T14:36:23Z 2022-02-17T14:36:23Z MEMBER

Ah ok so if that is your goal, decode_times=False should be enough to solve it.

There is a problem with the time encoding in this file. The units (days since 1950-01-01T00:00:00Z) are not compatible with the values (738457.04166667, etc.). That would place your measurements sometime in the year 3971. This is part of the problem, but not the whole story.
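A quick back-of-the-envelope check of that "year 3971" estimate, using only the numbers quoted above:

```python
# Interpreting ~738457 days as an offset from 1950 lands far in the future:
value = 738457.04166667      # raw time value from the file
years = value / 365.25       # ~2021.8 years' worth of days
print(1950 + years)          # ~3971.8, i.e. "sometime in the year 3971"
```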

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
1043009735 https://github.com/pydata/xarray/issues/1385#issuecomment-1043009735 https://api.github.com/repos/pydata/xarray/issues/1385 IC_kwDOAMm_X84-KxDH jtomfarrar 44488331 2022-02-17T14:30:03Z 2022-02-17T14:30:03Z NONE

Thank you, Ryan. I will post the file to a server with a stable URL and replace the google drive link in the other post. My original issue was that I wanted to not read the data (yet), only to have a look at the metadata.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
1043001146 https://github.com/pydata/xarray/issues/1385#issuecomment-1043001146 https://api.github.com/repos/pydata/xarray/issues/1385 IC_kwDOAMm_X84-Ku86 rabernat 1197350 2022-02-17T14:21:45Z 2022-02-17T14:22:23Z MEMBER

> (I could post to a web server if there's any reason to prefer that.)

In general that would be a little more convenient than google drive, because then we could download the file from python (rather than having a manual step). This would allow us to share a fully copy-pasteable code snippet to reproduce the issue. But don't worry about that for now.

First, I'd note that your issue is not really related to open_mfdataset at all, since it is reproduced just using open_dataset. The core problem is that you have ~15M timesteps, and it is taking forever to decode the times out of them. It's fast when you do decode_times=False because the data aren't actually being read. I'm going to make a post over in discussions to dig a bit deeper into this. StackOverflow isn't monitored too regularly by this community.
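A minimal sketch of that kind of metadata-only inspection (the file name here is hypothetical): skipping time decoding keeps the open cheap while still showing dimensions, variables, and attributes.

```python
import xarray as xr

# Open lazily, without decoding the very long time coordinate.
ds = xr.open_dataset("measurements.nc", decode_times=False)
print(ds)                             # dimensions, variables, attributes only
print(ds["time"].attrs.get("units"))  # inspect the raw time encoding, e.g. "days since 1950-01-01..."
```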

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
1042962960 https://github.com/pydata/xarray/issues/1385#issuecomment-1042962960 https://api.github.com/repos/pydata/xarray/issues/1385 IC_kwDOAMm_X84-KloQ jtomfarrar 44488331 2022-02-17T13:43:21Z 2022-02-17T13:43:21Z NONE

Thanks, Ryan! Sure-- here's a link to the file: https://drive.google.com/file/d/1-05bG2kF8wbvldYtDpZ3LYLyqXnvZyw1/view?usp=sharing

(I could post to a web server if there's any reason to prefer that.)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
1042937825 https://github.com/pydata/xarray/issues/1385#issuecomment-1042937825 https://api.github.com/repos/pydata/xarray/issues/1385 IC_kwDOAMm_X84-Kffh rabernat 1197350 2022-02-17T13:14:50Z 2022-02-17T13:14:50Z MEMBER

Hi Tom! 👋

So much has evolved about xarray since this original issue was posted. However, we continue to use it as a catchall for people looking to speed up open_mfdataset. I saw your stackoverflow post. Any chance you could post a link to the actual file in question?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
1042930077 https://github.com/pydata/xarray/issues/1385#issuecomment-1042930077 https://api.github.com/repos/pydata/xarray/issues/1385 IC_kwDOAMm_X84-Kdmd jtomfarrar 44488331 2022-02-17T13:06:18Z 2022-02-17T13:06:18Z NONE

@rabernat wrote:

> An update on this long-standing issue.
>
> I have learned that open_mfdataset can be blazingly fast if decode_cf=False but extremely slow with decode_cf=True.

I seem to be experiencing a similar (same?) issue with open_dataset: https://stackoverflow.com/questions/71147712/can-i-force-xarray-open-dataset-to-do-a-lazy-load?stw=2

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
781407863 https://github.com/pydata/xarray/issues/1385#issuecomment-781407863 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDc4MTQwNzg2Mw== jameshalgren 53343824 2021-02-18T15:06:13Z 2021-02-18T15:06:13Z NONE

> > setting parallel=True seg faults... I'm betting that is some quirk of my python environment, though.
>
> This is important! Otherwise that timing scales with number of files. If you get that to work, then you can convert to a dask dataframe and keep things lazy.

Indeed @dcherian -- it took some experimentation to get the right engine to support parallel execution and even then, results are still mixed, which, to me, means further work is needed to isolate the issue.

Along the lines of suggestions here (thanks @jmccreight for pointing this out), we've introduced a very practical pre-processing step to rewrite the datasets so that the read is not striped across the file system, effectively isolating the performance bottleneck to a position where it can be dealt with independently. Of course, such an asynchronous workflow is not possible in all situations, so we're still looking at improving the direct performance.

Two notes as we keep working:

- The preprocessor (see the sketch below). Reading and re-manipulating an individual dataset is lightning fast. We saw that a small change or adjustment in the individual files, made with a preprocessor, made the multi-file read massively faster.
- The "more sophisticated example" referenced here has proven to be very useful.
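A minimal illustration of that preprocessor idea (the variable name and glob pattern are hypothetical): trim each file down before xarray has to concatenate and compare them.

```python
import xarray as xr

def keep_only_qlateral(ds):
    # Keep just the variable of interest and drop non-index coordinates,
    # so the multi-file combine has almost nothing to check.
    return ds[["q_lateral"]].reset_coords(drop=True)

ds = xr.open_mfdataset(
    "/path/to/output/*.CHRTOUT*",
    combine="by_coords",
    preprocess=keep_only_qlateral,
)
```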

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
756948150 https://github.com/pydata/xarray/issues/1385#issuecomment-756948150 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDc1Njk0ODE1MA== dcherian 2448579 2021-01-08T19:20:51Z 2021-01-08T19:20:51Z MEMBER

> setting parallel=True seg faults... I'm betting that is some quirk of my python environment, though.

This is important! Otherwise that timing scales with number of files. If you get that to work, then you can convert to a dask dataframe and keep things lazy.
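A sketch of that suggestion (file pattern and variable names hypothetical): keep the arrays dask-backed and only trigger computation at the end.

```python
import xarray as xr

ds = xr.open_mfdataset("*.CHRTOUT*", combine="by_coords", parallel=True)

# Dataset.to_dask_dataframe keeps things lazy; nothing is read until .compute().
ddf = ds[["q_lateral"]].to_dask_dataframe()
mean_by_segment = ddf.groupby("feature_id")["q_lateral"].mean().compute()
```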

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
756922963 https://github.com/pydata/xarray/issues/1385#issuecomment-756922963 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDc1NjkyMjk2Mw== jameshalgren 53343824 2021-01-08T18:26:44Z 2021-01-08T18:34:49Z NONE

@dcherian We had looked at a number of options. In the end, the best performance I could achieve was with the work-around pre-processor script, rather than any of the built-in options. It's worth noting that a major part of the slowdown we were experiencing was from the dataframe transform option we were doing after reading the files. Once that was fixed, performance was much better, but not necessarily with any of the expected options. This script reading one-day's worth of NWM q_laterals runs in about 8 seconds (on Cheyenne). If you change the globbing pattern to include a full month, it takes about 380 seconds.

setting parallel=True seg faults... I'm betting that is some quirk of my python environment, though.

We are reading everything into memory, which negates the lazy-access benefits of using a dataset and our next steps include looking into that.

300 seconds to read a month isn't totally unacceptable, but we'd like it to be faster for the operational runs we'll eventually be doing -- for longer simulations, we may be able to achieve some improvement with asynchronous data access. We'll keep looking into it. (We'll start by trying to adapt the "slightly more sophisticated example" under the docs you referenced here...)

Thanks (for the great package and for getting back on this question!)

````python
# python /glade/scratch/halgren/qlat_mfopen_test.py
import time

import xarray as xr
import pandas as pd


def get_ql_from_wrf_hydro_mf(
    qlat_files, index_col="feature_id", value_col="q_lateral"
):
    """
    qlat_files: globbed list of CHRTOUT files containing desired lateral inflows
    index_col: column/field in the CHRTOUT files with the segment/link id
    value_col: column/field in the CHRTOUT files with the lateral inflow value

    In general the CHRTOUT files contain one value per time step. At present, there is
    no capability for handling non-uniform timesteps in the qlaterals.

    The qlateral may also be input using comma delimited file -- see
    `get_ql_from_csv`


    Note/Todo:
    For later needs, filtering for specific features or times may
    be accomplished with one of:
        ds.loc[{selectors}]
        ds.sel({selectors})
        ds.isel({selectors})

    Returns from these selection functions are sub-datasets.

    For example:
    ```
    (Pdb) ds.sel({"feature_id":[4186117, 4186169],"time":ds.time.values[:2]})['q_lateral'].to_dataframe()
                                     latitude  longitude  q_lateral
    time                feature_id
    2018-01-01 13:00:00 4186117     41.233807 -75.413895   0.006496
    2018-01-02 00:00:00 4186117     41.233807 -75.413895   0.006460
    ```

    or...
    ```
    (Pdb) ds.sel({"feature_id":[4186117, 4186169],"time":[np.datetime64('2018-01-01T13:00:00')]})['q_lateral'].to_dataframe()
                                     latitude  longitude  q_lateral
    time                feature_id
    2018-01-01 13:00:00 4186117     41.233807 -75.413895   0.006496
    ```
    """
    filter_list = None

    with xr.open_mfdataset(
        qlat_files,
        combine="by_coords",
        # combine="nested",
        # concat_dim="time",
        # data_vars="minimal",
        # coords="minimal",
        # compat="override",
        preprocess=drop_all_coords,
        # parallel=True,
    ) as ds:
        ql = pd.DataFrame(
            ds[value_col].values.T,
            index=ds[index_col].values,
            columns=ds.time.values,
            # dtype=float,
        )

    return ql


def drop_all_coords(ds):
    return ds.reset_coords(drop=True)


def main():

    input_folder = "/glade/p/cisl/nwc/nwmv21_finals/CONUS/retro/Retro8yr/FullRouting/OUTPUT_chrtout_comp_20181001_20191231"
    file_pattern_filter = "/20181101*.CHRTOUT*"
    file_index_col = "feature_id"
    file_value_col = "q_lateral"
    # file_value_col = "streamflow"

    start_time = time.time()

    qlat_files = (input_folder + file_pattern_filter)
    print(f"reading {qlat_files}")
    qlat_df = get_ql_from_wrf_hydro_mf(
        qlat_files=qlat_files,
        index_col=file_index_col,
        value_col=file_value_col,
    )
    print(qlat_df)
    print("read qlaterals in %s seconds." % (time.time() - start_time))


if __name__ == "__main__":
    main()
````

@groutr, @jmccreight

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
756830228 https://github.com/pydata/xarray/issues/1385#issuecomment-756830228 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDc1NjgzMDIyOA== dcherian 2448579 2021-01-08T15:52:49Z 2021-01-08T15:53:22Z MEMBER

@jameshalgren A lot of these issues have been fixed. Have you tried the advice here: https://xarray.pydata.org/en/stable/io.html#reading-multi-file-datasets?

If not, a reproducible example would help (I have access to Cheyenne). Let's also move this conversation to the "Discussions" forum: https://github.com/pydata/xarray/discussions

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
756364564 https://github.com/pydata/xarray/issues/1385#issuecomment-756364564 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDc1NjM2NDU2NA== jameshalgren 53343824 2021-01-07T20:28:32Z 2021-01-07T20:28:32Z NONE

@rabernat Is the test dataset you mention still somewhere on Cheyenne? We're seeing a general slowness processing multi-file netCDF output from the National Water Model (our project here: NOAA-OWP/t-route) and we would like to see how things compare to your mini-benchmark test.

cc @groutr

> An update on this long-standing issue.
>
> I have learned that open_mfdataset can be blazingly fast if decode_cf=False but extremely slow with decode_cf=True.
>
> As an example, I am loading a POP dataset on cheyenne. Anyone with access can try this example.
>
> ```python
> base_dir = '/glade/scratch/rpa/'
> prefix = 'BRCP85C5CN_ne120_t12_pop62.c13b17.asdphys.001'
> code = 'pop.h.nday1.SST'
> glob_pattern = os.path.join(base_dir, prefix, '%s.%s.*.nc' % (prefix, code))
>
> def non_time_coords(ds):
>     return [v for v in ds.data_vars if 'time' not in ds[v].dims]
>
> def drop_non_essential_vars_pop(ds):
>     return ds.drop(non_time_coords(ds))
>
> # this runs almost instantly
> ds = xr.open_mfdataset(glob_pattern, decode_times=False, chunks={'time': 1},
>                        preprocess=drop_non_essential_vars_pop, decode_cf=False)
> ```
>
> And returns this
>
> ```
> <xarray.Dataset>
> Dimensions:     (d2: 2, nlat: 2400, nlon: 3600, time: 16401, z_t: 62, z_t_150m: 15, z_w: 62, z_w_bot: 62, z_w_top: 62)
> Coordinates:
>   * z_w_top   (z_w_top) float32 0.0 1000.0 2000.0 3000.0 4000.0 5000.0 ...
>   * z_t       (z_t) float32 500.0 1500.0 2500.0 3500.0 4500.0 5500.0 ...
>   * z_w       (z_w) float32 0.0 1000.0 2000.0 3000.0 4000.0 5000.0 6000.0 ...
>   * z_t_150m  (z_t_150m) float32 500.0 1500.0 2500.0 3500.0 4500.0 5500.0 ...
>   * z_w_bot   (z_w_bot) float32 1000.0 2000.0 3000.0 4000.0 5000.0 6000.0 ...
>   * time      (time) float64 7.322e+05 7.322e+05 7.322e+05 7.322e+05 ...
> Dimensions without coordinates: d2, nlat, nlon
> Data variables:
>     time_bound  (time, d2) float64 dask.array<shape=(16401, 2), chunksize=(1, 2)>
>     SST         (time, nlat, nlon) float32 dask.array<shape=(16401, 2400, 3600), chunksize=(1, 2400, 3600)>
> Attributes:
>     nsteps_total:  480
>     tavg_sum:      64800.0
>     title:         BRCP85C5CN_ne120_t12_pop62.c13b17.asdphys.001
>     start_time:    This dataset was created on 2016-03-14 at 05:32:30.3
>     Conventions:   CF-1.0; http://www.cgd.ucar.edu/cms/eaton/netcdf/CF-curren...
>     source:        CCSM POP2, the CCSM Ocean Component
>     cell_methods:  cell_methods = time: mean ==> the variable values are aver...
>     calendar:      All years have exactly 365 days.
>     history:       none
>     contents:      Diagnostic and Prognostic Variables
>     revision:      $Id: tavg.F90 56176 2013-12-20 18:35:46Z mlevy@ucar.edu $
> ```
>
> This is roughly 45 years of daily data, one file per year.
>
> Instead, if I just change decode_cf=True (the default), it takes forever. I can monitor what is happening via the distributed dashboard. It looks like this:
>
> There are more of these open_dataset tasks than there are files (45), so I can only presume there are 16401 individual tasks (one for each timestep), each of which takes about 1 s in serial.
>
> This is a real failure of lazy decoding. Maybe it can be fixed by #1725, possibly related to #1372.
>
> cc Pangeo folks: @jhamman, @mrocklin
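The workaround implied by that observation, sketched minimally (the glob pattern is a placeholder): skip CF decoding at open time, then decode once on the combined, dask-backed dataset.

```python
import xarray as xr

# Fast path: no per-file CF decoding while opening and concatenating...
ds = xr.open_mfdataset("pop.h.nday1.SST.*.nc", decode_cf=False, chunks={"time": 1})

# ...then decode units/attributes/times once, on the already-combined dataset.
ds = xr.decode_cf(ds)
```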

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
685164626 https://github.com/pydata/xarray/issues/1385#issuecomment-685164626 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDY4NTE2NDYyNg== dcherian 2448579 2020-09-01T22:21:58Z 2020-09-01T22:21:58Z MEMBER

This is the most up-to-date documentation on this issue: https://xarray.pydata.org/en/stable/io.html#reading-multi-file-datasets

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
685141540 https://github.com/pydata/xarray/issues/1385#issuecomment-685141540 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDY4NTE0MTU0MA== dksasaki 17645581 2020-09-01T21:25:24Z 2020-09-01T21:25:24Z NONE

Hi,

I have used xarray for a few years now and have always had this slow performance associated with xr.open_mfdataset. Had I known about this issue earlier, it would have saved me a lot of time. I believe other users would benefit from a warning about this issue when the method is called. Would this be possible?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
561920115 https://github.com/pydata/xarray/issues/1385#issuecomment-561920115 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDU2MTkyMDExNQ== rabernat 1197350 2019-12-05T01:09:25Z 2019-12-05T01:09:25Z MEMBER

In your twitter thread you said:

> Do any of my xarray/dask folks know why open_mfdataset takes such a significant amount of time compared to looping over a list of files? Each file corresponds to a new time, just wanting to open multiple times at once...

The general reason for this is usually that open_mfdataset performs coordinate compatibility checks when it concatenates the files. It's useful to actually read the code of open_mfdataset to see how it works.

First, all the files are opened individually https://github.com/pydata/xarray/blob/577d3a75ea8bb25b99f9d31af8da14210cddff78/xarray/backends/api.py#L900-L903

You can recreate this step outside of xarray yourself by doing something like:

```python
from glob import glob
datasets = [xr.open_dataset(fname, chunks={}) for fname in glob('*.nc')]
```

Once each dataset is open, xarray calls out to one of its combine functions. This logic has gotten more complex over the years as different options have been introduced, but the gist is this: https://github.com/pydata/xarray/blob/577d3a75ea8bb25b99f9d31af8da14210cddff78/xarray/backends/api.py#L947-L952

You can reproduce this step outside of xarray, e.g.:

```python
ds = xr.concat(datasets, dim='time')
```

At that point, various checks will kick in to be sure that the coordinates in the different datasets are compatible. Performing these checks requires the data to be read eagerly, which can be a source of slow performance.

Without seeing more details about your files, it's hard to know exactly where the issue lies. A good place to start is to simply drop all coordinates from your data as a preprocessing step.

```python
def drop_all_coords(ds):
    return ds.reset_coords(drop=True)

xr.open_mfdataset('*.nc', combine='by_coords', preprocess=drop_all_coords)
```

If you observe a big speedup, this points at coordinate compatibility checks as the culprit. From there you can experiment with the various options for open_mfdataset, such as coords='minimal', compat='override', etc.

Once you post your file details, we can provide more concrete suggestions.
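For concreteness, a sketch of those knobs (all real open_mfdataset keywords; the glob is a placeholder), building on the drop_all_coords helper above:

```python
import xarray as xr

ds = xr.open_mfdataset(
    "*.nc",
    combine="nested",
    concat_dim="time",
    data_vars="minimal",         # only concatenate variables that actually have a time dimension
    coords="minimal",            # don't compare/concatenate the remaining coordinates
    compat="override",           # skip equality checks; take shared variables from the first file
    preprocess=drop_all_coords,  # the helper defined above
    parallel=True,               # open and preprocess files in parallel with dask
)
```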

{
    "total_count": 6,
    "+1": 6,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
561915767 https://github.com/pydata/xarray/issues/1385#issuecomment-561915767 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDU2MTkxNTc2Nw== rabernat 1197350 2019-12-05T00:52:06Z 2019-12-05T00:52:06Z MEMBER

@keltonhalbert - I'm sorry you're frustrated by this issue. It's hard to provide a general answer to "why is open_mfdataset slow?" without seeing the data in question. I'll try to provide some best practices and recommendations here. In the meantime, could you please post the xarray repr of two of your files? To be explicit:

```python
ds1 = xr.open_dataset('file1.nc')
print(ds1)
ds2 = xr.open_dataset('file2.nc')
print(ds2)
```

This will help us debug.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
561900194 https://github.com/pydata/xarray/issues/1385#issuecomment-561900194 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDU2MTkwMDE5NA== keltonhalbert 1411265 2019-12-04T23:57:07Z 2019-12-04T23:57:07Z NONE

So is there any word on a best practice, fix, or workaround with the MFDataset performance? Still getting abysmal reading performance with a list of NetCDF files that represent sequential times. I want to use MFDataset to chunk multiple time steps into memory at once, but it's taking 5-10 minutes to construct MFDataset objects and even longer to run .values on it.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
464100720 https://github.com/pydata/xarray/issues/1385#issuecomment-464100720 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDQ2NDEwMDcyMA== chuaxr 30007270 2019-02-15T15:57:01Z 2019-02-15T18:33:31Z NONE

In that case, the speedup disappears. It seems that the slowdown arises from the entire time array being loaded into memory at once.

EDIT: I subsequently realized that using drop_variables='time' caused all the data values to become nan, which makes that an invalid option.

```
%prun ds = xr.open_mfdataset(fname,decode_times=False)

         8025 function calls (7856 primitive calls) in 29.662 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        4   29.608    7.402   29.608    7.402 {built-in method _operator.getitem}
        1    0.032    0.032    0.032    0.032 netCDF4_.py:244(_open_netcdf4_group)
        1    0.015    0.015    0.015    0.015 {built-in method posix.lstat}
  126/114    0.000    0.000    0.001    0.000 indexing.py:504(shape)
     1196    0.000    0.000    0.000    0.000 {built-in method builtins.isinstance}
       81    0.000    0.000    0.001    0.000 variable.py:239(__init__)
```

See the rest of the prun output under the Details for more information:

30 0.000 0.000 0.000 0.000 {method 'getncattr' of 'netCDF4._netCDF4.Variable' objects} 81 0.000 0.000 0.000 0.000 variable.py:709(attrs) 736/672 0.000 0.000 0.000 0.000 {built-in method builtins.len} 157 0.000 0.000 0.001 0.000 utils.py:450(ndim) 81 0.000 0.000 0.001 0.000 variable.py:417(_parse_dimensions) 7 0.000 0.000 0.001 0.000 netCDF4_.py:361(open_store_variable) 4 0.000 0.000 0.000 0.000 base.py:253(__new__) 1 0.000 0.000 29.662 29.662 <string>:1(<module>) 7 0.000 0.000 0.001 0.000 conventions.py:245(decode_cf_variable) 39/19 0.000 0.000 29.609 1.558 {built-in method numpy.core.multiarray.array} 9 0.000 0.000 0.000 0.000 core.py:1776(normalize_chunks) 104 0.000 0.000 0.000 0.000 {built-in method builtins.hasattr} 143 0.000 0.000 0.001 0.000 variable.py:272(shape) 4 0.000 0.000 0.000 0.000 utils.py:88(_StartCountStride) 8 0.000 0.000 0.000 0.000 core.py:747(blockdims_from_blockshape) 23 0.000 0.000 0.032 0.001 file_manager.py:150(acquire) 8 0.000 0.000 0.000 0.000 base.py:590(tokenize) 84 0.000 0.000 0.000 0.000 variable.py:137(as_compatible_data) 268 0.000 0.000 0.000 0.000 {method 'indices' of 'slice' objects} 14 0.000 0.000 29.610 2.115 variable.py:41(as_variable) 35 0.000 0.000 0.000 0.000 variables.py:102(unpack_for_decoding) 81 0.000 0.000 0.000 0.000 variable.py:721(encoding) 192 0.000 0.000 0.000 0.000 {built-in method builtins.getattr} 2 0.000 0.000 0.000 0.000 merge.py:109(merge_variables) 2 0.000 0.000 29.610 14.805 merge.py:392(merge_core) 7 0.000 0.000 0.000 0.000 variables.py:161(<setcomp>) 103 0.000 0.000 0.000 0.000 {built-in method _abc._abc_instancecheck} 1 0.000 0.000 0.001 0.001 conventions.py:351(decode_cf_variables) 3 0.000 0.000 0.000 0.000 dataset.py:90(calculate_dimensions) 1 0.000 0.000 0.000 0.000 {built-in method posix.stat} 361 0.000 0.000 0.000 0.000 {method 'append' of 'list' objects} 20 0.000 0.000 0.000 0.000 variable.py:728(copy) 23 0.000 0.000 0.000 0.000 lru_cache.py:40(__getitem__) 12 0.000 0.000 0.000 0.000 base.py:504(_simple_new) 2 0.000 0.000 0.000 0.000 variable.py:1985(assert_unique_multiindex_level_names) 2 0.000 0.000 0.000 0.000 alignment.py:172(deep_align) 14 0.000 0.000 0.000 0.000 indexing.py:469(__init__) 16 0.000 0.000 29.609 1.851 variable.py:1710(__init__) 1 0.000 0.000 29.662 29.662 {built-in method builtins.exec} 25 0.000 0.000 0.000 0.000 contextlib.py:81(__init__) 7 0.000 0.000 0.000 0.000 {method 'getncattr' of 'netCDF4._netCDF4.Dataset' objects} 24 0.000 0.000 0.000 0.000 indexing.py:331(as_integer_slice) 50/46 0.000 0.000 0.000 0.000 common.py:181(__setattr__) 7 0.000 0.000 0.000 0.000 variables.py:155(decode) 4 0.000 0.000 29.609 7.402 indexing.py:760(explicit_indexing_adapter) 48 0.000 0.000 0.000 0.000 <frozen importlib._bootstrap>:416(parent) 103 0.000 0.000 0.000 0.000 abc.py:137(__instancecheck__) 48 0.000 0.000 0.000 0.000 _collections_abc.py:742(__iter__) 180 0.000 0.000 0.000 0.000 variable.py:411(dims) 4 0.000 0.000 0.000 0.000 locks.py:158(__exit__) 3 0.000 0.000 0.001 0.000 core.py:2048(from_array) 1 0.000 0.000 29.612 29.612 conventions.py:412(decode_cf) 4 0.000 0.000 0.000 0.000 utils.py:50(_maybe_cast_to_cftimeindex) 77/59 0.000 0.000 0.000 0.000 utils.py:473(dtype) 84 0.000 0.000 0.000 0.000 generic.py:7(_check) 146 0.000 0.000 0.000 0.000 indexing.py:319(tuple) 7 0.000 0.000 0.000 0.000 netCDF4_.py:34(__init__) 1 0.000 0.000 29.614 29.614 api.py:270(maybe_decode_store) 1 0.000 0.000 29.662 29.662 api.py:487(open_mfdataset) 20 0.000 0.000 0.000 0.000 common.py:1845(_is_dtype_type) 33 0.000 0.000 0.000 0.000 
core.py:1911(<genexpr>) 84 0.000 0.000 0.000 0.000 variable.py:117(_maybe_wrap_data) 3 0.000 0.000 0.001 0.000 variable.py:830(chunk) 25 0.000 0.000 0.000 0.000 contextlib.py:237(helper) 36/25 0.000 0.000 0.000 0.000 utils.py:477(shape) 8 0.000 0.000 0.000 0.000 base.py:566(_shallow_copy) 8 0.000 0.000 0.000 0.000 indexing.py:346(__init__) 26/25 0.000 0.000 0.000 0.000 utils.py:408(__call__) 4 0.000 0.000 0.000 0.000 indexing.py:886(_decompose_outer_indexer) 2 0.000 0.000 29.610 14.805 merge.py:172(expand_variable_dicts) 4 0.000 0.000 29.608 7.402 netCDF4_.py:67(_getitem) 2 0.000 0.000 0.000 0.000 dataset.py:722(copy) 7 0.000 0.000 0.001 0.000 dataset.py:1383(maybe_chunk) 16 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.empty} 14 0.000 0.000 0.000 0.000 fromnumeric.py:1471(ravel) 60 0.000 0.000 0.000 0.000 base.py:652(__len__) 3 0.000 0.000 0.000 0.000 core.py:141(getem) 25 0.000 0.000 0.000 0.000 contextlib.py:116(__exit__) 4 0.000 0.000 29.609 7.402 utils.py:62(safe_cast_to_index) 18 0.000 0.000 0.000 0.000 core.py:891(shape) 25 0.000 0.000 0.000 0.000 contextlib.py:107(__enter__) 4 0.000 0.000 0.001 0.000 utils.py:332(FrozenOrderedDict) 8 0.000 0.000 0.000 0.000 base.py:1271(set_names) 4 0.000 0.000 0.000 0.000 numeric.py:34(__new__) 24 0.000 0.000 0.000 0.000 inference.py:253(is_list_like) 3 0.000 0.000 0.000 0.000 core.py:820(__new__) 12 0.000 0.000 0.000 0.000 variable.py:1785(copy) 36 0.000 0.000 0.000 0.000 {method 'copy' of 'collections.OrderedDict' objects} 8/7 0.000 0.000 0.000 0.000 {built-in method builtins.sorted} 2 0.000 0.000 0.000 0.000 merge.py:220(determine_coords) 46 0.000 0.000 0.000 0.000 file_manager.py:141(_optional_lock) 60 0.000 0.000 0.000 0.000 indexing.py:1252(shape) 50 0.000 0.000 0.000 0.000 {built-in method builtins.next} 59 0.000 0.000 0.000 0.000 {built-in method builtins.iter} 54 0.000 0.000 0.000 0.000 <frozen importlib._bootstrap>:1009(_handle_fromlist) 1 0.000 0.000 0.000 0.000 api.py:146(_protect_dataset_variables_inplace) 1 0.000 0.000 29.646 29.646 api.py:162(open_dataset) 4 0.000 0.000 0.000 0.000 utils.py:424(_out_array_shape) 4 0.000 0.000 29.609 7.402 indexing.py:1224(__init__) 24 0.000 0.000 0.000 0.000 function_base.py:241(iterable) 4 0.000 0.000 0.000 0.000 dtypes.py:968(is_dtype) 2 0.000 0.000 0.000 0.000 merge.py:257(coerce_pandas_values) 14 0.000 0.000 0.000 0.000 missing.py:105(_isna_new) 8 0.000 0.000 0.000 0.000 variable.py:1840(to_index) 7 0.000 0.000 0.000 0.000 {method 'search' of 're.Pattern' objects} 48 0.000 0.000 0.000 0.000 {method 'rpartition' of 'str' objects} 7 0.000 0.000 0.000 0.000 strings.py:66(decode) 7 0.000 0.000 0.000 0.000 netCDF4_.py:257(_disable_auto_decode_variable) 14 0.000 0.000 0.000 0.000 numerictypes.py:619(issubclass_) 24/4 0.000 0.000 29.609 7.402 numeric.py:433(asarray) 7 0.000 0.000 0.000 0.000 {method 'ncattrs' of 'netCDF4._netCDF4.Variable' objects} 8 0.000 0.000 0.000 0.000 numeric.py:67(_shallow_copy) 8 0.000 0.000 0.000 0.000 indexing.py:373(__init__) 3 0.000 0.000 0.000 0.000 core.py:134(<listcomp>) 14 0.000 0.000 0.000 0.000 merge.py:154(<listcomp>) 16 0.000 0.000 0.000 0.000 dataset.py:816(<genexpr>) 11 0.000 0.000 0.000 0.000 netCDF4_.py:56(get_array) 40 0.000 0.000 0.000 0.000 utils.py:40(_find_dim) 22 0.000 0.000 0.000 0.000 core.py:1893(<genexpr>) 27 0.000 0.000 0.000 0.000 {built-in method builtins.all} 26/10 0.000 0.000 0.000 0.000 {built-in method builtins.sum} 2 0.000 0.000 0.000 0.000 dataset.py:424(attrs) 7 0.000 0.000 0.000 0.000 variables.py:231(decode) 1 0.000 0.000 
0.000 0.000 file_manager.py:66(__init__) 67 0.000 0.000 0.000 0.000 utils.py:316(__getitem__) 22 0.000 0.000 0.000 0.000 {method 'move_to_end' of 'collections.OrderedDict' objects} 53 0.000 0.000 0.000 0.000 {built-in method builtins.issubclass} 1 0.000 0.000 0.000 0.000 combine.py:374(_infer_concat_order_from_positions) 7 0.000 0.000 0.000 0.000 dataset.py:1378(selkeys) 1 0.000 0.000 0.001 0.001 dataset.py:1333(chunk) 4 0.000 0.000 29.609 7.402 netCDF4_.py:62(__getitem__) 37 0.000 0.000 0.000 0.000 netCDF4_.py:365(<genexpr>) 18 0.000 0.000 0.000 0.000 {method 'ravel' of 'numpy.ndarray' objects} 2 0.000 0.000 0.000 0.000 alignment.py:37(align) 14 0.000 0.000 0.000 0.000 {pandas._libs.lib.is_scalar} 8 0.000 0.000 0.000 0.000 base.py:1239(_set_names) 16 0.000 0.000 0.000 0.000 indexing.py:314(__init__) 3 0.000 0.000 0.000 0.000 config.py:414(get) 7 0.000 0.000 0.000 0.000 dtypes.py:68(maybe_promote) 8 0.000 0.000 0.000 0.000 variable.py:1856(level_names) 37 0.000 0.000 0.000 0.000 {method 'copy' of 'dict' objects} 6 0.000 0.000 0.000 0.000 re.py:180(search) 6 0.000 0.000 0.000 0.000 re.py:271(_compile) 8 0.000 0.000 0.000 0.000 {built-in method _hashlib.openssl_md5} 1 0.000 0.000 0.000 0.000 merge.py:463(merge) 7 0.000 0.000 0.000 0.000 variables.py:158(<listcomp>) 7 0.000 0.000 0.000 0.000 numerictypes.py:687(issubdtype) 6 0.000 0.000 0.000 0.000 utils.py:510(is_remote_uri) 8 0.000 0.000 0.000 0.000 common.py:1702(is_extension_array_dtype) 25 0.000 0.000 0.000 0.000 indexing.py:645(as_indexable) 21 0.000 0.000 0.000 0.000 {method 'pop' of 'collections.OrderedDict' objects} 19 0.000 0.000 0.000 0.000 {built-in method __new__ of type object at 0x2b324a13e3c0} 1 0.000 0.000 0.001 0.001 dataset.py:1394(<listcomp>) 21 0.000 0.000 0.000 0.000 variables.py:117(pop_to) 1 0.000 0.000 0.032 0.032 netCDF4_.py:320(open) 8 0.000 0.000 0.000 0.000 netCDF4_.py:399(<genexpr>) 12 0.000 0.000 0.000 0.000 __init__.py:221(iteritems) 4 0.000 0.000 0.000 0.000 common.py:403(is_datetime64_dtype) 8 0.000 0.000 0.000 0.000 common.py:1809(_get_dtype) 8 0.000 0.000 0.000 0.000 dtypes.py:68(find) 8 0.000 0.000 0.000 0.000 base.py:3607(values) 22 0.000 0.000 0.000 0.000 pycompat.py:32(move_to_end) 8 0.000 0.000 0.000 0.000 utils.py:792(__exit__) 3 0.000 0.000 0.000 0.000 highlevelgraph.py:84(from_collections) 22 0.000 0.000 0.000 0.000 core.py:1906(<genexpr>) 16 0.000 0.000 0.000 0.000 abc.py:141(__subclasscheck__) 1 0.000 0.000 0.000 0.000 posixpath.py:104(split) 1 0.000 0.000 0.001 0.001 combine.py:479(_auto_combine_all_along_first_dim) 1 0.000 0.000 29.610 29.610 dataset.py:321(__init__) 4 0.000 0.000 0.000 0.000 dataset.py:643(_construct_direct) 7 0.000 0.000 0.000 0.000 variables.py:266(decode) 1 0.000 0.000 0.032 0.032 netCDF4_.py:306(__init__) 14 0.000 0.000 0.000 0.000 numeric.py:504(asanyarray) 4 0.000 0.000 0.000 0.000 common.py:503(is_period_dtype) 8 0.000 0.000 0.000 0.000 common.py:1981(pandas_dtype) 12 0.000 0.000 0.000 0.000 base.py:633(_reset_identity) 11 0.000 0.000 0.000 0.000 pycompat.py:18(iteritems) 16 0.000 0.000 0.000 0.000 utils.py:279(is_integer) 14 0.000 0.000 0.000 0.000 variable.py:268(dtype) 4 0.000 0.000 0.000 0.000 indexing.py:698(_outer_to_numpy_indexer) 42 0.000 0.000 0.000 0.000 variable.py:701(attrs) 9 0.000 0.000 0.000 0.000 {built-in method builtins.any} 1 0.000 0.000 0.000 0.000 posixpath.py:338(normpath) 6 0.000 0.000 0.000 0.000 _collections_abc.py:676(items) 24 0.000 0.000 0.000 0.000 {built-in method math.isnan} 1 0.000 0.000 29.610 29.610 merge.py:360(merge_data_and_coords) 1 
0.000 0.000 0.000 0.000 dataset.py:1084(set_coords) 1 0.000 0.000 0.001 0.001 common.py:99(load) 1 0.000 0.000 0.000 0.000 file_manager.py:250(decrement) 4 0.000 0.000 0.000 0.000 locks.py:154(__enter__) 7 0.000 0.000 0.000 0.000 netCDF4_.py:160(_ensure_fill_value_valid) 8 0.000 0.000 0.001 0.000 netCDF4_.py:393(<genexpr>) 8 0.000 0.000 0.000 0.000 common.py:572(is_categorical_dtype) 16 0.000 0.000 0.000 0.000 base.py:75(is_dtype) 72 0.000 0.000 0.000 0.000 indexing.py:327(as_integer_or_none) 26 0.000 0.000 0.000 0.000 utils.py:382(dispatch) 3 0.000 0.000 0.000 0.000 core.py:123(slices_from_chunks) 16 0.000 0.000 0.000 0.000 core.py:768(<genexpr>) 4 0.000 0.000 29.609 7.402 indexing.py:514(__array__) 4 0.000 0.000 0.000 0.000 indexing.py:1146(__init__) 4 0.000 0.000 0.000 0.000 indexing.py:1153(_indexing_array_and_key) 4 0.000 0.000 29.609 7.402 variable.py:400(to_index_variable) 30 0.000 0.000 0.000 0.000 {method 'items' of 'collections.OrderedDict' objects} 16 0.000 0.000 0.000 0.000 {built-in method _abc._abc_subclasscheck} 19 0.000 0.000 0.000 0.000 {method 'items' of 'dict' objects} 1 0.000 0.000 0.000 0.000 combine.py:423(_check_shape_tile_ids) 4 0.000 0.000 0.000 0.000 merge.py:91(_assert_compat_valid) 12 0.000 0.000 0.000 0.000 dataset.py:263(<genexpr>) 1 0.000 0.000 29.610 29.610 dataset.py:372(_set_init_vars_and_dims) 3 0.000 0.000 0.000 0.000 dataset.py:413(_attrs_copy) 8 0.000 0.000 0.000 0.000 common.py:120(<genexpr>) 14 0.000 0.000 0.000 0.000 {built-in method pandas._libs.missing.checknull} 4 0.000 0.000 0.000 0.000 common.py:746(is_dtype_equal) 4 0.000 0.000 0.000 0.000 common.py:923(is_signed_integer_dtype) 4 0.000 0.000 0.000 0.000 common.py:1545(is_float_dtype) 14 0.000 0.000 0.000 0.000 missing.py:25(isna) 3 0.000 0.000 0.000 0.000 highlevelgraph.py:71(__init__) 3 0.000 0.000 0.000 0.000 core.py:137(<listcomp>) 33 0.000 0.000 0.000 0.000 core.py:1883(<genexpr>) 35 0.000 0.000 0.000 0.000 variable.py:713(encoding) 2 0.000 0.000 0.000 0.000 {built-in method builtins.min} 16 0.000 0.000 0.000 0.000 _collections_abc.py:719(__iter__) 8 0.000 0.000 0.000 0.000 _collections_abc.py:760(__iter__) 1 0.000 0.000 0.015 0.015 glob.py:9(glob) 2 0.000 0.000 0.015 0.008 glob.py:39(_iglob) 8 0.000 0.000 0.000 0.000 {method 'hexdigest' of '_hashlib.HASH' objects} 1 0.000 0.000 0.000 0.000 combine.py:500(_auto_combine_1d) 14 0.000 0.000 0.000 0.000 merge.py:104(__missing__) 1 0.000 0.000 0.000 0.000 coordinates.py:167(variables) 3 0.000 0.000 0.000 0.000 dataset.py:98(<genexpr>) 4 0.000 0.000 0.000 0.000 dataset.py:402(variables) 1 0.000 0.000 0.000 0.000 netCDF4_.py:269(_disable_auto_decode_group) 12 0.000 0.000 0.032 0.003 netCDF4_.py:357(ds) 1 0.000 0.000 29.646 29.646 api.py:637(<listcomp>) 9 0.000 0.000 0.000 0.000 utils.py:313(__init__) 7 0.000 0.000 0.000 0.000 {method 'filters' of 'netCDF4._netCDF4.Variable' objects} 12 0.000 0.000 0.000 0.000 common.py:117(classes) 8 0.000 0.000 0.000 0.000 common.py:536(is_interval_dtype) 4 0.000 0.000 0.000 0.000 common.py:1078(is_datetime64_any_dtype) 4 0.000 0.000 0.000 0.000 dtypes.py:827(is_dtype) 8 0.000 0.000 0.000 0.000 base.py:551(<dictcomp>) 8 0.000 0.000 0.000 0.000 base.py:547(_get_attributes_dict) 8 0.000 0.000 0.000 0.000 utils.py:789(__enter__) 18 0.000 0.000 0.000 0.000 core.py:903(_get_chunks) 33 0.000 0.000 0.000 0.000 core.py:1885(<genexpr>) 22 0.000 0.000 0.000 0.000 core.py:1889(<genexpr>) 4 0.000 0.000 0.000 0.000 indexing.py:799(_decompose_slice) 4 0.000 0.000 0.000 0.000 indexing.py:1174(__getitem__) 3 0.000 0.000 0.000 
0.000 variable.py:294(data) 8 0.000 0.000 0.000 0.000 {method '__enter__' of '_thread.lock' objects} 9 0.000 0.000 0.000 0.000 {built-in method builtins.hash} 4 0.000 0.000 0.000 0.000 {built-in method builtins.max} 4 0.000 0.000 0.000 0.000 {method 'update' of 'set' objects} 7 0.000 0.000 0.000 0.000 {method 'values' of 'dict' objects} 8 0.000 0.000 0.000 0.000 {method 'update' of 'dict' objects} 1 0.000 0.000 0.000 0.000 posixpath.py:376(abspath) 1 0.000 0.000 0.000 0.000 genericpath.py:53(getmtime) 4 0.000 0.000 0.000 0.000 _collections_abc.py:657(get) 1 0.000 0.000 0.000 0.000 __init__.py:548(__init__) 1 0.000 0.000 0.000 0.000 __init__.py:617(update) 4/2 0.000 0.000 0.000 0.000 combine.py:392(_infer_tile_ids_from_nested_list) 1 0.000 0.000 0.001 0.001 combine.py:522(_auto_combine) 2 0.000 0.000 0.000 0.000 merge.py:100(__init__) 5 0.000 0.000 0.000 0.000 coordinates.py:38(__iter__) 5 0.000 0.000 0.000 0.000 coordinates.py:169(<genexpr>) 1 0.000 0.000 0.000 0.000 dataset.py:666(_replace_vars_and_dims) 5 0.000 0.000 0.000 0.000 dataset.py:1078(data_vars) 1 0.000 0.000 0.000 0.000 file_manager.py:133(_make_key) 1 0.000 0.000 0.000 0.000 file_manager.py:245(increment) 1 0.000 0.000 0.000 0.000 lru_cache.py:54(__setitem__) 1 0.000 0.000 0.000 0.000 netCDF4_.py:398(get_attrs) 1 0.000 0.000 0.000 0.000 api.py:80(_get_default_engine) 1 0.000 0.000 0.000 0.000 api.py:92(_normalize_path) 8 0.000 0.000 0.000 0.000 {method 'view' of 'numpy.ndarray' objects} 8 0.000 0.000 0.000 0.000 utils.py:187(is_dict_like) 4 0.000 0.000 0.000 0.000 utils.py:219(is_valid_numpy_dtype) 10 0.000 0.000 0.000 0.000 utils.py:319(__iter__) 1 0.000 0.000 0.000 0.000 {method 'filepath' of 'netCDF4._netCDF4.Dataset' objects} 4 0.000 0.000 0.000 0.000 common.py:434(is_datetime64tz_dtype) 3 0.000 0.000 0.000 0.000 config.py:107(normalize_key) 3 0.000 0.000 0.000 0.000 core.py:160(<listcomp>) 6 0.000 0.000 0.000 0.000 core.py:966(ndim) 4 0.000 0.000 0.000 0.000 indexing.py:791(decompose_indexer) 8 0.000 0.000 0.000 0.000 {method '__exit__' of '_thread.lock' objects} 3 0.000 0.000 0.000 0.000 {method 'replace' of 'str' objects} 4 0.000 0.000 0.000 0.000 {method 'split' of 'str' objects} 1 0.000 0.000 0.000 0.000 posixpath.py:121(splitext) 1 0.000 0.000 0.000 0.000 genericpath.py:117(_splitext) 1 0.000 0.000 0.001 0.001 combine.py:443(_combine_nd) 1 0.000 0.000 0.000 0.000 combine.py:508(<listcomp>) 14 0.000 0.000 0.000 0.000 merge.py:41(unique_variable) 11 0.000 0.000 0.000 0.000 coordinates.py:163(_names) 1 0.000 0.000 0.000 0.000 dataset.py:2593(_assert_all_in_dataset) 1 0.000 0.000 0.000 0.000 variables.py:55(__init__) 1 0.000 0.000 0.000 0.000 file_manager.py:269(__init__) 29 0.000 0.000 0.000 0.000 file_manager.py:273(__hash__) 1 0.000 0.000 0.001 0.001 netCDF4_.py:392(get_variables) 1 0.000 0.000 0.000 0.000 netCDF4_.py:410(<setcomp>) 7 0.000 0.000 0.000 0.000 {method 'set_auto_chartostring' of 'netCDF4._netCDF4.Variable' objects} 1 0.000 0.000 0.000 0.000 {method 'ncattrs' of 'netCDF4._netCDF4.Dataset' objects} 4 0.000 0.000 0.000 0.000 common.py:472(is_timedelta64_dtype) 4 0.000 0.000 0.000 0.000 common.py:980(is_unsigned_integer_dtype) 4 0.000 0.000 0.000 0.000 base.py:3805(_coerce_to_ndarray) 3 0.000 0.000 0.000 0.000 itertoolz.py:241(unique) 11 0.000 0.000 0.000 0.000 core.py:137(<genexpr>) 3 0.000 0.000 0.000 0.000 indexing.py:600(__init__) 2 0.000 0.000 0.000 0.000 {method 'keys' of 'collections.OrderedDict' objects} 2 0.000 0.000 0.000 0.000 {built-in method _thread.allocate_lock} 1 0.000 0.000 0.000 0.000 
{built-in method _collections._count_elements} 8 0.000 0.000 0.000 0.000 {method 'encode' of 'str' objects} 3 0.000 0.000 0.000 0.000 {method 'rfind' of 'str' objects} 8 0.000 0.000 0.000 0.000 {method 'add' of 'set' objects} 3 0.000 0.000 0.000 0.000 {method 'intersection' of 'set' objects} 7 0.000 0.000 0.000 0.000 {method 'setdefault' of 'dict' objects} 13 0.000 0.000 0.000 0.000 {method 'pop' of 'dict' objects} 1 0.000 0.000 0.000 0.000 posixpath.py:64(isabs) 1 0.000 0.000 0.015 0.015 posixpath.py:178(lexists) 1 0.000 0.000 0.000 0.000 posixpath.py:232(expanduser) 2 0.000 0.000 0.000 0.000 _collections_abc.py:672(keys) 7 0.000 0.000 0.000 0.000 contextlib.py:352(__init__) 7 0.000 0.000 0.000 0.000 contextlib.py:355(__enter__) 2 0.000 0.000 0.000 0.000 combine.py:496(vars_as_keys) 2 0.000 0.000 0.000 0.000 combine.py:517(_new_tile_id) 7 0.000 0.000 0.000 0.000 common.py:29(_decode_variable_name) 1 0.000 0.000 0.000 0.000 coordinates.py:160(__init__) 3 0.000 0.000 0.000 0.000 dataset.py:262(__iter__) 2 0.000 0.000 0.000 0.000 dataset.py:266(__len__) 2 0.000 0.000 0.000 0.000 dataset.py:940(__iter__) 1 0.000 0.000 0.000 0.000 dataset.py:1071(coords) 7 0.000 0.000 0.000 0.000 dataset.py:1381(<genexpr>) 4 0.000 0.000 0.000 0.000 variables.py:61(dtype) 1 0.000 0.000 0.000 0.000 file_manager.py:189(__del__) 1 0.000 0.000 0.000 0.000 lru_cache.py:47(_enforce_size_limit) 1 0.000 0.000 0.000 0.000 netCDF4_.py:138(_nc4_require_group) 1 0.000 0.000 0.000 0.000 netCDF4_.py:408(get_encoding) 1 0.000 0.000 0.000 0.000 api.py:66(_get_default_engine_netcdf) 4 0.000 0.000 0.000 0.000 utils.py:197(<genexpr>) 1 0.000 0.000 0.000 0.000 alignment.py:17(_get_joiner) 10 0.000 0.000 0.000 0.000 alignment.py:184(is_alignable) 5 0.000 0.000 0.000 0.000 alignment.py:226(<genexpr>) 5 0.000 0.000 0.000 0.000 utils.py:325(__contains__) 5 0.000 0.000 0.000 0.000 {method 'isunlimited' of 'netCDF4._netCDF4.Dimension' objects} 8 0.000 0.000 0.000 0.000 inference.py:435(is_hashable) 12 0.000 0.000 0.000 0.000 common.py:119(<lambda>) 8 0.000 0.000 0.000 0.000 common.py:127(<lambda>) 8 0.000 0.000 0.000 0.000 common.py:122(classes_and_not_datetimelike) 4 0.000 0.000 0.000 0.000 base.py:675(dtype) 8 0.000 0.000 0.000 0.000 base.py:1395(nlevels) 24 0.000 0.000 0.000 0.000 functoolz.py:15(identity) 1 0.000 0.000 0.000 0.000 base.py:610(normalize_dict) 1 0.000 0.000 0.000 0.000 base.py:625(normalize_seq) 3 0.000 0.000 0.000 0.000 indexing.py:453(__init__) 4 0.000 0.000 0.000 0.000 indexing.py:713(<listcomp>) 3 0.000 0.000 0.000 0.000 variable.py:821(chunks) 4 0.000 0.000 0.000 0.000 variable.py:1731(chunk) 8 0.000 0.000 0.000 0.000 variable.py:1874(name) 3 0.000 0.000 0.000 0.000 {method 'values' of 'collections.OrderedDict' objects} 6 0.000 0.000 0.000 0.000 {built-in method posix.fspath} 1 0.000 0.000 0.000 0.000 {method 'join' of 'str' objects} 4 0.000 0.000 0.000 0.000 {method 'startswith' of 'str' objects} 3 0.000 0.000 0.000 0.000 {method 'copy' of 'set' objects} 1 0.000 0.000 0.000 0.000 {method 'union' of 'set' objects} 1 0.000 0.000 0.000 0.000 {method 'get' of 'dict' objects} 2 0.000 0.000 0.000 0.000 posixpath.py:41(_get_sep) 1 0.000 0.000 0.000 0.000 _collections_abc.py:680(values) 9 0.000 0.000 0.000 0.000 _collections_abc.py:698(__init__) 7 0.000 0.000 0.000 0.000 contextlib.py:358(__exit__) 1 0.000 0.000 0.000 0.000 glob.py:145(has_magic) 1 0.000 0.000 0.000 0.000 combine.py:428(<listcomp>) 2 0.000 0.000 0.000 0.000 merge.py:301(_get_priority_vars) 1 0.000 0.000 0.000 0.000 merge.py:370(extract_indexes) 1 0.000 
0.000 0.000 0.000 merge.py:378(assert_valid_explicit_coords) 5 0.000 0.000 0.000 0.000 dataset.py:259(__init__) 1 0.000 0.000 0.000 0.000 dataset.py:375(<listcomp>) 2 0.000 0.000 0.000 0.000 dataset.py:416(attrs) 5 0.000 0.000 0.000 0.000 dataset.py:428(encoding) 1 0.000 0.000 0.000 0.000 dataset.py:436(encoding) 1 0.000 0.000 0.000 0.000 dataset.py:1373(<listcomp>) 1 0.000 0.000 0.000 0.000 variables.py:76(lazy_elemwise_func) 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} 7 0.000 0.000 0.000 0.000 strings.py:39(__init__) 1 0.000 0.000 0.000 0.000 file_manager.py:241(__init__) 1 0.000 0.000 0.000 0.000 locks.py:206(ensure_lock) 1 0.000 0.000 0.000 0.000 netCDF4_.py:236(__init__) 1 0.000 0.000 0.000 0.000 api.py:638(<listcomp>) 1 0.000 0.000 0.000 0.000 utils.py:452(_tostr) 7 0.000 0.000 0.000 0.000 {method 'set_auto_maskandscale' of 'netCDF4._netCDF4.Variable' objects} 1 0.000 0.000 0.000 0.000 utils.py:514(is_grib_path) 3 0.000 0.000 0.000 0.000 core.py:989(name) 8 0.000 0.000 0.000 0.000 variable.py:1834(to_index_variable) 1 0.000 0.000 0.000 0.000 {method 'rstrip' of 'str' objects} 1 0.000 0.000 0.000 0.000 {method 'endswith' of 'str' objects} 1 0.000 0.000 0.000 0.000 {method 'keys' of 'dict' objects} 1 0.000 0.000 0.000 0.000 glob.py:22(iglob) 2 0.000 0.000 0.000 0.000 variable.py:2007(<listcomp>) 1 0.000 0.000 0.000 0.000 combine.py:345(_auto_concat) 1 0.000 0.000 0.000 0.000 combine.py:435(<listcomp>) 1 0.000 0.000 0.000 0.000 merge.py:519(<listcomp>) 2 0.000 0.000 0.000 0.000 dataset.py:934(__len__) 2 0.000 0.000 0.000 0.000 variables.py:106(safe_setitem) 1 0.000 0.000 0.000 0.000 api.py:479(__init__) 1 0.000 0.000 0.000 0.000 utils.py:20(_check_inplace) 7 0.000 0.000 0.000 0.000 {method 'chunking' of 'netCDF4._netCDF4.Variable' objects} 4 0.000 0.000 0.000 0.000 utils.py:498(close_on_error) 1 0.000 0.000 0.000 0.000 numeric.py:101(_assert_safe_casting) 3 0.000 0.000 0.000 0.000 core.py:167(<listcomp>)

Output of ds:

```
<xarray.Dataset>
Dimensions:   (bnds: 2, lat: 360, level: 23, lon: 576, time: 1827)
Coordinates:
  * lat       (lat) float64 -89.75 -89.25 -88.75 -88.25 ... 88.75 89.25 89.75
  * level     (level) float32 1000.0 925.0 850.0 775.0 700.0 ... 5.0 3.0 2.0 1.0
  * lon       (lon) float64 0.3125 0.9375 1.562 2.188 ... 358.4 359.1 359.7
  * time      (time) float64 7.671e+03 7.672e+03 ... 9.496e+03 9.497e+03
Dimensions without coordinates: bnds
Data variables:
    lat_bnds  (lat, bnds) float64 dask.array<shape=(360, 2), chunksize=(360, 2)>
    lon_bnds  (lon, bnds) float64 dask.array<shape=(576, 2), chunksize=(576, 2)>
    sphum     (time, level, lat, lon) float32 dask.array<shape=(1827, 23, 360, 576), chunksize=(1827, 23, 360, 576)>
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
464113917 https://github.com/pydata/xarray/issues/1385#issuecomment-464113917 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDQ2NDExMzkxNw== chuaxr 30007270 2019-02-15T16:34:02Z 2019-02-15T16:34:35Z NONE

On a related note, is it possible to clear out the memory used by the xarray dataset after it is no longer needed?

Here's an example:

```python
fname = '/work/xrc/AM4_xrc/c192L33_am4p0_cmip6Diag/daily/5yr/atmos.19800101-19841231.ucomp.nc'

import xarray as xr

with xr.set_options(file_cache_maxsize=1):
    %time ds = xr.open_mfdataset(fname)
```

CPU times: user 48 ms, sys: 124 ms, total: 172 ms
Wall time: 29.7 s

```python
fname2 = '/work/xrc/AM4_xrc/c192L33_am4p0_cmip6Diag/daily/5yr/atmos.20100101-20141231.ucomp.nc'

with xr.set_options(file_cache_maxsize=1):
    %time ds = xr.open_mfdataset(fname2)  # would like this to free up memory used by fname
```

CPU times: user 39 ms, sys: 124 ms, total: 163 ms
Wall time: 28.8 s

```python
import gc
gc.collect()

with xr.set_options(file_cache_maxsize=1):  # expected to take same time as first call
    %time ds = xr.open_mfdataset(fname)
```

CPU times: user 28 ms, sys: 10 ms, total: 38 ms
Wall time: 37.9 ms
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
463369751 https://github.com/pydata/xarray/issues/1385#issuecomment-463369751 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDQ2MzM2OTc1MQ== rabernat 1197350 2019-02-13T21:04:03Z 2019-02-13T21:04:03Z MEMBER

What if you do xr.open_mfdataset(fname, decode_times=False)?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
463367754 https://github.com/pydata/xarray/issues/1385#issuecomment-463367754 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDQ2MzM2Nzc1NA== chuaxr 30007270 2019-02-13T20:58:52Z 2019-02-13T20:59:06Z NONE

It seems my issue has to do with the time coordinate:
```
fname = '/work/xrc/AM4_xrc/c192L33_am4p0_cmip6Diag/daily/5yr/atmos.20100101-20141231.sphum.nc'
%prun ds = xr.open_mfdataset(fname, drop_variables='time')

         7510 function calls (7366 primitive calls) in 0.068 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.039    0.039    0.039    0.039 netCDF4_.py:244(_open_netcdf4_group)
        3    0.022    0.007    0.022    0.007 {built-in method _operator.getitem}
        1    0.001    0.001    0.001    0.001 {built-in method posix.lstat}
  125/113    0.000    0.000    0.001    0.000 indexing.py:504(shape)
       11    0.000    0.000    0.000    0.000 core.py:137(<genexpr>)

fname = '/work/xrc/AM4_xrc/c192L33_am4p0_cmip6Diag/daily/5yr/atmos.20000101-20041231.sphum.nc'
%prun ds = xr.open_mfdataset(fname)

        13143 function calls (12936 primitive calls) in 23.853 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        6   23.791    3.965   23.791    3.965 {built-in method _operator.getitem}
        1    0.029    0.029    0.029    0.029 netCDF4_.py:244(_open_netcdf4_group)
        2    0.023    0.012    0.023    0.012 {cftime._cftime.num2date}
        1    0.001    0.001    0.001    0.001 {built-in method posix.lstat}
  158/139    0.000    0.000    0.001    0.000 indexing.py:504(shape)
```

Both files are 33 GB. This is using xarray 0.11.3.

I also confirm that nc.MFDataset is much faster (<1s).

Is there any speed-up for the time coordinates possible, given that my data follows a standard calendar? (Short of using drop_variables='time' and then manually adding the time coordinate...)
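One way to sidestep the per-timestep decoding without dropping time entirely, sketched under the assumption of a "days since &lt;origin&gt;" encoding on a standard calendar: open with decode_times=False, then build the datetime coordinate once with pandas.

```python
import pandas as pd
import xarray as xr

ds = xr.open_mfdataset(fname, decode_times=False)

units = ds["time"].attrs["units"]             # e.g. "days since 1850-01-01 00:00:00"
origin = units.split("since", 1)[1].strip()
ds = ds.assign_coords(
    time=pd.Timestamp(origin) + pd.to_timedelta(ds["time"].values, unit="D")
)
```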

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
461561653 https://github.com/pydata/xarray/issues/1385#issuecomment-461561653 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDQ2MTU2MTY1Mw== sbiner 16655388 2019-02-07T19:22:58Z 2019-02-07T19:22:58Z NONE

I just tried and it did not help ...

```
In [5]: run test_ouverture_fichier_nc_vs_xr.py
timing glob:        0.00s
timing netcdf4:     3.36s
timing xarray:     44.82s
timing xarray tune: 14.47s

In [6]: xr.show_versions()

INSTALLED VERSIONS

commit: None
python: 2.7.15 |Anaconda, Inc.| (default, Dec 14 2018, 19:04:19) [GCC 7.3.0]
python-bits: 64
OS: Linux
OS-release: 3.10.0-514.2.2.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_CA.UTF-8
LOCALE: None.None
libhdf5: 1.10.4
libnetcdf: 4.6.1

xarray: 0.11.3
pandas: 0.24.0
numpy: 1.13.3
scipy: 1.2.0
netCDF4: 1.4.2
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.0.3.4
PseudonetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.2.1
cyordereddict: None
dask: 1.0.0
distributed: 1.25.2
matplotlib: 2.2.3
cartopy: None
seaborn: None
setuptools: 40.5.0
pip: 19.0.1
conda: None
pytest: None
IPython: 5.8.0
sphinx: 1.8.2
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
461554066 https://github.com/pydata/xarray/issues/1385#issuecomment-461554066 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDQ2MTU1NDA2Ng== TomNicholas 35968931 2019-02-07T19:00:57Z 2019-02-07T19:00:57Z MEMBER

Looks like you're using xarray v0.11.0, but the most recent one is v0.11.3. There have been several changes since then which might affect this, try that first.

On Thu, 7 Feb 2019, 18:53 sbiner, notifications@github.com wrote:

I have the same problem. open_mfdatasset is 10X slower than nc.MFDataset. I used the following code to get some timing on opening 456 local netcdf files located in a nc_local directory (of total size of 532MB)

clef = 'nc_local/*.nc' t00 = time.time() l_fichiers_nc = sorted(glob.glob(clef)) print ('timing glob: {:6.2f}s'.format(time.time()-t00))

netcdf4

t00 = time.time() ds1 = nc.MFDataset(l_fichiers_nc)

dates1 = ouralib.netcdf.calcule_dates(ds1)

print ('timing netcdf4: {:6.2f}s'.format(time.time()-t00))

xarray

t00 = time.time() ds2 = xr.open_mfdataset(l_fichiers_nc) print ('timing xarray: {:6.2f}s'.format(time.time()-t00))

xarray tune

t00 = time.time() ds3 = xr.open_mfdataset(l_fichiers_nc, decode_cf=False, concat_dim='time') ds3 = xr.decode_cf(ds3) print ('timing xarray tune: {:6.2f}s'.format(time.time()-t00))

The output I get is :

timing glob: 0.00s timing netcdf4: 3.80s timing xarray: 44.60s timing xarray tune: 15.61s

I made tests on a centOS server using python2.7 and 3.6, and on mac OS as well with python3.6. The timing changes but the ratios are similar between netCDF4 and xarray.

Is there any way of making open_mfdataset go faster?

In case it helps, here are output from xr.show_versions and %prun xr.open_mfdataset(l_fichiers_nc). I do not know anything about the output of %prun but I have noticed that the first two lines of the ouput are different wether I'm using python 2.7 or python 3.6. I made those tests on centOS and macOS with anaconda environments.

for python 2.7:

    13996351 function calls (13773659 primitive calls) in 42.133 seconds

Ordered by: internal time

ncalls tottime percall cumtime percall filename:lineno(function) 2664 16.290 0.006 16.290 0.006 {time.sleep} 912 6.330 0.007 6.623 0.007 netCDF4_.py:244(_open_netcdf4_group)

for python 3.6:

   9663408 function calls (9499759 primitive calls) in 31.934 seconds

Ordered by: internal time

ncalls tottime percall cumtime percall filename:lineno(function) 5472 15.140 0.003 15.140 0.003 {method 'acquire' of 'thread.lock' objects} 912 5.661 0.006 5.718 0.006 netCDF4.py:244(_open_netcdf4_group)

longer output of %prun with python3.6:

     9663408 function calls (9499759 primitive calls) in 31.934 seconds

Ordered by: internal time

ncalls tottime percall cumtime percall filename:lineno(function) 5472 15.140 0.003 15.140 0.003 {method 'acquire' of 'thread.lock' objects} 912 5.661 0.006 5.718 0.006 netCDF4.py:244(open_netcdf4_group) 4104 0.564 0.000 0.757 0.000 {built-in method _operator.getitem} 133152/129960 0.477 0.000 0.660 0.000 indexing.py:496(shape) 1554550/1554153 0.414 0.000 0.711 0.000 {built-in method builtins.isinstance} 912 0.260 0.000 0.260 0.000 {method 'close' of 'netCDF4._netCDF4.Dataset' objects} 6384 0.244 0.000 0.953 0.000 netCDF4.py:361(open_store_variable) 910 0.241 0.000 0.595 0.001 duck_array_ops.py:141(array_equiv) 20990 0.235 0.000 0.343 0.000 {pandas.libs.lib.is_scalar} 37483/36567 0.228 0.000 0.230 0.000 {built-in method builtins.iter} 93986 0.219 0.000 1.607 0.000 variable.py:239(__init__) 93982 0.194 0.000 0.194 0.000 variable.py:706(attrs) 33744 0.189 0.000 0.189 0.000 {method 'getncattr' of 'netCDF4._netCDF4.Variable' objects} 15511 0.175 0.000 0.638 0.000 core.py:1776(normalize_chunks) 5930 0.162 0.000 0.350 0.000 missing.py:183(_isna_ndarraylike) 297391/296926 0.159 0.000 0.380 0.000 {built-in method builtins.getattr} 134230 0.155 0.000 0.269 0.000 abc.py:180(__instancecheck__) 6384 0.142 0.000 0.199 0.000 netCDF4.py:34(init) 93986 0.126 0.000 0.671 0.000 variable.py:414(_parse_dimensions) 156545 0.119 0.000 0.811 0.000 utils.py:450(ndim) 12768 0.119 0.000 0.203 0.000 core.py:747(blockdims_from_blockshape) 6384 0.117 0.000 2.526 0.000 conventions.py:245(decode_cf_variable) 741183/696380 0.116 0.000 0.134 0.000 {built-in method builtins.len} 41957/23717 0.110 0.000 4.395 0.000 {built-in method numpy.core.multiarray.array} 93978 0.110 0.000 0.110 0.000 variable.py:718(encoding) 219940 0.109 0.000 0.109 0.000 _weakrefset.py:70(contains) 99458 0.100 0.000 0.440 0.000 variable.py:137(as_compatible_data) 53882 0.085 0.000 0.095 0.000 core.py:891(shape) 140604 0.084 0.000 0.628 0.000 variable.py:272(shape) 3192 0.084 0.000 0.170 0.000 utils.py:88(_StartCountStride) 10494 0.081 0.000 0.081 0.000 {method 'reduce' of 'numpy.ufunc' objects} 44688 0.077 0.000 0.157 0.000 variables.py:102(unpack_for_decoding)

output of xr.show_versions()

xr.show_versions()

INSTALLED VERSIONS

commit: None
python: 3.6.8.final.0
python-bits: 64
OS: Linux
OS-release: 3.10.0-514.2.2.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_CA.UTF-8
LOCALE: en_CA.UTF-8

xarray: 0.11.0
pandas: 0.24.1
numpy: 1.15.4
scipy: None
netCDF4: 1.4.2
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.0.3.4
PseudonetCDF: None
rasterio: None
iris: None
bottleneck: None
cyordereddict: None
dask: 1.1.1
distributed: 1.25.3
matplotlib: 3.0.2
cartopy: None
seaborn: None
setuptools: 40.7.3
pip: 19.0.1
conda: None
pytest: None
IPython: 7.2.0
sphinx: None


{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
461551320 https://github.com/pydata/xarray/issues/1385#issuecomment-461551320 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDQ2MTU1MTMyMA== sbiner 16655388 2019-02-07T18:52:53Z 2019-02-07T18:52:53Z NONE

I have the same problem. open_mfdataset is 10X slower than nc.MFDataset. I used the following code to get some timing on opening 456 local netcdf files located in a nc_local directory (of total size of 532MB):

```
clef = 'nc_local/*.nc'
t00 = time.time()
l_fichiers_nc = sorted(glob.glob(clef))
print ('timing glob: {:6.2f}s'.format(time.time()-t00))

# netcdf4
t00 = time.time()
ds1 = nc.MFDataset(l_fichiers_nc)
dates1 = ouralib.netcdf.calcule_dates(ds1)
print ('timing netcdf4: {:6.2f}s'.format(time.time()-t00))

# xarray
t00 = time.time()
ds2 = xr.open_mfdataset(l_fichiers_nc)
print ('timing xarray: {:6.2f}s'.format(time.time()-t00))

# xarray tune
t00 = time.time()
ds3 = xr.open_mfdataset(l_fichiers_nc, decode_cf=False, concat_dim='time')
ds3 = xr.decode_cf(ds3)
print ('timing xarray tune: {:6.2f}s'.format(time.time()-t00))
```

The output I get is:

timing glob:   0.00s
timing netcdf4:   3.80s
timing xarray:  44.60s
timing xarray tune:  15.61s

I made tests on a CentOS server using Python 2.7 and 3.6, and on macOS as well with Python 3.6. The timing changes, but the ratios between netCDF4 and xarray are similar.

Is there any way of making open_mfdataset go faster?
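
For completeness, a minimal sketch of the knobs that can reduce this overhead in newer xarray releases; the combine/compat/parallel keywords are assumptions to check against your installed version (they did not exist in 0.11):

```python
import glob

import xarray as xr

files = sorted(glob.glob('nc_local/*.nc'))
ds = xr.open_mfdataset(
    files,
    concat_dim='time',
    combine='nested',      # concatenate in the given file order, skip coordinate comparison
    data_vars='minimal',   # only concatenate variables that actually contain 'time'
    coords='minimal',
    compat='override',     # take non-concatenated variables from the first file
    parallel=True,         # open/preprocess the files in parallel via dask
)
```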

In case it helps, here is the output from xr.show_versions and %prun xr.open_mfdataset(l_fichiers_nc). I do not know anything about the output of %prun, but I have noticed that the first two lines of the output differ between Python 2.7 and Python 3.6. I made those tests on CentOS and macOS with anaconda environments.

for python 2.7:

```
 13996351 function calls (13773659 primitive calls) in 42.133 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     2664   16.290    0.006   16.290    0.006 {time.sleep}
      912    6.330    0.007    6.623    0.007 netCDF4_.py:244(_open_netcdf4_group)
```

for python 3.6:

```
  9663408 function calls (9499759 primitive calls) in 31.934 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     5472   15.140    0.003   15.140    0.003 {method 'acquire' of 'thread.lock' objects}
      912    5.661    0.006    5.718    0.006 netCDF4_.py:244(_open_netcdf4_group)
```

longer output of %prun with python3.6:

```
  9663408 function calls (9499759 primitive calls) in 31.934 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     5472   15.140    0.003   15.140    0.003 {method 'acquire' of 'thread.lock' objects}
      912    5.661    0.006    5.718    0.006 netCDF4_.py:244(_open_netcdf4_group)
     4104    0.564    0.000    0.757    0.000 {built-in method _operator.getitem}
133152/129960    0.477    0.000    0.660    0.000 indexing.py:496(shape)
1554550/1554153    0.414    0.000    0.711    0.000 {built-in method builtins.isinstance}
      912    0.260    0.000    0.260    0.000 {method 'close' of 'netCDF4._netCDF4.Dataset' objects}
     6384    0.244    0.000    0.953    0.000 netCDF4_.py:361(open_store_variable)
      910    0.241    0.000    0.595    0.001 duck_array_ops.py:141(array_equiv)
    20990    0.235    0.000    0.343    0.000 {pandas._libs.lib.is_scalar}
37483/36567    0.228    0.000    0.230    0.000 {built-in method builtins.iter}
    93986    0.219    0.000    1.607    0.000 variable.py:239(__init__)
    93982    0.194    0.000    0.194    0.000 variable.py:706(attrs)
    33744    0.189    0.000    0.189    0.000 {method 'getncattr' of 'netCDF4._netCDF4.Variable' objects}
    15511    0.175    0.000    0.638    0.000 core.py:1776(normalize_chunks)
     5930    0.162    0.000    0.350    0.000 missing.py:183(_isna_ndarraylike)
297391/296926    0.159    0.000    0.380    0.000 {built-in method builtins.getattr}
   134230    0.155    0.000    0.269    0.000 abc.py:180(__instancecheck__)
     6384    0.142    0.000    0.199    0.000 netCDF4_.py:34(__init__)
    93986    0.126    0.000    0.671    0.000 variable.py:414(_parse_dimensions)
   156545    0.119    0.000    0.811    0.000 utils.py:450(ndim)
    12768    0.119    0.000    0.203    0.000 core.py:747(blockdims_from_blockshape)
     6384    0.117    0.000    2.526    0.000 conventions.py:245(decode_cf_variable)
741183/696380    0.116    0.000    0.134    0.000 {built-in method builtins.len}
41957/23717    0.110    0.000    4.395    0.000 {built-in method numpy.core.multiarray.array}
    93978    0.110    0.000    0.110    0.000 variable.py:718(encoding)
   219940    0.109    0.000    0.109    0.000 _weakrefset.py:70(__contains__)
    99458    0.100    0.000    0.440    0.000 variable.py:137(as_compatible_data)
    53882    0.085    0.000    0.095    0.000 core.py:891(shape)
   140604    0.084    0.000    0.628    0.000 variable.py:272(shape)
     3192    0.084    0.000    0.170    0.000 utils.py:88(_StartCountStride)
    10494    0.081    0.000    0.081    0.000 {method 'reduce' of 'numpy.ufunc' objects}
    44688    0.077    0.000    0.157    0.000 variables.py:102(unpack_for_decoding)
```

output of xr.show_versions()

```
INSTALLED VERSIONS

commit: None
python: 3.6.8.final.0
python-bits: 64
OS: Linux
OS-release: 3.10.0-514.2.2.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_CA.UTF-8
LOCALE: en_CA.UTF-8

xarray: 0.11.0
pandas: 0.24.1
numpy: 1.15.4
scipy: None
netCDF4: 1.4.2
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.0.3.4
PseudonetCDF: None
rasterio: None
iris: None
bottleneck: None
cyordereddict: None
dask: 1.1.1
distributed: 1.25.3
matplotlib: 3.0.2
cartopy: None
seaborn: None
setuptools: 40.7.3
pip: 19.0.1
conda: None
pytest: None
IPython: 7.2.0
sphinx: None
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
439478904 https://github.com/pydata/xarray/issues/1385#issuecomment-439478904 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDQzOTQ3ODkwNA== chuaxr 30007270 2018-11-16T18:10:53Z 2018-11-16T18:10:53Z NONE

h5netcdf fails with the following error (presumably the file is not compatible):

```
/nbhome/xrc/anaconda2/envs/py361/lib/python3.6/site-packages/h5py/_hl/files.py in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
     97         if swmr and swmr_support:
     98             flags |= h5f.ACC_SWMR_READ
---> 99         fid = h5f.open(name, flags, fapl=fapl)
    100     elif mode == 'r+':
    101         fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

h5py/h5f.pyx in h5py.h5f.open()

OSError: Unable to open file (file signature not found)
```

Using scipy:

```
   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    65/42   80.448    1.238   80.489    1.916 {built-in method numpy.core.multiarray.array}
   764838    0.548    0.000    0.548    0.000 core.py:169(<genexpr>)
        3    0.169    0.056    0.717    0.239 core.py:169(<listcomp>)
        2    0.041    0.021    0.041    0.021 {cftime._cftime.num2date}
        3    0.038    0.013    0.775    0.258 core.py:173(getem)
        1    0.024    0.024   81.313   81.313 <string>:1(<module>)
```
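
As a hedged sketch of picking the backend explicitly (the glob pattern below is an assumption based on the file paths quoted later in this thread): h5netcdf only reads netCDF4/HDF5 files and scipy only reads classic netCDF3, so the "file signature not found" error above simply means this file is not HDF5-based.

```python
import xarray as xr

ds = xr.open_mfdataset(
    '/work/xrc/AM4_skc/atmos_level.*.nc',   # assumed pattern, see the paths below
    engine='netcdf4',                       # or 'h5netcdf' / 'scipy' where the format allows
    chunks={'lat': 20, 'time': 50, 'lon': 12, 'pfull': 11},
)
```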

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
439454213 https://github.com/pydata/xarray/issues/1385#issuecomment-439454213 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDQzOTQ1NDIxMw== shoyer 1217238 2018-11-16T16:46:55Z 2018-11-16T16:46:55Z MEMBER

Does it take 10 seconds even to open a single file? The big mystery is what that top line ("_operator.getitem") is, but my guess is that it's netCDF4-python. h5netcdf might also give different results... On Fri, Nov 16, 2018 at 8:20 AM chuaxr notifications@github.com wrote:

Sorry, I think the speedup had to do with accessing a file that had previously been loaded rather than due to decode_cf. Here's the output of prun using two different files of approximately the same size (~75 GB), run from a notebook without using distributed (which doesn't lead to any speedup):

Output of %prun ds = xr.open_mfdataset('/work/xrc/AM4_skc/ atmos_level.1999010100-2000123123.sphum.nc ',chunks={'lat':20,'time':50,'lon':12,'pfull':11})

      780980 function calls (780741 primitive calls) in 55.374 seconds

Ordered by: internal time

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     7   54.448    7.778   54.448    7.778 {built-in method _operator.getitem}
764838    0.473    0.000    0.473    0.000 core.py:169(<genexpr>)
     3    0.285    0.095    0.758    0.253 core.py:169(<listcomp>)
     2    0.041    0.020    0.041    0.020 {cftime._cftime.num2date}
     3    0.040    0.013    0.821    0.274 core.py:173(getem)
     1    0.027    0.027   55.374   55.374 <string>:1(<module>)

Output of %prun ds = xr.open_mfdataset('/work/xrc/AM4_skc/ atmos_level.2001010100-2002123123.temp.nc ',chunks={'lat':20,'time':50,'lon':12,'pfull':11}, decode_cf=False)

      772212 function calls (772026 primitive calls) in 56.000 seconds

Ordered by: internal time

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     5   55.213   11.043   55.214   11.043 {built-in method _operator.getitem}
764838    0.486    0.000    0.486    0.000 core.py:169(<genexpr>)
     3    0.185    0.062    0.671    0.224 core.py:169(<listcomp>)
     3    0.041    0.014    0.735    0.245 core.py:173(getem)
     1    0.027    0.027   56.001   56.001 <string>:1(<module>)

/work isn't a remote archive, so it surprises me that this should happen.


{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
439445695 https://github.com/pydata/xarray/issues/1385#issuecomment-439445695 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDQzOTQ0NTY5NQ== chuaxr 30007270 2018-11-16T16:20:25Z 2018-11-16T16:20:25Z NONE

Sorry, I think the speedup had to do with accessing a file that had previously been loaded rather than due to decode_cf. Here's the output of prun using two different files of approximately the same size (~75 GB), run from a notebook without using distributed (which doesn't lead to any speedup):

Output of %prun ds = xr.open_mfdataset('/work/xrc/AM4_skc/atmos_level.1999010100-2000123123.sphum.nc',chunks={'lat':20,'time':50,'lon':12,'pfull':11})

```

      780980 function calls (780741 primitive calls) in 55.374 seconds

Ordered by: internal time

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     7   54.448    7.778   54.448    7.778 {built-in method _operator.getitem}
764838    0.473    0.000    0.473    0.000 core.py:169(<genexpr>)
     3    0.285    0.095    0.758    0.253 core.py:169(<listcomp>)
     2    0.041    0.020    0.041    0.020 {cftime._cftime.num2date}
     3    0.040    0.013    0.821    0.274 core.py:173(getem)
     1    0.027    0.027   55.374   55.374 <string>:1(<module>)

Output of %prun ds = xr.open_mfdataset('/work/xrc/AM4_skc/atmos_level.2001010100-2002123123.temp.nc',chunks={'lat':20,'time':50,'lon':12,'pfull':11}, decode_cf=False)

      772212 function calls (772026 primitive calls) in 56.000 seconds

Ordered by: internal time

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     5   55.213   11.043   55.214   11.043 {built-in method _operator.getitem}
764838    0.486    0.000    0.486    0.000 core.py:169(<genexpr>)
     3    0.185    0.062    0.671    0.224 core.py:169(<listcomp>)
     3    0.041    0.014    0.735    0.245 core.py:173(getem)
     1    0.027    0.027   56.001   56.001 <string>:1(<module>)

```

/work isn't a remote archive, so it surprises me that this should happen.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
439263419 https://github.com/pydata/xarray/issues/1385#issuecomment-439263419 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDQzOTI2MzQxOQ== shoyer 1217238 2018-11-16T02:45:05Z 2018-11-16T02:45:05Z MEMBER

@chuaxr What do you see when you use %prun when opening the dataset? This might point to the bottleneck.
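
For reference, the same profile can be collected outside IPython with the standard library; a minimal sketch with a placeholder glob pattern:

```python
import cProfile
import pstats

import xarray as xr  # used inside the profiled statement

# Equivalent of %prun: profile the open call and print the hottest functions.
cProfile.run("xr.open_mfdataset('/path/to/files/*.nc', chunks={'time': 50})",
             filename='open_mfdataset.prof')
pstats.Stats('open_mfdataset.prof').sort_stats('tottime').print_stats(10)
```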

One way to fix this would be to move our call to decode_cf() in open_dataset() to after applying chunking, i.e., to switch up the order of operations on these lines: https://github.com/pydata/xarray/blob/f547ed0b379ef70a3bda5e77f66de95ec2332ddf/xarray/backends/api.py#L270-L296

In practice, this is the difference between using xarray's internal lazy array classes for decoding and using dask for decoding. I would expect to see small differences in performance between these approaches (especially when actually computing data), but for constructing the computation graph I would expect them to have similar performance. It is puzzling that dask is orders of magnitude faster -- that suggests that something else is going wrong in the normal code path for decode_cf(). It would certainly be good to understand this before trying to apply any fixes.
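
Concretely, the reordering being discussed corresponds to the workaround already used elsewhere in this thread; a minimal sketch with a placeholder file list:

```python
import glob

import xarray as xr

data_fnames = sorted(glob.glob('/path/to/files/*.nc'))  # placeholder list of netCDF files

# Open lazily without CF decoding, then decode once the variables are dask-backed.
ds = xr.open_mfdataset(data_fnames,
                       chunks={'lat': 20, 'time': 50, 'lon': 24, 'pfull': 11},
                       decode_cf=False)
ds = xr.decode_cf(ds)
```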

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
439042364 https://github.com/pydata/xarray/issues/1385#issuecomment-439042364 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDQzOTA0MjM2NA== chuaxr 30007270 2018-11-15T13:37:16Z 2018-11-15T14:06:04Z NONE

Yes, I'm on 0.11.

Nothing displays on the task stream/progress bar when using open_mfdataset, although I can monitor progress when, say, computing the mean.

The output from %time using decode_cf = False is

CPU times: user 4.42 s, sys: 392 ms, total: 4.82 s
Wall time: 4.74 s

and for decode_cf = True:

CPU times: user 11.6 s, sys: 1.61 s, total: 13.2 s
Wall time: 3min 28s

Using xr.set_options(file_cache_maxsize=1) doesn't make any noticeable difference.

If I repeat the open_mfdataset for another 5 files (after opening the first 5), I occasionally get this warning: distributed.utils_perf - WARNING - full garbage collections took 24% CPU time recently (threshold: 10%)

I only began using the dashboard recently; please let me know if there's something basic I'm missing.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
438873285 https://github.com/pydata/xarray/issues/1385#issuecomment-438873285 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDQzODg3MzI4NQ== shoyer 1217238 2018-11-15T00:45:53Z 2018-11-15T00:45:53Z MEMBER

@chuaxr I assume you're testing this with xarray 0.11?

It would be good to do some profiling to figure out what is going wrong here.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
438870575 https://github.com/pydata/xarray/issues/1385#issuecomment-438870575 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDQzODg3MDU3NQ== chuaxr 30007270 2018-11-15T00:32:42Z 2018-11-15T00:32:42Z NONE

I can confirm that

```
ds = xr.open_mfdataset(data_fnames, chunks={'lat':20,'time':50,'lon':24,'pfull':11},
                       decode_cf=False)
ds = xr.decode_cf(ds)
```

is much faster (seconds vs minutes) than

```
ds = xr.open_mfdataset(data_fnames, chunks={'lat':20,'time':50,'lon':24,'pfull':11})
```

For reference, data_fnames is a list of 5 files, each of which is ~75 GB.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
437630511 https://github.com/pydata/xarray/issues/1385#issuecomment-437630511 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDQzNzYzMDUxMQ== shoyer 1217238 2018-11-10T23:38:10Z 2018-11-10T23:38:10Z MEMBER

Was this fixed by https://github.com/pydata/xarray/pull/2047?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
371933603 https://github.com/pydata/xarray/issues/1385#issuecomment-371933603 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDM3MTkzMzYwMw== shoyer 1217238 2018-03-09T20:17:19Z 2018-03-09T20:17:19Z MEMBER

OK, so it seems that we need a change to disable wrapping dask arrays with LazilyIndexedArray. Dask arrays are already lazy!
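
A toy illustration of that idea (not xarray's actual code path, just the shape of the check being proposed):

```python
import dask.array as da

def wrap_if_needed(data, make_lazy):
    """Only wrap plain in-memory arrays; dask arrays are already lazy."""
    if isinstance(data, da.Array):
        return data
    return make_lazy(data)
```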

{
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
371891466 https://github.com/pydata/xarray/issues/1385#issuecomment-371891466 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDM3MTg5MTQ2Ng== rabernat 1197350 2018-03-09T17:53:15Z 2018-03-09T17:53:15Z MEMBER

Calling ds = xr.decode_cf(ds, decode_times=False) on the dataset returns instantly. However, the variable data is wrapped in the adaptors, effectively destroying the chunks:

```python
ds.SST.variable._data
LazilyIndexedArray(array=DaskIndexingAdapter(array=dask.array<_apply_mask, shape=(16401, 2400, 3600), dtype=float32, chunksize=(1, 2400, 3600)>), key=BasicIndexer((slice(None, None, None), slice(None, None, None), slice(None, None, None))))
```

Calling getitem on this array triggers the whole dask array to be computed, which would take forever and would completely blow out the notebook memory. This is because of #1372, which would be fixed by #1725.

This has actually become a major showstopper for me. I need to work with this dataset in decoded form.

Versions

INSTALLED VERSIONS
------------------
commit: None
python: 3.6.4.final.0
python-bits: 64
OS: Linux
OS-release: 3.12.62-60.64.8-default
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
xarray: 0.10.1
pandas: 0.22.0
numpy: 1.13.3
scipy: 1.0.0
netCDF4: 1.3.1
h5netcdf: 0.5.0
h5py: 2.7.1
Nio: None
zarr: 2.2.0a2.dev176
bottleneck: 1.2.1
cyordereddict: None
dask: 0.17.1
distributed: 1.21.3
matplotlib: 2.1.2
cartopy: 0.15.1
seaborn: 0.8.1
setuptools: 38.4.0
pip: 9.0.1
conda: None
pytest: 3.3.2
IPython: 6.2.1
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
370092011 https://github.com/pydata/xarray/issues/1385#issuecomment-370092011 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDM3MDA5MjAxMQ== shoyer 1217238 2018-03-02T23:58:26Z 2018-03-02T23:58:26Z MEMBER

@rabernat How does performance compare if you call xarray.decode_cf() on the opened dataset? The adjustments I recently did to lazy decoding should only help once the data is already loaded into dask.
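
A minimal timing sketch of that comparison; glob_pattern is a placeholder standing in for the POP file pattern defined in the comment below:

```python
import time

import xarray as xr

glob_pattern = '/path/to/pop/files/*.nc'  # placeholder

t0 = time.time()
ds = xr.open_mfdataset(glob_pattern, decode_cf=False, chunks={'time': 1})
print('open_mfdataset with decode_cf=False: %.1f s' % (time.time() - t0))

t0 = time.time()
ds = xr.decode_cf(ds)
print('decode_cf on the opened dataset:     %.1f s' % (time.time() - t0))
```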

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
370064483 https://github.com/pydata/xarray/issues/1385#issuecomment-370064483 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDM3MDA2NDQ4Mw== rabernat 1197350 2018-03-02T21:57:26Z 2018-03-02T21:57:26Z MEMBER

An update on this long-standing issue.

I have learned that open_mfdataset can be blazingly fast if decode_cf=False but extremely slow with decode_cf=True.

As an example, I am loading a POP dataset on cheyenne. Anyone with access can try this example.

```python
base_dir = '/glade/scratch/rpa/'
prefix = 'BRCP85C5CN_ne120_t12_pop62.c13b17.asdphys.001'
code = 'pop.h.nday1.SST'
glob_pattern = os.path.join(base_dir, prefix, '%s.%s.*.nc' % (prefix, code))

def non_time_coords(ds):
    return [v for v in ds.data_vars if 'time' not in ds[v].dims]

def drop_non_essential_vars_pop(ds):
    return ds.drop(non_time_coords(ds))

# this runs almost instantly
ds = xr.open_mfdataset(glob_pattern, decode_times=False, chunks={'time': 1},
                       preprocess=drop_non_essential_vars_pop, decode_cf=False)
```

And returns this:

```
<xarray.Dataset>
Dimensions:     (d2: 2, nlat: 2400, nlon: 3600, time: 16401, z_t: 62, z_t_150m: 15, z_w: 62, z_w_bot: 62, z_w_top: 62)
Coordinates:
  * z_w_top     (z_w_top) float32 0.0 1000.0 2000.0 3000.0 4000.0 5000.0 ...
  * z_t         (z_t) float32 500.0 1500.0 2500.0 3500.0 4500.0 5500.0 ...
  * z_w         (z_w) float32 0.0 1000.0 2000.0 3000.0 4000.0 5000.0 6000.0 ...
  * z_t_150m    (z_t_150m) float32 500.0 1500.0 2500.0 3500.0 4500.0 5500.0 ...
  * z_w_bot     (z_w_bot) float32 1000.0 2000.0 3000.0 4000.0 5000.0 6000.0 ...
  * time        (time) float64 7.322e+05 7.322e+05 7.322e+05 7.322e+05 ...
Dimensions without coordinates: d2, nlat, nlon
Data variables:
    time_bound  (time, d2) float64 dask.array<shape=(16401, 2), chunksize=(1, 2)>
    SST         (time, nlat, nlon) float32 dask.array<shape=(16401, 2400, 3600), chunksize=(1, 2400, 3600)>
Attributes:
    nsteps_total:  480
    tavg_sum:      64800.0
    title:         BRCP85C5CN_ne120_t12_pop62.c13b17.asdphys.001
    start_time:    This dataset was created on 2016-03-14 at 05:32:30.3
    Conventions:   CF-1.0; http://www.cgd.ucar.edu/cms/eaton/netcdf/CF-curren...
    source:        CCSM POP2, the CCSM Ocean Component
    cell_methods:  cell_methods = time: mean ==> the variable values are aver...
    calendar:      All years have exactly 365 days.
    history:       none
    contents:      Diagnostic and Prognostic Variables
    revision:      $Id: tavg.F90 56176 2013-12-20 18:35:46Z mlevy@ucar.edu $
```

This is roughly 45 years of daily data, one file per year.

Instead, if I just change decode_cf=True (the default), it takes forever. I can monitor what is happening via the distributed dashboard. It looks like this: [screenshot of the dask distributed dashboard showing a long stream of open_dataset tasks]

There are more of these open_dataset tasks than there are files (45), so I can only presume there are 16401 individual tasks (one for each timestep), each of which takes about 1 s in serial.
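
One way to sanity-check that task count directly, using standard dask attributes on the decode_cf=False dataset opened above (nothing xarray-specific here):

```python
sst = ds.SST.data                        # dask array backing SST (ds opened with decode_cf=False)
print(sst.numblocks)                     # chunk grid, e.g. (16401, 1, 1) with chunks={'time': 1}
print(len(dict(sst.__dask_graph__())))   # total number of tasks in the graph
```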

This is a real failure of lazy decoding. Maybe it can be fixed by #1725, possibly related to #1372.

cc Pangeo folks: @jhamman, @mrocklin

{
    "total_count": 2,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 2,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
297539517 https://github.com/pydata/xarray/issues/1385#issuecomment-297539517 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDI5NzUzOTUxNw== shoyer 1217238 2017-04-26T20:59:23Z 2017-04-26T20:59:23Z MEMBER

For example, can I give a hint to xarray that this reindex_variables step is not necessary

Yes, adding a boolean argument prealigned (defaulting to False) to concat seems like a very reasonable optimization here.

But more generally, I am a little surprised by how slow pandas.Index.get_indexer and pandas.Index.is_unique are. This suggests we should add a fast-path optimization to skip these steps in reindex_variables: https://github.com/pydata/xarray/blob/ab4ffee919d4abe9f6c0cf6399a5827c38b9eb5d/xarray/core/alignment.py#L302-L306

Basically, if index.equals(target), we should just set indexer = np.arange(target.size). Although, if we have duplicate values in the index, the operation should arguably fail for correctness.
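
A sketch of that fast path (illustrative only, not the actual reindex_variables code):

```python
import numpy as np
import pandas as pd

def get_indexer_with_fastpath(index: pd.Index, target: pd.Index) -> np.ndarray:
    # Identical indexes need no lookup at all; note the caveat above that
    # duplicate values should arguably still raise.
    if index.equals(target):
        return np.arange(target.size)
    if not index.is_unique:
        raise ValueError("cannot reindex from an index with duplicate values")
    return index.get_indexer(target)
```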

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135
297494539 https://github.com/pydata/xarray/issues/1385#issuecomment-297494539 https://api.github.com/repos/pydata/xarray/issues/1385 MDEyOklzc3VlQ29tbWVudDI5NzQ5NDUzOQ== rabernat 1197350 2017-04-26T18:07:03Z 2017-04-26T18:07:03Z MEMBER

cc: @geosciz, who is helping with this project.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  slow performance with open_mfdataset 224553135

Advanced export

JSON shape: default, array, newline-delimited, object

CSV options:

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 16.885ms · About: xarray-datasette