html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/pull/1457#issuecomment-318468605,https://api.github.com/repos/pydata/xarray/issues/1457,318468605,MDEyOklzc3VlQ29tbWVudDMxODQ2ODYwNQ==,2443309,2017-07-27T19:54:01Z,2017-07-27T19:54:01Z,MEMBER,Yes! Thanks @wesm and @TomAugspurger.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,236347050
https://github.com/pydata/xarray/pull/1457#issuecomment-317091662,https://api.github.com/repos/pydata/xarray/issues/1457,317091662,MDEyOklzc3VlQ29tbWVudDMxNzA5MTY2Mg==,2443309,2017-07-21T19:27:49Z,2017-07-21T19:27:49Z,MEMBER,"Thanks @TomAugspurger - see https://github.com/TomAugspurger/asv-runner/issues/1.

All, I added a series of multi-file benchmarks. I think for a first PR, this is ready to fly and we can add more benchmarks as needed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,236347050
https://github.com/pydata/xarray/pull/1457#issuecomment-315220704,https://api.github.com/repos/pydata/xarray/issues/1457,315220704,MDEyOklzc3VlQ29tbWVudDMxNTIyMDcwNA==,2443309,2017-07-13T22:37:02Z,2017-07-13T22:37:02Z,MEMBER,"@rabernat - do you have any thoughts on this?

@pydata/xarray - I'm trying to decide if this is worth spending any more time on. What sort of coverage would we want before we merge this first PR?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,236347050
https://github.com/pydata/xarray/pull/1457#issuecomment-308935684,https://api.github.com/repos/pydata/xarray/issues/1457,308935684,MDEyOklzc3VlQ29tbWVudDMwODkzNTY4NA==,2443309,2017-06-16T05:20:24Z,2017-06-16T05:20:24Z,MEMBER,"Keep the comments coming!

I think we can distinguish between benchmarking for regressions and benchmarking for development and introspection. The former will require some thought as to which machines we want to rely on and how to achieve consistency throughout the development track. It sounds like there are a number of options that we could pursue toward those ends. The latter use of benchmarking is useful on a single machine with only a few commits of history.

For the four benchmarks in my sample `dataset_io.py`, we get the following interesting results (for one environment):

```
--[ 0.00%] Benchmarking conda-py2.7-bottleneck-dask-netcdf4-numpy-pandas-scipy
---[ 3.12%] Running dataset_io.IOSingleNetCDF.time_load_dataset_netcdf4 134.34ms
---[ 6.25%] Running dataset_io.IOSingleNetCDF.time_load_dataset_scipy 82.60ms
---[ 9.38%] Running dataset_io.IOSingleNetCDF.time_write_dataset_netcdf4 57.71ms
---[ 12.50%] Running dataset_io.IOSingleNetCDF.time_write_dataset_scipy 267.29ms
```

So the relative performance is useful information in deciding how to use and/or develop xarray. (Granted, the exact factors will change depending on machine/architecture/dataset.)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,236347050