issue_comments

11 rows where user = 1117224 sorted by updated_at descending

issue 7

  • Feature/rasterio 4
  • to_netcdf uses deprecated and unnecessary dask call 2
  • Update time-series.rst 1
  • Multiple preprocessing functions in open_mfdataset? 1
  • Option to make DataArray.transpose also transpose coords 1
  • Parallel open_mfdataset 1
  • Remote writing NETCDF4 files to Amazon S3 1

user 1

  • NicWayand · 11

author_association 1

  • NONE 11
Columns: id, html_url, issue_url, node_id, user, created_at, updated_at, author_association, body, reactions, performed_via_github_app, issue
518869785 https://github.com/pydata/xarray/issues/2995#issuecomment-518869785 https://api.github.com/repos/pydata/xarray/issues/2995 MDEyOklzc3VlQ29tbWVudDUxODg2OTc4NQ== NicWayand 1117224 2019-08-06T22:39:07Z 2019-08-06T22:39:07Z NONE

Is it possible to read multiple netCDF files on S3 using open_mfdataset?
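A minimal sketch of one way this can work in recent xarray, assuming an s3fs/h5netcdf setup (the bucket name and prefix here are made up):

```python
import s3fs
import xarray as xr

# Anonymous S3 access; swap in credentials as needed.
fs = s3fs.S3FileSystem(anon=True)

# Open each matching object as a file-like handle (hypothetical bucket/prefix).
files = [fs.open(path) for path in fs.glob('my-bucket/data/*.nc')]

# open_mfdataset accepts file-like objects when using the h5netcdf engine.
ds = xr.open_mfdataset(files, combine='by_coords', engine='h5netcdf')
```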

{
    "total_count": 3,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 3
}
  Remote writing NETCDF4 files to Amazon S3 449706080
409349569 https://github.com/pydata/xarray/issues/2273#issuecomment-409349569 https://api.github.com/repos/pydata/xarray/issues/2273 MDEyOklzc3VlQ29tbWVudDQwOTM0OTU2OQ== NicWayand 1117224 2018-07-31T20:02:57Z 2018-07-31T20:03:41Z NONE

Ah thanks @jhamman! (Updated to 10.8 and can confirm warnings are suppressed)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  to_netcdf uses deprecated and unnecessary dask call 339611449
409342489 https://github.com/pydata/xarray/issues/2273#issuecomment-409342489 https://api.github.com/repos/pydata/xarray/issues/2273 MDEyOklzc3VlQ29tbWVudDQwOTM0MjQ4OQ== NicWayand 1117224 2018-07-31T19:38:09Z 2018-07-31T19:38:09Z NONE

For anyone else looking for a TEMP fix to hide these warnings (they were spamming my output, making debugging difficult), the snippet below silences them.
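```python
import warnings

# Blanket-suppress UserWarnings for the whole process.
warnings.simplefilter(action='ignore', category=UserWarning)
```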

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  to_netcdf uses deprecated and unnecessary dask call 339611449
398481880 https://github.com/pydata/xarray/issues/1856#issuecomment-398481880 https://api.github.com/repos/pydata/xarray/issues/1856 MDEyOklzc3VlQ29tbWVudDM5ODQ4MTg4MA== NicWayand 1117224 2018-06-19T17:33:03Z 2018-06-19T17:33:03Z NONE

Also hitting this issue. (Use case: formatting netCDF files for some R code that does not have labeled indexing... ugh). Thanks @phausamann for the workaround. Default transposing of coords makes sense to me.
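For context, later xarray releases added a `transpose_coords` flag that does exactly this; a minimal sketch of the requested behaviour (assuming xarray >= 0.14):

```python
import numpy as np
import xarray as xr

# A DataArray with a 2-D coordinate sharing its dimensions.
da = xr.DataArray(
    np.zeros((2, 3)),
    dims=('x', 'y'),
    coords={'lat': (('x', 'y'), np.ones((2, 3)))},
)

# transpose_coords=True also reorders the dims of multi-dimensional coords.
print(da.transpose('y', 'x', transpose_coords=True).lat.dims)  # ('y', 'x')
```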

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Option to make DataArray.transpose also transpose coords 291485366
382071801 https://github.com/pydata/xarray/pull/1983#issuecomment-382071801 https://api.github.com/repos/pydata/xarray/issues/1983 MDEyOklzc3VlQ29tbWVudDM4MjA3MTgwMQ== NicWayand 1117224 2018-04-17T17:14:33Z 2018-04-17T17:38:42Z NONE

Thanks @jhamman for working on this! I did a test on my real-world data (1202 ~3 MB files) on my local computer and am not getting the results I expected: 1) no speed-up with parallel=True, and 2) a slowdown when using distributed (processes=16, cores=16).

Am I missing something?

```python
nc_files = glob.glob(E.obs['NSIDC_0081']['sipn_nc'] + '/*.nc')
print(len(nc_files))
# 1202

# Parallel False
%time ds = xr.open_mfdataset(nc_files, concat_dim='time', parallel=False, autoclose=True)
# CPU times: user 57.8 s, sys: 3.2 s, total: 1min 1s
# Wall time: 1min

# Parallel True with default scheduler
%time ds = xr.open_mfdataset(nc_files, concat_dim='time', parallel=True, autoclose=True)
# CPU times: user 1min 16s, sys: 9.82 s, total: 1min 26s
# Wall time: 1min 16s

# Parallel True with distributed
from dask.distributed import Client
client = Client()
print(client)
# <Client: scheduler='tcp://127.0.0.1:43291' processes=16 cores=16>
%time ds = xr.open_mfdataset(nc_files, concat_dim='time', parallel=True, autoclose=True)
# CPU times: user 2min 17s, sys: 12.3 s, total: 2min 29s
# Wall time: 3min 48s
```

Tested on the feature/parallel_open_netcdf branch, commit 280a46f13426a462fb3e983cfd5ac7a0565d1826.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Parallel open_mfdataset 304589831
279016156 https://github.com/pydata/xarray/pull/1070#issuecomment-279016156 https://api.github.com/repos/pydata/xarray/issues/1070 MDEyOklzc3VlQ29tbWVudDI3OTAxNjE1Ng== NicWayand 1117224 2017-02-10T17:54:13Z 2017-02-10T17:54:13Z NONE

Hi @fmaussion, no objections here. I got it working just barely for my project, and won't have time in the near future to wrap this up.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature/rasterio 186326698
271420374 https://github.com/pydata/xarray/pull/961#issuecomment-271420374 https://api.github.com/repos/pydata/xarray/issues/961 MDEyOklzc3VlQ29tbWVudDI3MTQyMDM3NA== NicWayand 1117224 2017-01-09T21:57:13Z 2017-01-09T21:57:13Z NONE

NumPy's datetime64 dtype, currently used by xarray, does not store time zone information, as mentioned in #552. To prevent users from making time-zone errors when creating datasets, I think the implied assumption that UTC is used should be made more apparent in the readthedocs. Hopefully time zone support can be added to datetime64 in the future.
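A minimal illustration of the underlying behaviour, using pandas (the timestamp value is made up):

```python
import pandas as pd

# A timezone-aware timestamp...
ts = pd.Timestamp('2017-01-09 13:57', tz='US/Pacific')
print(ts)  # 2017-01-09 13:57:00-08:00

# ...coerced to numpy.datetime64 keeps only the UTC instant;
# the time zone itself is dropped.
print(ts.to_datetime64())  # 2017-01-09T21:57:00.000000000
```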

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Update time-series.rst 170688064
257401393 https://github.com/pydata/xarray/pull/1070#issuecomment-257401393 https://api.github.com/repos/pydata/xarray/issues/1070 MDEyOklzc3VlQ29tbWVudDI1NzQwMTM5Mw== NicWayand 1117224 2016-10-31T19:52:56Z 2016-10-31T19:52:56Z NONE

Any idea why segmentation faults occur for 3.4 and 4.5?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature/rasterio 186326698
257385375 https://github.com/pydata/xarray/pull/1070#issuecomment-257385375 https://api.github.com/repos/pydata/xarray/issues/1070 MDEyOklzc3VlQ29tbWVudDI1NzM4NTM3NQ== NicWayand 1117224 2016-10-31T18:51:46Z 2016-10-31T18:51:46Z NONE

The Travis-CI failure is because it can't find rasterio, which comes through the conda-forge channel (https://github.com/conda-forge/rasterio-feedstock). I think the channel needs to be added as described here (http://conda.pydata.org/docs/travis.html#additional-steps), but I am new to Travis-CI, so I don't want to mess up the current .travis.yml file.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature/rasterio 186326698
257373530 https://github.com/pydata/xarray/pull/1070#issuecomment-257373530 https://api.github.com/repos/pydata/xarray/issues/1070 MDEyOklzc3VlQ29tbWVudDI1NzM3MzUzMA== NicWayand 1117224 2016-10-31T18:10:52Z 2016-10-31T18:10:52Z NONE

Tested open_mfdataset() on 100+ GeoTIFFs, and lazy loading with rasterio does appear to be working.
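A comparable pattern with the rasterio backend that eventually landed (xr.open_rasterio), sketched under the assumption of a local directory of GeoTIFFs:

```python
import glob
import xarray as xr

# Lazily open each GeoTIFF as a chunked (dask-backed) DataArray.
tifs = sorted(glob.glob('data/*.tif'))
arrays = [xr.open_rasterio(f, chunks={'x': 1024, 'y': 1024}) for f in tifs]

# Stack along a new 'time' dimension; nothing is read until .compute()/.load().
stack = xr.concat(arrays, dim='time')
```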

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature/rasterio 186326698
240249797 https://github.com/pydata/xarray/issues/970#issuecomment-240249797 https://api.github.com/repos/pydata/xarray/issues/970 MDEyOklzc3VlQ29tbWVudDI0MDI0OTc5Nw== NicWayand 1117224 2016-08-16T21:46:43Z 2016-08-16T21:46:43Z NONE

Yes, that is a perfect solution, thank you!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Multiple preprocessing functions in open_mfdataset? 171504099

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
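For reference, the query behind this page can be reproduced against a local copy of the database; a sketch using Python's sqlite3 (the database file name is an assumption):

```python
import sqlite3

conn = sqlite3.connect('github.db')  # hypothetical local copy of this database
rows = conn.execute(
    """
    SELECT id, issue_url, created_at, body
    FROM issue_comments
    WHERE user = 1117224
    ORDER BY updated_at DESC
    """
).fetchall()
print(len(rows))  # 11, matching the row count above
```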