issue_comments


7 rows where author_association = "MEMBER", issue = 253476466 and user = 6213168 sorted by updated_at descending

Comment 381504579 by crusaderky (user 6213168) · MEMBER
created 2018-04-16T07:26:13Z · updated 2018-04-16T07:26:13Z
https://github.com/pydata/xarray/issues/1536#issuecomment-381504579

@shoyer almost finished. However, when implementing it I realised that, instead of writing a new engine h5netcdf-new, I could more simply reimplement the already existing h5netcdf engine to use the new API, and then accept (through a trivial translation layer) both the NetCDF4-python encoding (gzip=True) and the h5py one (compression=zlib). Let me know your thoughts.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Issue: Better compression algorithms for NetCDF (253476466)
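The "trivial translation layer" described above can be sketched as a small function that maps netCDF4-python-style encoding keys onto h5py-style keyword arguments. This is an illustrative sketch, not xarray's actual implementation; the key names assume the netCDF4-python flag is spelled zlib=True (with complevel for the level), as in xarray's encoding dicts.

```python
def translate_encoding(encoding):
    """Map netCDF4-python-style encoding keys (zlib/complevel) onto
    h5py-style keyword arguments (compression/compression_opts).

    Illustrative sketch only -- not xarray's actual translation layer.
    """
    encoding = dict(encoding)  # don't mutate the caller's dict
    h5py_kwargs = {}
    if encoding.pop("zlib", False):
        # netCDF4-python spells gzip compression as zlib=True
        h5py_kwargs["compression"] = "gzip"
        if "complevel" in encoding:
            h5py_kwargs["compression_opts"] = encoding.pop("complevel")
    elif "compression" in encoding:
        # h5py-style encoding passes straight through
        h5py_kwargs["compression"] = encoding.pop("compression")
        if "compression_opts" in encoding:
            h5py_kwargs["compression_opts"] = encoding.pop("compression_opts")
    return h5py_kwargs
```

With this shape, a single engine can accept both spellings and hand h5py a uniform set of keyword arguments.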
Comment 377197301 by crusaderky (user 6213168) · MEMBER
created 2018-03-29T10:47:08Z · updated 2018-03-29T10:47:08Z
https://github.com/pydata/xarray/issues/1536#issuecomment-377197301

@shoyer new non-functioning public API prototype - please confirm this is what you had in mind

Comment 373566794 by crusaderky (user 6213168) · MEMBER
created 2018-03-16T00:38:50Z · updated 2018-03-16T00:38:50Z
https://github.com/pydata/xarray/issues/1536#issuecomment-373566794

@shoyer ping - could you give feedback on the API prototype?

Comment 366096878 by crusaderky (user 6213168) · MEMBER
created 2018-02-15T23:28:08Z · updated 2018-02-15T23:28:08Z
https://github.com/pydata/xarray/issues/1536#issuecomment-366096878

@shoyer, see if you like the public API prototype linked above.

Comment 365457702 by crusaderky (user 6213168) · MEMBER
created 2018-02-14T00:50:21Z · updated 2018-02-14T00:50:21Z
https://github.com/pydata/xarray/issues/1536#issuecomment-365457702

@DennisHeimbigner also, does this mean that h5netcdf should be changed to remove non-gzip compression algorithms from the list of features that require invalid_netcdf=True?

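The question above turns on which HDF5 compression filters count as valid netCDF-4, i.e. which ones h5netcdf can write without its invalid_netcdf=True escape hatch. A minimal sketch of such a check, where the set of "safe" filters is purely an assumption for illustration and not h5netcdf's actual rule:

```python
# Filters assumed to be natively understood by the netCDF-4 format at the
# time of this discussion; the exact set is an illustrative assumption.
NETCDF4_SAFE_FILTERS = {"gzip"}

def needs_invalid_netcdf(compression):
    """Return True if writing data with this h5py compression filter would
    require h5netcdf's invalid_netcdf=True flag (illustrative sketch)."""
    if compression is None:
        # uncompressed data is always valid netCDF-4
        return False
    return compression not in NETCDF4_SAFE_FILTERS
```

If netCDF-C gains support for arbitrary HDF5 filters, the safe set grows and fewer writes would need the escape hatch, which is exactly the change the comment is asking about.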
Comment 365450097 by crusaderky (user 6213168) · MEMBER
created 2018-02-14T00:11:42Z · updated 2018-02-14T00:11:42Z
https://github.com/pydata/xarray/issues/1536#issuecomment-365450097

@DennisHeimbigner looks like it's not exposed through netcdf4-python though?

Comment 365410944 by crusaderky (user 6213168) · MEMBER
created 2018-02-13T21:31:15Z · updated 2018-02-13T21:32:43Z
https://github.com/pydata/xarray/issues/1536#issuecomment-365410944

@shoyer I'm starting to work on this.

I'm not sure I understood your latest comment - are you implying that to_hdf5 should internally use the h5netcdf module? I understand the rationale, but it sounds a bit counter-intuitive to me.

Also, to allow for non-zlib compression we need to tap into either the new h5netcdf API or h5py directly, so I'm afraid to_hdf5 can't be a simple wrapper around to_netcdf.

Could you help me compile a shopping list?

  • New method Dataset.to_hdf5 - starts as a copy-paste of to_netcdf, including the backend functions underneath
  • New unit tests, starting as a copy-paste of all unit tests for to_netcdf
  • Change open_dataset and open_mfdataset:
      • add a new possible value for the engine field, "hdf5"
      • if engine is None and the file name terminates with .nc, use the current algorithm to choose the default engine
      • if engine is None and the file name terminates with .h5, use h5py
      • if engine is not None, ignore the file extension
  • Add to the high-level documentation and tutorials
  • Other?

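The engine-selection rules from the shopping list above can be sketched as a small helper. The engine names and the fallback are placeholders standing in for "the current algorithm", not xarray's final API:

```python
from pathlib import Path

def guess_engine(filename, engine=None):
    """Pick an I/O engine from the file extension, following the rules
    proposed in the comment above.  Illustrative sketch only."""
    if engine is not None:
        # an explicit engine always wins; the file extension is ignored
        return engine
    suffix = Path(filename).suffix
    if suffix == ".h5":
        return "h5py"
    # for .nc (and anything else) fall back to the existing default chooser;
    # "netcdf4" stands in for "the current algorithm" here
    return "netcdf4"
```

For example, guess_engine("data.h5") picks the HDF5 path, while an explicit engine= argument overrides the extension entirely.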


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 34.41ms · About: xarray-datasette