issue_comments


4 comments where author_association = "MEMBER", issue = 28375178 ("Cross-platform in-memory serialization of netcdf4 (like the current scipy-based dumps)") and user = 1217238 (shoyer), sorted by updated_at descending

Comment 454281015 · shoyer (MEMBER) · 2019-01-15T06:27:00Z
https://github.com/pydata/xarray/issues/23#issuecomment-454281015

This is actually finally possible to support now with h5py, which as of the latest release supports reading/writing to file-like objects in Python.

Reactions: none.
Issue: Cross-platform in-memory serialization of netcdf4 (like the current scipy-based dumps) (28375178)
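The file-like-object support mentioned in the comment above can be sketched with h5py (a minimal sketch, assuming h5py ≥ 2.9 is installed; the dataset name and attribute are illustrative, not from the issue):

```python
import io

import h5py
import numpy as np

# Write an HDF5 file into an in-memory buffer instead of a filesystem path.
buf = io.BytesIO()
with h5py.File(buf, "w") as f:
    f.create_dataset("temperature", data=np.arange(6.0).reshape(2, 3))
    f.attrs["units"] = "K"

# The raw bytes can now be sent over a socket, stored in a database, etc.
payload = buf.getvalue()

# Read it back from another file-like object wrapping the same bytes.
with h5py.File(io.BytesIO(payload), "r") as f:
    data = f["temperature"][:]
    print(data.shape)  # (2, 3)
```

This sidesteps the older limitation that HDF5 files had to live on disk, which is exactly what cross-platform in-memory serialization of netCDF4 needs.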
Comment 90780331 · shoyer (MEMBER) · 2015-04-08T02:07:31Z
https://github.com/pydata/xarray/issues/23#issuecomment-90780331

Just wrote a little library to do netCDF4 via h5py: https://github.com/shoyer/h5netcdf

Unfortunately h5py still can't do in-memory file images (https://github.com/h5py/h5py/issues/552). But it does give an alternative way to read/write netCDF4 without going via the Unidata libraries. There is experimental support for engine='h5netcdf' in my dask PR: https://github.com/xray/xray/pull/381

pytables was not a viable option because it can't read or write HDF5 dimension scales, which are necessary for dimensions in netCDF4 files.

Reactions: none.
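Basic h5netcdf usage, as introduced in the comment above, looks roughly like this (a sketch assuming the h5netcdf package is installed; the dimension and variable names are illustrative):

```python
import h5netcdf
import numpy as np

# Write a netCDF4 file through h5py rather than the Unidata C library.
with h5netcdf.File("example.nc", "w") as f:
    f.dimensions["x"] = 3
    v = f.create_variable("values", ("x",), float)
    v[:] = np.array([1.0, 2.0, 3.0])

# Read it back; variables index like arrays.
with h5netcdf.File("example.nc", "r") as f:
    vals = f["values"][:]
```

In xarray this backend is selected with engine='h5netcdf' (experimental at the time of the comment).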
Comment 36186835 · shoyer (MEMBER) · 2014-02-26T22:38:42Z
https://github.com/pydata/xarray/issues/23#issuecomment-36186835

HDF5 supports homogeneous n-dimensional arrays and metadata, which in principle should be all we need. Actually, under the covers netCDF4 is HDF5. But yes, we would have to do some work to reinvent this.

On Wed, Feb 26, 2014 at 2:32 PM, ebrevdo (notifications@github.com) wrote:

    Looks like this may be the only option. Based on my tests, netCDF4 is strongly antithetical to any kind of streams/piped buffers. If we go the hdf5 route, we'd have to reimplement the CDM/netcdf4 on top of hdf5, no?

    Reply to this email directly or view it on GitHub: https://github.com/akleeman/xray/issues/23#issuecomment-36186205

Reactions: none.
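The "netCDF4 is HDF5 under the covers" point rests on HDF5 dimension scales, which netCDF4 uses to represent dimensions (and which, per a later comment, pytables cannot handle). A minimal sketch of the mechanism in h5py (assuming h5py ≥ 2.10; names are illustrative):

```python
import h5py
import numpy as np

with h5py.File("dims.h5", "w") as f:
    # netCDF4 stores each dimension as an HDF5 "dimension scale" dataset.
    x = f.create_dataset("x", data=np.array([10.0, 20.0, 30.0]))
    x.make_scale("x")

    temp = f.create_dataset("temperature", data=np.array([0.1, 0.2, 0.3]))
    # Attach the scale to axis 0, analogous to a netCDF dimension.
    temp.dims[0].attach_scale(x)

with h5py.File("dims.h5", "r") as f:
    # The attachment is recoverable from the data variable.
    attached = f["temperature"].dims[0][0]
    scale_name = attached.name
    print(scale_name)  # "/x"
```

Reimplementing netCDF4 on top of raw HDF5 amounts to following conventions like this one, which is essentially what h5netcdf later did.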
Comment 36184762 · shoyer (MEMBER) · 2014-02-26T22:18:28Z
https://github.com/pydata/xarray/issues/23#issuecomment-36184762

Another option is to add an HDF5 backend with pytables. @ToddSmall has a demo script somewhere that shows how you can pass around in-memory HDF5 objects between processes.

Reactions: none.

Table schema

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
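The filtered view above can be reproduced against this schema with a plain SQL query; a self-contained sketch using Python's sqlite3 with one sample row (the foreign-key references are omitted here since the users and issues tables are not part of this page):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
   [author_association] TEXT, [body] TEXT, [reactions] TEXT,
   [performed_via_github_app] TEXT, [issue] INTEGER
);
""")
conn.execute(
    "INSERT INTO issue_comments (id, user, author_association, issue, updated_at)"
    " VALUES (454281015, 1217238, 'MEMBER', 28375178, '2019-01-15T06:27:00Z')"
)

# The same filter and ordering shown in the page header.
rows = conn.execute("""
    SELECT id FROM issue_comments
    WHERE author_association = 'MEMBER'
      AND issue = 28375178
      AND [user] = 1217238
    ORDER BY updated_at DESC
""").fetchall()
print(rows)  # [(454281015,)]
```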
Powered by Datasette · Queries took 3744.173ms · About: xarray-datasette