issue_comments
9 rows where author_association = "MEMBER", issue = 253476466 and user = 1217238 sorted by updated_at descending
Issue: Better compression algorithms for NetCDF (9 comments)
#381679096 · shoyer (MEMBER) · 2018-04-16T17:09:06Z
https://github.com/pydata/xarray/issues/1536#issuecomment-381679096

@crusaderky That would work for me, too. No strong preference from my side. In the worst case, we would be stuck maintaining the extra encoding

Take a look at h5netcdf for a reference on what that translation layer should do.
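The "translation layer" mentioned above can be sketched in plain Python: a hypothetical mapping from netCDF4-python-style variable encoding keys (`zlib`, `complevel`, `chunksizes`, `shuffle`) to h5py-style dataset-creation keyword arguments. The function name and the exact set of keys handled are illustrative assumptions, not h5netcdf's actual API.

```python
# Hypothetical sketch: translate netCDF4-style variable encoding into
# h5py-style dataset-creation kwargs. Left-hand key names follow the
# netCDF4-python convention; right-hand names follow h5py.
def translate_encoding(encoding):
    kwargs = {}
    if encoding.get("zlib"):
        kwargs["compression"] = "gzip"
        # netCDF4-python's complevel (0-9, default 4) maps to h5py's
        # compression_opts for the gzip filter
        kwargs["compression_opts"] = encoding.get("complevel", 4)
    if "chunksizes" in encoding:
        kwargs["chunks"] = tuple(encoding["chunksizes"])
    if "shuffle" in encoding:
        kwargs["shuffle"] = bool(encoding["shuffle"])
    return kwargs
```

For example, `translate_encoding({"zlib": True, "complevel": 6})` yields gzip compression at level 6 in h5py terms; a real layer would also have to translate the reverse direction when reading.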
#373769617 · shoyer (MEMBER) · 2018-03-16T16:31:07Z
https://github.com/pydata/xarray/issues/1536#issuecomment-373769617

If using custom compression filters now results in valid netCDF4 files, then I'd rather we still called this
#365841787 · shoyer (MEMBER) · 2018-02-15T07:03:04Z
https://github.com/pydata/xarray/issues/1536#issuecomment-365841787

@crusaderky In case adding this to the netCDF4 library doesn't work out:

Yes, I would suggest that

Yes, this is unfortunately true.

Yes

Yes

I think this is a little easier than that. h5netcdf will always be able to read invalid netCDF files, so we can just continue to use

As for picking the default engine, see https://github.com/pydata/xarray/pull/1682, which is pretty close, though I need to think a little bit harder about the API to make sure it's right.

Yes
#326037069 · shoyer (MEMBER) · 2017-08-30T15:58:35Z
https://github.com/pydata/xarray/issues/1536#issuecomment-326037069

I just released a new version of h5netcdf (0.4.0). It adds a
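The opt-in behavior discussed in this thread (disable non-netCDF features unless the caller explicitly asks for them) can be sketched generically. The function and parameter names here (`check_compression`, `allow_invalid`) are illustrative assumptions, not h5netcdf's real API.

```python
# Illustrative sketch of an opt-in gate for features that produce files
# readable as HDF5 but not by netCDF-C (e.g. the LZF filter).
# zlib/gzip is the only compression filter netCDF-C supports.
NETCDF_SAFE_COMPRESSION = {None, "gzip"}

def check_compression(compression, allow_invalid=False):
    """Return the requested filter if it yields a valid netCDF file,
    or if the caller explicitly opted into invalid-netCDF output;
    otherwise raise."""
    if compression in NETCDF_SAFE_COMPRESSION or allow_invalid:
        return compression
    raise ValueError(
        f"compression {compression!r} produces a file that netCDF-C cannot "
        "read; pass allow_invalid=True to write it anyway"
    )
```

Under this scheme, existing gzip users are unaffected, while LZF (or, in h5netcdf's case, complex numbers and other HDF5-only features) requires a deliberate flag, which is the fix shoyer describes issuing.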
#325712523 · shoyer (MEMBER) · 2017-08-29T16:05:14Z
https://github.com/pydata/xarray/issues/1536#issuecomment-325712523

I'm adding a loud warning about this (will eventually be an error) to h5netcdf.
#325555913 · shoyer (MEMBER) · 2017-08-29T05:02:49Z
https://github.com/pydata/xarray/issues/1536#issuecomment-325555913

Of course not. I understand the issue here. I'll issue a fix for h5netcdf to disable this unless explicitly opted into, but we'll also need a fix for xarray to support the users who are currently using it to save data with complex values -- probably by adding a

Here is the NetCDF-C issue I opened on reading these sorts of HDF5 enums: https://github.com/Unidata/netcdf-c/issues/267.

No.
#325516877 · shoyer (MEMBER) · 2017-08-29T00:08:38Z
https://github.com/pydata/xarray/issues/1536#issuecomment-325516877

Yes, I suppose so (and this should be fixed). h5netcdf currently writes the

I hadn't really thought about this because the convention for marking HDF5 files as netCDF files is very recent and not actually enforced by any software (to my knowledge).
#325514854 · shoyer (MEMBER) · 2017-08-28T23:54:31Z
https://github.com/pydata/xarray/issues/1536#issuecomment-325514854

@dopplershift No, I don't think so. NetCDF-C only supports zlib compression (and doesn't support h5py's handling of complex variables, either, which use an HDF5 enumerated type).
#325512111 · shoyer (MEMBER) · 2017-08-28T23:35:42Z
https://github.com/pydata/xarray/issues/1536#issuecomment-325512111

h5netcdf already produces (slightly) incompatible netCDF files for some edge cases (e.g., complex numbers). This should probably be fixed, either by disabling these features or requiring an explicit opt-in, but nobody has gotten around to writing a fix yet (see https://github.com/shoyer/h5netcdf/issues/28). In practice, many of our users seem to be pretty happy making use of these new features. LZF compression would just be another one.

I like @jhamman's idea of adding a dedicated

@petacube zstandard is great, but it's not in h5py yet! I think we'll need
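The LZF filter discussed above ships with h5py itself, so writing an LZF-compressed dataset takes one keyword; the catch, as the comments note, is that netCDF-C cannot read the result. A minimal round-trip sketch, assuming h5py and numpy are installed (file path and dataset name are arbitrary):

```python
import os
import tempfile

import h5py
import numpy as np

# h5py bundles the LZF filter, so this works out of the box -- but the
# resulting file uses a filter netCDF-C does not understand, which is
# exactly the incompatibility discussed in this thread.
path = os.path.join(tempfile.mkdtemp(), "lzf_example.h5")
data = np.arange(1000.0).reshape(100, 10)

with h5py.File(path, "w") as f:
    f.create_dataset("x", data=data, compression="lzf")  # non-netCDF filter

with h5py.File(path, "r") as f:
    roundtrip = f["x"][:]

assert (roundtrip == data).all()
```

Reading this file back through h5py (or h5netcdf) works fine; attempting to open it with the netCDF4/netCDF-C stack fails on the unknown filter, which is why the thread leans toward making such output opt-in.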
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);