issue_comments
5 rows where author_association = "MEMBER" and issue = 595492608 sorted by updated_at descending
Issue: Time dtype encoding defaulting to `int64` when writing netcdf or zarr · 5 comments
spencerkclark (MEMBER) · 2021-11-11T12:30:21Z (updated 2021-11-11T12:32:06Z)
https://github.com/pydata/xarray/issues/3942#issuecomment-966264200

This logic has been around in xarray for a long time (I think it dates back to https://github.com/pydata/xarray/pull/12!), so it predates me. If I had to guess though, it would have to do with the fact that back then, a form of

This of course is not true anymore. We no longer use

To be honest, currently it seems the only remaining advantage to choosing a larger time encoding unit and proximate reference date is that it makes the raw encoded values a little more human-readable. However, encoding dates with units of

Reactions: +1 ×2, eyes ×1
dcherian (MEMBER) · 2021-11-10T18:01:46Z
https://github.com/pydata/xarray/issues/3942#issuecomment-965599998

It's choosing the highest resolution that matches the data, which has the benefit of allowing the maximum possible time range given the data's frequency: https://github.com/pydata/xarray/blob/5871637873cd83c3a656ee6f4df86ea6628cf68a/xarray/coding/times.py#L317-L319

I'm not sure if this is why it was originally chosen; but that is one advantage. Perhaps @spencerkclark has some insight here.

Reactions: +1 ×2
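The unit-inference idea referenced above can be sketched in plain Python. The names and unit list here are illustrative, not xarray's actual internals: the point is that picking the coarsest unit that still represents every timestamp exactly maximizes the time range an `int64` encoding can cover.

```python
from datetime import datetime, timedelta

# Candidate units, coarsest first. A coarser unit means each int64 step
# covers more wall-clock time, so the representable range is wider.
_UNITS = [
    ("days", timedelta(days=1)),
    ("hours", timedelta(hours=1)),
    ("minutes", timedelta(minutes=1)),
    ("seconds", timedelta(seconds=1)),
]

def infer_time_units(times):
    """Return the coarsest unit (and reference date) that encodes
    every timestamp exactly as an integer count."""
    reference = min(times)
    deltas = [t - reference for t in times]
    for name, step in _UNITS:
        # A unit works only if every offset is a whole multiple of it.
        if all(d % step == timedelta(0) for d in deltas):
            return name, reference
    return "microseconds", reference
```

Daily data infers `"days"`; add a single half-day timestamp and the inferred unit drops to `"hours"`, which is the behavior the comments below are discussing.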
dcherian (MEMBER) · 2021-11-10T17:52:20Z
https://github.com/pydata/xarray/issues/3942#issuecomment-965591847

:+1: Adding this error message would make it obvious that this is happening. PRs are very welcome!

Reactions: +1 ×1
rabernat (MEMBER) · 2020-04-07T15:10:40Z
https://github.com/pydata/xarray/issues/3942#issuecomment-610444090

I agree with Deepak. Xarray intelligently chooses its encoding when it writes the initial dataset to make sure it has enough precision to resolve all times. It cannot magically know that, in the future, you plan to append data which requires greater precision. Your options are:

- If you know from the outset that you will require greater precision in time encoding, you can manually specify your encoding before you write (http://xarray.pydata.org/en/stable/io.html#scaling-and-type-conversions)
- If you don't know from the outset, you will have to overwrite the full time variable with new encoding

I also agree that we should definitely be raising a warning (or even an error) in your situation.

Reactions: +1 ×2
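The append failure being discussed can be demonstrated in plain Python (this is an illustrative sketch, not xarray's API): encoding a timestamp as an integer count of some unit since a reference date is lossy whenever the timestamp does not fall on a whole unit.

```python
from datetime import datetime, timedelta

def encode(t, reference, step):
    """Encode time `t` as an integer count of `step` units since `reference`,
    refusing to silently lose precision."""
    count, remainder = divmod(t - reference, step)
    if remainder != timedelta(0):
        raise ValueError(f"{t} cannot be represented exactly in units of {step}")
    return count

ref = datetime(2000, 1, 1)

# Initial data is daily, so a "days since 2000-01-01" encoding looks fine:
encode(datetime(2000, 1, 3), ref, timedelta(days=1))   # -> 2

# Appended hourly data no longer fits the unit chosen at first write:
# encode(datetime(2000, 1, 3, 6), ref, timedelta(days=1))  # raises ValueError

# Specifying a finer unit up front avoids the problem:
encode(datetime(2000, 1, 3, 6), ref, timedelta(seconds=1))
```

In xarray terms this corresponds to passing something like `encoding={"time": {"units": "seconds since 2000-01-01"}}` to the first `to_netcdf`/`to_zarr` call, as described in the I/O documentation linked above; the specific units string here is just an example.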
dcherian (MEMBER) · 2020-04-07T14:46:06Z
https://github.com/pydata/xarray/issues/3942#issuecomment-610429922

I have run into this problem before. The initial choice to use

Note that you can always specify an encoding to make sure that you can append properly.
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);