issue_comments
6 rows where author_association = "NONE" and user = 9569132 sorted by updated_at descending
id: 1171130914
html_url: https://github.com/pydata/xarray/issues/6733#issuecomment-1171130914
issue_url: https://api.github.com/repos/pydata/xarray/issues/6733
node_id: IC_kwDOAMm_X85Fzgoi
user: davidorme (9569132)
created_at: 2022-06-30T11:59:07Z
updated_at: 2022-06-30T11:59:07Z
author_association: NONE
body:

I still see strange memory spikes that kill my jobs, but the behaviour is not reproducible: the conversion will fail with > 4x memory use and then succeed the next time with the same inputs. My guess is that this isn't anything to do with …

reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: CFMaskCoder creates unnecessary copy for `uint16` variables (1286995366)
---

id: 1170900930
html_url: https://github.com/pydata/xarray/issues/6733#issuecomment-1170900930
issue_url: https://api.github.com/repos/pydata/xarray/issues/6733
node_id: IC_kwDOAMm_X85FyofC
user: davidorme (9569132)
created_at: 2022-06-30T08:05:47Z
updated_at: 2022-06-30T08:05:47Z
author_association: NONE
body:

Thanks @dcherian - completely agree that assuming 65535 is a fill can be confusing. My question is basically solved, but the big memory increase is surprising to me. If you cast first, when required, you still have the user data at the original precision as a reference for the filling step?

reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: CFMaskCoder creates unnecessary copy for `uint16` variables (1286995366)
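A minimal NumPy sketch of the "cast first, then fill" ordering raised in this comment. All names and the fill value here are illustrative assumptions, not xarray's actual CFMaskCoder internals; the point is just that the cast produces the one required float copy while the untouched integer input remains available as the mask reference.

```python
import numpy as np

# Illustrative sketch of "cast first, then fill"; not xarray internals.
data = np.arange(2**16, dtype="uint16")   # user data at original precision
fill = np.uint16(65535)                   # assumed value to treat as missing

as_float = data.astype("float32")         # the one required copy
as_float[data == fill] = np.nan           # mask using the original array
```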
---

id: 1170015912
html_url: https://github.com/pydata/xarray/issues/6733#issuecomment-1170015912
issue_url: https://api.github.com/repos/pydata/xarray/issues/6733
node_id: IC_kwDOAMm_X85FvQao
user: davidorme (9569132)
created_at: 2022-06-29T13:57:51Z
updated_at: 2022-06-29T13:57:51Z
author_association: NONE
body:

Ah. I think I get it now. If you are setting … So, for any … Where a cast is specified in …

The manual encoding does indeed work as suggested - the only possible gotcha here for users is that data stored in a netcdf file as integer type data but with a `_FillValue` is loaded as a float using …

There might be a problem here with consistency with …

However:

```bash
$ ncdump test.nc
netcdf test {
dimensions:
	dim_0 = 65536 ;
variables:
	ushort xarray_dataarray_variable(dim_0) ;
data:

 xarray_dataarray_variable = 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
    ...
    65528, 65529, 65530, 65531, 65532, 65533, 65534, _ ;
}
```

This is because 65535 is the default fill value for `uint16` (`ushort`) data in netCDF. Using …

reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: CFMaskCoder creates unnecessary copy for `uint16` variables (1286995366)
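The decoding gotcha mentioned in this comment can be shown with a short sketch (file and variable names are arbitrary assumptions): an integer variable stored with an explicit `_FillValue` comes back as a float under xarray's default decoding, while passing `mask_and_scale=False` to `open_dataset` keeps the on-disk integer type.

```python
import numpy as np
import xarray as xr

# Arbitrary names; this only illustrates the round-trip behaviour.
da = xr.DataArray(np.arange(2**16, dtype="uint16"), name="x")
da.to_netcdf("test.nc", encoding={"x": {"_FillValue": 65535}})

decoded = xr.open_dataset("test.nc")["x"]
print(decoded.dtype)   # a float dtype: _FillValue entries become NaN

raw = xr.open_dataset("test.nc", mask_and_scale=False)["x"]
print(raw.dtype)       # uint16: the stored type, with 65535 left as data
```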
---

id: 1169175453
html_url: https://github.com/pydata/xarray/issues/6733#issuecomment-1169175453
issue_url: https://api.github.com/repos/pydata/xarray/issues/6733
node_id: IC_kwDOAMm_X85FsDOd
user: davidorme (9569132)
created_at: 2022-06-28T20:02:14Z
updated_at: 2022-06-28T20:02:14Z
author_association: NONE
body:

Thanks again for your help! I think that is what I am doing. If I understand right: Using …

reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: CFMaskCoder creates unnecessary copy for `uint16` variables (1286995366)
---

id: 1169128311
html_url: https://github.com/pydata/xarray/issues/6733#issuecomment-1169128311
issue_url: https://api.github.com/repos/pydata/xarray/issues/6733
node_id: IC_kwDOAMm_X85Fr3t3
user: davidorme (9569132)
created_at: 2022-06-28T19:20:00Z
updated_at: 2022-06-28T19:20:00Z
author_association: NONE
body:

Thanks for the quick response. I don't quite follow the process for the …

reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: CFMaskCoder creates unnecessary copy for `uint16` variables (1286995366)
---

id: 1168483200
html_url: https://github.com/pydata/xarray/issues/6733#issuecomment-1168483200
issue_url: https://api.github.com/repos/pydata/xarray/issues/6733
node_id: IC_kwDOAMm_X85FpaOA
user: davidorme (9569132)
created_at: 2022-06-28T09:37:59Z
updated_at: 2022-06-28T09:37:59Z
author_association: NONE
body:

I've also tried pre-converting the …

I expect that to add that extra 17GB, for a total memory of 53GB or so, but exporting to netcdf still shows unexpectedly variable peak memory use: …

One thing I do see for some failing files in the script reporting is this exception - the …

reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: CFMaskCoder creates unnecessary copy for `uint16` variables (1286995366)
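One way to pin down the variable peak memory described in this comment is to compare the process high-water mark before and after the export. This is a sketch with illustrative file and variable names; it uses the Unix-only `resource` module, and note that `ru_maxrss` is reported in kilobytes on Linux but bytes on macOS.

```python
import resource

import numpy as np
import xarray as xr

def peak_rss_gib() -> float:
    # High-water mark of resident memory; kilobytes on Linux.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024**2

# Illustrative data: pre-convert to float up front so the cast's cost
# is paid (and measured) before the export begins.
ds = xr.Dataset({"x": ("d", np.arange(2**16, dtype="uint16"))})
ds["x"] = ds["x"].astype("float32")

before = peak_rss_gib()
ds.to_netcdf("out.nc")
print(f"peak RSS grew by ~{peak_rss_gib() - before:.2f} GiB during export")
```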