issue_comments
15 rows where author_association = "NONE" and user = 13906519 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
654015589 | https://github.com/pydata/xarray/issues/4197#issuecomment-654015589 | https://api.github.com/repos/pydata/xarray/issues/4197 | MDEyOklzc3VlQ29tbWVudDY1NDAxNTU4OQ== | cwerner 13906519 | 2020-07-06T05:02:48Z | 2020-07-07T13:24:29Z | NONE | Ok, so for now I roll with this:
```python
def shrink_dataarray(da, dims=None):
    """remove nodata borders from spatial dims of dataarray"""
    dims = set(dims) if dims else set(da.dims)
```
Is it possible to identify non-spatial dims with plain xarray dataarrays (non cf-xarray)? And is there maybe a way to detect unlimited dims (usually the time dim)? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Provide a "shrink" command to remove bounding nan/ whitespace of DataArray 650549352 | |
653753668 | https://github.com/pydata/xarray/issues/4197#issuecomment-653753668 | https://api.github.com/repos/pydata/xarray/issues/4197 | MDEyOklzc3VlQ29tbWVudDY1Mzc1MzY2OA== | cwerner 13906519 | 2020-07-04T11:22:42Z | 2020-07-04T11:22:42Z | NONE | @fujiisoup Thanks, that’s great and much cleaner than my previous numpy code. I’ll run with that and maybe try to pack that in a general function. Not sure if this is a common enough problem to have in xarray itself? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Provide a "shrink" command to remove bounding nan/ whitespace of DataArray 650549352 | |
653748350 | https://github.com/pydata/xarray/issues/4197#issuecomment-653748350 | https://api.github.com/repos/pydata/xarray/issues/4197 | MDEyOklzc3VlQ29tbWVudDY1Mzc0ODM1MA== | cwerner 13906519 | 2020-07-04T10:20:56Z | 2020-07-04T10:37:29Z | NONE | @keewis @fujiisoup @shoyer thanks. This does indeed not work for my use case if there's an all-nan stretch between parts of the array (think the UK, the Channel, and the northern coast of France) - I simply want to get rid of extra space around a geographic domain (i.e. the nan edges)
```python
data = np.array([
    [np.nan, np.nan, np.nan, np.nan],
    [np.nan, 0, 2, np.nan],
    [np.nan, np.nan, np.nan, np.nan],
    [np.nan, 2, 0, np.nan],
    [np.nan, np.nan, np.nan, np.nan],
])
da = xr.DataArray(data, dims=("x", "y"))
```
This also results in a 2x2 array, but should be 3x2. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Provide a "shrink" command to remove bounding nan/ whitespace of DataArray 650549352 | |
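The trimming discussed in this thread can be sketched as follows. This is a hypothetical `shrink_dataarray` (only the signature appears in the comments above, the body here is an assumption): it drops only the *leading and trailing* all-NaN slices along each dimension, so an interior all-NaN stretch (like the Channel between the UK and France) survives.

```python
import numpy as np
import xarray as xr

def shrink_dataarray(da, dims=None):
    """Trim leading/trailing all-NaN slices along the given dims."""
    dims = set(dims) if dims else set(da.dims)
    for dim in dims:
        # True where at least one value along the other dims is valid
        valid = da.notnull().any([d for d in da.dims if d != dim]).values
        idx = np.nonzero(valid)[0]
        if idx.size:
            da = da.isel({dim: slice(idx[0], idx[-1] + 1)})
    return da

data = np.array([
    [np.nan, np.nan, np.nan, np.nan],
    [np.nan, 0, 2, np.nan],
    [np.nan, np.nan, np.nan, np.nan],
    [np.nan, 2, 0, np.nan],
    [np.nan, np.nan, np.nan, np.nan],
])
shrunk = shrink_dataarray(xr.DataArray(data, dims=("x", "y")))
```

On the 5x4 example from the comment above, this yields the expected 3x2 result, keeping the interior all-NaN row.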
542224664 | https://github.com/pydata/xarray/issues/3399#issuecomment-542224664 | https://api.github.com/repos/pydata/xarray/issues/3399 | MDEyOklzc3VlQ29tbWVudDU0MjIyNDY2NA== | cwerner 13906519 | 2019-10-15T13:55:48Z | 2019-10-15T13:55:48Z | NONE | Great! Seems I was simply missing the new dim z in my attempts. Could not translate to the new format... Thanks a bunch!!! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
With sel_points deprecated, how do you replace it? 507211596 | |
375581841 | https://github.com/pydata/xarray/issues/2005#issuecomment-375581841 | https://api.github.com/repos/pydata/xarray/issues/2005 | MDEyOklzc3VlQ29tbWVudDM3NTU4MTg0MQ== | cwerner 13906519 | 2018-03-23T08:43:43Z | 2018-03-23T08:43:43Z | NONE | Maybe it's a misconception of mine how compression with add_offset, scale_factor works? I tried using i2 dtype ( About the code samples: sorry, just copied them verbatim from my script. The first block is the logic to compute the scale and offset values, the second is the encoding application using the decorator-based extension to neatly pipe encoding settings to a data array... Doing a minimal example at the moment is a bit problematic as I'm traveling... |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
What is the recommended way to do proper compression/ scaling of vars? 307444427 | |
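For reference, one common recipe for deriving `scale_factor`/`add_offset` when packing floats into `i2` (int16) looks like this. This is a generic sketch, not the script referenced in the comment above; `compute_scale_and_offset` is a hypothetical helper name.

```python
import numpy as np

def compute_scale_and_offset(vmin, vmax, nbits=16):
    """Map the float range [vmin, vmax] onto a signed nbits integer range."""
    # Use 2**nbits - 2 steps so one integer value stays free for _FillValue.
    scale_factor = (vmax - vmin) / (2 ** nbits - 2)
    add_offset = (vmax + vmin) / 2
    return scale_factor, add_offset

scale, offset = compute_scale_and_offset(0.0, 100.0)
packed = np.round((75.0 - offset) / scale).astype(np.int16)  # value -> i2
unpacked = packed * scale + offset                           # i2 -> value
```

The round trip loses at most half a quantization step, so the reconstruction error stays below `scale_factor`.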
344386680 | https://github.com/pydata/xarray/issues/1042#issuecomment-344386680 | https://api.github.com/repos/pydata/xarray/issues/1042 | MDEyOklzc3VlQ29tbWVudDM0NDM4NjY4MA== | cwerner 13906519 | 2017-11-14T20:24:49Z | 2017-11-14T20:24:49Z | NONE | @jhamman Yes, indeed. Sorry to spam this old issue. I misread this one - #757 is what I'm seeing. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Dataset.groupby() doesn't preserve variables order 181881219 | |
344385473 | https://github.com/pydata/xarray/issues/1042#issuecomment-344385473 | https://api.github.com/repos/pydata/xarray/issues/1042 | MDEyOklzc3VlQ29tbWVudDM0NDM4NTQ3Mw== | cwerner 13906519 | 2017-11-14T20:20:38Z | 2017-11-14T20:22:46Z | NONE | I am seeing something similar, but maybe this is another issue (I'm on 0.10.0rc2)? I do get a sorted string coordinate after a groupby... My scenario is, that I have a dataset with a coord like this:
```python
pfts = ds.coords['pft'].values.tolist()
pfts_simplified = [remove(x) for x in pfts]
ds2['pft_agg'] = xr.full_like(ds['pft'], 0)
ds2['pft_agg'][:] = pfts_simplified
ds2_agg = ds2.groupby('pft_agg').sum(dim='pft', skipna=False)
result = ds2_agg.rename({'pft_agg': 'pft'})
```
Then in the end I have:
```
<xarray.DataArray 'pft' (pft: 8)>
array(['BBS', 'B_s', 'C3G', 'TeBE', 'TeBE_scl', 'TeBS', 'TeNE', 'Te_s'], dtype=object)
Coordinates:
  * pft      (pft) object 'BBS' 'B_s' 'C3G' 'TeBE' 'TeBE_scl' 'TeBS' 'TeNE' ...
```
Am I missing something? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Dataset.groupby() doesn't preserve variables order 181881219 | |
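The sorted string coordinate in the comment above is expected behavior: groupby orders the group labels. A minimal sketch with hypothetical labels, showing the sorting and a `reindex` that restores the original (first-seen) order:

```python
import xarray as xr

ds = xr.Dataset(
    {"v": ("pft", [1.0, 2.0, 3.0, 4.0])},
    coords={"pft": ("pft", ["TeBS", "BBS", "TeBS", "C3G"])},
)
agg = ds.groupby("pft").sum()                  # labels come back sorted
order = list(dict.fromkeys(ds["pft"].values))  # original first-seen order
restored = agg.reindex(pft=order)
```

Here `agg` carries the coordinate as `BBS, C3G, TeBS`, while `restored` is back to `TeBS, BBS, C3G`.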
343527624 | https://github.com/pydata/xarray/issues/1041#issuecomment-343527624 | https://api.github.com/repos/pydata/xarray/issues/1041 | MDEyOklzc3VlQ29tbWVudDM0MzUyNzYyNA== | cwerner 13906519 | 2017-11-10T16:56:22Z | 2017-11-10T16:56:22Z | NONE | Ok, do you mean something like this?
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
DataArray resample does not work if the time coordinate name is different from the corresponding dimension name. 181534708 | |
343524554 | https://github.com/pydata/xarray/issues/1041#issuecomment-343524554 | https://api.github.com/repos/pydata/xarray/issues/1041 | MDEyOklzc3VlQ29tbWVudDM0MzUyNDU1NA== | cwerner 13906519 | 2017-11-10T16:45:08Z | 2017-11-10T16:45:08Z | NONE | @shoyer Is it possible to resample using fixed user-defined intervals? I have a non-CF compliant time axis (years -22000 to 1989) and want to aggregate by mean or argmax for 10 year intervals... Is this possible using resample? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
DataArray resample does not work if the time coordinate name is different from the corresponding dimension name. 181534708 | |
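One possible approach for the question above, sketched with a made-up short year axis: `groupby_bins` aggregates over fixed user-defined intervals and does not require a CF-compliant time coordinate, which sidesteps the limitation of `resample`.

```python
import numpy as np
import xarray as xr

# Stand-in for a non-CF integer year axis (the real one runs -22000..1989)
years = np.arange(-100, 0)
da = xr.DataArray(np.arange(years.size, dtype=float),
                  dims="year", coords={"year": years})

edges = np.arange(-100, 1, 10)  # fixed 10-year interval edges
decadal_mean = da.groupby_bins("year", edges).mean()
```

Note the intervals are left-open/right-closed by default (pandas.cut semantics), so the exact left edge of the first bin is excluded; `argmax`-style reductions work the same way via `.reduce` or `.map` on the grouped object.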
343332976 | https://github.com/pydata/xarray/issues/1225#issuecomment-343332976 | https://api.github.com/repos/pydata/xarray/issues/1225 | MDEyOklzc3VlQ29tbWVudDM0MzMzMjk3Ng== | cwerner 13906519 | 2017-11-10T00:07:24Z | 2017-11-10T00:07:24Z | NONE | Thanks for that Stephan. The workaround looks good for the moment ;-)... Detecting a mismatch (and maybe even correcting it) automatically would be very useful cheers, C |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
“ValueError: chunksize cannot exceed dimension size” when trying to write xarray to netcdf 202964277 | |
343325842 | https://github.com/pydata/xarray/issues/1225#issuecomment-343325842 | https://api.github.com/repos/pydata/xarray/issues/1225 | MDEyOklzc3VlQ29tbWVudDM0MzMyNTg0Mg== | cwerner 13906519 | 2017-11-09T23:28:28Z | 2017-11-09T23:28:28Z | NONE | Is there any news on this? Have the same problem. A reset_chunksizes() method would be very helpful. Also, what is the cleanest way to remove all chunk size info? I have a very long computation and it fails at the very end with the mentioned error message. My file is patched together from many sources... cheers |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
“ValueError: chunksize cannot exceed dimension size” when trying to write xarray to netcdf 202964277 | |
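As a workaround sketch (xarray has no built-in `reset_chunksizes()` as wished for above): the stale chunk-size info lives in each variable's `.encoding`, so it can be cleared before writing with `to_netcdf`. `drop_chunk_encoding` is a hypothetical helper name.

```python
import numpy as np
import xarray as xr

def drop_chunk_encoding(ds):
    """Remove on-disk chunk-size encoding from every variable of a Dataset."""
    for var in ds.variables.values():
        var.encoding.pop("chunksizes", None)
        var.encoding.pop("contiguous", None)
    return ds

ds = xr.Dataset({"v": ("x", np.arange(4))})
ds["v"].encoding["chunksizes"] = (1024,)  # stale: exceeds the dim size
ds = drop_chunk_encoding(ds)
```

After this, `ds.to_netcdf(...)` lets the backend pick valid chunk sizes instead of replaying the stale ones inherited from the source files.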
281774695 | https://github.com/pydata/xarray/issues/1281#issuecomment-281774695 | https://api.github.com/repos/pydata/xarray/issues/1281 | MDEyOklzc3VlQ29tbWVudDI4MTc3NDY5NQ== | cwerner 13906519 | 2017-02-22T19:27:03Z | 2017-02-22T19:27:03Z | NONE | I would like something like this as well! Also, specifying default attrs for all data arrays of a dataset (like missing_data/ _FillValue/ ...) would be nice... Not sure if this is currently possible? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
update_attrs method 209523348 | |
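A workaround sketch for the second part of the comment above (default attrs for all data variables of a dataset): `set_default_attrs` is a hypothetical helper, not an existing xarray method.

```python
import numpy as np
import xarray as xr

def set_default_attrs(ds, **attrs):
    """Fill in default attrs on every data variable, keeping existing ones."""
    for name in ds.data_vars:
        for key, value in attrs.items():
            ds[name].attrs.setdefault(key, value)
    return ds

ds = xr.Dataset({"a": ("x", np.zeros(3)), "b": ("x", np.ones(3))})
ds["a"].attrs["units"] = "m"  # an existing attr that should be preserved
ds = set_default_attrs(ds, units="kg", missing_value=-9999)
```

Note that `_FillValue` is better placed in `.encoding` than in `.attrs` when writing netCDF, so this pattern suits descriptive defaults like `units` or `missing_value`.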
263807796 | https://github.com/pydata/xarray/pull/604#issuecomment-263807796 | https://api.github.com/repos/pydata/xarray/issues/604 | MDEyOklzc3VlQ29tbWVudDI2MzgwNzc5Ng== | cwerner 13906519 | 2016-11-30T08:00:48Z | 2016-11-30T08:00:48Z | NONE | Hi. I'm seeing the same plotting issues as @jhamman in the plot above Oct 2015 with 0.8.2. Basically, all (most?) operations on the first subplots' axis differ. Is there a fix/ workaround for this? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add subplot_kws arg to plotting interfaces 109583455 | |
155698426 | https://github.com/pydata/xarray/issues/644#issuecomment-155698426 | https://api.github.com/repos/pydata/xarray/issues/644 | MDEyOklzc3VlQ29tbWVudDE1NTY5ODQyNg== | cwerner 13906519 | 2015-11-11T08:02:56Z | 2015-11-11T08:02:56Z | NONE | Ah, ok, cool. Thanks for the pointers and getting back to me. Looking forward to any future xray improvements. It’s really becoming my go-to for netcdf stuff (in addition to cdo). Christian
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature request: only allow nearest-neighbor .sel for valid data (not NaN positions) 114773593 | |
138974154 | https://github.com/pydata/xarray/issues/564#issuecomment-138974154 | https://api.github.com/repos/pydata/xarray/issues/564 | MDEyOklzc3VlQ29tbWVudDEzODk3NDE1NA== | cwerner 13906519 | 2015-09-09T16:57:48Z | 2015-09-09T16:57:48Z | NONE | Ah, ok... Thanks. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
to_netcdf() writes attrs as unicode strings 105536609 |
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```