html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/4197#issuecomment-654015589,https://api.github.com/repos/pydata/xarray/issues/4197,654015589,MDEyOklzc3VlQ29tbWVudDY1NDAxNTU4OQ==,13906519,2020-07-06T05:02:48Z,2020-07-07T13:24:29Z,NONE,"Ok, so for now I roll with this:

```python
def shrink_dataarray(da, dims=None):
    """"""remove nodata borders from the spatial dims of a dataarray""""""
    dims = set(dims) if dims else set(da.dims)

    if len(dims) != 2:
        raise ValueError('expected exactly two spatial dims')

    # non-spatial dims (carried over as-is, only spatial dims are shrunk)
    nsd = set(da.dims) - dims
    nsd_indexers = {d: range(len(da[d])) for d in nsd}

    # along each spatial dim, keep everything between the first and the
    # last position that holds any data
    indexers = {}
    for d in dims:
        counts = da.count(dim=(dims - {d}) | nsd)
        indexers[d] = (counts.cumsum() != 0) * (counts[::-1].cumsum()[::-1] != 0)
    indexers.update(nsd_indexers)

    return da.isel(**indexers)
```

Is it possible to identify non-spatial dims with plain xarray dataarrays (without cf-xarray)? And is there maybe a way to detect unlimited dims (usually the time dim)?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,650549352
https://github.com/pydata/xarray/issues/4197#issuecomment-653753668,https://api.github.com/repos/pydata/xarray/issues/4197,653753668,MDEyOklzc3VlQ29tbWVudDY1Mzc1MzY2OA==,13906519,2020-07-04T11:22:42Z,2020-07-04T11:22:42Z,NONE,"@fujiisoup Thanks, that’s great and much cleaner than my previous numpy code. I’ll run with that and maybe try to pack it into a general function. Not sure if this is a common enough problem to have in xarray itself?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,650549352
https://github.com/pydata/xarray/issues/4197#issuecomment-653748350,https://api.github.com/repos/pydata/xarray/issues/4197,653748350,MDEyOklzc3VlQ29tbWVudDY1Mzc0ODM1MA==,13906519,2020-07-04T10:20:56Z,2020-07-04T10:37:29Z,NONE,"@keewis @fujiisoup @shoyer Thanks. This does indeed not work for my use case if there's an all-nan stretch between parts of the array (think the UK, the Channel, and the northern coast of France) - I simply want to get rid of the extra space around a geographic domain (i.e., the NaN edges).

```python
import numpy as np
import xarray as xr

data = np.array([
    [np.nan, np.nan, np.nan, np.nan],
    [np.nan, 0, 2, np.nan],
    [np.nan, np.nan, np.nan, np.nan],
    [np.nan, 2, 0, np.nan],
    [np.nan, np.nan, np.nan, np.nan],
])
da = xr.DataArray(data, dims=(""x"", ""y""))

# this also results in a 2x2 array, but it should be 3x2
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,650549352
https://github.com/pydata/xarray/issues/3399#issuecomment-542224664,https://api.github.com/repos/pydata/xarray/issues/3399,542224664,MDEyOklzc3VlQ29tbWVudDU0MjIyNDY2NA==,13906519,2019-10-15T13:55:48Z,2019-10-15T13:55:48Z,NONE,"Great! It seems I was simply missing the new dim *z* in my attempts and could not translate to the new format... Thanks a bunch!!!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,507211596
https://github.com/pydata/xarray/issues/2005#issuecomment-375581841,https://api.github.com/repos/pydata/xarray/issues/2005,375581841,MDEyOklzc3VlQ29tbWVudDM3NTU4MTg0MQ==,13906519,2018-03-23T08:43:43Z,2018-03-23T08:43:43Z,NONE,"Maybe it's a misconception of mine about how compression with add_offset/scale_factor works?
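To make sure we're talking about the same thing, this is roughly how I derive the two constants (a minimal, untested sketch; it assumes the usual CF decoding unpacked = packed * scale_factor + add_offset, with one integer value reserved for the fill):

```python
def compute_scale_and_offset(vmin, vmax, nbits=16):
    # number of usable steps in a signed nbits integer,
    # keeping one value (-2**(nbits - 1)) free for _FillValue
    nsteps = 2 ** nbits - 2
    scale_factor = (vmax - vmin) / nsteps
    # offset at the midpoint, so packed values are centered around zero
    add_offset = vmin + (vmax - vmin) / 2
    return scale_factor, add_offset
```

I then pass the result through the encoding dict, e.g. `encoding={'var': {'dtype': 'i2', 'scale_factor': sf, 'add_offset': ao, '_FillValue': -(2 ** 15)}}` when writing with `to_netcdf`.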
I tried using the i2 dtype (`ctype='i2'`) and only scale_factor (no add_offset) and this looks ok. However, when I switch to an i4/i8 dtype I get strange data in the netCDFs (I write with NETCDF4_CLASSIC, if that matters)... Is it not possible to use a higher-precision integer type for the add_offset/scale_factor encoding to get better precision for the scaled values?

About the code samples: sorry, I just copied them verbatim from my script. The first block is the logic to compute the scale and offset values, the second is the encoding application, using the decorator-based extension to neatly pipe encoding settings to a data array... Doing a minimal example at the moment is a bit problematic as I'm traveling...","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,307444427
https://github.com/pydata/xarray/issues/1042#issuecomment-344386680,https://api.github.com/repos/pydata/xarray/issues/1042,344386680,MDEyOklzc3VlQ29tbWVudDM0NDM4NjY4MA==,13906519,2017-11-14T20:24:49Z,2017-11-14T20:24:49Z,NONE,"@jhamman Yes, indeed. Sorry to spam this old issue. I misread this one - #757 is what I'm seeing.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,181881219
https://github.com/pydata/xarray/issues/1042#issuecomment-344385473,https://api.github.com/repos/pydata/xarray/issues/1042,344385473,MDEyOklzc3VlQ29tbWVudDM0NDM4NTQ3Mw==,13906519,2017-11-14T20:20:38Z,2017-11-14T20:22:46Z,NONE,"I am seeing something similar, but maybe this is another issue (I'm on 0.10.0rc2)? I do get a sorted string coordinate after a groupby...

My scenario is that I have a dataset with a coord like this:

```
array(['TeBE_tm', 'TeBE_itm', 'TeBE_itscl', 'TeBE_tscl', 'TeBS_tm', 'TeBS_itm',
       'TeE_s', 'TeR_s', 'TeNE', 'BBS_itm', 'BE_s', 'BS_s', 'C3G'],
      dtype='|S10')
Coordinates:
  * pft      (pft) |S10 'TeBE_tm' 'TeBE_itm' 'TeBE_itscl' 'TeBE_tscl' ...
```

Then I create a new coordinate that I use to aggregate:

```python
pfts = ds.coords['pft'].values.tolist()
# remove() is my own helper that maps detailed PFT names to aggregated ones
pfts_simplified = [remove(x) for x in pfts]

ds2['pft_agg'] = xr.full_like(ds['pft'], 0)
ds2['pft_agg'][:] = pfts_simplified

ds2_agg = ds2.groupby('pft_agg').sum(dim='pft', skipna=False)
result = ds2_agg.rename({'pft_agg': 'pft'})
```

Then in the end I have:

```
array(['BBS', 'B_s', 'C3G', 'TeBE', 'TeBE_scl', 'TeBS', 'TeNE', 'Te_s'], dtype=object)
Coordinates:
  * pft      (pft) object 'BBS' 'B_s' 'C3G' 'TeBE' 'TeBE_scl' 'TeBS' 'TeNE' ...
```

Am I missing something?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,181881219
https://github.com/pydata/xarray/issues/1041#issuecomment-343527624,https://api.github.com/repos/pydata/xarray/issues/1041,343527624,MDEyOklzc3VlQ29tbWVudDM0MzUyNzYyNA==,13906519,2017-11-10T16:56:22Z,2017-11-10T16:56:22Z,NONE,"Ok, do you mean something like this?
```python
ds = xr.open_dataset(fname_data, decode_times=False)

# one group label per decade (assumes the number of time steps
# is a multiple of 10)
ds['time_agg'] = xr.full_like(ds['time'], 0)
ds['time_agg'][:] = np.repeat(np.arange(len(ds['time']) // 10), 10)

ds_agg = ds.groupby('time_agg').mean(dim='time')
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,181534708
https://github.com/pydata/xarray/issues/1041#issuecomment-343524554,https://api.github.com/repos/pydata/xarray/issues/1041,343524554,MDEyOklzc3VlQ29tbWVudDM0MzUyNDU1NA==,13906519,2017-11-10T16:45:08Z,2017-11-10T16:45:08Z,NONE,"@shoyer Is it possible to resample using fixed, user-defined intervals? I have a non-CF-compliant time axis (years -22000 to 1989) and want to aggregate by mean or argmax over 10-year intervals... Is this possible using resample?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,181534708
https://github.com/pydata/xarray/issues/1225#issuecomment-343332976,https://api.github.com/repos/pydata/xarray/issues/1225,343332976,MDEyOklzc3VlQ29tbWVudDM0MzMzMjk3Ng==,13906519,2017-11-10T00:07:24Z,2017-11-10T00:07:24Z,NONE,"Thanks for that, Stephan. The workaround looks good for the moment ;-)... Detecting a mismatch (and maybe even correcting it) automatically would be very useful.

cheers, C","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,202964277
https://github.com/pydata/xarray/issues/1225#issuecomment-343325842,https://api.github.com/repos/pydata/xarray/issues/1225,343325842,MDEyOklzc3VlQ29tbWVudDM0MzMyNTg0Mg==,13906519,2017-11-09T23:28:28Z,2017-11-09T23:28:28Z,NONE,"Is there any news on this? I have the same problem. A reset_chunksizes() method would be very helpful. Also, what is the cleanest way to remove all chunk-size info? I have a very long computation and it fails at the very end with the mentioned error message. My file is patched together from many sources...

cheers","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,202964277
https://github.com/pydata/xarray/issues/1281#issuecomment-281774695,https://api.github.com/repos/pydata/xarray/issues/1281,281774695,MDEyOklzc3VlQ29tbWVudDI4MTc3NDY5NQ==,13906519,2017-02-22T19:27:03Z,2017-02-22T19:27:03Z,NONE,"I would like something like this as well! Also, specifying default attrs for all data arrays of a dataset (like missing_data/ _FillValue/ ...) would be nice... Not sure if this is currently possible?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,209523348
https://github.com/pydata/xarray/pull/604#issuecomment-263807796,https://api.github.com/repos/pydata/xarray/issues/604,263807796,MDEyOklzc3VlQ29tbWVudDI2MzgwNzc5Ng==,13906519,2016-11-30T08:00:48Z,2016-11-30T08:00:48Z,NONE,"Hi. I'm seeing the same plotting issues as @jhamman in the plot above (Oct 2015) with 0.8.2. Basically, all (most?) operations on the first subplot's axis differ. Is there a fix/ workaround for this?
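For context, the facet plot below was produced with something like this (a sketch from memory; the file and variable names are made up):

```python
import xarray as xr

ds = xr.open_dataset('output.nc')  # hypothetical file
# col/ col_wrap lay the 2D fields out as a facet grid, one panel per time step
ds['var'].plot(col='time', col_wrap=4)
```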
![facet](https://cloud.githubusercontent.com/assets/13906519/20744297/6d6a2cae-b6db-11e6-8398-47ade81caa88.jpeg)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,109583455
https://github.com/pydata/xarray/issues/644#issuecomment-155698426,https://api.github.com/repos/pydata/xarray/issues/644,155698426,MDEyOklzc3VlQ29tbWVudDE1NTY5ODQyNg==,13906519,2015-11-11T08:02:56Z,2015-11-11T08:02:56Z,NONE,"Ah, ok, cool. Thanks for the pointers and for getting back to me. Looking forward to any future xray improvements. It’s really becoming my go-to for netcdf stuff (in addition to cdo).

Christian

> On 11 Nov 2015, at 01:27, Stephan Hoyer notifications@github.com wrote:
>
> This is tricky to put into .sel because that method currently works by only looking at coordinate labels, not at data values.
>
> One way to fix this would be to unravel your two dimensions corresponding to latitude and longitude into a single ""lat_lon"" dimension. At this point, you could apply a sea mask, to produce a compressed lat_lon coordinate corresponding to only unmasked points. Now, it's relatively straightforward to imagine doing nearest neighbor lookups on this set of labels.
>
> This latter solution will require a few steps (all of which are on the ""to do"" list, but without any immediate timelines):
> 1. support for multi-level indexes in xray
> 2. support for ""unraveling"" multiple dimensions into 1-dimension
> 3. support for looking up nearest locations in multiple dimensions via some sort of spatial index (e.g., a KD tree)
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,114773593
https://github.com/pydata/xarray/issues/564#issuecomment-138974154,https://api.github.com/repos/pydata/xarray/issues/564,138974154,MDEyOklzc3VlQ29tbWVudDEzODk3NDE1NA==,13906519,2015-09-09T16:57:48Z,2015-09-09T16:57:48Z,NONE,"Ah, ok... Thanks.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,105536609