issues
3 rows where user = 17701232 sorted by updated_at descending
id | number | title | user | state | locked | comments | created_at | updated_at ▼ | closed_at | author_association | state_reason | repo | type
---|---|---|---|---|---|---|---|---|---|---|---|---|---
244016361 | 1483 | Loss of coordinate information from groupby.apply() on a stacked object | byersiiasa 17701232 | open | 0 | 5 | 2017-07-19T11:59:48Z | 2020-10-04T16:09:22Z | | NONE | | xarray 13221727 | issue
179969119 | 1019 | groupby_bins: exclude bin or assign bin with nan when bin has no values | byersiiasa 17701232 | closed | 0 | 10 | 2016-09-29T07:09:02Z | 2016-10-03T21:54:38Z | 2016-10-03T15:22:15Z | NONE | completed | xarray 13221727 | issue
155741762 | 851 | xr.concat and xr.to_netcdf new filesize | byersiiasa 17701232 | closed | 0 | 4 | 2016-05-19T13:51:17Z | 2016-05-20T08:08:44Z | 2016-05-19T21:13:04Z | NONE | completed | xarray 13221727 | issue
Body of #1483 — Loss of coordinate information from groupby.apply() on a stacked object (node_id MDU6SXNzdWUyNDQwMTYzNjE=):

I use this stack, groupby, unstack pattern quite frequently, e.g. here. An issue I have is that after the apply step, the coordinates of the stacked object are lost. Is there a way to carry them through, and is this an issue for others?

```
import xarray as xr

<xarray.DataArray (lat: 180, lon: 360, time: 2000)>
array([[[ 0.623891, -0.044304, ...,  1.015785,  0.009088],
        [-0.7375  ,  0.380369, ...,  0.788351, -0.69295 ],
        ...,
        [ 0.171894,  0.517164, ..., -0.946908, -0.597802],
        [ 0.353743,  0.005539, ..., -1.436965, -0.190099]],
       ...
Coordinates:
  * lat      (lat) int32 90 89 88 87 86 85 84 83 82 81 80 79 78 77 76 75 74 ...
  * lon      (lon) int32 -180 -179 -178 -177 -176 -175 -174 -173 -172 -171 ...
  * time     (time) int32 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 ...
```

reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/1483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
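A minimal sketch of the pattern this issue describes, on toy data. The array, the `region` coordinate, and the `assign_coords` workaround are illustrative assumptions, not from the issue; with the xarray versions of that era, the non-index coordinate may be dropped as reported.

```
import numpy as np
import xarray as xr

# Toy data standing in for the 180x360x2000 array above (names illustrative).
da = xr.DataArray(
    np.random.rand(4, 4, 10),
    coords={"lat": [10, 20, 30, 40], "lon": [0, 90, 180, 270], "time": range(10)},
    dims=("lat", "lon", "time"),
)
# A non-index coordinate along lat, to show it getting dropped.
da = da.assign_coords(region=("lat", ["N", "N", "S", "S"]))

# The stack -> groupby -> apply -> unstack pattern from the issue.
stacked = da.stack(allpoints=("lat", "lon"))
result = (
    stacked.groupby("allpoints")
    .apply(lambda x: x.mean("time"))
    .unstack("allpoints")
)

print("region" in result.coords)  # may be False: the non-index coord is lost

# One possible workaround: copy the lost coordinates back from the original.
result = result.assign_coords(region=da.region)
```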
Body of #1019 — groupby_bins: exclude bin or assign bin with nan when bin has no values (node_id MDU6SXNzdWUxNzk5NjkxMTk=):

When using groupby_bins there are cases where no values fall in some of the specified bins. Currently, it appears that in these cases the bin is skipped, with neither a value nor a bin entry added to the output DataArray. Is there a way to identify which bins have been skipped? Or, preferably, is it possible to have an option to include those bins, but with NaN values? This would make it easier to compare two DataArrays in cases where, despite having the same bin intervals as inputs, the outputs end up with different variable and coordinate lengths.

```
import xarray as xr
var = xr.open_dataset(r'c:\users\saveMWE.nc')
pop = xr.open_dataset(r'c:\users\savePOP.nc')
# binns includes a very small bin to test this
binns = [-100, -50, 0, 50, 50.00001, 100]
binned = pop.p2010T.groupby_bins(var.EnsembleMean, binns).sum()
print(binned)
print(binned.EnsembleMean_bins)
```

In this case, no data falls in the 4th bin, between 50 and 50.00001. Obviously one can count the lengths, but this doesn't indicate which bin was skipped. An option to include the empty bin with a NaN value would be useful! Thanks

reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/1019/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
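A hedged workaround sketch for the missing-bins problem, using synthetic data in place of the MWE files. The reindex trick and all names here are assumptions, not something the issue or its resolution prescribes:

```
import numpy as np
import pandas as pd
import xarray as xr

# Synthetic stand-ins for the saveMWE.nc / savePOP.nc variables.
var = xr.DataArray(np.linspace(-80, 80, 100), dims="x", name="EnsembleMean")
pop = xr.DataArray(np.ones(100), dims="x", name="p2010T")

bins = [-100, -50, 0, 50, 50.00001, 100]
binned = pop.groupby_bins(var, bins).sum()  # the (50, 50.00001] bin is empty

# Workaround: reindex onto the full set of intervals so empty bins appear
# explicitly as NaN instead of being silently dropped.
all_bins = pd.IntervalIndex.from_breaks(bins)
binned_full = binned.reindex(EnsembleMean_bins=all_bins)
print(binned_full)  # 5 entries, with NaN for the empty bin
```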
Body of #851 — xr.concat and xr.to_netcdf new filesize (node_id MDU6SXNzdWUxNTU3NDE3NjI=):

I am having an issue whereby I read in two very similar netCDFs, concatenate them along one dimension (time), and write the result back to a new netCDF. However, the new file size is enormous, and I can't work out why. More details in the StackOverflow question here: http://stackoverflow.com/questions/37324106/python-xarray-concat-new-file-size Thanks

reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/851/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
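A common cause of this symptom is that packed or compressed variables are decoded to float64 on read, and the on-disk encoding may not be carried through concat, so the output is written uncompressed at a wider dtype. A sketch of re-applying an encoding on write, under that assumption; file names and settings are illustrative:

```
import xarray as xr

# Placeholders for the two similar netCDFs from the issue.
ds1 = xr.open_dataset("part1.nc")
ds2 = xr.open_dataset("part2.nc")
combined = xr.concat([ds1, ds2], dim="time")

# Re-apply compression when writing, since the concatenated result may not
# retain the compression/packing encoding of the input files.
encoding = {name: {"zlib": True, "complevel": 4} for name in combined.data_vars}
combined.to_netcdf("combined.nc", encoding=encoding)
```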
```
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
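For reference, a minimal sketch of reproducing this page's query against the underlying SQLite database; the `github.db` file name is an assumption:

```
import sqlite3

# Query the issues table the way this page does: filter by user id,
# order by updated_at descending.
conn = sqlite3.connect("github.db")  # hypothetical database file name
rows = conn.execute(
    """
    SELECT id, number, title, state, created_at, updated_at, closed_at
    FROM issues
    WHERE [user] = ?
    ORDER BY updated_at DESC
    """,
    (17701232,),
).fetchall()
for row in rows:
    print(row)
```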