# issues

9 rows where state = "closed", type = "issue" and user = 12307589, sorted by updated_at descending
Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at, closed_at, author_association, active_lock_reason, draft, pull_request, body, reactions, performed_via_github_app, state_reason, repo, type
### #828 · Attributes are currently kept when arrays are resampled, and not when datasets are resampled

id 148902850 · node_id MDU6SXNzdWUxNDg5MDI4NTA= · mcgibbon (12307589) · CONTRIBUTOR
closed (completed) · not locked · 4 comments · repo xarray (13221727) · type issue · reactions 0
created 2016-04-16T23:43:05Z · updated 2020-04-05T18:19:03Z · closed 2016-04-20T07:13:47Z

Because line 323 of groupby.py copies attributes from a DataArray to its resampling output (it shouldn't), attributes are kept in many cases when DataArrays are resampled (and not kept for similar cases when Datasets are resampled).
### #929 · Dataset creation requires tuple, list treated differently

id 168754274 · node_id MDU6SXNzdWUxNjg3NTQyNzQ= · mcgibbon (12307589) · CONTRIBUTOR
closed (completed) · not locked · 4 comments · repo xarray (13221727) · type issue · reactions 0
created 2016-08-01T22:11:15Z · updated 2019-02-26T08:51:17Z · closed 2019-02-26T08:51:17Z

Take the Dataset creation example: […]

If the tuple (['x', 'y', 'time'], temp) is replaced with a list [['x', 'y', 'time'], temp], the behavior changes in very strange ways. The resulting Dataset will then have a coordinate variable temperature whose dimensions are ('temperature', 'x', 'y', 'time'). Printing temperature shows that the ['x', 'y', 'time'] part has been interpreted as data rather than metadata. It seems to be impossible to access the data in the resulting temperature coordinate by indexing. This might be intentional (since one could actually want to pass in data that is stored as a list), but it may be better to do some sanity checking when a list is passed, to figure out whether the list is data or, as above, dimension metadata. If no change is made, then this behavior should probably be pointed out in the documentation.
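The sanity check suggested above could be sketched roughly as follows. The predicate name and the heuristic are hypothetical, not xarray's actual logic; this only illustrates how a (dims, data) tuple can be told apart from a plain nested list:

```python
def looks_like_dims_data_pair(value):
    """Heuristic: a (dims, data) pair is a tuple whose first element
    is a dimension name or a sequence of dimension names (strings)."""
    if not isinstance(value, tuple) or len(value) < 2:
        return False
    dims = value[0]
    if isinstance(dims, str):
        return True
    return (isinstance(dims, (list, tuple))
            and len(dims) > 0
            and all(isinstance(d, str) for d in dims))

# A (dims, data) tuple is recognized; the equivalent list is treated as data.
print(looks_like_dims_data_pair((['x', 'y', 'time'], [[1, 2], [3, 4]])))  # True
print(looks_like_dims_data_pair([['x', 'y', 'time'], [[1, 2], [3, 4]]]))  # False
```

A check like this could emit a warning when a list happens to look like a (dims, data) pair, without changing which inputs are accepted.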
### #1062 · Add remaining date units to conventions.py

id 185441216 · node_id MDU6SXNzdWUxODU0NDEyMTY= · mcgibbon (12307589) · CONTRIBUTOR
closed (completed) · not locked · 6 comments · repo xarray (13221727) · type issue · reactions 0
created 2016-10-26T16:14:44Z · updated 2019-02-24T21:25:39Z · closed 2019-02-24T21:25:39Z

Currently _netcdf_to_numpy_timeunit in conventions.py (seemingly) artificially imposes that weeks, months, and years can't be used as time units, despite some of these being CF-compliant (months, years) and datetime64 supporting these units. Are these possibly disabled because of the way Udunits defines these units? From CF conventions: […]
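The kind of mapping involved, and why months and years are awkward, can be illustrated in plain numpy. The dict below is illustrative, not the actual contents of _netcdf_to_numpy_timeunit:

```python
import numpy as np

# Illustrative map from CF/netCDF time-unit names to numpy timedelta64 codes.
NETCDF_TO_NUMPY_TIMEUNIT = {
    "seconds": "s", "minutes": "m", "hours": "h", "days": "D",
    "weeks": "W", "months": "M", "years": "Y",
}

# Weeks have a fixed length, so they convert cleanly to day-based units:
print(np.timedelta64(1, "W") == np.timedelta64(7, "D"))  # True

# Months and years are calendar-dependent: numpy refuses to combine them
# with day-based units, which hints at why they are hard to support.
try:
    np.timedelta64(1, "M") + np.timedelta64(1, "D")
except TypeError as e:
    print("incompatible units:", e)
```

So datetime64 does "support" weeks, months, and years as unit codes, but only weeks are interchangeable with the linear units xarray otherwise decodes.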
### #833 · Coveralls is missing line-by-line report

id 149513700 · node_id MDU6SXNzdWUxNDk1MTM3MDA= · mcgibbon (12307589) · CONTRIBUTOR
closed (completed) · not locked · 2 comments · repo xarray (13221727) · type issue · reactions 0
created 2016-04-19T16:31:40Z · updated 2019-01-27T22:48:15Z · closed 2019-01-27T22:48:15Z
### #2176 · Advice on unit-aware arithmetic

id 325810810 · node_id MDU6SXNzdWUzMjU4MTA4MTA= · mcgibbon (12307589) · CONTRIBUTOR
closed (completed) · not locked · 9 comments · repo xarray (13221727) · type issue · reactions 0
created 2018-05-23T17:51:54Z · updated 2018-05-25T18:11:56Z · closed 2018-05-25T18:11:55Z

This isn't really a bug report. In […] we currently have this implemented as a subclass […]. The problem I have is that the new code that results from using an accessor is quite cumbersome. The issue lies in the fact that we mainly use new implementations for arithmetic operations. So, for example, the following code: […] instead becomes […].

This could be a little less cumbersome if we avoid a sympl namespace and instead add separate accessors for each method; at least then it reads naturally. However, there's a reason you don't generally recommend doing this.

I'm looking for advice on what is best for […]
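The accessor mechanism being discussed (what xarray's register_dataarray_accessor builds on) can be sketched in plain Python. Every name here is illustrative; this is not xarray's or sympl's implementation, just the cached-descriptor pattern and the call shape that makes accessor arithmetic verbose:

```python
class CachedAccessor:
    """Descriptor that lazily creates one accessor namespace per instance."""
    def __init__(self, name, accessor_cls):
        self._name = name
        self._accessor_cls = accessor_cls

    def __get__(self, obj, cls):
        if obj is None:
            return self._accessor_cls
        accessor = self._accessor_cls(obj)
        # Cache on the instance so repeated access reuses the same object.
        object.__setattr__(obj, self._name, accessor)
        return accessor


class DataArrayLike:
    """Stand-in for xarray.DataArray, just enough to host an accessor."""
    def __init__(self, data, units):
        self.data = data
        self.attrs = {"units": units}


class UnitsAccessor:
    """Hypothetical sympl-style namespace for unit-aware arithmetic."""
    def __init__(self, obj):
        self._obj = obj

    def multiply(self, other):
        # Real unit handling would simplify/convert; this sketch just tags.
        units = f"{self._obj.attrs['units']} {other.attrs['units']}"
        return DataArrayLike(self._obj.data * other.data, units)


# Registration step, analogous to @xr.register_dataarray_accessor("sympl"):
DataArrayLike.sympl = CachedAccessor("sympl", UnitsAccessor)

a = DataArrayLike(3.0, "m")
b = DataArrayLike(2.0, "s")
c = a.sympl.multiply(b)
print(c.data, c.attrs["units"])  # prints: 6.0 m s
```

The `a.sympl.multiply(b)` call shape is the cumbersomeness complained about: what was `a * b` on a subclass becomes a namespaced method call under an accessor.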
### #2008 · Cannot save netcdf files with non-standard calendars

id 307857984 · node_id MDU6SXNzdWUzMDc4NTc5ODQ= · mcgibbon (12307589) · CONTRIBUTOR
closed (completed) · not locked · 6 comments · repo xarray (13221727) · type issue · reactions 0
created 2018-03-23T00:11:29Z · updated 2018-05-16T19:50:40Z · closed 2018-05-16T19:50:40Z

**Code Sample, a copy-pastable example if possible**

Using noleap.nc from the following zip: noleap.zip […]

**Problem description**

A long traceback gets printed out (sorry, I can't copy it properly from my current machine) that ends in […]

**Expected Output**

Obviously, we expect the file to save. If xarray can decode the times, it should be able to encode them.

Output of […]
### #898 · Indexing by multiple arrays inconsistent with numpy

id 165160600 · node_id MDU6SXNzdWUxNjUxNjA2MDA= · mcgibbon (12307589) · CONTRIBUTOR
closed (completed) · not locked · 1 comment · repo xarray (13221727) · type issue · reactions 0
created 2016-07-12T19:30:53Z · updated 2016-07-31T23:10:38Z · closed 2016-07-31T23:10:38Z

When indexing an array with multiple 1d arrays of the same length, the behavior of DataArray is different from the behavior of numpy arrays. In particular, a 2d array is returned instead of a 1d array.

```
In [1]: import xarray as xr

In [2]: import numpy as np

In [3]: a = np.random.randn(5, 5)

In [4]: print(a[range(5), range(5)])
[ 0.92539795  0.06337135 -0.02374713 -0.6795863  -1.98749572]

In [5]: a = xr.DataArray(a)

In [6]: print(a[range(5), range(5)])
<xarray.DataArray (dim_0: 5, dim_1: 5)>
array([[ 0.92539795,  0.34007445,  0.44199176,  1.29499782, -0.92076652],
       [ 0.23939236,  0.06337135,  0.83446803,  0.58847174, -1.08886251],
       [ 1.35784349, -0.51613834, -0.02374713,  1.6610402 ,  0.80005739],
       [-0.75571607, -1.67907855,  1.29851435, -0.6795863 , -2.47751013],
       [-0.05817197, -1.195133  ,  0.43844213,  0.29625676, -1.98749572]])
Coordinates:
  * dim_0    (dim_0) int64 0 1 2 3 4
  * dim_1    (dim_1) int64 0 1 2 3 4

In [7]: xr.__version__
Out[7]: '0.7.2-6-g859ddc2'
```
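Both behaviors can be reproduced in plain numpy: integer-array indexing pairs the index arrays elementwise (pointwise), while np.ix_ produces the orthogonal, outer-product indexing that the 2d DataArray result above resembles:

```python
import numpy as np

a = np.arange(25).reshape(5, 5)

# numpy's fancy indexing pairs the index arrays elementwise,
# picking out the diagonal here:
pointwise = a[range(5), range(5)]
print(pointwise.shape)  # (5,)

# Orthogonal indexing takes the cross product of the index arrays,
# which is the 2d result the DataArray returned:
orthogonal = a[np.ix_(range(5), range(5))]
print(orthogonal.shape)  # (5, 5)
```

xarray at the time implemented only the orthogonal interpretation, hence the inconsistency reported here.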
### #885 · Docstring for Dataset.drop needs revision

id 160507093 · node_id MDU6SXNzdWUxNjA1MDcwOTM= · mcgibbon (12307589) · CONTRIBUTOR
closed (completed) · not locked · 1 comment · repo xarray (13221727) · type issue · reactions 0
created 2016-06-15T19:44:21Z · updated 2016-06-16T00:45:46Z · closed 2016-06-16T00:45:36Z

The docstring for Dataset.drop indicates that the first argument, labels, should be a string indicating names of variables or index labels to drop. I'm not sure, but I'm guessing it either takes in several strings (in which case the docstring should say *labels), or it takes in either a string or an iterable (in which case the argument type should reflect this).
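Under the second reading (string or iterable), such signatures usually normalize the argument up front. A common sketch of that pattern, with a hypothetical helper name (this is not Dataset.drop's actual code):

```python
def normalize_labels(labels):
    """Accept a single label or an iterable of labels; return a list.

    Strings must be special-cased because they are themselves iterable,
    and iterating one would split it into characters.
    """
    if isinstance(labels, str):
        return [labels]
    return list(labels)

print(normalize_labels("temperature"))         # ['temperature']
print(normalize_labels(["temperature", "x"]))  # ['temperature', 'x']
```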
### #825 · keep_attrs for Dataset.resample and DataArray.resample

id 148765426 · node_id MDU6SXNzdWUxNDg3NjU0MjY= · mcgibbon (12307589) · CONTRIBUTOR
closed (completed) · not locked · 10 comments · repo xarray (13221727) · type issue · reactions 0
created 2016-04-15T20:46:01Z · updated 2016-04-19T16:02:32Z · closed 2016-04-19T16:02:32Z

Currently there is no option for preserving attributes when resampling a Dataset or DataArray. Could there be a keep_attrs keyword argument for these methods?
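The keep_attrs idea requested here can be sketched generically: apply an operation to the data, and carry the attribute dict along only when asked. This is an illustration of the pattern, not xarray's resample implementation:

```python
def apply_with_attrs(data, attrs, func, keep_attrs=False):
    """Apply func to data; propagate the attrs dict only if requested."""
    result = func(data)
    result_attrs = dict(attrs) if keep_attrs else {}
    return result, result_attrs

data = [1.0, 2.0, 3.0, 4.0]
attrs = {"units": "K", "long_name": "air temperature"}

_, dropped = apply_with_attrs(data, attrs, sum)                   # attrs lost
total, kept = apply_with_attrs(data, attrs, sum, keep_attrs=True)  # attrs kept
print(total, kept)  # prints: 10.0 {'units': 'K', 'long_name': 'air temperature'}
```

Defaulting keep_attrs to False matches the behavior described in the issue, while giving callers an explicit opt-in.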
### Table schema

```sql
CREATE TABLE [issues] (
  [id] INTEGER PRIMARY KEY,
  [node_id] TEXT,
  [number] INTEGER,
  [title] TEXT,
  [user] INTEGER REFERENCES [users]([id]),
  [state] TEXT,
  [locked] INTEGER,
  [assignee] INTEGER REFERENCES [users]([id]),
  [milestone] INTEGER REFERENCES [milestones]([id]),
  [comments] INTEGER,
  [created_at] TEXT,
  [updated_at] TEXT,
  [closed_at] TEXT,
  [author_association] TEXT,
  [active_lock_reason] TEXT,
  [draft] INTEGER,
  [pull_request] TEXT,
  [body] TEXT,
  [reactions] TEXT,
  [performed_via_github_app] TEXT,
  [state_reason] TEXT,
  [repo] INTEGER REFERENCES [repos]([id]),
  [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```