issues
13 rows where user = 3404817 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
90658514 | MDU6SXNzdWU5MDY1ODUxNA== | 442 | NetCDF attributes like `long_name` and `units` lost on `.mean()` | j08lue 3404817 | closed | 0 | 5 | 2015-06-24T12:14:30Z | 2020-04-05T18:18:31Z | 2015-06-26T07:28:54Z | CONTRIBUTOR | When reading in a variable from netCDF, the standard attributes like `long_name` and `units` are read into `attrs`, but they get lost on operations like `.mean()`. Couldn't these CF-Highly Recommended Variable Attributes be kept during this operation? (What to do with them afterwards, e.g. upon merge, is a different question, unresolved also in the pandas community.) EDIT: the problem actually occurs when calling … |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/442/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
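For context, a minimal sketch of the reported behaviour and of the `keep_attrs` option that later xarray releases expose on reductions (the sample data here is made up):

```python
import numpy as np
import xarray as xr

# hypothetical stand-in for a variable read from netCDF
da = xr.DataArray(
    np.zeros((4, 3)),
    dims=("time", "x"),
    attrs={"long_name": "sea water potential temperature", "units": "degC"},
)

print(da.mean(dim="time").attrs)                   # {} -- attributes are dropped
print(da.mean(dim="time", keep_attrs=True).attrs)  # attributes are preserved
```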
219692578 | MDU6SXNzdWUyMTk2OTI1Nzg= | 1354 | concat automagically outer-joins coordinates | j08lue 3404817 | closed | 0 | 8 | 2017-04-05T19:39:07Z | 2019-08-07T12:17:07Z | 2019-08-07T12:17:07Z | CONTRIBUTOR | I would like to concatenate two netCDF files whose coordinates only partially overlap. Using `concat` automagically outer-joins the non-concatenated coordinates, padding with NaN. This is because `concat` aligns the datasets with an outer join. It would be awesome if there was an option to change this behaviour on `concat`. Note: This could also be a special case, because …
Maybe an interface change could be considered together with that discussed in #1340? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
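A minimal sketch of the outer-join behaviour described in this issue, plus the `join` argument that later xarray releases added to `concat` (the arrays are made up):

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.zeros(3), dims="x", coords={"x": [0, 1, 2]})
b = xr.DataArray(np.ones(3), dims="x", coords={"x": [1, 2, 3]})

# By default the non-concatenated coordinate "x" is outer-joined,
# so the result has x = [0, 1, 2, 3] with NaN fill values.
print(xr.concat([a, b], dim="run").x.values)

# join="exact" raises instead of silently outer-joining.
try:
    xr.concat([a, b], dim="run", join="exact")
except ValueError as err:
    print(err)
```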
110979851 | MDU6SXNzdWUxMTA5Nzk4NTE= | 623 | Circular longitude axis | j08lue 3404817 | closed | 0 | 6 | 2015-10-12T13:56:45Z | 2019-06-20T20:09:46Z | 2016-12-24T00:11:45Z | CONTRIBUTOR | A common issue with global data or model output is that the zonal grid boundary might cut right through a region of interest. In that case, the field must be re-wrapped/shifted such that a region in the far east of the field is placed to the left of a region in the far west. A way of achieving this when using … But this works only on data that is already loaded into memory (e.g. with …). Now the first thing is that it took me quite a while to figure out why this worked in some cases and not in others. Perhaps the … It would of course be great if … (By the way, in Ferret this is called Modulo Axes.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/623/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
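A hedged sketch of the re-wrapping described above using `roll`, which current xarray provides for this kind of shift (the grid is made up; whether `roll` fully covers the "modulo axis" use case is the open question of the issue):

```python
import numpy as np
import xarray as xr

lon = np.arange(0.0, 360.0, 30.0)
da = xr.DataArray(np.arange(lon.size), dims="lon", coords={"lon": lon})

# Shift the zonal boundary by half the domain and re-wrap the
# coordinate into [-180, 180) so the far east sits left of the far west.
shifted = da.roll(lon=lon.size // 2, roll_coords=True)
shifted = shifted.assign_coords(lon=((shifted.lon + 180) % 360) - 180)
print(shifted.lon.values)
```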
302447879 | MDExOlB1bGxSZXF1ZXN0MTcyOTc1OTY4 | 1965 | avoid integer overflow when decoding large time numbers | j08lue 3404817 | closed | 0 | 6 | 2018-03-05T20:21:20Z | 2018-05-01T12:41:28Z | 2018-05-01T12:41:28Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1965 | The issue: … This is in a way the back side of #1859: by ensuring that e.g. '2001-01-01' in …
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1965/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull |
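The class of bug this PR addresses, sketched with plain NumPy (not the actual patch): scaling raw time numbers in a too-narrow integer dtype wraps around silently, so the values must be cast to int64 before multiplying.

```python
import numpy as np

days = np.int32(40_000)                    # a large "days since ..." time number

wrapped = days * np.int32(86_400)          # int32 * int32 stays int32 and wraps
exact = np.int64(days) * np.int64(86_400)  # cast to int64 first: exact result

print(wrapped, exact)                      # -838_967_296 vs. 3_456_000_000
```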
319132629 | MDExOlB1bGxSZXF1ZXN0MTg1MTMxMjg0 | 2096 | avoid integer overflow when decoding large time numbers | j08lue 3404817 | closed | 0 | 3 | 2018-05-01T07:02:24Z | 2018-05-01T12:41:13Z | 2018-05-01T12:41:08Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/2096 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2096/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull |
292231408 | MDExOlB1bGxSZXF1ZXN0MTY1NTgzMTIw | 1863 | test decoding num_dates in float types | j08lue 3404817 | closed | 0 | 4 | 2018-01-28T19:34:52Z | 2018-02-10T12:16:26Z | 2018-02-02T02:01:47Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1863 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1863/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull |
291565176 | MDU6SXNzdWUyOTE1NjUxNzY= | 1859 | Time decoding has round-off error in 0.10.0. Gone now. | j08lue 3404817 | closed | 0 | 3 | 2018-01-25T13:12:13Z | 2018-02-02T02:01:47Z | 2018-02-02T02:01:47Z | CONTRIBUTOR | Note: This problem occurs with version 0.10.0, but is gone when using current `master`. Here is a complete example: https://gist.github.com/j08lue/34498cf17b176d15933e778278ba2921 Problem description: I have this time variable from a netCDF file: …
I tracked the problem down to … So I also tested with current `master`. Up to you what you make of this. 😄 Maybe you can just close the issue. Output of `xr.show_versions()`: …
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1859/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
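An illustrative sketch (not the gist's actual data) of why float-typed time numbers are prone to this kind of round-off: at large magnitudes the spacing between representable float32 values exceeds one second.

```python
import numpy as np

seconds = 1_893_456_100            # seconds since 1970 for a date in 2030
as_float32 = np.float32(seconds)

# The nearest representable float32 is 28 s away, so a timestamp
# decoded from this value lands on the wrong second.
print(int(as_float32) - seconds)   # 28
print(np.spacing(as_float32))      # 128.0 -- float32 resolution at this magnitude
```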
89268800 | MDU6SXNzdWU4OTI2ODgwMA== | 438 | `xray.open_mfdataset` concatenates also variables without time dimension | j08lue 3404817 | closed | 0 | 0.5.2 1172685 | 13 | 2015-06-18T11:34:53Z | 2017-09-19T16:16:58Z | 2015-07-15T21:47:11Z | CONTRIBUTOR | When opening a multi-file dataset with `xray.open_mfdataset`, variables without a time dimension also get concatenated along the record dimension. My netCDF files contain a lot of those "static" variables (e.g. grid spacing etc.). Is the different behaviour of … intended? Note: I am using … Example: …
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/438/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
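A sketch of the option that later xarray versions expose for this: `data_vars="minimal"` restricts concatenation to variables that actually contain the concat dimension (the file pattern here is hypothetical):

```python
import xarray as xr

# Only variables containing "time" are concatenated; static grid
# variables are taken from the first file instead of being stacked.
ds = xr.open_mfdataset(
    "model_output_*.nc",
    combine="nested",
    concat_dim="time",
    data_vars="minimal",
    coords="minimal",
)
```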
197514417 | MDExOlB1bGxSZXF1ZXN0OTkzMjkwNjc= | 1184 | Add test for issue 1140 | j08lue 3404817 | closed | 0 | 2 | 2016-12-25T20:37:16Z | 2017-03-30T23:08:40Z | 2017-03-30T23:08:40Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1184 | #1140 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull |
192122307 | MDU6SXNzdWUxOTIxMjIzMDc= | 1140 | sel with method 'nearest' fails with AssertionError | j08lue 3404817 | closed | 0 | 10 | 2016-11-28T21:30:43Z | 2017-03-30T19:18:34Z | 2017-03-30T19:18:34Z | CONTRIBUTOR | The following fails
with an
```
C:\Users\uuu\AppData\Local\Continuum\Miniconda2\lib\site-packages\xarray\core\dataarray.pyc in sel(self, method, tolerance, **indexers)
    623             self, indexers, method=method, tolerance=tolerance
    624         )
--> 625         return self.isel(**pos_indexers)._replace_indexes(new_indexes)
    626
    627     def isel_points(self, dim='points', **indexers):

C:\Users\uuu\AppData\Local\Continuum\Miniconda2\lib\site-packages\xarray\core\dataarray.pyc in isel(self, **indexers)
    608         DataArray.sel
    609         """
--> 610         ds = self._to_temp_dataset().isel(**indexers)
    611         return self._from_temp_dataset(ds)
    612

C:\Users\uuu\AppData\Local\Continuum\Miniconda2\lib\site-packages\xarray\core\dataset.pyc in isel(self, **indexers)
    910         for name, var in iteritems(self._variables):
    911             var_indexers = dict((k, v) for k, v in indexers if k in var.dims)
--> 912             variables[name] = var.isel(**var_indexers)
    913         return self._replace_vars_and_dims(variables)
    914

C:\Users\uuu\AppData\Local\Continuum\Miniconda2\lib\site-packages\xarray\core\variable.pyc in isel(self, **indexers)
    539             if dim in indexers:
    540                 key[i] = indexers[dim]
--> 541         return self[tuple(key)]
    542
    543     def _shift_one_dim(self, dim, count):

C:\Users\uuu\AppData\Local\Continuum\Miniconda2\lib\site-packages\xarray\core\variable.pyc in __getitem__(self, key)
    377         # orthogonal indexing should ensure the dimensionality is consistent
    378         if hasattr(values, 'ndim'):
--> 379             assert values.ndim == len(dims), (values.ndim, len(dims))
    380         else:
    381             assert len(dims) == 0, len(dims)

AssertionError: (0, 1)
```
It does not matter which type the dimension has that is indexed: …

This is on Miniconda for Windows 64 bit with conda-forge and IOOS builds and

* xarray=0.8.2
* pandas=0.19.1
* numpy=1.11.2

Why might this be? Am I doing something wrong? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1140/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
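Since the failing snippet was lost from the report, here is a hedged reconstruction of the kind of call the traceback points to; on current xarray releases it succeeds:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(5.0), dims="x", coords={"x": np.linspace(0.0, 1.0, 5)}
)

# Reported to raise AssertionError: (0, 1) on xarray 0.8.2.
print(da.sel(x=0.26, method="nearest"))
```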
173773358 | MDU6SXNzdWUxNzM3NzMzNTg= | 992 | Creating unlimited dimensions with xarray.Dataset.to_netcdf | j08lue 3404817 | closed | 0 | 18 | 2016-08-29T13:23:48Z | 2017-01-24T06:38:49Z | 2017-01-24T06:38:49Z | CONTRIBUTOR | @shoyer you wrote in a comment on another issue: …
I see that xarray does not need … |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/992/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
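For reference, the interface current xarray exposes for this: `Dataset.to_netcdf` accepts an `unlimited_dims` argument (the dataset and filename here are hypothetical):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"t2m": (("time", "x"), np.zeros((4, 3)))})

# Write "time" as an unlimited (record) dimension in the netCDF file.
ds.to_netcdf("out.nc", unlimited_dims=["time"])
```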
190683531 | MDU6SXNzdWUxOTA2ODM1MzE= | 1132 | groupby with datetime DataArray fails with `AttributeError` | j08lue 3404817 | closed | 0 | 7 | 2016-11-21T11:00:57Z | 2016-12-19T17:15:17Z | 2016-12-19T17:11:57Z | CONTRIBUTOR | I want to group some data by Oct-May season of each year, i.e. [(Oct 2000 - May 2001), (Oct 2001 - May 2002), ...]. I.e. I do not want some DJF-like mean over all the data but one value for each year. To achieve this, I construct a … I give it a custom … Please see this ipynb showing the error. Proposed solution: So it turns out this can easily be fixed by changing line 226 from …
to …
Please see this other ipynb where the result is as expected. Now is this a bug or am I abusing the code somehow? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1132/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
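A hedged reconstruction of the Oct-May grouping described above (the labelling scheme is assumed from the issue text; per #1133 below, the fix runs such a group array through `safe_cast_to_index`):

```python
import numpy as np
import pandas as pd
import xarray as xr

time = pd.date_range("2000-10-01", periods=600, freq="D")
da = xr.DataArray(np.ones(time.size), dims="time", coords={"time": time})

# Keep only Oct-May, then label each season by the year it starts in.
mask = (da.time.dt.month >= 10) | (da.time.dt.month <= 5)
da_season = da.where(mask, drop=True)
season_year = xr.where(
    da_season.time.dt.month >= 10,
    da_season.time.dt.year,
    da_season.time.dt.year - 1,
).rename("season_year")

# One mean per Oct-May season, not a single all-years DJF-style mean.
print(da_season.groupby(season_year).mean())
```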
190690822 | MDExOlB1bGxSZXF1ZXN0OTQ1OTgxMzI= | 1133 | use safe_cast_to_index to sanitize DataArrays for groupby | j08lue 3404817 | closed | 0 | 2 | 2016-11-21T11:33:08Z | 2016-12-19T17:12:31Z | 2016-12-19T17:11:57Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1133 | Fixes https://github.com/pydata/xarray/issues/1132 Let me know whether this is a valid bug fix or I am misunderstanding something. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1133/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull |
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```