issues
7 rows where user = 1310437 sorted by updated_at descending
| id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 186868181 | MDU6SXNzdWUxODY4NjgxODE= | 1074 | DataArray.apply is missing | burnpanck 1310437 | open | 0 | | | 9 | 2016-11-02T17:30:45Z | 2022-11-04T17:18:58Z | | CONTRIBUTOR | | | | In essence, I'm looking for the functionality of | {"url": "https://api.github.com/repos/pydata/xarray/issues/1074/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | issue |
| 189095110 | MDExOlB1bGxSZXF1ZXN0OTM1NTM5OTA= | 1118 | Do not convert subclasses of `ndarray` unless required | burnpanck 1310437 | closed | 0 | | | 13 | 2016-11-14T11:59:02Z | 2019-12-25T14:12:34Z | 2019-12-25T14:12:34Z | CONTRIBUTOR | | 0 | pydata/xarray/pulls/1118 | By changing a single Particularly, this allows to store physical quantities represented using the I expect that astropy's units would behave similarly, though since I never worked with them yet, I did not include any tests. | {"url": "https://api.github.com/repos/pydata/xarray/issues/1118/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | pull |
| 189129954 | MDU6SXNzdWUxODkxMjk5NTQ= | 1120 | Creating a `Dataset` with a coordinate given by a `DataArray` may create an invalid dataset | burnpanck 1310437 | closed | 0 | | | 3 | 2016-11-14T14:46:44Z | 2017-09-06T00:07:08Z | 2017-09-06T00:07:08Z | CONTRIBUTOR | | | | Consider this: I came across this situation when trying to generate a dataset with a coordinate that is a copy of another pre-existing coordinate, but under a different name. My expectation was that the coordinate would be renamed (doing so manually also doesn't work out of the box due to #1116). Of course that is not the only possible interpretation of the construct above, arguably it should raise an exception instead (which is what | {"url": "https://api.github.com/repos/pydata/xarray/issues/1120/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | completed | xarray 13221727 | issue |
| 189415576 | MDU6SXNzdWUxODk0MTU1NzY= | 1121 | Performance degradation: `DataArray` with `dtype=object` of `DataArray` gets very slow indexing | burnpanck 1310437 | closed | 0 | | | 3 | 2016-11-15T15:04:29Z | 2016-11-15T17:36:26Z | 2016-11-15T17:36:26Z | CONTRIBUTOR | | | | I did not follow the code deeply, but there clearly seems to be a huge overhead when indexing such arrays. In particular, in the following code ```python import xarray as xr import numpy as np a = xr.DataArray([None for k in range(100)],dims='c') for k in range(a.c.size): a[k] = xr.DataArray(np.random.randn(1000,5),dims=['a','b']) %prun a[0] ``` the indexing operation takes about 1 second on my machine, or 2 seconds when running under the profiler. The profiler output shows lots of functions with a recursive call count exceeding 100'000 (most likely iterating through each row of the contained sub-arrays). However, there is really no reason to iterate through the nested elements. | {"url": "https://api.github.com/repos/pydata/xarray/issues/1121/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | completed | xarray 13221727 | issue |
| 189451582 | MDExOlB1bGxSZXF1ZXN0OTM4MDgzODM= | 1122 | Fix slow object arrays indexing | burnpanck 1310437 | closed | 0 | | | 2 | 2016-11-15T17:12:28Z | 2016-11-15T17:30:54Z | 2016-11-15T17:30:54Z | CONTRIBUTOR | | 0 | pydata/xarray/pulls/1122 | | {"url": "https://api.github.com/repos/pydata/xarray/issues/1122/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | pull |
| 189099082 | MDExOlB1bGxSZXF1ZXN0OTM1NTY4NTA= | 1119 | Fix #1116 (rename of coordinates) | burnpanck 1310437 | closed | 0 | | | 4 | 2016-11-14T12:20:50Z | 2016-11-15T16:18:04Z | 2016-11-15T16:18:00Z | CONTRIBUTOR | | 0 | pydata/xarray/pulls/1119 | | {"url": "https://api.github.com/repos/pydata/xarray/issues/1119/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | pull |
| 189081147 | MDU6SXNzdWUxODkwODExNDc= | 1116 | `DataArray.rename` of a coordinate array fails to rename the coordinate name of it's own coordinate | burnpanck 1310437 | closed | 0 | | | 1 | 2016-11-14T10:47:24Z | 2016-11-15T16:18:03Z | 2016-11-15T16:18:03Z | CONTRIBUTOR | | | | The documentation of | {"url": "https://api.github.com/repos/pydata/xarray/issues/1116/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | completed | xarray 13221727 | issue |
CREATE TABLE [issues] (
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[number] INTEGER,
[title] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[state] TEXT,
[locked] INTEGER,
[assignee] INTEGER REFERENCES [users]([id]),
[milestone] INTEGER REFERENCES [milestones]([id]),
[comments] INTEGER,
[created_at] TEXT,
[updated_at] TEXT,
[closed_at] TEXT,
[author_association] TEXT,
[active_lock_reason] TEXT,
[draft] INTEGER,
[pull_request] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[state_reason] TEXT,
[repo] INTEGER REFERENCES [repos]([id]),
[type] TEXT
);
CREATE INDEX [idx_issues_repo]
ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
ON [issues] ([user]);
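The view above ("7 rows where user = 1310437 sorted by updated_at descending") can be reproduced against the schema with a plain SQL query; `idx_issues_user` makes the `user` filter an index lookup, and the ISO-8601 `TEXT` timestamps sort correctly as strings. A minimal sketch using Python's `sqlite3` and a trimmed column list (the two sample rows are taken from the table above; everything else in the export works the same way):

```python
import sqlite3

# In-memory database with a reduced version of the [issues] schema above.
conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE issues (
        id INTEGER PRIMARY KEY,
        number INTEGER,
        title TEXT,
        user INTEGER,
        state TEXT,
        updated_at TEXT
    )
    """
)
conn.execute("CREATE INDEX idx_issues_user ON issues (user)")

# Two rows from the export above (columns trimmed to match the sketch).
rows = [
    (186868181, 1074, "DataArray.apply is missing",
     1310437, "open", "2022-11-04T17:18:58Z"),
    (189095110, 1118, "Do not convert subclasses of `ndarray` unless required",
     1310437, "closed", "2019-12-25T14:12:34Z"),
]
conn.executemany("INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?)", rows)

# The query behind the page: filter by user, newest update first.
# ISO-8601 timestamps compare correctly as plain strings.
result = conn.execute(
    "SELECT number, title FROM issues WHERE user = ? ORDER BY updated_at DESC",
    (1310437,),
).fetchall()
print(result)
```

With the full seven rows loaded, the same query yields the table above, issue 1074 first (updated 2022) down to issue 1116 (updated 2016).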