issue_comments
9 rows where author_association = "NONE" and user = 25231875 sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 396953390 | https://github.com/pydata/xarray/issues/2215#issuecomment-396953390 | https://api.github.com/repos/pydata/xarray/issues/2215 | MDEyOklzc3VlQ29tbWVudDM5Njk1MzM5MA== | jjpr-mit 25231875 | 2018-06-13T14:16:19Z | 2018-06-13T14:17:46Z | NONE | @shoyer That did it. Under pandas 0.22, the DataArrays in … | 0 reactions | | align() outer join returns DataArrays that are all NaNs 329438885 |
| 394758682 | https://github.com/pydata/xarray/issues/2215#issuecomment-394758682 | https://api.github.com/repos/pydata/xarray/issues/2215 | MDEyOklzc3VlQ29tbWVudDM5NDc1ODY4Mg== | jjpr-mit 25231875 | 2018-06-05T15:42:15Z | 2018-06-05T16:23:12Z | NONE | I found a way to reproduce the error. One of the MultiIndex levels on the DataArrays has NaNs in it. If I remove that level, the correct values appear in the result. Should the presence of that MultiIndex level cause this behavior? (reproduction script below) | 0 reactions | | align() outer join returns DataArrays that are all NaNs 329438885 |

```python
import string

import numpy as np
import xarray as xr

dims = ("x", "y")
shape = (10, 5)
das = []
for j in (0, 1):
    data = np.full(shape, np.nan, dtype="float64")
    for i in range(shape[0]):
        data[i, i % shape[1]] = float(i)
    coords_d = {
        "ints": ("x", range(j * shape[0], (j + 1) * shape[0])),
        "nans": ("x", np.array([np.nan] * shape[0], dtype="float64")),
        "lower": ("y", list(string.ascii_lowercase[:shape[1]])),
    }
    da = xr.DataArray(data=data, dims=dims, coords=coords_d)
    da.set_index(append=True, inplace=True, x=["ints", "nans"], y=["lower"])
    das.append(da)

nonzeros_raw = [np.nonzero(~np.isnan(da)) for da in das]
print("nonzeros_raw: ")
print(nonzeros_raw)

aligned = xr.align(*das, join="outer")
nonzeros_aligned = [np.nonzero(~np.isnan(da)) for da in aligned]
print("nonzeros_aligned: ")
print(nonzeros_aligned)

# np.nonzero returns a tuple of index arrays, so compare the arrays' shapes
assert nonzeros_raw[0][0].shape == nonzeros_aligned[0][0].shape
```
| 394769300 | https://github.com/pydata/xarray/issues/2215#issuecomment-394769300 | https://api.github.com/repos/pydata/xarray/issues/2215 | MDEyOklzc3VlQ29tbWVudDM5NDc2OTMwMA== | jjpr-mit 25231875 | 2018-06-05T16:12:30Z | 2018-06-05T16:12:30Z | NONE | This is what I would expect to see returned by align(): … I see something very similar, but with the … | 0 reactions | | align() outer join returns DataArrays that are all NaNs 329438885 |
| 394765054 | https://github.com/pydata/xarray/issues/2215#issuecomment-394765054 | https://api.github.com/repos/pydata/xarray/issues/2215 | MDEyOklzc3VlQ29tbWVudDM5NDc2NTA1NA== | jjpr-mit 25231875 | 2018-06-05T15:59:57Z | 2018-06-05T15:59:57Z | NONE | For clarity, here are the prints of the arrays before and after alignment. Before alignment: … After alignment: … | 0 reactions | | align() outer join returns DataArrays that are all NaNs 329438885 |
| 394762522 | https://github.com/pydata/xarray/issues/2215#issuecomment-394762522 | https://api.github.com/repos/pydata/xarray/issues/2215 | MDEyOklzc3VlQ29tbWVudDM5NDc2MjUyMg== | jjpr-mit 25231875 | 2018-06-05T15:52:57Z | 2018-06-05T15:52:57Z | NONE | Since the align uses an outer join, I would expect all the non-NaN values in the original DataArrays to also appear in the aligned DataArrays. Perhaps I am misinterpreting the behavior of … | 0 reactions | | align() outer join returns DataArrays that are all NaNs 329438885 |
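For contrast, on a plain (non-MultiIndex) index `align()` does behave as the commenter expects: an outer join takes the union of the coordinates, pads the gaps with NaN, and keeps every original value. A minimal sketch (not part of the original thread; data invented for illustration):

```python
import numpy as np
import xarray as xr

a = xr.DataArray([1.0, 2.0], dims="x", coords={"x": [0, 1]})
b = xr.DataArray([3.0, 4.0], dims="x", coords={"x": [1, 2]})

# Outer join: the result index is the union [0, 1, 2]; positions a given
# DataArray never covered are filled with NaN, everything else survives.
a2, b2 = xr.align(a, b, join="outer")
print(a2.values)  # [ 1.  2. nan]
print(b2.values)  # [nan  3.  4.]
```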
| 340005903 | https://github.com/pydata/xarray/issues/1603#issuecomment-340005903 | https://api.github.com/repos/pydata/xarray/issues/1603 | MDEyOklzc3VlQ29tbWVudDM0MDAwNTkwMw== | jjpr-mit 25231875 | 2017-10-27T15:34:42Z | 2017-10-27T15:34:42Z | NONE | Will the new API preserve the order of the levels? One of the features that's necessary for … | 0 reactions | | Explicit indexes in xarray's data-model (Future of MultiIndex) 262642978 |
| 336925565 | https://github.com/pydata/xarray/issues/324#issuecomment-336925565 | https://api.github.com/repos/pydata/xarray/issues/324 | MDEyOklzc3VlQ29tbWVudDMzNjkyNTU2NQ== | jjpr-mit 25231875 | 2017-10-16T15:35:06Z | 2017-10-16T15:35:06Z | NONE | Is use case 1 (multiple groupby arguments along a single dimension) being held back for use case 2 (multiple groupby arguments along different dimensions)? Use case 1 would be very useful by itself. | 1 reaction (+1: 1) | | Support multi-dimensional grouped operations and group_over 58117200 |
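Until use case 1 lands, several groupby keys along a single dimension can be emulated by fusing the keys into one string-valued coordinate and grouping by that; a sketch (names and data are illustrative, not from the issue):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(6.0),
    dims="x",
    coords={"letter": ("x", list("aabbab")), "num": ("x", [0, 1, 0, 1, 0, 1])},
)

# Fuse the two would-be groupby keys into a single coordinate
key = xr.DataArray(
    [f"{l}-{n}" for l, n in zip(da.letter.values, da.num.values)],
    dims="x",
    name="key",
)

# Group by the fused key: each (letter, num) pair becomes one group
means = da.groupby(key).mean()
print(float(means.sel(key="a-0")))  # mean of values at x=0 and x=4 -> 2.0
print(float(means.sel(key="b-1")))  # mean of values at x=3 and x=5 -> 4.0
```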
| 334212532 | https://github.com/pydata/xarray/issues/659#issuecomment-334212532 | https://api.github.com/repos/pydata/xarray/issues/659 | MDEyOklzc3VlQ29tbWVudDMzNDIxMjUzMg== | jjpr-mit 25231875 | 2017-10-04T16:27:21Z | 2017-10-04T16:27:21Z | NONE | In case anyone gets here by Googling something like "xarray groupby slow" and you loaded data from a netCDF file, be aware that slowness you see in groupby aggregation on a … | 9 reactions (+1: 6, hooray: 1, heart: 1, rocket: 1) | | groupby very slow compared to pandas 117039129 |
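The body above is truncated; assuming it points at lazy I/O (a Dataset from `open_dataset()` keeps its values on disk, so a groupby aggregation re-reads the file repeatedly, and `.load()` pulls everything into memory once), a sketch with in-memory stand-in data (the netCDF file itself and the variable names are invented):

```python
import numpy as np
import pandas as pd
import xarray as xr

# Stand-in for a dataset that would normally come from xr.open_dataset("data.nc")
times = pd.date_range("2017-01-01", periods=120)
ds = xr.Dataset({"t": ("time", np.random.rand(120))}, coords={"time": times})

# For file-backed data this reads everything into memory once, so the
# groupby below does not touch the disk per group; for in-memory data
# it is a no-op.
ds = ds.load()
monthly = ds.groupby("time.month").mean()
print(monthly)
```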
| 328954481 | https://github.com/pydata/xarray/issues/1569#issuecomment-328954481 | https://api.github.com/repos/pydata/xarray/issues/1569 | MDEyOklzc3VlQ29tbWVudDMyODk1NDQ4MQ== | jjpr-mit 25231875 | 2017-09-12T19:17:24Z | 2017-09-12T19:17:24Z | NONE | Makes sense. Just needs a doc update, then. What's the preferred means to contribute doc (including little edits like this)? Pull requests? | 0 reactions | | Grouping with multiple levels 257070215 |
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
```
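The `user` index above is what makes a filtered page like this one cheap to serve; a minimal sketch of the underlying query using Python's sqlite3 (the sample row is invented, and the foreign-key clauses are dropped so the snippet is self-contained):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE issue_comments (
  html_url TEXT, issue_url TEXT, id INTEGER PRIMARY KEY, node_id TEXT,
  user INTEGER, created_at TEXT, updated_at TEXT, author_association TEXT,
  body TEXT, reactions TEXT, performed_via_github_app TEXT, issue INTEGER
);
CREATE INDEX idx_issue_comments_user ON issue_comments (user);
""")
conn.execute(
    "INSERT INTO issue_comments (id, user, author_association, updated_at) "
    "VALUES (?, ?, ?, ?)",
    (396953390, 25231875, "NONE", "2018-06-13T14:17:46Z"),
)

# The query behind this page: filter by author_association and user,
# newest first; idx_issue_comments_user lets SQLite find the user's
# rows without scanning the whole table.
rows = conn.execute(
    "SELECT id, updated_at FROM issue_comments "
    "WHERE author_association = 'NONE' AND user = ? "
    "ORDER BY updated_at DESC",
    (25231875,),
).fetchall()
print(rows)  # [(396953390, '2018-06-13T14:17:46Z')]
```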