issue_comments
3 rows where issue = 138332032 and user = 306380 sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 193591506 | https://github.com/pydata/xarray/issues/783#issuecomment-193591506 | https://api.github.com/repos/pydata/xarray/issues/783 | MDEyOklzc3VlQ29tbWVudDE5MzU5MTUwNg== | mrocklin 306380 | 2016-03-08T03:44:36Z | 2016-03-08T03:44:36Z | MEMBER | Ah ha! Excellent. Thanks @shoyer. I'll give this a shot tomorrow (or perhaps ask @jcrist to look into it if he has time). | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | Array size changes following loading of numpy array 138332032 |
| 193522753 | https://github.com/pydata/xarray/issues/783#issuecomment-193522753 | https://api.github.com/repos/pydata/xarray/issues/783 | MDEyOklzc3VlQ29tbWVudDE5MzUyMjc1Mw== | mrocklin 306380 | 2016-03-08T00:20:43Z | 2016-03-08T00:20:43Z | MEMBER | @shoyer perhaps you can help to translate the code within @pwolfram's script (in particular the lines that I've highlighted) and say how xarray would use dask.array to accomplish this. I think this is a case where we each have some necessary expertise to resolve this issue. We probably need to work together to efficiently hunt down what's going on. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | Array size changes following loading of numpy array 138332032 |
| 193501447 | https://github.com/pydata/xarray/issues/783#issuecomment-193501447 | https://api.github.com/repos/pydata/xarray/issues/783 | MDEyOklzc3VlQ29tbWVudDE5MzUwMTQ0Nw== | mrocklin 306380 | 2016-03-07T23:22:46Z | 2016-03-07T23:22:46Z | MEMBER | It looks like the issue is in these lines: I'm confused by the chunksize change from 21 to 23. In straight dask.array I'm unable to reproduce this problem, although obviously I'm doing something differently here than how xarray does things. ``` python In [1]: import dask.array as da In [2]: x = da.ones((3630, 100), chunks=(21, 100)) In [3]: y = x[730:830, :] In [4]: y.shape Out[4]: (30, 100) In [5]: y.compute().shape Out[5]: (30, 100) In [6]: y.chunks Out[6]: ((21, 9), (100,)) ``` It would be awesome if you all could produce a failing example with just dask.array. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | Array size changes following loading of numpy array 138332032 |
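The IPython transcript quoted in the last comment is easier to follow as a standalone script. Below is a minimal sketch of the same experiment; the slice bounds (735:765) are an assumption chosen so that the printed shape and chunk layout match the outputs quoted in the comment, and are not copied from it.

```python
# Minimal reproduction attempt with plain dask.array: slicing a chunked
# array keeps the requested logical shape, while the chunk sizes along
# the sliced axis are recomputed.
import dask.array as da

x = da.ones((3630, 100), chunks=(21, 100))
y = x[735:765, :]  # assumed slice: 30 rows starting on a chunk boundary

print(y.shape)            # (30, 100) -- lazy shape
print(y.compute().shape)  # (30, 100) -- shape after loading into numpy
print(y.chunks)           # ((21, 9), (100,)) -- per-axis chunk sizes
```

If the xarray-level failure could be reduced to a snippet like this, it would show whether the size change comes from dask.array's slicing or from xarray's lazy-loading layer.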
CREATE TABLE [issue_comments] (
[html_url] TEXT,
[issue_url] TEXT,
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[created_at] TEXT,
[updated_at] TEXT,
[author_association] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
ON [issue_comments] ([user]);
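For reference, the filter described at the top of the page ("3 rows where issue = 138332032 and user = 306380 sorted by updated_at descending") corresponds to a straightforward query against this schema. The sketch below assumes a local SQLite copy of the database (the file name github.db is hypothetical) and uses only the columns and indexes defined above.

```python
# Hypothetical sketch: run the page's filter as a parameterised SQLite query.
# The indexes on [issue] and [user] cover the WHERE clause.
import sqlite3

conn = sqlite3.connect("github.db")  # assumed local export of this database
rows = conn.execute(
    """
    SELECT id, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = ? AND user = ?
    ORDER BY updated_at DESC
    """,
    (138332032, 306380),
).fetchall()

for comment_id, updated_at, association, body in rows:
    print(comment_id, updated_at, association, body[:60])

conn.close()
```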