issue_comments
4 rows where author_association = "MEMBER", issue = 304624171 and user = 6815844, sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 372852604 | https://github.com/pydata/xarray/issues/1985#issuecomment-372852604 | https://api.github.com/repos/pydata/xarray/issues/1985 | MDEyOklzc3VlQ29tbWVudDM3Mjg1MjYwNA== | fujiisoup 6815844 | 2018-03-13T23:24:37Z | 2018-03-13T23:24:37Z | MEMBER | I see no problem with your code... Can you try updating xarray to 0.10.2 (released today)? We updated some logic of lazy indexing. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | Load a small subset of data from a big dataset takes forever 304624171 |
| 372563938 | https://github.com/pydata/xarray/issues/1985#issuecomment-372563938 | https://api.github.com/repos/pydata/xarray/issues/1985 | MDEyOklzc3VlQ29tbWVudDM3MjU2MzkzOA== | fujiisoup 6815844 | 2018-03-13T06:48:23Z | 2018-03-13T06:48:23Z | MEMBER | Umm. I could not find what is wrong with your code.<br>Can you find which line loads the data into memory?<br>If your data is still a dask array, it does not print the entries of the array but instead, it shows something like this, | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | Load a small subset of data from a big dataset takes forever 304624171 |
| 372545491 | https://github.com/pydata/xarray/issues/1985#issuecomment-372545491 | https://api.github.com/repos/pydata/xarray/issues/1985 | MDEyOklzc3VlQ29tbWVudDM3MjU0NTQ5MQ== | fujiisoup 6815844 | 2018-03-13T04:44:52Z | 2018-03-13T04:48:56Z | MEMBER | I notice this line | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | Load a small subset of data from a big dataset takes forever 304624171 |
| 372544809 | https://github.com/pydata/xarray/issues/1985#issuecomment-372544809 | https://api.github.com/repos/pydata/xarray/issues/1985 | MDEyOklzc3VlQ29tbWVudDM3MjU0NDgwOQ== | fujiisoup 6815844 | 2018-03-13T04:39:47Z | 2018-03-13T04:39:47Z | MEMBER | I don't think so. We support lazy indexing for any dimensional arrays (but not coordinate variables).<br>What does your data (especially '4Dvariable.nc') look like?<br>Is | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | Load a small subset of data from a big dataset takes forever 304624171 |
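The advice in these comments — check whether the array is still lazy (dask-backed) before assuming any data was read — can be sketched with xarray and dask. This is a minimal illustration; the dimension names and sizes are invented and do not come from the reporter's actual 4Dvariable.nc file:

```python
import dask.array as da
import numpy as np
import xarray as xr

# A stand-in for a large on-disk 4-D variable: a dask-backed DataArray.
# (Illustrative only; in practice this comes from xr.open_dataset(..., chunks=...).)
arr = xr.DataArray(
    da.zeros((10, 5, 4, 4), chunks=(1, 5, 4, 4)),
    dims=("time", "lev", "lat", "lon"),
)

# While lazy, the repr shows "dask.array<...>" instead of the values.
assert isinstance(arr.data, da.Array)

# Selecting a small subset stays lazy; nothing is read yet.
sub = arr.isel(time=0)
assert isinstance(sub.data, da.Array)

# Only .load() / .compute() (or printing the values) pulls data into memory.
loaded = sub.load()
assert isinstance(loaded.data, np.ndarray)
```

Spotting which line triggers the lazy-to-eager transition is exactly the debugging step suggested in the second comment above.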
CREATE TABLE [issue_comments] (
[html_url] TEXT,
[issue_url] TEXT,
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[created_at] TEXT,
[updated_at] TEXT,
[author_association] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
ON [issue_comments] ([user]);
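The schema above can be queried directly with Python's standard-library sqlite3 module. A minimal sketch reproducing the filter shown at the top of this page; the in-memory database and the single inserted row are stand-ins for the real github-to-sqlite database file:

```python
import sqlite3

# Build a throwaway copy of the issue_comments table (schema as shown above).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
  [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
  [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
  [author_association] TEXT, [body] TEXT, [reactions] TEXT,
  [performed_via_github_app] TEXT, [issue] INTEGER
);
""")

# One sample row taken from the table on this page.
conn.execute(
    "INSERT INTO issue_comments (id, user, author_association, issue, updated_at)"
    " VALUES (?, ?, ?, ?, ?)",
    (372852604, 6815844, "MEMBER", 304624171, "2018-03-13T23:24:37Z"),
)

# The page's filter, expressed as SQL with parameterized values.
rows = conn.execute(
    "SELECT id FROM issue_comments"
    " WHERE author_association = 'MEMBER' AND issue = ? AND user = ?"
    " ORDER BY updated_at DESC",
    (304624171, 6815844),
).fetchall()
print(rows)  # [(372852604,)]
```

The `idx_issue_comments_issue` and `idx_issue_comments_user` indexes defined above let SQLite satisfy the `issue = ?` and `user = ?` conditions without a full table scan.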