issue_comments
4 rows where author_association = "CONTRIBUTOR", issue = 304624171, and user = 22245117, sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 373694632 | https://github.com/pydata/xarray/issues/1985#issuecomment-373694632 | https://api.github.com/repos/pydata/xarray/issues/1985 | MDEyOklzc3VlQ29tbWVudDM3MzY5NDYzMg== | malmans2 22245117 | 2018-03-16T12:09:50Z | 2018-03-16T12:09:50Z | CONTRIBUTOR | Alright, I found the problem. I'm loading several variables from different files. All the variables have 1464 snapshots. However, one of the 3D variables has just one snapshot at a different time (I found a bug in my bash script that re-organizes the raw data). When I load my dataset using .open_mfdataset, the time dimension gets an extra snapshot (length 1465). xarray doesn't like that: functions such as to_netcdf take forever (no error). Thanks @fujiisoup for the help! | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | Load a small subset of data from a big dataset takes forever 304624171 |
| 372570107 | https://github.com/pydata/xarray/issues/1985#issuecomment-372570107 | https://api.github.com/repos/pydata/xarray/issues/1985 | MDEyOklzc3VlQ29tbWVudDM3MjU3MDEwNw== | malmans2 22245117 | 2018-03-13T07:21:10Z | 2018-03-13T07:21:10Z | CONTRIBUTOR | I forgot to mention that I'm getting this warning: `/home/idies/anaconda3/lib/python3.5/site-packages/dask/core.py:306: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison elif type_arg is type(key) and arg == key:` However, I don't think it is relevant, since I get the same warning when I am able to run .to_netcdf() on the 3D variable. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | Load a small subset of data from a big dataset takes forever 304624171 |
| 372566304 | https://github.com/pydata/xarray/issues/1985#issuecomment-372566304 | https://api.github.com/repos/pydata/xarray/issues/1985 | MDEyOklzc3VlQ29tbWVudDM3MjU2NjMwNA== | malmans2 22245117 | 2018-03-13T07:01:51Z | 2018-03-13T07:01:51Z | CONTRIBUTOR | The problem occurs when I run the very last line, which is to_netcdf(). Right before, the dataset looks like this: [dataset repr not captured in this export] | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | Load a small subset of data from a big dataset takes forever 304624171 |
| 372558850 | https://github.com/pydata/xarray/issues/1985#issuecomment-372558850 | https://api.github.com/repos/pydata/xarray/issues/1985 | MDEyOklzc3VlQ29tbWVudDM3MjU1ODg1MA== | malmans2 22245117 | 2018-03-13T06:19:47Z | 2018-03-13T06:23:00Z | CONTRIBUTOR | I have the same issue if I don't copy the dataset. Here are the coordinates of my dataset: [coordinate listing not captured in this export] I think somewhere I trigger the loading of the whole dataset. Otherwise, I don't understand why it works when I open just one month instead of the whole year. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | Load a small subset of data from a big dataset takes forever 304624171 |
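The failure mode diagnosed in the most recent comment (one variable carrying a single snapshot at a stray timestamp) can be sketched without xarray itself: open_mfdataset-style combination aligns coordinates with an outer join, so the merged time axis is the union of every file's time values. A minimal illustration with hypothetical timestamps, assuming that outer-join behavior:

```python
# Sketch of outer-join coordinate alignment (hypothetical data).
# 1464 shared snapshots plus one stray timestamp from the buggy file.
shared_times = list(range(1464))  # stand-in for the real time stamps
stray_times = [-1]                # the one snapshot at a different time

# An outer join keeps the union of coordinate values; variables missing
# a given time are padded with fill values, which is why the merged
# time dimension grows to 1465 and downstream writes balloon.
merged_times = sorted(set(shared_times) | set(stray_times))
print(len(merged_times))  # 1465
```

This is consistent with the reported symptom: the data itself is fine, but one mis-timed snapshot silently inflates the shared dimension for every variable in the merged dataset.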
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
```
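The filtered view at the top of this page corresponds to a straightforward query against this schema. A minimal sketch using Python's sqlite3 with a simplified copy of the table and one hypothetical row (only the columns the filter touches are populated):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simplified copy of the issue_comments schema (foreign keys omitted).
conn.execute(
    """CREATE TABLE issue_comments (
        html_url TEXT, issue_url TEXT, id INTEGER PRIMARY KEY,
        node_id TEXT, user INTEGER, created_at TEXT, updated_at TEXT,
        author_association TEXT, body TEXT, reactions TEXT,
        performed_via_github_app TEXT, issue INTEGER
    )"""
)
# One hypothetical row matching the filters used by this view.
conn.execute(
    "INSERT INTO issue_comments (id, user, author_association, issue, updated_at)"
    " VALUES (373694632, 22245117, 'CONTRIBUTOR', 304624171,"
    " '2018-03-16T12:09:50Z')"
)

# The query behind "author_association = CONTRIBUTOR, issue = 304624171,
# user = 22245117, sorted by updated_at descending".
rows = conn.execute(
    """SELECT id, updated_at FROM issue_comments
       WHERE author_association = 'CONTRIBUTOR'
         AND issue = 304624171 AND user = 22245117
       ORDER BY updated_at DESC"""
).fetchall()
print(rows)  # [(373694632, '2018-03-16T12:09:50Z')]
```

The indexes on `issue` and `user` mean both equality filters can be served without a full table scan.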