issue_comments
4 rows where issue = 441222339 sorted by updated_at descending
490590509 (node MDEyOklzc3VlQ29tbWVudDQ5MDU5MDUwOQ==) · shoyer 1217238 · MEMBER
https://github.com/pydata/xarray/issues/2946#issuecomment-490590509 · https://api.github.com/repos/pydata/xarray/issues/2946
created 2019-05-08T18:06:04Z · updated 2019-05-08T18:06:04Z

It sounds like this is an issue that only comes up when using … It would be nice to have a minimal example of what goes wrong that doesn't require reading/writing netCDF files. Can you construct a synthetic dataset in memory that exhibits this problem? Note that you can use the …

Reactions: 1 (+1: 1)
Issue: std interprets continents as zero not nan (441222339)
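A minimal in-memory reproduction of the kind shoyer asks for might look like the following sketch (the variable name `sst` and the values are illustrative, not taken from the issue):

```python
# Hypothetical sketch of an in-memory dataset with NaNs standing in for
# continents, so the problem can be checked without any netCDF round-trip.
import numpy as np
import xarray as xr

values = np.array([[1.0, 2.0, np.nan],
                   [3.0, np.nan, np.nan]])
ds = xr.Dataset({"sst": (("time", "x"), values)})

# With skipna=True (the default for float data), std should ignore the NaNs...
std_skip = ds["sst"].std(dim="time", skipna=True)

# ...matching what NumPy's nanstd computes column by column.
expected = np.nanstd(values, axis=0)
print(np.allclose(std_skip.values, expected, equal_nan=True))  # True
```

If the bug described in this issue were triggered here, the comparison would fail because NaN cells would contribute as zeros instead of being dropped.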
490421774 (node MDEyOklzc3VlQ29tbWVudDQ5MDQyMTc3NA==) · andytraumueller 10809480 · NONE
https://github.com/pydata/xarray/issues/2946#issuecomment-490421774 · https://api.github.com/repos/pydata/xarray/issues/2946
created 2019-05-08T09:44:25Z · updated 2019-05-08T09:49:02Z

Interesting fact I just learned: when you have to process a huge dataset, first export it as a complete single netCDF file, then calculate its aggregation function. It's a workaround; I suppose bottleneck or dask needs to have the complete set first. For mean it simply works because of the easy calculation method; for std I think dask or bottleneck assumes a NaN is a zero for calculation purposes.

It could be problematic for huge datasets in the TB size range.

Reactions: none
Issue: std interprets continents as zero not nan (441222339)
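The "NaN treated as zero" failure mode described above is easy to demonstrate with plain NumPy (a toy array, not the reporter's data):

```python
import numpy as np

a = np.array([10.0, 12.0, np.nan, 11.0])

# Correct skip-NaN behaviour: the NaN is excluded from the sample entirely.
print(np.nanstd(a))          # std of [10, 12, 11], about 0.816

# The suspected buggy behaviour: the NaN silently becomes a 0.
filled = np.nan_to_num(a)    # [10., 12., 0., 11.]
print(np.std(filled))        # about 4.8 -- far from the true spread
```

Over ocean/continent masks, where a large fraction of cells are NaN, zero-filling would inflate the standard deviation dramatically, which matches the symptom in the issue title.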
490394601 (node MDEyOklzc3VlQ29tbWVudDQ5MDM5NDYwMQ==) · andytraumueller 10809480 · NONE
https://github.com/pydata/xarray/issues/2946#issuecomment-490394601 · https://api.github.com/repos/pydata/xarray/issues/2946
created 2019-05-08T08:18:21Z · updated 2019-05-08T09:01:56Z

Fixed: synthetic dataset of the polar region −60° to −90°. In the mean calculation everything is proper and NaNs are ignored; std still looks suspicious.

```python
import xarray as xr

data = xr.open_dataset(r"test.nc")
data.mean(dim="time", skipna=True).to_netcdf(r"mean_test.nc")
```

Dropbox link to the files: https://www.dropbox.com/sh/yuf114u143mj2l3/AABuQfC5wu4nrWDH4GsGgFyJa?dl=0

Reactions: none
Issue: std interprets continents as zero not nan (441222339)
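One plausible reason mean behaves while std looks suspicious (an assumption on my part, not confirmed in the thread) is that a skip-NaN mean is a single masked reduction, whereas std is a second reduction built on top of that mean, so a NaN filled with zero at either stage skews the result. In NumPy terms:

```python
import numpy as np

x = np.array([2.0, 4.0, np.nan, 6.0])
mask = ~np.isnan(x)

# mean with NaNs skipped is just a masked sum over a masked count
mean = x[mask].sum() / mask.sum()
print(mean == np.nanmean(x))                    # True

# std is a second pass *around* that mean; both passes must drop the
# NaN consistently for the result to match nanstd
var = ((x[mask] - mean) ** 2).sum() / mask.sum()
print(np.isclose(np.sqrt(var), np.nanstd(x)))   # True
```

A chunked backend that gets the masking right in the first pass but zero-fills in the second would show exactly this asymmetry: correct means, wrong standard deviations.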
490375850 (node MDEyOklzc3VlQ29tbWVudDQ5MDM3NTg1MA==) · shoyer 1217238 · MEMBER
https://github.com/pydata/xarray/issues/2946#issuecomment-490375850 · https://api.github.com/repos/pydata/xarray/issues/2946
created 2019-05-08T07:13:28Z · updated 2019-05-08T07:13:28Z

Can you reproduce this with a synthetic dataset? Please read this guide on "Minimal Bug Reports": http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports

Reactions: none
Issue: std interprets continents as zero not nan (441222339)
```sql
CREATE TABLE [issue_comments] (
  [html_url] TEXT,
  [issue_url] TEXT,
  [id] INTEGER PRIMARY KEY,
  [node_id] TEXT,
  [user] INTEGER REFERENCES [users]([id]),
  [created_at] TEXT,
  [updated_at] TEXT,
  [author_association] TEXT,
  [body] TEXT,
  [reactions] TEXT,
  [performed_via_github_app] TEXT,
  [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
  ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
  ON [issue_comments] ([user]);
```
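Against this schema, the filter and sort in the page header ("4 rows where issue = 441222339 sorted by updated_at descending") could be reproduced with a query along these lines. The sketch below uses Python's stdlib sqlite3 on a throwaway in-memory copy of the table, with the foreign-key REFERENCES clauses dropped so it is self-contained; the inserted row is one real id from this page:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
  [html_url] TEXT, [issue_url] TEXT,
  [id] INTEGER PRIMARY KEY, [node_id] TEXT,
  [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
  [author_association] TEXT, [body] TEXT, [reactions] TEXT,
  [performed_via_github_app] TEXT, [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
""")
conn.execute(
    "INSERT INTO issue_comments (id, issue, updated_at) VALUES (?, ?, ?)",
    (490590509, 441222339, "2019-05-08T18:06:04Z"),
)

# The query behind this page's listing: filter on the indexed issue
# column, newest update first.
rows = conn.execute(
    "SELECT id FROM issue_comments WHERE issue = ? ORDER BY updated_at DESC",
    (441222339,),
).fetchall()
print(rows)  # [(490590509,)]
```

The `idx_issue_comments_issue` index is what makes the `WHERE issue = ?` filter cheap as the comments table grows.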