issue_comments
5 rows where author_association = "NONE", issue = 713834297 and user = 2560426 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
713172015 | https://github.com/pydata/xarray/issues/4482#issuecomment-713172015 | https://api.github.com/repos/pydata/xarray/issues/4482 | MDEyOklzc3VlQ29tbWVudDcxMzE3MjAxNQ== | heerad 2560426 | 2020-10-20T22:17:08Z | 2020-10-20T22:21:14Z | NONE | On the topic of fillna(), I'm seeing an odd unrelated issue that I don't have an explanation for. I have a dataarray … When I do …
Stack trace shows it's failing on a … I have no idea how to reproduce this simply... If it helps narrow things down, … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Allow skipna in .dot() 713834297 | |
708474940 | https://github.com/pydata/xarray/issues/4482#issuecomment-708474940 | https://api.github.com/repos/pydata/xarray/issues/4482 | MDEyOklzc3VlQ29tbWVudDcwODQ3NDk0MA== | heerad 2560426 | 2020-10-14T15:21:29Z | 2020-10-14T15:21:55Z | NONE | Adding on: whatever the solution is, it should be one that avoids blowing up memory, especially when using it with … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Allow skipna in .dot() 713834297 | |
707331260 | https://github.com/pydata/xarray/issues/4482#issuecomment-707331260 | https://api.github.com/repos/pydata/xarray/issues/4482 | MDEyOklzc3VlQ29tbWVudDcwNzMzMTI2MA== | heerad 2560426 | 2020-10-12T20:31:26Z | 2020-10-12T21:05:24Z | NONE | See below. I temporarily write some files to netcdf, then recombine them lazily using open_mfdataset. The issue seems to present itself more consistently when my … I used the …
```
import numpy as np
import xarray as xr
import os

N = 1000
N_per_file = 10
M = 100
K = 10
window_size = 150

tmp_dir = 'tmp'
os.mkdir(tmp_dir)

# save many netcdf files, later to be concatted into a dask.delayed dataset
for i in range(0, N, N_per_file):
    ...  # (loop body omitted)

# open lazily
x = xr.open_mfdataset('{}/*.nc'.format(tmp_dir), parallel=True, concat_dim='d1').vals

# a rolling window along a stacked dimension
x_windows = x.stack(d13=['d1', 'd3']).rolling(d13=window_size).construct('window')

# we'll dot x_windows with y along the window dimension
y = xr.DataArray([1]*window_size, dims='window')

# incremental memory: 1.94 MiB
x_windows.dot(y).compute()

# incremental memory: 20.00 MiB
x_windows.notnull().dot(y).compute()

# incremental memory: 182.13 MiB
x_windows.fillna(0.).dot(y).compute()

# incremental memory: 211.52 MiB
x_windows.weighted(y).mean('window', skipna=True).compute()
``` |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Allow skipna in .dot() 713834297 | |
707238146 | https://github.com/pydata/xarray/issues/4482#issuecomment-707238146 | https://api.github.com/repos/pydata/xarray/issues/4482 | MDEyOklzc3VlQ29tbWVudDcwNzIzODE0Ng== | heerad 2560426 | 2020-10-12T17:01:54Z | 2020-10-12T17:16:07Z | NONE | Adding on here, even if … This is happening with … More evidence in favor: if I do …
I'm happy to live with a memory copy for now with … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Allow skipna in .dot() 713834297 | |
702939943 | https://github.com/pydata/xarray/issues/4482#issuecomment-702939943 | https://api.github.com/repos/pydata/xarray/issues/4482 | MDEyOklzc3VlQ29tbWVudDcwMjkzOTk0Mw== | heerad 2560426 | 2020-10-02T20:20:53Z | 2020-10-02T20:32:32Z | NONE | Great, looks like I missed that option. Thanks. For reference, … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Allow skipna in .dot() 713834297 |
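The thread above is about a dot product that ignores NaNs, and every workaround discussed amounts to masking the windowed array first. The following is a small self-contained sketch (toy sizes, not the reproduction from the comments) comparing the fillna-based dot with a masked sum and with the weighted aggregation that also appears in the reproduction; it is an illustration of the equivalence, not code from the issue:

```
import numpy as np
import xarray as xr

# toy 1-D series with missing values (sizes are illustrative only)
x = xr.DataArray(np.random.randn(500), dims="d1")
x = x.where(x > -1.0)                      # introduce NaNs

window_size = 150
y = xr.DataArray(np.ones(window_size), dims="window")

# windowed array, as in the reproduction above
x_windows = x.rolling(d1=window_size).construct("window")

# three ways of getting a "skipna dot": mask-and-dot, masked sum, weighted sum
via_fillna   = x_windows.fillna(0.0).dot(y)
via_sum      = (x_windows * y).sum("window", skipna=True)
via_weighted = x_windows.weighted(y).sum("window", skipna=True)

print(np.allclose(via_fillna, via_sum))        # True
print(np.allclose(via_fillna, via_weighted))   # True
```

All three give the per-window sum of non-NaN values; the difference the comments focus on is how much memory each path allocates along the way.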
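As for why the masking step is the expensive part: `rolling(...).construct('window')` produces the windowed array without copying (a strided view for NumPy-backed data, lazy chunks for dask), so the plain `dot` stays cheap, while the NaN-masking operations (`fillna(0.)`, `notnull()`) have to materialize the full windowed array first, which would explain the incremental-memory figures reported above. A minimal NumPy-only sketch of that effect, with illustrative array sizes rather than the ones from the measurements:

```
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

a = np.random.randn(100_000)
a[a < -1.5] = np.nan              # sprinkle in some missing values
window_size = 150

# The windowed "array" is just a strided view: no new allocation.
win = sliding_window_view(a, window_size)      # shape (99851, 150)
print(np.shares_memory(win, a))                # True
print(a.nbytes / 1e6)                          # ~0.8 MB actually allocated

# Masking the NaNs has to materialize the full windowed array.
filled = np.nan_to_num(win, nan=0.0)
print(np.shares_memory(filled, a))             # False
print(filled.nbytes / 1e6)                     # ~120 MB

# The skipna-style dot product the issue asks for, via the materialized copy.
y = np.ones(window_size)
skipna_dot = filled @ y                        # per-window sum of non-NaN values
print(skipna_dot.shape)                        # (99851,)
```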
```
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
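Given this schema, the filter described at the top of the page can be re-run against a local copy of the database. A sketch using Python's sqlite3, where `github.db` is a placeholder filename for that local copy (not a path named anywhere on this page):

```
import sqlite3

# "github.db" is a placeholder path for a local copy of this database.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    select id, created_at, updated_at, body
    from issue_comments
    where author_association = 'NONE'
      and issue = 713834297
      and "user" = 2560426
    order by updated_at desc
    """
).fetchall()

for comment_id, created_at, updated_at, body in rows:
    print(comment_id, updated_at)
```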