issue_comments
7 rows where user = 1796208 sorted by updated_at descending
Each comment below is listed with its id, comment URL, author, created/updated timestamp (sorted by updated_at descending), author_association, reactions, body, and the issue it belongs to.
id 487464367 · birdsarah (1796208) · created/updated 2019-04-29T06:32:01Z · author_association NONE · reactions: none
https://github.com/pydata/xarray/issues/2740#issuecomment-487464367
Issue: `open_zarr` hangs if 's3://' at front of root s3fs string (406178487)

> I am no longer having this issue - see #2927 - so closing.
id 457800642 · birdsarah (1796208) · created/updated 2019-01-26T04:22:42Z · author_association NONE · reactions: none
https://github.com/pydata/xarray/issues/2714#issuecomment-457800642
Issue: Extra dimension on first argument passed into apply_ufunc (403378297)

> Unfortunately neither of your suggestions work. With the second, I get the error:
>
> With the first:
>
> It's okay. I have something that works. And it's deterministic :D
id 457798552 · birdsarah (1796208) · created/updated 2019-01-26T03:47:08Z · author_association NONE · reactions: none
https://github.com/pydata/xarray/issues/2714#issuecomment-457798552
Issue: Extra dimension on first argument passed into apply_ufunc (403378297)

> phew!
id 457798514 · birdsarah (1796208) · created/updated 2019-01-26T03:46:36Z · author_association NONE · reactions: none
https://github.com/pydata/xarray/issues/2714#issuecomment-457798514
Issue: Extra dimension on first argument passed into apply_ufunc (403378297)

> Sure - thanks! I have a dataset that's long; the sample code shown below is 200k rows, but the full dataset will be much larger. I'm interested in pairwise distances, except not for all rows - just the distances for a few thousand rows, with respect to the full 200k. Here's how I hack this together.
>
> My starting array:
>
> ```python
> df_array = xr.DataArray(df)
> df_array = df_array.rename({PIVOT: 'all_sites'})
> df_array
> ```
> ```
> <xarray.DataArray (all_sites: 185084, dim_1: 245)>
> array([[0., 0., 0., ..., 0., 0., 0.],
>        [0., 0., 0., ..., 0., 0., 0.],
>        [0., 0., 0., ..., 0., 0., 0.],
>        ...,
>        [0., 0., 0., ..., 0., 0., 0.],
>        [0., 0., 0., ..., 0., 0., 0.],
>        [0., 0., 0., ..., 0., 0., 0.]])
> Coordinates:
>   * all_sites  (all_sites) object '0.gravatar.com||gprofiles.js||Gravatar.init' ... 'курсы.1сентября.рф||store.js||store.set'
>   * dim_1      (dim_1) object 'AnalyserNode.connect' ... 'HTMLCanvasElement.previousSibling'
> ```
>
> My slice of the array
>
> Chunk
>
> Get distances:
>
> ```python
> def get_chebyshev_distances_xarray_ufunc(df_array, df_dye_array):
>     chebyshev = lambda x: np.abs(df_array[:, 0, :] - x).max(axis=1)
>     result = np.apply_along_axis(chebyshev, 1, df_dye_array).T
>     return result
>
> distance_array = xr.apply_ufunc(
>     get_chebyshev_distances_xarray_ufunc,
>     df_array_c,
>     df_dye_array_c,
>     dask='parallelized',
>     output_dtypes=[float],
>     input_core_dims=[['dim_1'], ['dim_1']],
> )
> ```
>
> What I get out is an array with the length of my original array and the width of my sites of interest, where each number is the Chebyshev distance between their respective rows of the original dataset (which are 245 long).
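For context on what the comment's hack computes: the pairwise Chebyshev distances can also be expressed with plain NumPy broadcasting. This is a minimal sketch with made-up array sizes (not the 185084 × 245 data), not the commenter's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
all_sites = rng.random((6, 4))     # stand-in for the full (185084, 245) array
sites_of_interest = all_sites[:2]  # stand-in for the few thousand rows of interest

# dist[i, j] = max_k |all_sites[i, k] - sites_of_interest[j, k]|
# Broadcasting (6, 1, 4) against (1, 2, 4) gives (6, 2, 4); max over the last axis.
dist = np.abs(all_sites[:, None, :] - sites_of_interest[None, :, :]).max(axis=-1)

print(dist.shape)  # (6, 2): rows of the original array x sites of interest
```

The result has the shape the comment describes: original-array length by number of sites of interest.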
id 457798029 · birdsarah (1796208) · created/updated 2019-01-26T03:38:31Z · author_association NONE · reactions: none
https://github.com/pydata/xarray/issues/2714#issuecomment-457798029
Issue: Extra dimension on first argument passed into apply_ufunc (403378297)

> Can you clarify one thing in your note.
>
> Is it
id 457797658 · birdsarah (1796208) · created/updated 2019-01-26T03:32:10Z · author_association NONE · reactions: none
https://github.com/pydata/xarray/issues/2714#issuecomment-457797658
Issue: Extra dimension on first argument passed into apply_ufunc (403378297)

> Hi, I will have to think about your response a lot more to see if I can wrap my head around it. In the meantime, I'm not sure I have my input_core_dims correct, but that's the only configuration I could get to work. I chunk along row_a and row_b, and I output a new array with the dims [row_a, row_b]. By trial and error, the above configuration is the only one I could find where I got out the dims I was expecting and didn't get an error.
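For reference, a small self-contained sketch (with toy dims, not the commenter's data) of how `input_core_dims` behaves in this situation: the listed core dim is moved to each input's last axis and consumed by the function, while the remaining dims are broadcast against each other, which is why the output comes out as [row_a, row_b]:

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.arange(12.0).reshape(3, 4), dims=['row_a', 'dim_1'])
b = xr.DataArray(np.arange(8.0).reshape(2, 4), dims=['row_b', 'dim_1'])

def chebyshev(x, y):
    # apply_ufunc moves each input's core dim ('dim_1') to its last axis and
    # broadcasts the remaining dims (row_a, row_b) against each other, so x
    # arrives as (3, 1, 4) and y as (1, 2, 4).
    return np.abs(x - y).max(axis=-1)

dist = xr.apply_ufunc(
    chebyshev, a, b,
    input_core_dims=[['dim_1'], ['dim_1']],
)
print(dist.dims)  # ('row_a', 'row_b')
```

Because `dim_1` is declared as a core dim for both inputs, it disappears from the output, leaving exactly the broadcast loop dims.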
id 457777423 · birdsarah (1796208) · created/updated 2019-01-26T00:09:24Z · author_association NONE · reactions: +1 ×1
https://github.com/pydata/xarray/issues/2714#issuecomment-457777423
Issue: Extra dimension on first argument passed into apply_ufunc (403378297)

> I should add, if I pass in plain numpy arrays then I do not have this problem. But ultimately I want to pass in a chunked DataArray, as described here: http://xarray.pydata.org/en/stable/dask.html#automatic-parallelization (this is my whole reason for using xarray). The workaround is easy - I just use
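The contrast the comment describes (plain numpy-backed arrays work, chunked dask-backed arrays need extra arguments) can be sketched as follows. This is an illustrative example with toy data, assuming dask is installed, not the commenter's actual setup:

```python
import numpy as np
import xarray as xr

rng = np.random.default_rng(0)
a = xr.DataArray(rng.random((6, 4)), dims=['row_a', 'dim_1'])
b = xr.DataArray(rng.random((2, 4)), dims=['row_b', 'dim_1'])

def chebyshev(x, y):
    return np.abs(x - y).max(axis=-1)

# In-memory (plain numpy-backed) DataArrays: no extra arguments needed.
eager = xr.apply_ufunc(chebyshev, a, b, input_core_dims=[['dim_1'], ['dim_1']])

# Chunked (dask-backed) DataArrays: dask='parallelized' plus output_dtypes,
# and the core dim 'dim_1' must remain unchunked.
lazy = xr.apply_ufunc(
    chebyshev,
    a.chunk({'row_a': 3}),
    b.chunk({'row_b': 1}),
    dask='parallelized',
    output_dtypes=[float],
    input_core_dims=[['dim_1'], ['dim_1']],
)
assert np.allclose(eager, lazy.compute())
```

Both paths produce the same (row_a, row_b) distance matrix; the chunked version just defers computation until `.compute()`.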
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```