issue_comments
13 rows where author_association = "CONTRIBUTOR" and user = 8982598 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
396745650 | https://github.com/pydata/xarray/issues/1914#issuecomment-396745650 | https://api.github.com/repos/pydata/xarray/issues/1914 | MDEyOklzc3VlQ29tbWVudDM5Njc0NTY1MA== | jcmgray 8982598 | 2018-06-12T21:48:31Z | 2018-06-12T22:40:31Z | CONTRIBUTOR |
Indeed, this is exactly the kind of situation I wrote xyzpy for:

```python
import numpy as np
import xyzpy as xyz

def some_function(x, y, z):
    return x * np.random.randn(3, 4) + y / z

# Define how to label the function's output
runner_opts = {
    'fn': some_function,
    'var_names': ['output'],
    'var_dims': {'output': ['a', 'b']},
    'var_coords': {'a': [10, 20, 30]},
}
runner = xyz.Runner(**runner_opts)

# set the parameters we want to explore (combos <-> cartesian product)
combos = {
    'x': np.linspace(1, 2, 11),
    'y': np.linspace(2, 3, 21),
    'z': np.linspace(4, 5, 31),
}

# run them
runner.run_combos(combos)
```

Should produce:

```
100%|###################| 7161/7161 [00:00<00:00, 132654.11it/s]
<xarray.Dataset>
Dimensions:  (a: 3, b: 4, x: 11, y: 21, z: 31)
Coordinates:
  * x        (x) float64 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0
  * y        (y) float64 2.0 2.05 2.1 2.15 2.2 2.25 2.3 2.35 2.4 2.45 2.5 ...
  * z        (z) float64 4.0 4.033 4.067 4.1 4.133 4.167 4.2 4.233 4.267 4.3 ...
  * a        (a) int32 10 20 30
Dimensions without coordinates: b
Data variables:
    output   (x, y, z, a, b) float64 0.6942 -0.3348 -0.9156 -0.517 -0.834 ...
```

And there are options for merging successive, disjoint sets of data (…); see the sketch after this record. There are also multiple ways to define functions' inputs/outputs (the easiest of which is just to actually return a … |
{ "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
cartesian product of coordinates and using it to index / fill empty dataset 297560256 | |
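The specific xyzpy helper for merging runs is truncated in the cell above, so here is only a rough, hedged illustration of the underlying idea at the xarray level: datasets produced for disjoint parameter values can be combined with plain xr.merge (the dataset contents are invented for the example; this is not the xyzpy API):

```python
import numpy as np
import xarray as xr

# two runs covering disjoint values of the same coordinate
run1 = xr.Dataset({'output': ('x', [0.1, 0.2])}, coords={'x': [1.0, 1.1]})
run2 = xr.Dataset({'output': ('x', [0.3, 0.4])}, coords={'x': [1.2, 1.3]})

# merge aligns on coordinates, so the two runs slot together
combined = xr.merge([run1, run2])
print(combined['x'].values)   # [1.  1.1 1.2 1.3]
```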
276540506 | https://github.com/pydata/xarray/issues/60#issuecomment-276540506 | https://api.github.com/repos/pydata/xarray/issues/60 | MDEyOklzc3VlQ29tbWVudDI3NjU0MDUwNg== | jcmgray 8982598 | 2017-02-01T00:43:52Z | 2017-02-01T00:43:52Z | CONTRIBUTOR |
> Would using …

Ah yes true. I was slightly anticipating e.g. filling with NaT if the … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement DataArray.idxmax() 29136905 | |
276537615 | https://github.com/pydata/xarray/issues/60#issuecomment-276537615 | https://api.github.com/repos/pydata/xarray/issues/60 | MDEyOklzc3VlQ29tbWVudDI3NjUzNzYxNQ== | jcmgray 8982598 | 2017-02-01T00:26:24Z | 2017-02-01T00:26:24Z | CONTRIBUTOR |
Ah yes, both ways are working now, thanks. Just had a little play around with timings, and this seems like a reasonably quick way to achieve correct NaN behaviour:

```python
def xr_idxmax(obj, dim):
    sig = ([(dim,), (dim,)], [()])
    kwargs = {'axis': -1}
    # ... (rest of the function is cut off in this cell)
```

i.e. originally replace all NaN values with -Inf, use the usual … (a hedged completion of this approach is sketched after this record). |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement DataArray.idxmax() 29136905 | |
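Since the function body above is truncated, here is a minimal sketch of the approach the comment describes (fill NaNs with -inf so they can never win the argmax, then look the winning positions up in the coordinate labels). It is written with plain DataArray methods rather than whatever the original used, so the names and details are assumptions:

```python
import numpy as np
import xarray as xr

def xr_idxmax(obj, dim):
    # -inf can never be the maximum, so NaNs are effectively skipped
    # (assumes float data; an all-NaN slice will still return the first label)
    filled = obj.fillna(-np.inf)
    positions = filled.argmax(dim)          # integer positions along `dim`
    return obj[dim].isel({dim: positions})  # map positions to coordinate labels
```

Recent xarray releases also provide DataArray.idxmax() directly, which handles the NaN bookkeeping itself.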
276232678 | https://github.com/pydata/xarray/issues/60#issuecomment-276232678 | https://api.github.com/repos/pydata/xarray/issues/60 | MDEyOklzc3VlQ29tbWVudDI3NjIzMjY3OA== | jcmgray 8982598 | 2017-01-31T00:06:02Z | 2017-01-31T00:06:02Z | CONTRIBUTOR |
So I thought …

Regarding edge cases: multiple maxes is presumably fine as long as the user is aware it just takes the first.
However, … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement DataArray.idxmax() 29136905 | |
275778443 | https://github.com/pydata/xarray/issues/60#issuecomment-275778443 | https://api.github.com/repos/pydata/xarray/issues/60 | MDEyOklzc3VlQ29tbWVudDI3NTc3ODQ0Mw== | jcmgray 8982598 | 2017-01-27T21:24:31Z | 2017-01-27T21:24:31Z | CONTRIBUTOR |
Just as I am interested in having this functionality, and the new …

```python
from wherever import argmax, take  # numpy or dask

def gufunc_idxmax(x, y, axis=None):
    indx = argmax(x, axis)
    return take(y, indx)

def idxmax(obj, dim):
    sig = ([(dim,), (dim,)], [()])
    kwargs = {'axis': -1}
    return apply_ufunc(gufunc_idxmax, obj, obj[dim],
                       signature=sig, kwargs=kwargs,
                       dask_array='allowed')
```

(A note on how these prototype keyword names map onto the released apply_ufunc follows this record.) |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement DataArray.idxmax() 29136905 | |
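The snippet above targets the apply_ufunc prototype being discussed in that thread; the keyword names changed before it was released. Assuming the mapping below is right, the same call against current xarray would look roughly like this (reusing gufunc_idxmax from the comment above):

```python
import xarray as xr

def idxmax(obj, dim):
    return xr.apply_ufunc(
        gufunc_idxmax, obj, obj[dim],
        input_core_dims=[[dim], [dim]],  # replaces signature=sig
        kwargs={'axis': -1},
        dask='allowed',                  # replaces dask_array='allowed'
    )
```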
248380693 | https://github.com/pydata/xarray/pull/1007#issuecomment-248380693 | https://api.github.com/repos/pydata/xarray/issues/1007 | MDEyOklzc3VlQ29tbWVudDI0ODM4MDY5Mw== | jcmgray 8982598 | 2016-09-20T17:57:31Z | 2016-09-20T17:57:31Z | CONTRIBUTOR | This all looks great to me. Might the docstring for |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Fixes for compat='no_conflicts' and open_mfdataset 177689088 | |
246999780 | https://github.com/pydata/xarray/pull/996#issuecomment-246999780 | https://api.github.com/repos/pydata/xarray/issues/996 | MDEyOklzc3VlQ29tbWVudDI0Njk5OTc4MA== | jcmgray 8982598 | 2016-09-14T12:39:41Z | 2016-09-14T12:39:41Z | CONTRIBUTOR |
OK, I have stripped the Dataset/Array methods, which I agree were largely redundant. Since this sets this type of comparison/merge slightly apart, … (a usage sketch of the option this PR adds follows this record).

And I've done a first pass at updating the docs. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
add 'no_conflicts' as compat option for merging non-conflicting data 174404136 | |
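For readers who have not met the option this PR adds, a minimal usage sketch of compat='no_conflicts' (variable and coordinate names are invented for the example):

```python
import numpy as np
import xarray as xr

# the two datasets overlap, but only where at least one of them is NaN
a = xr.Dataset({'v': ('x', [1.0, 2.0, np.nan])}, coords={'x': [1, 2, 3]})
b = xr.Dataset({'v': ('x', [np.nan, 2.0, 3.0])}, coords={'x': [1, 2, 3]})

# succeeds because the non-null values never disagree
merged = xr.merge([a, b], compat='no_conflicts')

# a genuinely conflicting non-null value would raise a MergeError instead
```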
244612732 | https://github.com/pydata/xarray/pull/996#issuecomment-244612732 | https://api.github.com/repos/pydata/xarray/issues/996 | MDEyOklzc3VlQ29tbWVudDI0NDYxMjczMg== | jcmgray 8982598 | 2016-09-04T16:23:21Z | 2016-09-04T16:23:21Z | CONTRIBUTOR |
I will have a look into how to do this, but am not that familiar with dask. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
add 'no_conflicts' as compat option for merging non-conflicting data 174404136 | |
244612031 | https://github.com/pydata/xarray/pull/996#issuecomment-244612031 | https://api.github.com/repos/pydata/xarray/issues/996 | MDEyOklzc3VlQ29tbWVudDI0NDYxMjAzMQ== | jcmgray 8982598 | 2016-09-04T16:11:10Z | 2016-09-04T16:11:10Z | CONTRIBUTOR | Ah sorry - yes rebased locally then mistakenly merged the remote fork...
Yes, I thought that might be better also; the advantages of … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
add 'no_conflicts' as compat option for merging non-conflicting data 174404136 | |
242235696 | https://github.com/pydata/xarray/issues/742#issuecomment-242235696 | https://api.github.com/repos/pydata/xarray/issues/742 | MDEyOklzc3VlQ29tbWVudDI0MjIzNTY5Ng== | jcmgray 8982598 | 2016-08-24T23:05:49Z | 2016-08-24T23:05:49Z | CONTRIBUTOR |
@shoyer My 2 cents for how this might work after 0.8+ (auto-align during …):

```python
import xarray.ufuncs as xrufuncs

def nonnull_compatible(first, second):
    """ Check whether two (aligned) datasets have any conflicting non-null values. """
    # ... (body is cut off in this cell)
```

And then … (a hedged completion of the check is sketched after this record). |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
merge and align DataArrays/Datasets on different domains 130753818 | |
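Because the body of nonnull_compatible is truncated above, here is a minimal sketch of one way the check could be written with current xarray methods instead of xarray.ufuncs; DataArray inputs are assumed and this is not necessarily the author's implementation:

```python
import xarray as xr

def nonnull_compatible(first, second):
    """Return True if two aligned DataArrays agree everywhere both have data."""
    both_valid = first.notnull() & second.notnull()
    # positions where only one side has data can never conflict, so count them as True
    return bool((first == second).where(both_valid, True).all())
```

This is essentially the per-variable check behind the compat='no_conflicts' merge option discussed in the PR comments above.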
227573330 | https://github.com/pydata/xarray/issues/742#issuecomment-227573330 | https://api.github.com/repos/pydata/xarray/issues/742 | MDEyOklzc3VlQ29tbWVudDIyNzU3MzMzMA== | jcmgray 8982598 | 2016-06-21T21:11:21Z | 2016-06-21T21:11:21Z | CONTRIBUTOR |
Woops - I actually meant to put … in there as the one that works ... my understanding is that this is supported as long as the specified coordinates are 'nice' (according to …).

And yes, default values for DataArray/Dataset would definitely fill the "create_all_missing" need. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
merge and align DataArrays/Datasets on different domains 130753818 | |
226547071 | https://github.com/pydata/xarray/issues/742#issuecomment-226547071 | https://api.github.com/repos/pydata/xarray/issues/742 | MDEyOklzc3VlQ29tbWVudDIyNjU0NzA3MQ== | jcmgray 8982598 | 2016-06-16T16:57:48Z | 2016-06-16T16:57:48Z | CONTRIBUTOR |
Yes, following a similar line of thought to you I recently wrote an 'all missing' dataset constructor (rather than 'empty', which I think of as no variables): … (a hedged sketch of such a constructor follows this record).

To go with this (and this might be a separate issue), a … guarantees assigning a new value (currently only the last syntax, I believe). |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
merge and align DataArrays/Datasets on different domains 130753818 | |
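The constructor itself is cut off in the cell above, so here is only a minimal sketch of what an 'all missing' Dataset constructor could look like; the function name, signature, and behaviour are assumptions, not the author's code:

```python
import numpy as np
import xarray as xr

def all_missing_dataset(coords, var_names, var_dims):
    """Build a Dataset whose variables exist but are entirely NaN, ready to be filled in later."""
    coords = {name: np.asarray(values) for name, values in coords.items()}
    data_vars = {}
    for name in var_names:
        dims = var_dims[name]
        shape = tuple(len(coords[d]) for d in dims)
        data_vars[name] = (dims, np.full(shape, np.nan))
    return xr.Dataset(data_vars, coords=coords)

# e.g. a 3 x 2 all-NaN variable 'z' over coordinates x and y
ds = all_missing_dataset({'x': [1, 2, 3], 'y': [10, 20]}, ['z'], {'z': ('x', 'y')})
```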
226179313 | https://github.com/pydata/xarray/issues/742#issuecomment-226179313 | https://api.github.com/repos/pydata/xarray/issues/742 | MDEyOklzc3VlQ29tbWVudDIyNjE3OTMxMw== | jcmgray 8982598 | 2016-06-15T12:59:08Z | 2016-06-15T12:59:08Z | CONTRIBUTOR |
Just a comment that the appearance of …

I still use the … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
merge and align DataArrays/Datasets on different domains 130753818 |
Advanced export
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```