
issue_comments


4 rows where author_association = "CONTRIBUTOR" and issue = 130753818 sorted by updated_at descending


id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
242235696 https://github.com/pydata/xarray/issues/742#issuecomment-242235696 https://api.github.com/repos/pydata/xarray/issues/742 MDEyOklzc3VlQ29tbWVudDI0MjIzNTY5Ng== jcmgray 8982598 2016-08-24T23:05:49Z 2016-08-24T23:05:49Z CONTRIBUTOR

@shoyer My 2 cents for how this might work after 0.8+ (auto-align during concat, merge and auto_combine already goes a long way to solving this) is that the compat option of merge etc. could have a 4th option 'nonnull_equals' (or better named...), with compatibility tested by e.g.

``` python
import xarray.ufuncs as xrufuncs

def nonnull_compatible(first, second):
    """ Check whether two (aligned) datasets have any conflicting non-null values. """
    # mask for where both objects are not null
    both_not_null = xrufuncs.logical_not(first.isnull() | second.isnull())

    # check remaining values are equal
    return first.where(both_not_null).equals(second.where(both_not_null))
```

And then fillna to combine variables. Looking now I think this is very similar to what you are suggesting in #835.
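For illustration, a small sketch of that check-then-fillna combine on two DataArrays. (This uses the plain `~` operator in place of `xarray.ufuncs`, which later xarray versions removed; the data here is made up.)

``` python
import numpy as np
import xarray as xr

def nonnull_compatible(first, second):
    """Check that two aligned objects have no conflicting non-null values."""
    both_not_null = ~(first.isnull() | second.isnull())
    return first.where(both_not_null).equals(second.where(both_not_null))

a = xr.DataArray([1.0, np.nan, 3.0], dims="x")
b = xr.DataArray([1.0, 2.0, np.nan], dims="x")

if nonnull_compatible(a, b):
    combined = a.fillna(b)  # fill a's missing values from b
```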

  merge and align DataArrays/Datasets on different domains 130753818
227573330 https://github.com/pydata/xarray/issues/742#issuecomment-227573330 https://api.github.com/repos/pydata/xarray/issues/742 MDEyOklzc3VlQ29tbWVudDIyNzU3MzMzMA== jcmgray 8982598 2016-06-21T21:11:21Z 2016-06-21T21:11:21Z CONTRIBUTOR

Woops - I actually meant to put

``` python
ds['var'].loc[{...}]
```

in there as the one that works ... my understanding is that this is supported as long as the specified coordinates are 'nice' (according to pandas) slices/scalars.
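A quick sketch of that working form, on a made-up dataset (names here are purely illustrative):

``` python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"var": (("x",), np.zeros(3))},
    coords={"x": [10, 20, 30]},
)

# label-based assignment through .loc on the DataArray modifies ds in place
ds["var"].loc[{"x": 20}] = 5.0
```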

And yes, default values for DataArray/Dataset would definitely fill the "create_all_missing" need.

226547071 https://github.com/pydata/xarray/issues/742#issuecomment-226547071 https://api.github.com/repos/pydata/xarray/issues/742 MDEyOklzc3VlQ29tbWVudDIyNjU0NzA3MQ== jcmgray 8982598 2016-06-16T16:57:48Z 2016-06-16T16:57:48Z CONTRIBUTOR

Yes, following a similar line of thought to you, I recently wrote an 'all missing' dataset constructor (rather than 'empty', which I think of as having no variables):

``` python
import numpy as np
import xarray as xr

def all_missing_ds(coords, var_names, var_dims, var_types):
    """ Make a dataset whose data is all missing. """
    # Empty dataset with appropriate coordinates
    ds = xr.Dataset(coords=coords)
    for v_name, v_dims, v_type in zip(var_names, var_dims, var_types):
        shape = tuple(ds[d].size for d in v_dims)
        if v_type == int or v_type == float:
            # Warn about up-casting int to float?
            nodata = np.tile(np.nan, shape)
        elif v_type == complex:
            # astype(complex) produces (nan + 0.0j)
            nodata = np.tile(np.nan + np.nan*1.0j, shape)
        else:
            nodata = np.tile(np.nan, shape).astype(object)
        ds[v_name] = (v_dims, nodata)
    return ds
```
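As an aside, later xarray versions grew `xr.full_like`, which covers much of the same ground when a template object already exists; a minimal sketch (the `template` dataset is a made-up example):

``` python
import numpy as np
import xarray as xr

# template dataset with one float variable
template = xr.Dataset(
    {"a": (("x",), np.arange(3.0))},
    coords={"x": [0, 1, 2]},
)

# same variables, dims and coords, but every value missing
missing = xr.full_like(template, fill_value=np.nan)
```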

To go with this (and this might be a separate issue), a set_value method would be helpful, just so that one does not have to remember which particular combination of

``` python
ds.sel(...).var = new_values
ds.sel(...)['var'] = new_values
ds.var.sel(...) = new_values
ds['var'].sel(...) = new_values
```

guarantees assigning a new value (currently only the last syntax, I believe).

226179313 https://github.com/pydata/xarray/issues/742#issuecomment-226179313 https://api.github.com/repos/pydata/xarray/issues/742 MDEyOklzc3VlQ29tbWVudDIyNjE3OTMxMw== jcmgray 8982598 2016-06-15T12:59:08Z 2016-06-15T12:59:08Z CONTRIBUTOR

Just a comment that the appearance of object dtypes is likely due to the fact that numpy's NaNs are inherently floats - so this will be an issue for any method with an intermediate 'missing data' stage if non-floats are being used.

I still use the align and fillna method, since I mostly deal with floats/complex numbers, although @shoyer's suggestion of a partial align and then concat could definitely be cleaner when the added coordinates are all 'new'.
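A small sketch of that align-and-fillna pattern on made-up data (`join="outer"` expands both operands onto the union of coordinates, padding with NaN):

``` python
import numpy as np
import xarray as xr

a = xr.DataArray([1.0, 2.0], coords={"x": [0, 1]}, dims="x")
b = xr.DataArray([20.0, 30.0], coords={"x": [1, 2]}, dims="x")

# outer-align: NaN wherever a coordinate is missing from one operand
a2, b2 = xr.align(a, b, join="outer")

# fill a's gaps from b; where both have values, a's value wins
merged = a2.fillna(b2)
```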



CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 877.844ms · About: xarray-datasette