issue_comments

3 rows where issue = 403326458 and user = 1217238 sorted by updated_at descending

id: 485540381
html_url: https://github.com/pydata/xarray/issues/2710#issuecomment-485540381
issue_url: https://api.github.com/repos/pydata/xarray/issues/2710
node_id: MDEyOklzc3VlQ29tbWVudDQ4NTU0MDM4MQ==
user: shoyer (1217238)
created_at: 2019-04-22T20:26:03Z
updated_at: 2019-04-22T20:27:27Z
author_association: MEMBER

@barkls I think da.expand_dims(list(da.coords)) should work for this use-case.

Previously, we only used the argument to expand_dims() as a sequence, but now we distinguish between mappings and other sequences.

I don't know what the best resolution would be here, but this seems to be a hazard of duck-typing. I did not anticipate that some users would already be iterating over mappings like .coords.

reactions:
{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: xarray.DataArray.expand_dims() can only expand dimension for a point coordinate (403326458)
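
The distinction the comment above draws between sequence and mapping arguments to expand_dims() is easy to demonstrate. A minimal sketch, assuming a current xarray release; the array and coordinate names are illustrative:

import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(3), dims="x", coords={"a": 1, "b": 2})

# A plain sequence of names: each point (scalar) coordinate becomes a new
# size-1 dimension, which is the use case discussed in this issue.
expanded = da.expand_dims(list(da.coords))
print(expanded.dims)            # new dims are inserted in front: ('a', 'b', 'x')

# A mapping instead gives each new dimension an explicit size.
sized = da.expand_dims({"y": 3})
print(sized.dims, sized.shape)  # ('y', 'x') (3, 3)
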
id: 458745457
html_url: https://github.com/pydata/xarray/issues/2710#issuecomment-458745457
issue_url: https://api.github.com/repos/pydata/xarray/issues/2710
node_id: MDEyOklzc3VlQ29tbWVudDQ1ODc0NTQ1Nw==
user: shoyer (1217238)
created_at: 2019-01-29T23:13:33Z
updated_at: 2019-01-29T23:13:33Z
author_association: MEMBER

> Would it be alright if I opened a PR sometime soon that upgraded expand_dims to support inserting/broadcasting dimensions with size > 1 (the first feature)?

Yes, that sounds welcome to me!

I think much of the underlying logic should already exist on the Variable.set_dims() method. See also the either_dict_or_kwargs utility in xarray.core.utils.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: xarray.DataArray.expand_dims() can only expand dimension for a point coordinate (403326458)
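
The Variable.set_dims() method mentioned above already does the insert-and-broadcast step at the Variable level. A rough sketch of that behavior (not the eventual PR; the names and shape are made up for illustration):

import numpy as np
import xarray as xr

v = xr.Variable(("x",), np.arange(3.0))

# set_dims returns a new Variable whose dimensions are a superset of the
# original's; passing an explicit shape broadcasts the data to that shape.
v2 = v.set_dims(("y", "x"), shape=(2, 3))
print(v2.dims, v2.shape)  # ('y', 'x') (2, 3)
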
id: 457799138
html_url: https://github.com/pydata/xarray/issues/2710#issuecomment-457799138
issue_url: https://api.github.com/repos/pydata/xarray/issues/2710
node_id: MDEyOklzc3VlQ29tbWVudDQ1Nzc5OTEzOA==
user: shoyer (1217238)
created_at: 2019-01-26T03:55:39Z
updated_at: 2019-01-26T03:55:39Z
author_association: MEMBER

> broadcast the data across 1 or more new dimensions

Yes, this feels in scope for expand_dims(). But I think there are two separate features here:

1. Support inserting/broadcasting dimensions with size > 1.
2. Specify the size of the new dimension implicitly, by providing coordinate labels.

I think we would want both to be supported -- you should not be required to supply coordinate labels in order to expand to a dimension of size > 1. We can imagine the first being spelled like da.expand_dims({'a': 3}) or da.expand_dims(a=3).

> expand an existing dimension to include 1 or more new coordinates

This feels a little different from expand_dims to me. Here the fundamental operation is alignment/reindexing, not broadcasting across a new dimension. The result also looks different, because you get all the NaN values.

I would probably write this with reindex, e.g.,

In [12]: da.reindex(b=list(da.b.values) + [5, 6])
Out[12]:
<xarray.DataArray (b: 7, c: 3)>
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [nan, nan, nan],
       [nan, nan, nan]])
Coordinates:
  * b        (b) int64 0 1 2 3 4 5 6
  * c        (c) int64 0 1 2

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: xarray.DataArray.expand_dims() can only expand dimension for a point coordinate (403326458)
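
Both proposed spellings now exist in expand_dims(); a short sketch of the two features under a current xarray release (the array mirrors the reindex example above):

import numpy as np
import xarray as xr

da = xr.DataArray(np.ones((5, 3)), dims=("b", "c"),
                  coords={"b": np.arange(5), "c": np.arange(3)})

# Feature 1: insert a new dimension with an explicit size, no labels required.
da.expand_dims({"a": 3})             # equivalent spelling: da.expand_dims(a=3)

# Feature 2: size the new dimension implicitly by providing coordinate labels.
da.expand_dims({"a": [10, 20, 30]})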

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
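
For reference, a sketch of the query behind this page, run with Python's standard sqlite3 module against a local copy of the database (the github.db filename is an assumption; any github-to-sqlite export with this schema would work):

import sqlite3

conn = sqlite3.connect("github.db")  # assumed local export of this database
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, body
    FROM issue_comments
    WHERE issue = 403326458 AND "user" = 1217238
    ORDER BY updated_at DESC
    """
).fetchall()

for comment_id, created, updated, body in rows:
    print(comment_id, updated)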