issue_comments

5 rows where issue = 403326458 and user = 10720577 sorted by updated_at descending

Columns: id, html_url, issue_url, node_id, user, created_at, updated_at ▲, author_association, body, reactions, performed_via_github_app, issue
589133779 https://github.com/pydata/xarray/issues/2710#issuecomment-589133779 https://api.github.com/repos/pydata/xarray/issues/2710 MDEyOklzc3VlQ29tbWVudDU4OTEzMzc3OQ== pletchm 10720577 2020-02-20T15:35:22Z 2020-02-20T15:35:22Z CONTRIBUTOR

Yes, @TomNicholas. My PR got merged but I forgot to close the issue -- closing it now. Thanks for checking.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray.DataArray.expand_dims() can only expand dimension for a point coordinate  403326458
485544371 https://github.com/pydata/xarray/issues/2710#issuecomment-485544371 https://api.github.com/repos/pydata/xarray/issues/2710 MDEyOklzc3VlQ29tbWVudDQ4NTU0NDM3MQ== pletchm 10720577 2019-04-22T20:39:17Z 2019-04-22T20:39:17Z CONTRIBUTOR

Another solution could be adding support for da.sel(dim1='a', squeeze=False) to avoid losing the dim1 dimension/coordinate in the first place.

Or, equivalently, you could just do da.sel(dim1=['a']).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray.DataArray.expand_dims() can only expand dimension for a point coordinate  403326458
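
For reference, a minimal sketch of the distinction discussed in the comment above (the example array is made up here, not taken from the thread): scalar selection drops the selected dimension, while selecting with a one-element list keeps it.

```
# Hypothetical setup: scalar .sel() drops "dim1", list .sel() keeps it.
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.ones([2, 3]),
    coords={"dim1": ["a", "b"], "dim2": range(3)},
    dims=["dim1", "dim2"],
)

scalar = da.sel(dim1="a")    # "dim1" collapses to a point coordinate
kept = da.sel(dim1=["a"])    # "dim1" survives with length 1

print(scalar.dims)  # ('dim2',)
print(kept.dims)    # ('dim1', 'dim2')
```

Note that da.sel(dim1='a', squeeze=False) was only a proposal in this comment; as far as I know .sel() takes no squeeze keyword, so the sketch uses the list form only.
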
458730514 https://github.com/pydata/xarray/issues/2710#issuecomment-458730514 https://api.github.com/repos/pydata/xarray/issues/2710 MDEyOklzc3VlQ29tbWVudDQ1ODczMDUxNA== pletchm 10720577 2019-01-29T22:19:58Z 2019-01-29T22:19:58Z CONTRIBUTOR

Oh I see what you're saying. Yeah, that makes sense.

To get the equivalent of da.expand_dims(a=[9, 10, 11]), you'd do

```
new = da.expand_dims(a=3)
new
<xarray.DataArray (a: 3, b: 5, c: 3)>
...
Coordinates:
  * b        (b) int64 0 1 2 3 4
  * c        (c) int64 0 1 2
Dimensions without coordinates: a
new["a"] = [9, 10, 11]
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray.DataArray.expand_dims() can only expand dimension for a point coordinate  403326458
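
A runnable sketch of the workaround described above, assuming an xarray release in which expand_dims() accepts a size for the new dimension (the feature discussed in this thread):

```
# Assumed: expand_dims() accepts an integer size for the new dimension.
import numpy as np
import xarray as xr

coords = {"b": range(5), "c": range(3)}
da = xr.DataArray(np.ones([5, 3]), coords=coords, dims=list(coords.keys()))

# Insert a new leading dimension "a" of length 3 (no coordinate yet) ...
new = da.expand_dims(a=3)

# ... then attach the coordinate labels afterwards.
new["a"] = [9, 10, 11]

print(new.dims)         # ('a', 'b', 'c')
print(new["a"].values)  # [ 9 10 11]
```
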
458638827 https://github.com/pydata/xarray/issues/2710#issuecomment-458638827 https://api.github.com/repos/pydata/xarray/issues/2710 MDEyOklzc3VlQ29tbWVudDQ1ODYzODgyNw== pletchm 10720577 2019-01-29T17:49:55Z 2019-01-29T17:49:55Z CONTRIBUTOR

Those would be equivalent, I think, assuming they're both manipulating the same da object (I meant for them to be separate calls, not sequential, but even if they were sequential, expand_dims doesn't and wouldn't alter da; it returns a new xarray object instead). I edited my post above to clarify what da is.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray.DataArray.expand_dims() can only expand dimension for a point coordinate  403326458
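
A small sketch of the point made above (the array construction is hypothetical): expand_dims() returns a new object and leaves the original DataArray untouched.

```
# Hypothetical example: the original da keeps its dims after expand_dims().
import numpy as np
import xarray as xr

da = xr.DataArray(np.ones([5, 3]), dims=["b", "c"])

expanded = da.expand_dims("a")   # insert a new size-1 dimension "a"

print(da.dims)        # ('b', 'c')      -- original unchanged
print(expanded.dims)  # ('a', 'b', 'c') -- new object carries the extra dim
```
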
458609172 https://github.com/pydata/xarray/issues/2710#issuecomment-458609172 https://api.github.com/repos/pydata/xarray/issues/2710 MDEyOklzc3VlQ29tbWVudDQ1ODYwOTE3Mg== pletchm 10720577 2019-01-29T16:32:36Z 2019-01-29T17:44:25Z CONTRIBUTOR

Hi, thanks for replying. I see what you mean about the two separate features.

Would it be alright if I opened a PR sometime soon that upgrades expand_dims to support inserting/broadcasting dimensions with size > 1 (the first feature)?

I would use your suggested API, i.e. not requiring explicit coordinate names -- that makes sense. However, it feels like the dimension kwargs (i.e. the new dimension or dimensions) should be allowed to take either implicit or explicit coordinates, in case the user doesn't want 0-based integer coordinates for the new dimension. For example, da.expand_dims(a=3) is equivalent to da.expand_dims(a=[0, 1, 2]), but da.expand_dims(a=['w', 'x', 'y', 'z']) will also work, where da is

```
coords = {"b": range(5), "c": range(3)}
da = xr.DataArray(np.ones([5, 3]), coords=coords, dims=list(coords.keys()))
da
<xarray.DataArray (b: 5, c: 3)>
array([[1., 1., 1.],
       [1., 1., 1.],
       [1., 1., 1.],
       [1., 1., 1.],
       [1., 1., 1.]])
Coordinates:
  * b        (b) int64 0 1 2 3 4
  * c        (c) int64 0 1 2
```

Does that make sense?

Thank you! Martin

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray.DataArray.expand_dims() can only expand dimension for a point coordinate  403326458
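
A sketch of the API proposed in the comment above, assuming an xarray release that includes the feature discussed in this thread (the dimension kwarg accepting either a size or explicit coordinate labels):

```
# Assumed: expand_dims() accepts either an integer size or a sequence of labels.
import numpy as np
import xarray as xr

coords = {"b": range(5), "c": range(3)}
da = xr.DataArray(np.ones([5, 3]), coords=coords, dims=list(coords.keys()))

implicit = da.expand_dims(a=3)                     # dim "a" of size 3, no coordinate
explicit = da.expand_dims(a=["w", "x", "y", "z"])  # dim "a" with explicit labels

print(implicit.sizes["a"], "a" in implicit.coords)       # 3 False
print(explicit.sizes["a"], list(explicit["a"].values))   # 4 ['w', 'x', 'y', 'z']
```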

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
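
For reference, a sketch of how the query behind this page ("5 rows where issue = 403326458 and user = 10720577 sorted by updated_at descending") could be reproduced against the schema above; the SQLite database filename is an assumption.

```
# Assumed: a local SQLite file "github.db" containing the issue_comments table above.
import sqlite3

conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 403326458 AND user = 10720577
    ORDER BY updated_at DESC
    """
).fetchall()

for row in rows:
    print(row[0], row[2])  # comment id and updated_at
```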