
issue_comments


5 rows where issue = 1429172192 sorted by updated_at descending


id html_url issue_url node_id user created_at updated_at author_association body reactions performed_via_github_app issue
1494033789 https://github.com/pydata/xarray/issues/7239#issuecomment-1494033789 https://api.github.com/repos/pydata/xarray/issues/7239 IC_kwDOAMm_X85ZDSV9 akanshajais 85181086 2023-04-03T10:01:52Z 2023-04-03T10:01:52Z NONE

A workaround for achieving this would be to use the `apply` method of xarray data structures along with the `expand_dims` method. Here's an example of how we can use it:

```python
import xarray as xr

dataset = xr.Dataset(data_vars={'foo': 1, 'bar': 2})

# Define a function that expands the given variable along a new dimension
def expand_variable(da):
    if da.name == 'foo':
        return da.expand_dims('zar')
    else:
        return da

# Use the apply method to apply the function to only the desired variables
expanded_dataset = dataset.apply(expand_variable, keep_attrs=True)

print(expanded_dataset)
```

Here, the `expand_variable` function is defined to only expand the variable with the name 'foo' along a new dimension. The `apply` method is used to apply this function to each data variable in the dataset, but only 'foo' is actually modified.
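
As an aside, in recent xarray releases `Dataset.apply` is a deprecated alias of `Dataset.map`. A minimal self-contained sketch of the same idea using `map`; the 'foo' and 'zar' names mirror the example above and are purely illustrative:

```python
import xarray as xr

dataset = xr.Dataset(data_vars={"foo": 1, "bar": 2})

def expand_variable(da):
    # Only the variable named 'foo' gains the new 'zar' dimension.
    if da.name == "foo":
        return da.expand_dims("zar")
    return da

# Dataset.map applies the function to every data variable and
# collects the results into a new Dataset.
expanded_dataset = dataset.map(expand_variable, keep_attrs=True)
print(expanded_dataset)
```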

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  include/exclude lists in Dataset.expand_dims 1429172192
1299369449 https://github.com/pydata/xarray/issues/7239#issuecomment-1299369449 https://api.github.com/repos/pydata/xarray/issues/7239 IC_kwDOAMm_X85Ncs3p hmaarrfk 90008 2022-11-01T23:54:07Z 2022-11-01T23:54:07Z CONTRIBUTOR

I think these are good alternatives.

From my experiments (and I'm still trying to create a minimal reproducible example that shows the real problem behind the slowdowns), reindexing can be quite expensive. We used to have many coordinates (to ensure that critical metadata stays with the data variables), and those coordinates were causing slowdowns in reindexing operations.

Thus the two calls, `update` and `expand_dims`, might cause two reindex merges to occur.

However, for this particular issue, I think that documenting the strategies proposed here in the docstring is good enough. I have a feeling that if one can get to the bottom of #7224, the performance concerns here will be mitigated too.

We can leave the performance discussion to: https://github.com/pydata/xarray/issues/7224

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  include/exclude lists in Dataset.expand_dims 1429172192
1299289978 https://github.com/pydata/xarray/issues/7239#issuecomment-1299289978 https://api.github.com/repos/pydata/xarray/issues/7239 IC_kwDOAMm_X85NcZd6 keewis 14808389 2022-11-01T22:07:23Z 2022-11-01T22:07:23Z MEMBER

Note that the return value of `update` is deprecated, since `update` is one of the few in-place operations xarray has. Going forward it should be:

```python
dataset.update(dataset[[vars_to_expand]].expand_dims(...))
```
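
For illustration, a minimal self-contained sketch of that idiom, assuming `vars_to_expand` is a list of data variable names and that the new dimension is called 'zar' (both names are made up for the example):

```python
import xarray as xr

dataset = xr.Dataset(data_vars={"foo": 1, "bar": 2})
vars_to_expand = ["foo"]

# Select the sub-Dataset of variables to expand, give them the new
# dimension, and merge them back. update modifies `dataset` in place,
# so there is no need to use its return value.
dataset.update(dataset[vars_to_expand].expand_dims("zar"))

print(dataset)  # 'foo' now has the extra 'zar' dimension, 'bar' is unchanged
```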

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  include/exclude lists in Dataset.expand_dims 1429172192
1298820986 https://github.com/pydata/xarray/issues/7239#issuecomment-1298820986 https://api.github.com/repos/pydata/xarray/issues/7239 IC_kwDOAMm_X85Nam96 dcherian 2448579 2022-11-01T16:48:04Z 2022-11-01T16:48:04Z MEMBER

You can also generalize to `dataset = dataset.update(dataset[[vars_to_expand]].expand_dims(...))`

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  include/exclude lists in Dataset.expand_dims 1429172192
1297573961 https://github.com/pydata/xarray/issues/7239#issuecomment-1297573961 https://api.github.com/repos/pydata/xarray/issues/7239 IC_kwDOAMm_X85NV2hJ headtr1ck 43316012 2022-10-31T19:33:30Z 2022-10-31T19:33:30Z COLLABORATOR

If it's only a single variable you could do: `dataset["foo"] = dataset["foo"].expand_dims("zar")`
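
As a minimal illustration of that one-liner (the 'foo' and 'zar' names are just placeholders):

```python
import xarray as xr

dataset = xr.Dataset(data_vars={"foo": 1, "bar": 2})

# Assign the expanded variable back into the dataset; only 'foo'
# gains the new 'zar' dimension, 'bar' keeps its original shape.
dataset["foo"] = dataset["foo"].expand_dims("zar")

print(dataset["foo"].dims)  # ('zar',)
print(dataset["bar"].dims)  # ()
```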

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  include/exclude lists in Dataset.expand_dims 1429172192

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);