issue_comments

7 rows where issue = 320838184 sorted by updated_at descending

Commenters: brey (3), stale[bot] (2), shoyer (1), max-sixty (1)
Author association: NONE (5), MEMBER (2)
Issue: Avoiding duplicate time coordinates when opening multiple files (7 comments)
max-sixty (MEMBER) · 2022-05-01T22:10:48Z
https://github.com/pydata/xarray/issues/2108#issuecomment-1114349288

Is this now fixed by drop_duplicates? https://docs.xarray.dev/en/stable/generated/xarray.DataArray.drop_duplicates.html

Reactions: +1 × 1
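A minimal sketch of the drop_duplicates approach mentioned above; the time stamps and values are invented for illustration, standing in for what xr.open_mfdataset might produce from files with overlapping time ranges:

```python
import numpy as np
import pandas as pd
import xarray as xr

# A small DataArray with one duplicated time stamp (values are made up).
times = pd.to_datetime(["2000-01-01", "2000-01-02", "2000-01-02", "2000-01-03"])
arr = xr.DataArray(np.arange(4), coords={"time": times}, dims="time")

# drop_duplicates (available since xarray 0.19) keeps the first
# occurrence along the given dimension by default.
deduped = arr.drop_duplicates("time")
print(deduped.sizes["time"])  # 3
```

drop_duplicates also accepts keep="last" or keep=False, which addresses part of the merge-strategy concern raised later in the thread.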
stale[bot] (NONE) · 2022-05-01T17:37:49Z
https://github.com/pydata/xarray/issues/2108#issuecomment-1114293906

In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity.

If this issue remains relevant, please comment here or remove the stale label; otherwise it will be marked as closed automatically.
brey (NONE) · 2020-04-14T15:55:05Z
https://github.com/pydata/xarray/issues/2108#issuecomment-613525795

I am adding a comment here to keep this alive. In fact, this is more complicated than it seems: when combining files with duplicate times, one has to choose how to merge, i.e. keep first, keep last, or even a combination of the two.
stale[bot] (NONE) · 2020-04-14T15:42:12Z
https://github.com/pydata/xarray/issues/2108#issuecomment-613518293

In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity.

If this issue remains relevant, please comment here or remove the stale label; otherwise it will be marked as closed automatically.
brey (NONE) · 2018-05-15T14:53:38Z
https://github.com/pydata/xarray/issues/2108#issuecomment-389196623

Thanks @shoyer. Your approach works better (one line) and is consistent with the shared xarray/pandas paradigm. Unfortunately, I can't spare the time to do the PR right now; I haven't contributed to xarray before, and it would require some time overhead. Maybe someone with more experience can oblige.
shoyer (MEMBER) · 2018-05-08T21:31:58Z
https://github.com/pydata/xarray/issues/2108#issuecomment-387549048

The pandas duplicated() method might be more convenient than using np.unique(); e.g., you could equivalently write: arr.sel(time=~arr.indexes['time'].duplicated())

I think we would be open to adding duplicated() to xarray, too, if you or someone else is interested in making a pull request.
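The one-liner above can be sketched in context as follows; the sample time stamps and values are invented for illustration:

```python
import numpy as np
import pandas as pd
import xarray as xr

# Hypothetical array with one repeated time stamp.
times = pd.to_datetime(["2000-01-01", "2000-01-02", "2000-01-02", "2000-01-03"])
arr = xr.DataArray(np.arange(4), coords={"time": times}, dims="time")

# pandas' Index.duplicated() marks every occurrence after the first as True,
# so the negated mask selects the first copy of each time.
mask = ~arr.indexes["time"].duplicated()
deduped = arr.sel(time=mask)
print(deduped["time"].values)
```

Unlike np.unique(), duplicated() does not sort, so the original coordinate order is preserved; it also takes keep="last" or keep=False to pick a different merge strategy.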
brey (NONE) · 2018-05-08T09:33:14Z
https://github.com/pydata/xarray/issues/2108#issuecomment-387343836

To partially answer my own issue, I came up with the following post-processing option:

  1. Get the indices of the first occurrence of each coordinate value: val, idx = np.unique(arr.time, return_index=True)

  2. Trim the dataset: arr = arr.isel(time=idx)

Maybe this can be integrated somehow...
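The two steps above can be put together as follows (a minimal sketch; the time stamps and values are invented for illustration):

```python
import numpy as np
import pandas as pd
import xarray as xr

# Hypothetical array with one repeated time stamp.
times = pd.to_datetime(["2000-01-01", "2000-01-02", "2000-01-02", "2000-01-03"])
arr = xr.DataArray(np.arange(4), coords={"time": times}, dims="time")

# 1. get the positional index of the first occurrence of each time value
val, idx = np.unique(arr.time, return_index=True)

# 2. trim the dataset to those positions
arr = arr.isel(time=idx)
```

One caveat: np.unique() returns indices in sorted order of the values, so this also reorders an unsorted time axis, whereas the duplicated() approach suggested later keeps the original order.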

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
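The schema and the issue index above can be exercised with a small self-contained sketch; the foreign-key REFERENCES clauses are dropped here since the users and issues tables are not reproduced, and the inserted row uses values from the first comment above:

```python
import sqlite3

# In-memory copy of the issue_comments table (REFERENCES clauses omitted
# because the referenced tables are not part of this example).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
""")
conn.execute(
    "INSERT INTO issue_comments (id, user, updated_at, author_association, body, issue) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    (1114349288, 5635139, "2022-05-01T22:10:48Z", "MEMBER",
     "Is this now fixed by drop_duplicates?", 320838184),
)

# The same shape of query the page runs: filter by issue, newest first.
rows = conn.execute(
    "SELECT id, author_association FROM issue_comments "
    "WHERE issue = ? ORDER BY updated_at DESC",
    (320838184,),
).fetchall()
print(rows)  # [(1114349288, 'MEMBER')]
```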
Powered by Datasette · Queries took 13.441ms · About: xarray-datasette