issue_comments

2 rows where issue = 822320976 and user = 43613877 sorted by updated_at descending

Comment 799047819

  • html_url: https://github.com/pydata/xarray/issues/4995#issuecomment-799047819
  • issue_url: https://api.github.com/repos/pydata/xarray/issues/4995
  • node_id: MDEyOklzc3VlQ29tbWVudDc5OTA0NzgxOQ==
  • user: observingClouds (43613877)
  • created_at: 2021-03-15T02:28:51Z
  • updated_at: 2021-03-15T02:28:51Z
  • author_association: CONTRIBUTOR
  • reactions: none (total_count 0)
  • performed_via_github_app: none
  • issue: KeyError when selecting "nearest" data with given tolerance (822320976)

Body:

Thanks @dcherian, this is doing the job. I'll close this issue, as there seems to be no need to implement this in the `sel` method.
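
For context, the failure named in the issue title can be sketched with current xarray: `.sel` accepts `method="nearest"` together with a `tolerance`, and raises a KeyError when no coordinate lies within that tolerance. A minimal sketch, reusing the coordinates and the requested label 40 from the comment below:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.array([1, 2, 3, 4, 5], dtype=int),
    dims=["lat"],
    coords={"lat": [10, 20, 30, 50, 60]},
)

# The nearest coordinates to 40 are 30 and 50, both at distance 10,
# which exceeds tolerance=5, so the lookup fails with a KeyError.
try:
    da.sel(lat=40, method="nearest", tolerance=5)
except KeyError as err:
    print("KeyError raised:", err)

# Without the tolerance, the same lookup succeeds and returns the
# value at the nearest coordinate point.
print(da.sel(lat=40, method="nearest").values)
```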
Comment 791019238

  • html_url: https://github.com/pydata/xarray/issues/4995#issuecomment-791019238
  • issue_url: https://api.github.com/repos/pydata/xarray/issues/4995
  • node_id: MDEyOklzc3VlQ29tbWVudDc5MTAxOTIzOA==
  • user: observingClouds (43613877)
  • created_at: 2021-03-04T23:10:11Z
  • updated_at: 2021-03-04T23:10:11Z
  • author_association: CONTRIBUTOR
  • reactions: none (total_count 0)
  • performed_via_github_app: none
  • issue: KeyError when selecting "nearest" data with given tolerance (822320976)

Body:

Introducing a fill_value seems like a good idea, so that the size of the output does not change compared to the intended selection. Choosing the original/requested coordinate as the label for the missing data point seems like a valid choice, because this position has been checked for valid data nearby without success. I would suggest that the fill_value be determined automatically from the _FillValue attribute first, then from the datatype, and only as a last resort require fill_value to be set explicitly.

However, the shortcoming I see in using a fill_value is that the indexing has to modify the data (insert e.g. -999) and also 'invent' a new coordinate point (here 40). This gets fairly complex when applied to a Dataset containing DataArrays of different types, e.g.

```python
import numpy as np
import xarray as xr

ds = xr.Dataset()
ds['data1'] = xr.DataArray(np.array([1, 2, 3, 4, 5], dtype=int), dims=["lat"], coords={'lat': [10, 20, 30, 50, 60]})
ds['data2'] = xr.DataArray(np.array([1, 2, 3, 4, 5], dtype=float), dims=["lat"], coords={'lat': [10, 20, 30, 50, 60]})
```

One `fill_value` might not fit all data arrays, be it because of the datatype or because of the actual data. E.g. `-999` might be a good `fill_value` for one DataArray but a valid data point in another one.
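
The mechanics described above can be illustrated with xarray's existing `reindex`, which already accepts `method`, `tolerance`, and `fill_value`. This is only a sketch of that related API, not the change proposed for `sel`: requesting the label 40 'invents' the coordinate point and fills it with the sentinel, and the same sentinel lands in every variable regardless of dtype.

```python
import numpy as np
import xarray as xr

ds = xr.Dataset()
ds["data1"] = xr.DataArray(np.array([1, 2, 3, 4, 5], dtype=int),
                           dims=["lat"], coords={"lat": [10, 20, 30, 50, 60]})
ds["data2"] = xr.DataArray(np.array([1, 2, 3, 4, 5], dtype=float),
                           dims=["lat"], coords={"lat": [10, 20, 30, 50, 60]})

# lat=10 matches exactly; lat=40 has no neighbour within tolerance=5,
# so it is kept as a new coordinate point and filled with fill_value.
out = ds.reindex(lat=[10, 40], method="nearest", tolerance=5, fill_value=-999)
print(out["data1"].values)
print(out["data2"].values)

# The single sentinel is applied to both variables: -999 may be safe
# for data1, while -999.0 could be a legitimate value in data2 --
# exactly the dtype/data ambiguity pointed out in the comment above.
```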

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
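
The row listing at the top of the page corresponds to a simple query against this schema. A sketch using Python's built-in sqlite3 module; the filename github.db is an assumption (Datasette serves an SQLite file of this shape):

```python
import sqlite3

# Hypothetical filename; any SQLite database with the schema above works.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT [id], [updated_at], [author_association], [body]
    FROM [issue_comments]
    WHERE [issue] = 822320976 AND [user] = 43613877
    ORDER BY [updated_at] DESC
    """
).fetchall()

for comment_id, updated_at, association, body in rows:
    print(comment_id, updated_at, association)
```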