issue_comments


2 rows where issue = 771382653 and user = 14808389 sorted by updated_at descending

keewis (MEMBER) · 2020-12-19T16:58:20Z · https://github.com/pydata/xarray/issues/4714#issuecomment-748498256

> I think `reindex` would need to be changed

that's true, I only tried the special case where the data that would be used to do the forward fill is included in the result.

> I guess this works but it's a bit cumbersome

yeah, `to_dataset` is probably not the right tool for pointwise indexing.

> it does not fail if one of the sensors in the query list is missing

if I understand correctly, you would like to index with arbitrary values for `time`, but would like an error for missing values of `sensor`. Unfortunately, I don't think that is possible using a single call to `sel`. Instead, you could set the `fill_value` parameter of `reindex` to some other value (for example, `-np.inf`) and then drop these values after the pointwise indexing.
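The `fill_value`-then-drop workaround above could be sketched as follows, using toy data (sensor and time values here are made up for illustration):

```python
import numpy as np
import xarray as xr

# Toy data: 2 sensors x 3 times.
arr = xr.DataArray(
    [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],
    coords={"sensor": ["A", "B"], "time": [0, 10, 20]},
    dims=("sensor", "time"),
)

# reindex fills the missing sensor "C" with a sentinel instead of raising.
expanded = arr.reindex(sensor=["A", "B", "C"], fill_value=-np.inf)

# Pointwise (vectorized) indexing: one value per query point.
points = expanded.sel(
    sensor=xr.DataArray(["A", "C"], dims="points"),
    time=xr.DataArray([10, 20], dims="points"),
)

# Drop the sentinel entries that came from missing sensors.
result = points.where(points != -np.inf, drop=True)
```

Here the query for sensor `"C"` yields the sentinel rather than an error, and the final `where(..., drop=True)` discards it.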

Issue: Allow sel's method and tolerance to vary per-dimension (771382653)
keewis (MEMBER) · 2020-12-19T14:41:47Z · https://github.com/pydata/xarray/issues/4714#issuecomment-748483357

`reindex` does not have to be changed since we can just call e.g. `ffill` with the `dim` parameter for this to work:

```python
arr.reindex(...).ffill(dim="dim")
```

This really depends on how you intend to use the result of the indexing. For example, if you don't really need the big matrix, you could just convert the `DataArray` to a `Dataset` where the `sensor` dimension is the names of the variables (using `to_dataset(dim="sensor")`, or construct it that way). If you do need the matrix, this might be slightly better (you still end up allocating a `T * (S + n)` array):

```python
arr.reindex(sensor=["A", "B", "C"]).sel({"sensor": ..., "time": ...}, method="ffill")
```

but if you really care about the memory allocated at once, you might be better off using dask:

```python
arr.chunk({"time": 100}).reindex(...).sel(...)
```
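A runnable sketch of the reindex-then-`sel` pattern, again on toy data (the sensor names and timestamps are invented for illustration): `reindex` handles the `sensor` labels exactly (missing ones become NaN), while `sel(..., method="ffill")` snaps arbitrary `time` labels back to the most recent observation:

```python
import numpy as np
import xarray as xr

# Toy data: 2 sensors x 2 times.
arr = xr.DataArray(
    [[1.0, 2.0], [3.0, 4.0]],
    coords={"sensor": ["A", "B"], "time": [0, 10]},
    dims=("sensor", "time"),
)

# Expand along sensor first: the missing sensor "C" becomes NaN.
expanded = arr.reindex(sensor=["A", "B", "C"])

# Then snap arbitrary timestamps to the most recent observation:
# time=7 has no exact match, so it forward-fills from time=0.
snapped = expanded.sel(time=[0, 7, 10], method="ffill")
```

This is the behaviour the issue asks to combine in one call: exact matching along one dimension, `ffill` matching along another.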

If all of that is not an option, I guess we might be able to add a `method_kwargs` parameter (not sure if there is a better option, though).
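For completeness, the `to_dataset(dim="sensor")` route mentioned above could look like this on the same kind of toy data (values are invented for illustration):

```python
import xarray as xr

# Toy data: 2 sensors x 2 times.
arr = xr.DataArray(
    [[1.0, 2.0], [3.0, 4.0]],
    coords={"sensor": ["A", "B"], "time": [0, 10]},
    dims=("sensor", "time"),
)

# One variable per sensor instead of one wide matrix; selecting a
# sensor that does not exist now raises a KeyError as desired.
ds = arr.to_dataset(dim="sensor")
```

With this layout, a missing sensor is simply a missing variable, so `ds["C"]` fails loudly instead of silently filling.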


```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
```
Powered by Datasette · About: xarray-datasette