issue_comments
3 rows where author_association = "MEMBER", issue = 597475005 and user = 14808389 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
612598462 | https://github.com/pydata/xarray/issues/3959#issuecomment-612598462 | https://api.github.com/repos/pydata/xarray/issues/3959 | MDEyOklzc3VlQ29tbWVudDYxMjU5ODQ2Mg== | keewis 14808389 | 2020-04-12T11:11:26Z | 2020-04-12T22:18:31Z | MEMBER |
Not really, I just thought the variables in the dataset were a way to uniquely identify its variant (i.e. do the validation of the dataset's structure). If you have different means to do so, of course you can use that instead. Re Edit: we'd still need to convince
I don't think so? There were a few discussions about subclassing, but I couldn't find anything about static type analysis. It's definitely worth having this discussion, either here (repurposing this issue) or in a new issue. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Extending Xarray for domain-specific toolkits 597475005 | |
612076605 | https://github.com/pydata/xarray/issues/3959#issuecomment-612076605 | https://api.github.com/repos/pydata/xarray/issues/3959 | MDEyOklzc3VlQ29tbWVudDYxMjA3NjYwNQ== | keewis 14808389 | 2020-04-10T15:23:08Z | 2020-04-10T15:56:08Z | MEMBER | you could emulate the availability of the accessors by checking your variables in the constructor of the accessor:

```python
dataset_types = {
    frozenset({"variable1", "variable2"}): "type1",
    frozenset({"variable2", "variable3"}): "type2",
    frozenset({"variable1", "variable3"}): "type3",
}

def _dataset_type(ds):
    data_vars = frozenset(ds.data_vars.keys())
    return dataset_types[data_vars]

@xr.register_dataset_accessor("type1")
class Type1Accessor:
    def __init__(self, ds):
        if _dataset_type(ds) != "type1":
            raise AttributeError("not a type1 dataset")
        self.dataset = ds
```

If you just wanted to use static code analysis, you could use something like:

```python
class Dataset1(DatasetType):
    longitude : Coordinate[ArrayType[Float64Type]]
    latitude : Coordinate[ArrayType[Float64Type]]

def function(ds : Dataset1):
    # ...
    return ds
```

and have the type checker validate the structure of the dataset. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Extending Xarray for domain-specific toolkits 597475005 | |
611997039 | https://github.com/pydata/xarray/issues/3959#issuecomment-611997039 | https://api.github.com/repos/pydata/xarray/issues/3959 | MDEyOklzc3VlQ29tbWVudDYxMTk5NzAzOQ== | keewis 14808389 | 2020-04-10T11:49:32Z | 2020-04-10T11:49:32Z | MEMBER | do you have any control on how the datasets are created? If so, you could provide a factory function (maybe pass in arrays via required kwargs?) that does the checks and describes the required dataset structure in its docstring.
This probably won't happen in the near future, though, since the custom dtypes for |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Extending Xarray for domain-specific toolkits 597475005 |
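The factory-function idea from the comment above can be sketched as follows; the function name, variable names, and the dict return value are all invented for illustration (with xarray installed it would return an `xr.Dataset` instead):

```python
def create_type1_dataset(*, variable1, variable2):
    """Build a "type1" dataset from its two required data variables.

    The required keyword arguments validate the structure up front, and
    this docstring is the single place documenting that structure.
    """
    for name, values in {"variable1": variable1, "variable2": variable2}.items():
        if len(values) == 0:
            raise ValueError(f"{name} must be non-empty")
    # With xarray available, this would instead be something like:
    #   return xr.Dataset({"variable1": ("x", variable1),
    #                      "variable2": ("x", variable2)})
    return {"variable1": list(variable1), "variable2": list(variable2)}
```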
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
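The query this page displays (member comments on one issue by one user, newest first) can be reproduced against that schema with Python's built-in `sqlite3`; a sketch using an in-memory database, with the `REFERENCES` clauses dropped since the `users` and `issues` tables are not shown here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
   [author_association] TEXT, [body] TEXT, [reactions] TEXT,
   [performed_via_github_app] TEXT, [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# The three rows shown above, reduced to the columns the query touches.
rows = [
    (612598462, 14808389, "2020-04-12T22:18:31Z", "MEMBER", 597475005),
    (612076605, 14808389, "2020-04-10T15:56:08Z", "MEMBER", 597475005),
    (611997039, 14808389, "2020-04-10T11:49:32Z", "MEMBER", 597475005),
]
conn.executemany(
    "INSERT INTO issue_comments (id, user, updated_at, author_association, issue)"
    " VALUES (?, ?, ?, ?, ?)",
    rows,
)

# ISO 8601 timestamps sort correctly as text, so ORDER BY works as-is.
ids = [
    row[0]
    for row in conn.execute(
        "SELECT id FROM issue_comments"
        " WHERE author_association = 'MEMBER' AND issue = ? AND user = ?"
        " ORDER BY updated_at DESC",
        (597475005, 14808389),
    )
]
print(ids)  # [612598462, 612076605, 611997039]
```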