issue_comments
2 rows where author_association = "NONE", issue = 241290234 and user = 941907 sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 430946620 | https://github.com/pydata/xarray/issues/1471#issuecomment-430946620 | https://api.github.com/repos/pydata/xarray/issues/1471 | MDEyOklzc3VlQ29tbWVudDQzMDk0NjYyMA== | smartass101 941907 | 2018-10-18T09:48:20Z | 2018-10-18T09:48:20Z | NONE | I indeed often resort to using a | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | sharing dimensions across dataarrays in a dataset 241290234 |
| 430324391 | https://github.com/pydata/xarray/issues/1471#issuecomment-430324391 | https://api.github.com/repos/pydata/xarray/issues/1471 | MDEyOklzc3VlQ29tbWVudDQzMDMyNDM5MQ== | smartass101 941907 | 2018-10-16T17:24:42Z | 2018-10-16T17:46:17Z | NONE | I've hit this design limitation quite often as well, with several use-cases, both in experiment and simulation. It detracts from xarray's power of conveniently and transparently handling coordinate metadata. From the Why xarray? page: Adding effectively dummy dimensions or coordinates is essentially what this alignment design is forcing us to do. A possible solution would be something like having (some) coordinate arrays in an (Unaligned)Dataset be a "reducible" MultiIndex (it would reduce to an Index for each DataArray). A workaround can be using MultiIndex coordinates directly, but then alignment cannot be done easily, as levels do not behave as real dimensions. Use-case examples: 1. coordinate "metadata": I often have measurements on related axes, but also with additional coordinates (different positions, etc.). Consider: What I would like to get (pseudocode): While it is possible to 2. unaligned time domains: This is a large problem especially when different time bases are involved. A difference in sampling intervals will blow up the storage with a huge number of NaN values, which of course greatly complicates further calculations, e.g. filtering in the time domain. Or just non-overlapping time intervals will require at least double the storage area. I often find myself resorting rather to | {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | sharing dimensions across dataarrays in a dataset 241290234 |
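The two use-cases in the second comment lend themselves to short xarray sketches. The author's original code examples were not preserved in this export, so what follows is a minimal illustration under assumed names and values, not the author's code. First the "coordinate metadata" case, where differing per-array coordinates force the dummy dimension the comment complains about:

```python
import numpy as np
import xarray as xr

t = np.arange(100)  # shared time axis (sample indices; values are made up)

# Two probe signals on the same time base, each tagged with the (different)
# position it was recorded at via a scalar coordinate.
a = xr.DataArray(np.random.rand(t.size), dims="time",
                 coords={"time": t, "position": 0.1}, name="probe_a")
b = xr.DataArray(np.random.rand(t.size), dims="time",
                 coords={"time": t, "position": 0.2}, name="probe_b")

# Merging them directly fails: the scalar "position" coordinates conflict.
# xr.merge([a, b])  # raises xarray.MergeError

# The workable route is the "dummy dimension" the comment describes:
# promote "position" to a length-1 dimension on each array, then merge.
ds = xr.merge([a.expand_dims("position"), b.expand_dims("position")])
print(ds["probe_a"].sizes)  # {'position': 2, 'time': 100} -- half of it NaN
```

The second use-case, unaligned time bases, can be reproduced the same way. The sampling rates below are invented, and the dict at the end stands in for the kind of plain-Python fallback that both truncated "I often resort to..." remarks appear to allude to:

```python
# (continuing from the previous sketch: numpy as np, xarray as xr)
# A 1 kHz channel and a 100 Hz channel over the same window, the slow grid
# being an exact subset of the fast one (integer-millisecond time stamps).
t_fast = np.arange(0, 1000)        # ms, 1000 samples
t_slow = np.arange(0, 1000, 10)    # ms, 100 samples

fast = xr.DataArray(np.random.rand(t_fast.size), dims="time",
                    coords={"time": t_fast}, name="fast_signal")
slow = xr.DataArray(np.random.rand(t_slow.size), dims="time",
                    coords={"time": t_slow}, name="slow_signal")

# One Dataset means one shared "time" index: the union of both grids, so the
# slow channel is padded with NaN at every fast-only sample.
ds = xr.merge([fast, slow])
print(ds.sizes)                                   # {'time': 1000}
print(int(ds["slow_signal"].isnull().sum()))      # 900 fill values

# Fallback without forced alignment: keep the arrays in a plain dict and give
# up Dataset-level operations.
signals = {"fast": fast, "slow": slow}
```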
CREATE TABLE [issue_comments] (
[html_url] TEXT,
[issue_url] TEXT,
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[created_at] TEXT,
[updated_at] TEXT,
[author_association] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
ON [issue_comments] ([user]);
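Given that schema, the filter shown at the top of this page can be reproduced directly with Python's sqlite3 module. A minimal sketch, assuming the table lives in a SQLite file here called github.db (the file name is not specified anywhere on this page):

```python
import sqlite3

conn = sqlite3.connect("github.db")  # assumed file name
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = 'NONE'
      AND issue = 241290234   -- column indexed by idx_issue_comments_issue
      AND user = 941907       -- column indexed by idx_issue_comments_user
    ORDER BY updated_at DESC
    """
).fetchall()

for comment_id, created_at, updated_at, body in rows:
    print(comment_id, updated_at, body[:60])
```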