issue_comments
3 rows where issue = 1189140909 sorted by updated_at descending
issue: concat along dim with mix of scalar coordinate and array coordinates is not right (3 comments)
Comment 1089206433
- html_url: https://github.com/pydata/xarray/issues/6434#issuecomment-1089206433
- issue_url: https://api.github.com/repos/pydata/xarray/issues/6434
- node_id: IC_kwDOAMm_X85A6_ih
- user: benbovy (4160723)
- created_at: 2022-04-05T19:07:34Z · updated_at: 2022-04-05T19:07:34Z
- author_association: MEMBER
- reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- performed_via_github_app: (none)
- issue: concat along dim with mix of scalar coordinate and array coordinates is not right (1189140909)

body:

Ah yes 👍. Not sure why this case didn't meet the conditions for calling …
Comment 1086751973
- html_url: https://github.com/pydata/xarray/issues/6434#issuecomment-1086751973
- issue_url: https://api.github.com/repos/pydata/xarray/issues/6434
- node_id: IC_kwDOAMm_X85AxoTl
- user: dcherian (2448579)
- created_at: 2022-04-03T01:00:48Z · updated_at: 2022-04-03T02:02:34Z
- author_association: MEMBER
- reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- performed_via_github_app: (none)
- issue: concat along dim with mix of scalar coordinate and array coordinates is not right (1189140909)

body:

There's a typo in the first line; we need:

```
DatetimeIndex(['2013-01-01'], dtype='datetime64[ns]', freq=None)
```

The issue is that the … Alternatively, we could loop over datasets and call …
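The expected `DatetimeIndex` above points at where the dtype gets lost: calling `.item()` on a 0-d `datetime64[ns]` array returns a raw integer (nanoseconds since the epoch — the `1356998400000000000` that shows up in the concatenated index in the next comment), whereas handing pandas a one-element array of the original dtype produces the expected index. A standalone sketch of that difference (not xarray's actual code path; the variable names are illustrative):

```python
import numpy as np
import pandas as pd

scalar = np.array("2013-01-01", dtype="datetime64[ns]")  # 0-d array

# .item() on a datetime64[ns] array returns a plain int
# (nanoseconds since the epoch), so the dtype is lost:
value = scalar.item()
print(value)              # 1356998400000000000
print(pd.Index([value]))  # integer index, not a DatetimeIndex

# Keeping the value as a 1-element array of the original dtype
# lets pandas infer the right index type:
seq = np.atleast_1d(scalar)  # datetime64[ns] preserved
print(pd.Index(seq))
# DatetimeIndex(['2013-01-01'], dtype='datetime64[ns]', freq=None)
```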
Comment 1086704180
- html_url: https://github.com/pydata/xarray/issues/6434#issuecomment-1086704180
- issue_url: https://api.github.com/repos/pydata/xarray/issues/6434
- node_id: IC_kwDOAMm_X85Axco0
- user: benbovy (4160723)
- created_at: 2022-04-02T19:08:48Z · updated_at: 2022-04-02T19:08:48Z
- author_association: MEMBER
- reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- performed_via_github_app: (none)
- issue: concat along dim with mix of scalar coordinate and array coordinates is not right (1189140909)

body:

The first example works because there's no index. In the second example, a … The problem is when creating a …:

```python
array = da.isel(time=0).values
value = array.item()
seq = np.array([value], dtype=array.dtype)
pd.Index(seq, dtype=array.dtype)
# Float64Index([1.0], dtype='float64')
```

So in the example above you end up with different index types, which …:

```python
concat.indexes["time"]
# Index([1356998400000000000, 2013-01-01 06:00:00], dtype='object', name='time')

da.indexes["time"]
# DatetimeIndex(['2013-01-01 00:00:00', '2013-01-01 06:00:00'], dtype='datetime64[ns]', name='time', freq=None)

concat.indexes["time"].equals(da.indexes["time"])
# False
```

I'm not very satisfied with the current solution in concat, but I'm not sure what we should do here: …
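For context, a minimal sketch that reproduces the index mismatch described above, assuming the two-element 6-hourly time coordinate implied by the printed `DatetimeIndex`; `da` and `concat` mirror the names used in the comment, and the object-dtype result is what affected xarray versions (circa 2022.03) returned:

```python
import numpy as np
import pandas as pd
import xarray as xr

# Two timestamps 6 hours apart, matching the indexes printed above.
time = pd.date_range("2013-01-01", periods=2, freq="6H")
da = xr.DataArray(np.arange(2.0), coords={"time": time}, dims="time")

# Concatenate a scalar (0-d) time coordinate with a 1-d one.
concat = xr.concat([da.isel(time=0), da.isel(time=[1])], dim="time")

# On affected versions the result is an object-dtype index mixing a raw
# integer timestamp with a pd.Timestamp, so it no longer equals the
# original DatetimeIndex:
print(concat.indexes["time"])
print(concat.indexes["time"].equals(da.indexes["time"]))
```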
Table schema:

```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
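The listing on this page can be reproduced straight from the underlying SQLite file; a sketch using Python's standard-library `sqlite3`, assuming a hypothetical database file named `github.db` containing the schema above:

```python
import sqlite3

# Hypothetical filename; any SQLite database with the schema above works.
conn = sqlite3.connect("github.db")

# The query behind this page: comments on issue 1189140909,
# sorted by updated_at descending.
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, body
    FROM issue_comments
    WHERE issue = 1189140909
    ORDER BY updated_at DESC
    """
).fetchall()

for comment_id, user_id, created, updated, body in rows:
    print(comment_id, user_id, updated, body[:60])
```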