issue_comments
3 rows where author_association = "CONTRIBUTOR", issue = 1563270549 and user = 5179430 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
1458453929 | https://github.com/pydata/xarray/pull/7494#issuecomment-1458453929 | https://api.github.com/repos/pydata/xarray/issues/7494 | IC_kwDOAMm_X85W7j2p | agoodm 5179430 | 2023-03-07T16:22:21Z | 2023-03-07T16:22:21Z | CONTRIBUTOR | Thanks @Illviljan and @dcherian for helping to see this through. | { "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Update contains_cftime_datetimes to avoid loading entire variable array 1563270549 |
1411206291 | https://github.com/pydata/xarray/pull/7494#issuecomment-1411206291 | https://api.github.com/repos/pydata/xarray/issues/7494 | IC_kwDOAMm_X85UHUyT | agoodm 5179430 | 2023-01-31T23:17:38Z | 2023-01-31T23:17:38Z | CONTRIBUTOR | @Illviljan I gave your update a quick test, it seems to work well enough and still maintains the performance improvement. It looks fine to me though I guess it looks like you still need to fix this failing mypy stuff now? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Update contains_cftime_datetimes to avoid loading entire variable array 1563270549 |
1410253782 | https://github.com/pydata/xarray/pull/7494#issuecomment-1410253782 | https://api.github.com/repos/pydata/xarray/issues/7494 | IC_kwDOAMm_X85UDsPW | agoodm 5179430 | 2023-01-31T12:22:02Z | 2023-01-31T12:26:37Z | CONTRIBUTOR | This isn't actually the line of code that's causing the performance bottleneck, it's the access to … (reproduction script below the table) | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Update contains_cftime_datetimes to avoid loading entire variable array 1563270549 |

Reproduction script from comment 1410253782:

```python
import numpy as np
import xarray as xr

str_array = np.arange(100000000).astype(str)
ds = xr.DataArray(dims=('x',), data=str_array).to_dataset(name='str_array')
ds = ds.chunk(x=10000)
ds['str_array'] = ds.str_array.astype('O')  # Needs to actually be object dtype to show the problem
ds.to_zarr('str_array.zarr')

%time xr.open_zarr('str_array.zarr')
```
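The PR title in the issue column only names the goal. As a minimal sketch of the idea, not the actual patch merged in pydata/xarray#7494, a check like contains_cftime_datetimes can avoid materializing a lazy variable by sampling a single element; the function name and structure here are illustrative assumptions:

```python
import numpy as np

def first_element_is_cftime(array) -> bool:
    """Sketch: decide whether an object-dtype array holds cftime datetimes
    by inspecting one element instead of coercing the whole array with
    np.asarray(array). Illustrative only, not the xarray implementation."""
    try:
        import cftime  # optional dependency; without it nothing is a cftime object
    except ImportError:
        return False
    if array.dtype != np.dtype("O") or array.size == 0:
        return False
    # Index a single element; for chunked/lazy backends this touches one
    # chunk rather than loading the entire variable into memory.
    sample = array[(0,) * array.ndim]
    return isinstance(np.asarray(sample).item(), cftime.datetime)
```

Against the reproduction script above, a check of this shape would pull one 10000-element chunk rather than all 100,000,000 strings, which is consistent with the open_zarr slowdown the comment describes.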
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
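For reference, the filtered view at the top of this page (author_association = "CONTRIBUTOR", issue = 1563270549, user = 5179430, sorted by updated_at descending) maps directly onto this schema. A minimal sketch, assuming a local SQLite copy of the database named github.db (hypothetical filename):

```python
import sqlite3

# Hypothetical filename: a local github-to-sqlite export of this database.
conn = sqlite3.connect("github.db")

# The same filters the page applies, ordered by updated_at descending.
query = """
    SELECT id, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE author_association = ?
      AND issue = ?
      AND [user] = ?
    ORDER BY updated_at DESC
"""
for row in conn.execute(query, ("CONTRIBUTOR", 1563270549, 5179430)):
    print(row)
```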