issue_comments
4 rows where user = 8241481 sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1483958731 | https://github.com/pydata/xarray/issues/7549#issuecomment-1483958731 | https://api.github.com/repos/pydata/xarray/issues/7549 | IC_kwDOAMm_X85Yc2nL | Mikejmnez 8241481 | 2023-03-26T00:41:10Z | 2023-03-26T00:41:10Z | CONTRIBUTOR | Thanks everybody. Similar to @gewitterblitz and based on https://github.com/SciTools/iris/issues/5187, pinning libnetcdf to v4.8.1 did the trick | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | HDF5-DIAG warnings calling `open_mfdataset` with more than `file_cache_maxsize` datasets (hdf5 1.12.2) 1596115847 |
| 651530759 | https://github.com/pydata/xarray/pull/4003#issuecomment-651530759 | https://api.github.com/repos/pydata/xarray/issues/4003 | MDEyOklzc3VlQ29tbWVudDY1MTUzMDc1OQ== | Mikejmnez 8241481 | 2020-06-30T04:45:42Z | 2020-06-30T04:45:42Z | CONTRIBUTOR | @weiji14 @shoyer Thank you guys! Sorry it has taken me long to come back to this PR - I really meant to come back to this but I got stuck with another bigger PR that is actually part of my main research project. Anyways, much appreciated for the help, cheers!! | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | xarray.open_mzar: open multiple zarr files (in parallel) 606683601 |
| 620943840 | https://github.com/pydata/xarray/pull/4003#issuecomment-620943840 | https://api.github.com/repos/pydata/xarray/issues/4003 | MDEyOklzc3VlQ29tbWVudDYyMDk0Mzg0MA== | Mikejmnez 8241481 | 2020-04-29T01:43:43Z | 2020-04-29T01:44:46Z | CONTRIBUTOR | Following your advise, NOTE: Additional feature: As a result of these changes, This is different from | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | xarray.open_mzar: open multiple zarr files (in parallel) 606683601 |
| 620133764 | https://github.com/pydata/xarray/pull/4003#issuecomment-620133764 | https://api.github.com/repos/pydata/xarray/issues/4003 | MDEyOklzc3VlQ29tbWVudDYyMDEzMzc2NA== | Mikejmnez 8241481 | 2020-04-27T17:45:25Z | 2020-04-27T17:45:25Z | CONTRIBUTOR | I like this approach (add capability to open_mfdataset to open multiple zarr files), as it is the easiest and cleanest. I considered it, and I am glad this is coming up because I wanted to know different opinions. Two things influenced my decision to have For netcdf-files ( zarr files ( I am extremely interested what people think about | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | xarray.open_mzar: open multiple zarr files (in parallel) 606683601 |
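The `reactions` column above is stored as JSON-encoded text rather than as separate integer columns. A minimal sketch of reading one of these values back out (the raw string is copied from the first row; the filtering step is just an illustration, not part of the export tooling):

```python
import json

# Raw value of the "reactions" TEXT column for comment 1483958731.
raw = '{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }'

reactions = json.loads(raw)

# Keep only the reaction types that actually occurred on the comment.
nonzero = {k: v for k, v in reactions.items() if k != "total_count" and v > 0}
print(nonzero)  # {'+1': 1}
```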
```sql
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
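The page's row listing ("rows where user = 8241481 sorted by updated_at descending") corresponds to a straightforward query against this schema, served by the `[user]` index. A sketch using an in-memory SQLite database with the schema above and one sample row (the real data lives in the github-to-sqlite export; only the inserted row is taken from the table on this page):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [users] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issues] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issue_comments] (
    [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
    [node_id] TEXT, [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT, [updated_at] TEXT, [author_association] TEXT,
    [body] TEXT, [reactions] TEXT, [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# One sample row from the table above.
conn.execute(
    "INSERT INTO issue_comments (id, user, updated_at) VALUES (?, ?, ?)",
    (1483958731, 8241481, "2023-03-26T00:41:10Z"),
)

# The query behind this page's row listing.
rows = conn.execute(
    "SELECT id, updated_at FROM issue_comments "
    "WHERE user = ? ORDER BY updated_at DESC",
    (8241481,),
).fetchall()
print(rows)  # [(1483958731, '2023-03-26T00:41:10Z')]
```

Because `updated_at` is stored as ISO 8601 text, lexicographic `ORDER BY` sorts chronologically.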