issue_comments
6 rows where author_association = "MEMBER", issue = 1402002645 and user = 14808389 sorted by updated_at descending
id: 1272560073
user: keewis 14808389
author_association: MEMBER
created_at: 2022-10-09T14:56:28Z
updated_at: 2022-10-09T14:57:44Z
html_url: https://github.com/pydata/xarray/issues/7146#issuecomment-1272560073
issue_url: https://api.github.com/repos/pydata/xarray/issues/7146
node_id: IC_kwDOAMm_X85L2bnJ
issue: Segfault writing large netcdf files to s3fs (1402002645)
reactions: +1: 2 (total 2)
body:
Since we have eliminated

id: 1272555653
user: keewis 14808389
author_association: MEMBER
created_at: 2022-10-09T14:36:13Z
updated_at: 2022-10-09T14:36:13Z
html_url: https://github.com/pydata/xarray/issues/7146#issuecomment-1272555653
issue_url: https://api.github.com/repos/pydata/xarray/issues/7146
node_id: IC_kwDOAMm_X85L2aiF
issue: Segfault writing large netcdf files to s3fs (1402002645)
reactions: none
body:
great, good to know. Can you try this:
```python
import h5py

N_TIMES = 48

with h5py.File("test.nc", mode="w") as f:
    time = f.create_dataset("time", (N_TIMES,), dtype="i")
    time[:] = 0
```

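The snippet above presumably checks whether a plain h5py write (no xarray involved) also fails; a speculative sketch of the same write pointed at the s3fs-backed mount used in the trimmed reproducer further down, where the path and filename are assumptions rather than part of the original comment:

```python
# Speculative variant of the h5py snippet above, pointed at the s3fs-mounted
# path from the trimmed reproducer (path and filename are assumptions).
import h5py

N_TIMES = 48

with h5py.File("/my_s3_fs/test_h5py.nc", mode="w") as f:
    time = f.create_dataset("time", (N_TIMES,), dtype="i")
    time[:] = 0  # all-zero integer data, no datetime encoding involved
```
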
id: 1272550986
user: keewis 14808389
author_association: MEMBER
created_at: 2022-10-09T14:09:44Z
updated_at: 2022-10-09T14:09:44Z
html_url: https://github.com/pydata/xarray/issues/7146#issuecomment-1272550986
issue_url: https://api.github.com/repos/pydata/xarray/issues/7146
node_id: IC_kwDOAMm_X85L2ZZK
issue: Segfault writing large netcdf files to s3fs (1402002645)
reactions: none
body:
okay, then does changing the dtype do anything? I.e. does this only happen with

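"Changing the dtype" could mean swapping the datetime64 time values of the trimmed reproducer (last comment below) for a plain numeric dtype; a speculative sketch, reusing the names and path from that reproducer (the integer dtype is an assumption):

```python
# Speculative dtype variation of the trimmed reproducer: integer time values
# instead of datetime64, to see whether the failure is tied to datetime encoding.
import numpy as np
import xarray as xr

N_TIMES = 48
time_vals = np.arange(N_TIMES, dtype="int64")  # assumption: any non-datetime dtype

ds = xr.Dataset({"time": ("T", time_vals)})
ds.to_netcdf(path="/my_s3_fs/test_netcdf.nc", format="NETCDF4", mode="w")
```
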
id: 1272542780
user: keewis 14808389
author_association: MEMBER
created_at: 2022-10-09T13:26:25Z
updated_at: 2022-10-09T13:26:25Z
html_url: https://github.com/pydata/xarray/issues/7146#issuecomment-1272542780
issue_url: https://api.github.com/repos/pydata/xarray/issues/7146
node_id: IC_kwDOAMm_X85L2XY8
issue: Segfault writing large netcdf files to s3fs (1402002645)
reactions: none
body:
with this:
Now I'd probably check if it's just the size that makes it fail (i.e. remove

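The sentence above is cut off in the export; if "check if it's just the size" means rerunning the trimmed reproducer at larger sizes, a speculative sketch could look like this (the size steps are invented):

```python
# Speculative size sweep of the trimmed reproducer: grow N_TIMES until the
# write fails, to see whether the problem is size-dependent.
import pandas as pd
import xarray as xr

for n_times in (48, 480, 4800, 48000):  # step sizes invented for illustration
    time_vals = pd.date_range("2022-10-06", freq="20 min", periods=n_times)
    ds = xr.Dataset({"time": ("T", time_vals)})
    ds.to_netcdf(path="/my_s3_fs/test_netcdf.nc", format="NETCDF4", mode="w")
    print(f"N_TIMES={n_times}: write succeeded")
```
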
id: 1272539394
user: keewis 14808389
author_association: MEMBER
created_at: 2022-10-09T13:10:12Z
updated_at: 2022-10-09T13:10:25Z
html_url: https://github.com/pydata/xarray/issues/7146#issuecomment-1272539394
issue_url: https://api.github.com/repos/pydata/xarray/issues/7146
node_id: IC_kwDOAMm_X85L2WkC
issue: Segfault writing large netcdf files to s3fs (1402002645)
reactions: none
body:
which ones fail if you add the 3D variable?

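"The 3D variable" refers to the original report, which is not shown on this page; a hypothetical version of the trimmed reproducer with a 3D variable added might look like this (dimension names and sizes are invented):

```python
# Hypothetical reproducer with an added 3D variable; dims and shape are invented.
import numpy as np
import pandas as pd
import xarray as xr

N_TIMES = 48
time_vals = pd.date_range("2022-10-06", freq="20 min", periods=N_TIMES)

ds = xr.Dataset(
    {
        "time": ("T", time_vals),
        "data": (("T", "y", "x"), np.zeros((N_TIMES, 100, 100), dtype="float32")),
    }
)
ds.to_netcdf(path="/my_s3_fs/test_netcdf.nc", format="NETCDF4", mode="w")
```
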
id: 1272535683
user: keewis 14808389
author_association: MEMBER
created_at: 2022-10-09T12:48:51Z
updated_at: 2022-10-09T12:49:28Z
html_url: https://github.com/pydata/xarray/issues/7146#issuecomment-1272535683
issue_url: https://api.github.com/repos/pydata/xarray/issues/7146
node_id: IC_kwDOAMm_X85L2VqD
issue: Segfault writing large netcdf files to s3fs (1402002645)
reactions: none
body:
if this crashes with both
As for the MCVE: I wonder if we can trim it a bit. Can you reproduce with
```python
import xarray as xr
import pandas as pd

N_TIMES = 48
time_vals = pd.date_range("2022-10-06", freq="20 min", periods=N_TIMES)

ds = xr.Dataset({"time": ("T", time_vals)})
ds.to_netcdf(path="/my_s3_fs/test_netcdf.nc", format="NETCDF4", mode="w")
```

CREATE TABLE [issue_comments] (
[html_url] TEXT,
[issue_url] TEXT,
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[created_at] TEXT,
[updated_at] TEXT,
[author_association] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
ON [issue_comments] ([user]);
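The filtered listing at the top of this page corresponds to a query along these lines (an illustrative sketch against the schema above, not part of the original export):

```sql
-- MEMBER comments by user 14808389 on issue 1402002645, newest update first.
SELECT id, created_at, updated_at, body, reactions
FROM issue_comments
WHERE author_association = 'MEMBER'
  AND issue = 1402002645
  AND "user" = 14808389
ORDER BY updated_at DESC;
```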