issue_comments
1 row where author_association = "CONTRIBUTOR", issue = 344614881 and user = 17162724 sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|
| 778554202 | https://github.com/pydata/xarray/issues/2313#issuecomment-778554202 | https://api.github.com/repos/pydata/xarray/issues/2313 | MDEyOklzc3VlQ29tbWVudDc3ODU1NDIwMg== | raybellwaves 17162724 | 2021-02-13T03:20:58Z | 2021-02-13T03:20:58Z | CONTRIBUTOR | | Example on using `preprocess` with `mfdataset` 344614881 |

body:

Edit: copied and pasted from a duplicate issue I opened. Closing that and moving the conversation here.

@jhamman's SO answer, circa 2018, helped me this week: https://stackoverflow.com/a/51714004/6046019

I wonder if it's worth (not sure where) providing an example of how to use `preprocess` with `open_mfdataset`. Add an Examples entry to the docstring? (http://xarray.pydata.org/en/latest/generated/xarray.open_mfdataset.html / https://github.com/pydata/xarray/blob/5296ed18272a856d478fbbb3d3253205508d1c2d/xarray/backends/api.py#L895)

While not a small example (as the remote files are large), this is how I used it:

```
import xarray as xr
import s3fs


def preprocess(ds):
    return ds.expand_dims('time')


fs = s3fs.S3FileSystem(anon=True)

f1 = fs.open('s3://fmi-opendata-rcrhirlam-surface-grib/2021/02/03/00/numerical-hirlam74-forecast-MaximumWind-20210203T000000Z.grb2')
f2 = fs.open('s3://fmi-opendata-rcrhirlam-surface-grib/2021/02/03/06/numerical-hirlam74-forecast-MaximumWind-20210203T060000Z.grb2')

ds = xr.open_mfdataset([f1, f2], engine="cfgrib", preprocess=preprocess, parallel=True)
```

with one file looking like:

A smaller example could be (WIP; note I was hoping `ds` would concat along `t`, but it doesn't do what I expect):

```
import numpy as np
import xarray as xr

f1 = xr.DataArray(np.arange(2), coords=[np.arange(2)], dims=["a"], name="f1")
f1 = f1.assign_coords(t=0)
f1.to_dataset().to_zarr("f1.zarr")
# What's the best way to store small files to open again with open_mfdataset?
# csv via xarray objects? Can you use open_mfdataset on pkl objects?

f2 = xr.DataArray(np.arange(2), coords=[np.arange(2)], dims=["a"], name="f2")
f2 = f2.assign_coords(t=1)
f2.to_dataset().to_zarr("f2.zarr")


# Concat along t
def preprocess(ds):
    return ds.expand_dims('t')


ds = xr.open_mfdataset(["f1.zarr", "f2.zarr"], engine="zarr", concat_dim="t", preprocess=preprocess)
```
reactions:

{
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
}
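The smaller zarr example in the comment above writes two stores holding differently named variables (`f1` and `f2`), which is one likely reason nothing ends up concatenated along `t`; recent xarray releases also expect `combine="nested"` whenever `concat_dim` is passed to `open_mfdataset`. Below is a minimal sketch, not taken from the comment, of what a working variant might look like: the store names `t0.zarr`/`t1.zarr`, the variable name `var`, and the use of `combine="nested"` are assumptions, and it presumes a recent xarray with the zarr backend and dask installed.

```python
import numpy as np
import xarray as xr


def make_store(path, t):
    # One small dataset per "file", stamped with a scalar t coordinate.
    da = xr.DataArray(np.arange(2), coords=[np.arange(2)], dims=["a"], name="var")
    da.assign_coords(t=t).to_dataset().to_zarr(path, mode="w")


make_store("t0.zarr", 0)
make_store("t1.zarr", 1)


def preprocess(ds):
    # Promote the scalar t coordinate to a length-1 dimension so the
    # per-store datasets can be concatenated along it.
    return ds.expand_dims("t")


ds = xr.open_mfdataset(
    ["t0.zarr", "t1.zarr"],
    engine="zarr",
    preprocess=preprocess,
    combine="nested",   # concat_dim only applies to the nested combine
    concat_dim="t",
)
print(ds)  # "var" now has dims ("t", "a") with t = [0, 1]
```

Because `preprocess` turns the scalar `t` stamp into a length-1 dimension on each dataset before combining, the nested concatenation yields a single `var` variable spanning `t = [0, 1]`.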
CREATE TABLE [issue_comments] (
[html_url] TEXT,
[issue_url] TEXT,
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[created_at] TEXT,
[updated_at] TEXT,
[author_association] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
ON [issue_comments] ([user]);
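The schema and indexes above support the row filter shown at the top of the page (author_association, issue, and user, ordered by updated_at descending). As a hedged illustration, here is a small sketch using Python's standard sqlite3 module that reproduces that query; the database filename `github.db` is an assumed placeholder and is not named anywhere on this page.

```python
import sqlite3

conn = sqlite3.connect("github.db")  # assumed filename for the exported database
conn.row_factory = sqlite3.Row

rows = conn.execute(
    """
    SELECT [id], [user], [created_at], [updated_at], [author_association], [body]
    FROM [issue_comments]
    WHERE [author_association] = ?
      AND [issue] = ?   -- can be served by idx_issue_comments_issue
      AND [user] = ?    -- can be served by idx_issue_comments_user
    ORDER BY [updated_at] DESC
    """,
    ("CONTRIBUTOR", 344614881, 17162724),
).fetchall()

for row in rows:
    print(row["id"], row["updated_at"], row["body"][:60])
```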