issue_comments
5 rows where author_association = "NONE" and user = 7237617 sorted by updated_at descending
Columns: id, html_url, issue_url, node_id, user, created_at, updated_at, author_association, body, reactions, performed_via_github_app, issue

id: 968176008
html_url: https://github.com/pydata/xarray/issues/5878#issuecomment-968176008
issue_url: https://api.github.com/repos/pydata/xarray/issues/5878
node_id: IC_kwDOAMm_X845tTGI
user: porterdf (7237617)
created_at: 2021-11-13T23:43:17Z
updated_at: 2021-11-13T23:44:27Z
author_association: NONE
body: Update: my local notebook accessing the public bucket does see the appended zarr store exactly as expected, while the 2i2c-hosted notebook still does not (it has been well over 3600s). Also, I do as @jkingslake does above and set the […]
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: problem appending to zarr on GCS when using json token (1030811490)

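The body above is cut off by the table view; it appears to refer to gcsfs's client-side listings cache. A minimal sketch of disabling that cache when checking whether an append has become visible (the store path is a placeholder, not taken from the thread):

```
import gcsfs
import xarray as xr

# cache_timeout=0 disables gcsfs's client-side listings cache, so newly
# written objects show up immediately instead of after the cache expires.
fs = gcsfs.GCSFileSystem(project='ldeo-glaciology', cache_timeout=0)

# On an already-constructed filesystem, cached listings can also be dropped.
fs.invalidate_cache()

# Placeholder store path; re-open and inspect the dimensions after the append.
store = fs.get_mapper('gs://ldeo-glaciology/example.zarr')
ds = xr.open_zarr(store)
print(ds.sizes)
```
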
id: 967408017
html_url: https://github.com/pydata/xarray/issues/5878#issuecomment-967408017
issue_url: https://api.github.com/repos/pydata/xarray/issues/5878
node_id: IC_kwDOAMm_X845qXmR
user: porterdf (7237617)
created_at: 2021-11-12T19:40:46Z
updated_at: 2021-11-13T23:25:53Z
author_association: NONE
body: Ignorant question: is this cache relevant to the client (Jupyter) side or the server (GCS) side? It has been well over 3600s and I'm still not seeing the appended zarr when reading it back in with xarray.
I tried to do this last night but did not have permission myself. Perhaps @jkingslake does?
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: problem appending to zarr on GCS when using json token (1030811490)

id: 967340995
html_url: https://github.com/pydata/xarray/issues/5878#issuecomment-967340995
issue_url: https://api.github.com/repos/pydata/xarray/issues/5878
node_id: IC_kwDOAMm_X845qHPD
user: porterdf (7237617)
created_at: 2021-11-12T18:52:01Z
updated_at: 2021-11-12T18:58:52Z
author_association: NONE
body: Thanks for pointing out this cache feature @rabernat. I had no idea - it makes sense in general, but it slows down testing if not known about! Anyway, for my case, when appending the second Zarr store to the first, the Zarr's size (using […]
In my instance, there is no error, only this returned: […]
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: problem appending to zarr on GCS when using json token (1030811490)

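The code this comment quotes is truncated. Purely as an illustration of the append-and-verify step it describes (the store paths and the 'Time' append dimension are assumptions, not details from the thread):

```
import xarray as xr
import gcsfs

fs = gcsfs.GCSFileSystem(project='ldeo-glaciology', cache_timeout=0)
first = fs.get_mapper('gs://ldeo-glaciology/first.zarr')    # placeholder store
second = fs.get_mapper('gs://ldeo-glaciology/second.zarr')  # placeholder store

# Append the second store's data to the first along the record dimension.
ds_second = xr.open_zarr(second)
ds_second.to_zarr(first, mode='a', append_dim='Time')

# Re-open the first store: both its 'Time' length and the bytes reported by
# the bucket should have grown if the append actually landed.
ds_check = xr.open_zarr(first)
print(ds_check.sizes['Time'])
print(fs.du('ldeo-glaciology/first.zarr'))
```
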
id: 828683287
html_url: https://github.com/pydata/xarray/issues/5023#issuecomment-828683287
issue_url: https://api.github.com/repos/pydata/xarray/issues/5023
node_id: MDEyOklzc3VlQ29tbWVudDgyODY4MzI4Nw==
user: porterdf (7237617)
created_at: 2021-04-28T18:30:46Z
updated_at: 2021-04-28T18:30:46Z
author_association: NONE
body:
Thanks @dcherian
```
ValueError: Could not find any dimension coordinates to use to order the datasets for concatenation
```
So it doesn't work, but perhaps that's not surprising given that 'XTIME' is a coordinate while 'Time' is the dimension (one of WRF's quirks related to staggered grids and moving nests).
```
Coordinates:
    XLAT     (Time, south_north, west_east)       float32 dask.array<chunksize=(8, 1035, 675), meta=np.ndarray>
    XLONG    (Time, south_north, west_east)       float32 dask.array<chunksize=(8, 1035, 675), meta=np.ndarray>
    XTIME    (Time)                               datetime64[ns] dask.array<chunksize=(8,), meta=np.ndarray>
    XLAT_U   (Time, south_north, west_east_stag)  float32 dask.array<chunksize=(8, 1035, 676), meta=np.ndarray>
    XLONG_U  (Time, south_north, west_east_stag)  float32 dask.array<chunksize=(8, 1035, 676), meta=np.ndarray>
    XLAT_V   (Time, south_north_stag, west_east)  float32 dask.array<chunksize=(8, 1036, 675), meta=np.ndarray>
    XLONG_V  (Time, south_north_stag, west_east)  float32 dask.array<chunksize=(8, 1036, 675), meta=np.ndarray>
```
As such, I'm following the documentation to add a preprocessor. Thanks for everyone's help! Shall I close this (as it was never actually an issue)?
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: Unable to load multiple WRF NetCDF files into Dask array on pangeo (829426650)

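The preprocessor the comment refers to is not shown. One plausible sketch, assuming the goal is to promote each file's XTIME coordinate to the Time dimension coordinate so that open_mfdataset can order the files for concatenation (the glob pattern is a placeholder; the thread actually reads the files from a GCS bucket):

```
import xarray as xr

def use_xtime(ds):
    # Make XTIME the dimension coordinate, then rename it to 'Time' so the
    # concatenation dimension carries datetime labels.
    return ds.swap_dims({'Time': 'XTIME'}).rename({'XTIME': 'Time'})

# Placeholder paths for illustration.
ds = xr.open_mfdataset(
    'wrfout_d02_*.nc',
    engine='h5netcdf',
    combine='by_coords',
    preprocess=use_xtime,
)
```
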
id: 812278389
html_url: https://github.com/pydata/xarray/issues/5023#issuecomment-812278389
issue_url: https://api.github.com/repos/pydata/xarray/issues/5023
node_id: MDEyOklzc3VlQ29tbWVudDgxMjI3ODM4OQ==
user: porterdf (7237617)
created_at: 2021-04-02T02:14:19Z
updated_at: 2021-04-02T02:14:19Z
author_association: NONE
body:
Thanks for the great suggestion @shoyer - looping through the netCDF files is working well in Dask using the following code:
```
import xarray as xr
import gcsfs
from tqdm.autonotebook import tqdm

xr.set_options(display_style="html")

fs = gcsfs.GCSFileSystem(project='ldeo-glaciology', mode='r', cache_timeout=0)
NCs = fs.glob('gs://ldeo-glaciology/AMPS/WRF_24/domain_02/*.nc')

url = 'gs://' + NCs[0]
openfile = fs.open(url, mode='rb')
ds = xr.open_dataset(openfile, engine='h5netcdf', chunks={'Time': -1})

for i in tqdm(range(1, 8)):
    url = 'gs://' + NCs[i]
    openfile = fs.open(url, mode='rb')
    temp = xr.open_dataset(openfile, engine='h5netcdf', chunks={'Time': -1})
    ds = xr.concat([ds, temp], 'Time')
```
However, I am still confused why […]
```
xarray.DataArray 'XTIME' (Time: 8)
array(['2019-01-01T03:00:00.000000000', '2019-01-01T06:00:00.000000000',
       '2019-01-01T09:00:00.000000000', '2019-01-01T12:00:00.000000000',
       '2019-01-01T15:00:00.000000000', '2019-01-01T18:00:00.000000000',
       '2019-01-01T21:00:00.000000000', '2019-01-02T00:00:00.000000000'],
      dtype='datetime64[ns]')
```
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: Unable to load multiple WRF NetCDF files into Dask array on pangeo (829426650)

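For comparison only (a sketch, not code from the thread), the explicit loop above could be collapsed into a single open_mfdataset call over the opened file objects, concatenating along Time in file order:

```
import xarray as xr
import gcsfs

fs = gcsfs.GCSFileSystem(project='ldeo-glaciology', cache_timeout=0)
NCs = fs.glob('gs://ldeo-glaciology/AMPS/WRF_24/domain_02/*.nc')

# Open the same eight files and let xarray stack them along 'Time' in the
# order they are listed, mirroring the xr.concat loop in the comment.
files = [fs.open('gs://' + path, mode='rb') for path in NCs[:8]]
ds = xr.open_mfdataset(
    files,
    engine='h5netcdf',
    combine='nested',
    concat_dim='Time',
    chunks={'Time': -1},
)
```
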
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
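As a rough illustration, assuming the table lives in a local SQLite file named github.db (an assumption, not stated on this page), the filter shown at the top - author_association = "NONE", user = 7237617, sorted by updated_at descending - corresponds to a query like:

```
import sqlite3

# 'github.db' is a hypothetical filename for the database behind this page.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, html_url, created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = 'NONE' AND [user] = ?
    ORDER BY updated_at DESC
    """,
    (7237617,),
).fetchall()
for comment_id, html_url, created, updated, body in rows:
    print(comment_id, updated, html_url)
```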