issues
4 rows where repo = 13221727 and user = 2444231, sorted by updated_at descending
Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at, closed_at, author_association, active_lock_reason, draft, pull_request, body, reactions, performed_via_github_app, state_reason, repo, type
579722569 | MDExOlB1bGxSZXF1ZXN0Mzg3MDY0ODEz | #3858 | Backend env
pull request (closed) | user: pgierz (2444231) | comments: 5 | author_association: NONE | repo: xarray (13221727)
created_at: 2020-03-12T06:30:28Z | updated_at: 2023-01-05T03:58:54Z | closed_at: 2023-01-05T03:58:54Z | pull_request: pydata/xarray/pulls/3858
reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/3858/reactions)
Body: This merge request allows the user to set a
Here, I need some help: how should I actually design the tests? The environment is only temporarily modified, so as soon as the open_dataset function ends, the environment is restored. I would have thought temporarily adding an equivalent to
I added a section to the relevant docstring. Not sure how much this also needs to be included in the other files.
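The testing question in the PR body above (how to verify that the environment is only modified for the duration of the call) can be exercised without xarray at all. Below is a minimal sketch of a temporary-environment context manager and the kind of assertions a test could make around it; `temporary_env` and the variable name `MY_BACKEND_ENV` are hypothetical illustrations, not names from the actual PR.

```python
import os
from contextlib import contextmanager

@contextmanager
def temporary_env(**overrides):
    """Temporarily set environment variables, restoring the originals on exit."""
    saved = {key: os.environ.get(key) for key in overrides}
    os.environ.update({key: str(value) for key, value in overrides.items()})
    try:
        yield
    finally:
        for key, old in saved.items():
            if old is None:
                os.environ.pop(key, None)  # was unset before: remove it again
            else:
                os.environ[key] = old      # was set before: restore the old value

# A test can assert the variable is visible inside the block and gone after it.
with temporary_env(MY_BACKEND_ENV="FALSE"):
    assert os.environ["MY_BACKEND_ENV"] == "FALSE"
assert "MY_BACKEND_ENV" not in os.environ
```

The `try`/`finally` is the important part of the design: the environment is restored even if the body of the `with` block raises, which is exactly the property a test would pin down.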
578427969 | MDU6SXNzdWU1Nzg0Mjc5Njk= | #3853 | Custom Table when opening GRIB Files
issue (open) | user: pgierz (2444231) | comments: 8 | author_association: NONE | repo: xarray (13221727)
created_at: 2020-03-10T08:58:42Z | updated_at: 2022-04-27T14:34:02Z
reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/3853/reactions)
Body: Hello, I'd like to open some old-school GRIB files from one of our climate models. I'm using the
So, would it somehow be possible to provide a code table to be used when opening GRIB files? I have files next to my output where the codes are stored; an example is below. I can imagine something like:
Would this be difficult to implement? Cheers, Paul
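The example code table and the imagined API were lost from the issue body above. As a sketch of the general shape of a workaround (not xarray's or cfgrib's actual API), one could parse the sidecar code-table file into a mapping and rename the numeric `var<code>` variables after opening. The table format, `parse_code_table`, and `apply_code_table` are all hypothetical, and a plain dict stands in for the opened dataset.

```python
# Hypothetical sidecar code-table format: "code | short_name | units | description".
CODE_TABLE = """\
129 | geosp | m^2 s^-2 | surface geopotential
167 | temp2 | K | 2 m temperature
"""

def parse_code_table(text):
    """Parse a pipe-separated code table into {code: metadata} entries."""
    table = {}
    for line in text.strip().splitlines():
        code, name, units, desc = (field.strip() for field in line.split("|"))
        table[int(code)] = {"name": name, "units": units, "long_name": desc}
    return table

def apply_code_table(dataset, table):
    """Rename GRIB code variables (e.g. 'var167') using the code table."""
    renames = {}
    for var in dataset:
        if var.startswith("var") and var[3:].isdigit():
            code = int(var[3:])
            if code in table:
                renames[var] = table[code]["name"]
    return {renames.get(name, name): values for name, values in dataset.items()}

ds = {"var167": [271.3, 272.1], "var129": [980.0], "lon": [0.0]}
print(apply_code_table(ds, parse_code_table(CODE_TABLE)))
# → {'temp2': [271.3, 272.1], 'geosp': [980.0], 'lon': [0.0]}
```

With a real dataset the rename step would be `ds.rename(renames)`, which is xarray's actual renaming API; everything upstream of that call is a sketch.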
894498459 | MDU6SXNzdWU4OTQ0OTg0NTk= | #5332 | Progressbar for groupby operations?
issue (open) | user: pgierz (2444231) | comments: 2 | author_association: NONE | repo: xarray (13221727)
created_at: 2021-05-18T15:19:19Z | updated_at: 2021-05-19T01:27:55Z
reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/5332/reactions)
Body: I recently learned that
Would it be simple to implement something similar in Xarray? The documentation reads as if groupby is heavily inspired by pandas.
Is your feature request related to a problem? Please describe. No, everything works as expected; this would just be a "quality of life" improvement.
Describe the solution you'd like The implementation in tqdm states:
I suppose something similar would need to be implemented in Xarray, and then we might be able to copy the tqdm logic.
Describe alternatives you've considered I could loop over whatever dimension I have and make my own progress bar, but that seems to defeat the purpose of groupby.
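The tqdm excerpt quoted in the body above was lost, but the alternative the author describes (looping over groups with a hand-rolled progress bar) can be sketched with the standard library alone. `progress` is a hypothetical helper, not anything from xarray or tqdm; the same wrapper could in principle be applied to an xarray `GroupBy`, since iterating one also yields (label, group) pairs, and `tqdm` itself would be the polished replacement.

```python
import itertools
import sys

def progress(iterable, total=None, stream=sys.stderr):
    """Yield from `iterable`, writing a simple counter as each item completes."""
    if total is None:
        iterable = list(iterable)  # materialize so we know the group count
        total = len(iterable)
    for i, item in enumerate(iterable, 1):
        stream.write(f"\r{i}/{total} groups processed")
        stream.flush()
        yield item
    stream.write("\n")

# Reduce group by group while the counter ticks on stderr.
data = [("jan", 1.0), ("jan", 3.0), ("feb", 2.0)]
groups = [(label, [value for _, value in members])
          for label, members in itertools.groupby(data, key=lambda kv: kv[0])]
means = {label: sum(values) / len(values)
         for label, values in progress(groups)}
print(means)  # → {'jan': 2.0, 'feb': 2.0}
```

Writing to `sys.stderr` rather than `stdout` mirrors tqdm's default, so the progress line does not interleave with the computation's actual output.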
315381649 | MDU6SXNzdWUzMTUzODE2NDk= | #2066 | open_mfdataset can't handle many files
issue (closed, state_reason: completed) | user: pgierz (2444231) | comments: 7 | author_association: NONE | repo: xarray (13221727)
created_at: 2018-04-18T08:33:15Z | updated_at: 2019-03-18T14:58:15Z | closed_at: 2019-03-18T14:58:14Z
reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/2066/reactions)
Body: Code Sample, a copy-pastable example if possible
It appears as if the

```python
ensemble = xr.open_mfdataset("/scratch/simulation_database/incoming/Eem125-S2/output/Eem125-S2_echam5_main_mm_26*.nc")
```

```
OSError                                   Traceback (most recent call last)
<ipython-input-4-038705c4f255> in <module>()
----> 1 ensemble = xr.open_mfdataset("/scratch/simulation_database/incoming/Eem125-S2/output/Eem125-S2_echam5_main_mm_26*.nc")
~/anaconda3/lib/python3.6/site-packages/xarray/backends/api.py in open_mfdataset(paths, chunks, concat_dim, compat, preprocess, engine, lock, data_vars, coords, **kwargs)
~/anaconda3/lib/python3.6/site-packages/xarray/backends/api.py in <listcomp>(.0)
~/anaconda3/lib/python3.6/site-packages/xarray/backends/api.py in open_dataset(filename_or_obj, group, decode_cf, mask_and_scale, decode_times, autoclose, concat_characters, decode_coords, engine, chunks, lock, cache, drop_variables)
~/anaconda3/lib/python3.6/site-packages/xarray/backends/netCDF4_.py in open(cls, filename, mode, format, group, writer, clobber, diskless, persist, autoclose)
~/anaconda3/lib/python3.6/site-packages/xarray/backends/netCDF4_.py in _open_netcdf4_group(filename, mode, group, **kwargs)
netCDF4/_netCDF4.pyx in netCDF4._netCDF4.Dataset.__init__()
netCDF4/_netCDF4.pyx in netCDF4._netCDF4._ensure_nc_success()
OSError: [Errno 24] Too many open files: b'/scratch/simulation_database/incoming/Eem125-S2/output/Eem125-S2_echam5_main_mm_260001.nc'
```

Problem description
Often, climate simulations produce more than one output file per model component (generally one per saved time output, e.g. months, years, days, or something else). It would be good to access all of these as one object, rather than having to combine them by hand beforehand with e.g.
Expected Output
Output of
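The `[Errno 24]` in the traceback above is the operating system's per-process open-file limit, not anything specific to the dataset. One common workaround on Unix-like systems is to raise the soft limit toward the hard limit before calling `open_mfdataset`; `raise_file_limit` and the target of 4096 are illustrative choices, not part of xarray.

```python
import resource

def raise_file_limit(target=4096):
    """Raise the soft open-file limit toward the hard limit (Unix only)."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    # The hard limit may be RLIM_INFINITY, in which case any target is allowed.
    new_soft = target if hard == resource.RLIM_INFINITY else min(target, hard)
    if new_soft > soft:  # never lower an already-generous limit
        resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
    return resource.getrlimit(resource.RLIMIT_NOFILE)[0]

print(raise_file_limit())
```

Later xarray releases also manage open files internally with an LRU cache (tunable, if memory serves, via `xr.set_options(file_cache_maxsize=...)`), which is the longer-term fix the closure of this issue reflects.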
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```