issues
3 rows where repo = 13221727 and user = 8241481 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2216068694 | I_kwDOAMm_X86EFoZW | 8895 | droping variables when accessing remote datasets via pydap | Mikejmnez 8241481 | open | 0 | 1 | 2024-03-29T22:55:45Z | 2024-05-03T15:15:09Z | CONTRIBUTOR | **Is your feature request related to a problem?** I ran into the following issue when trying to access a remote dataset. Here is the concrete example that reproduces the error:

```python
from pydap.client import open_url
from pydap.cas.urs import setup_session
import xarray as xr
import numpy as np

username = "UsernameHere"
password = "PasswordHere"
filename = 'Daymet_Daily_V4R1.daymet_v4_daily_na_tmax_2010.nc'
hyrax_url = 'https://opendap.earthdata.nasa.gov/collections/C2532426483-ORNL_CLOUD/granules/'
url1 = hyrax_url + filename

session = setup_session(username, password, check_url=hyrax_url)
ds = xr.open_dataset(url1, engine="pydap", session=session)
```
The issue involves the variable `time_bnds`. I tried all this with the newer versions of the libraries involved as well. **Describe the solution you'd like** I think it would be nice to be able to drop, at open time, the variable I know I don't want.
**Describe alternatives you've considered** This is potentially something that can be handled on the `pydap` side. For example, I can easily open the dataset and drop the variable with pydap as described below:

```python
dataset = open_url(url1, session=session)  # this works
dataset[tuple([var for var in dataset.keys() if var not in ['time_bnds']])]  # this takes < 1ms
```
It looks like it would be an easy implementation on the backend, but at the same time I took a look around and I feel like it could also be implemented at a higher level. Any thoughts or suggestions? I can certainly lead on this effort, as I will already be working on enabling the DAP4 implementation within pydap. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8895/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
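The pydap workaround quoted in the issue body boils down to selecting a subset of variable names before handing the dataset to xarray. A minimal, dependency-free sketch of that filtering step, using a plain dict to stand in for the pydap dataset (`drop_subset` and `fake_dataset` are illustrative names, not part of pydap's API):

```python
def drop_subset(dataset, drop=()):
    """Return the tuple of variable names to keep, excluding those in `drop`.

    `dataset` is any mapping of variable name -> variable; with pydap,
    indexing the dataset with this tuple yields a pruned dataset.
    """
    return tuple(var for var in dataset.keys() if var not in drop)

# Stand-in for a remote pydap dataset: variable names are what matter here.
fake_dataset = {"tmax": None, "lat": None, "lon": None, "time_bnds": None}
keep = drop_subset(fake_dataset, drop=("time_bnds",))
print(keep)  # ('tmax', 'lat', 'lon')
```

Note that `xr.open_dataset` already accepts a `drop_variables` argument, so something like `xr.open_dataset(url1, engine="pydap", session=session, drop_variables=["time_bnds"])` is the closest existing entry point; whether that path avoids the failing access described in the issue is exactly what the issue is probing.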
605266906 | MDU6SXNzdWU2MDUyNjY5MDY= | 3995 | open_mfzarr files + intake-xarray | Mikejmnez 8241481 | open | 0 | 1 | 2020-04-23T06:11:41Z | 2022-04-30T13:37:50Z | CONTRIBUTOR | This is related to a previous issue (#3668), although the actual problem in that issue is more technically involved and related to clusters. I decided to open this separate issue so that the discussion in #3668 remains visible for other users. This issue is about the need to implement code that allows reading multiple zarr files. This can be particularly helpful when reading data through an Intake catalog entry (the intake-xarray plugin), which can allow for a compact way to introduce parallelism when working with multiple zarr files. There are two steps: one at the xarray level (write a function that does this), and then an option in the intake-xarray plugin that uses that xarray functionality. I am more than willing to work on this, if nobody is already working on it. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3995/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
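The "xarray level" step described in the issue amounts to mapping an open function over many stores and combining the results along a dimension. A dependency-free sketch of that shape (`open_mfzarr` and `fake_open_zarr` below are hypothetical stand-ins, not xarray API):

```python
def fake_open_zarr(path):
    # Stand-in for a real zarr-store opener: pretend each store holds
    # two time steps, derived from a trailing digit in the path name.
    step = int(path[-1])  # assumes paths like "store0", "store1", ...
    return {"time": [2 * step, 2 * step + 1]}

def open_mfzarr(paths, open_func=fake_open_zarr, concat_dim="time"):
    """Open each store, then concatenate the results along `concat_dim`."""
    datasets = [open_func(p) for p in paths]
    combined = {concat_dim: []}
    for ds in datasets:
        combined[concat_dim].extend(ds[concat_dim])
    return combined

print(open_mfzarr(["store0", "store1", "store2"]))
# {'time': [0, 1, 2, 3, 4, 5]}
```

In later xarray releases this use case is largely covered by `xr.open_mfdataset(paths, engine="zarr")`, since the zarr backend is registered as a regular engine.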
606683601 | MDExOlB1bGxSZXF1ZXN0NDA4ODQ0NTc0 | 4003 | xarray.open_mzar: open multiple zarr files (in parallel) | Mikejmnez 8241481 | closed | 0 | 16 | 2020-04-25T04:08:50Z | 2020-09-22T05:40:31Z | 2020-09-22T05:40:31Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/4003 | This is associated with #3995 and somewhat mentioned in #3668. This is, emulating |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4003/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull |
```sql
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
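The listing at the top of the page corresponds to a straightforward query over this schema: filter by `repo` and `user`, then sort by `updated_at` descending. A self-contained sqlite3 sketch (with a trimmed-down column set, not the full schema) reproducing that filter and sort:

```python
import sqlite3

# Build a reduced version of the schema in an in-memory database and run
# the query behind this page: repo = 13221727, user = 8241481, newest first.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE issues (
    id INTEGER PRIMARY KEY, number INTEGER, title TEXT,
    user INTEGER, state TEXT, updated_at TEXT, repo INTEGER, type TEXT
);
CREATE INDEX idx_issues_repo ON issues (repo);
CREATE INDEX idx_issues_user ON issues (user);
""")
rows = [
    (2216068694, 8895, "droping variables when accessing remote datasets via pydap",
     8241481, "open", "2024-05-03T15:15:09Z", 13221727, "issue"),
    (605266906, 3995, "open_mfzarr files + intake-xarray",
     8241481, "open", "2022-04-30T13:37:50Z", 13221727, "issue"),
    (606683601, 4003, "xarray.open_mzar: open multiple zarr files (in parallel)",
     8241481, "closed", "2020-09-22T05:40:31Z", 13221727, "pull"),
]
conn.executemany("INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?, ?)", rows)

# ISO 8601 timestamps sort correctly as plain text, so ORDER BY works here.
result = conn.execute(
    "SELECT number, state FROM issues "
    "WHERE repo = ? AND user = ? ORDER BY updated_at DESC",
    (13221727, 8241481),
).fetchall()
print(result)  # [(8895, 'open'), (3995, 'open'), (4003, 'closed')]
```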