issue_comments: 697952300


html_url: https://github.com/pydata/xarray/issues/4309#issuecomment-697952300
issue_url: https://api.github.com/repos/pydata/xarray/issues/4309
user: 226037 (MEMBER)
created_at: 2020-09-23T20:23:17Z · updated_at: 2020-09-24T15:12:18Z

@shoyer & @jhamman just to give you an idea, I aim to see `open_dataset` reduced to the following:

```python
def open_dataset(
    filename_or_obj,
    *,
    engine=None,
    chunks=None,
    cache=None,
    backend_kwargs=None,
    **kwargs,
):
    if backend_kwargs is None:
        backend_kwargs = {}
    filename_or_obj = normalize_filename_or_obj(filename_or_obj)
    if engine is None:
        engine = autodetect_engine(filename_or_obj)
    open_backend_dataset = get_opener(engine)

    backend_ds = open_backend_dataset(filename_or_obj, **backend_kwargs, **kwargs)
    ds = dataset_from_backend_dataset(
        backend_ds, chunks, cache, filename_or_obj=filename_or_obj, **kwargs
    )
    return ds
```
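The dispatch step (`autodetect_engine` / `get_opener`) could be backed by a plain registry. A minimal sketch, assuming hypothetical names and a toy suffix-based detection; this is not xarray's actual plugin machinery:

```python
# Hypothetical engine registry; names mirror the pseudocode above,
# not xarray's real implementation.
_ENGINES = {}

def register_engine(name, opener):
    """Map an engine name to its open_backend_dataset callable."""
    _ENGINES[name] = opener

def get_opener(engine):
    """Look up the opener for an engine, with a helpful error."""
    try:
        return _ENGINES[engine]
    except KeyError:
        raise ValueError(
            f"unrecognized engine {engine!r}; available: {sorted(_ENGINES)}"
        )

def autodetect_engine(filename_or_obj):
    """Toy detection based only on the file suffix."""
    if str(filename_or_obj).endswith(".nc"):
        return "netcdf4"
    raise ValueError(f"cannot guess engine for {filename_or_obj!r}")

# Example registration and dispatch:
register_engine("netcdf4", lambda path, **kw: f"opened {path} with netcdf4")
print(get_opener(autodetect_engine("data.nc"))("data.nc"))
```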

The key observation is that the variables of `backend_ds` must be either `np.ndarray` or subclasses of `BackendArray`. That is, backends should not be concerned with the in-memory representation of the variables, so they know nothing about dask, caching behaviour, etc. (@shoyer, this was addressed briefly today.)
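To illustrate the contract, here is a self-contained toy stand-in for such a lazy array class (not xarray's real `BackendArray`): shape and dtype are known up front, and values only reach memory on indexing, leaving chunking and caching decisions to the caller.

```python
import numpy as np

class BackendArray:
    """Toy stand-in for a lazy backend array base class (illustrative only)."""

class LazyFileArray(BackendArray):
    """Hypothetical lazy array: metadata is eager, values load on __getitem__."""

    def __init__(self, data):
        self._data = np.asarray(data)  # stands in for an on-disk variable
        self.shape = self._data.shape
        self.dtype = self._data.dtype

    def __getitem__(self, key):
        # Only here do values actually reach memory.
        return self._data[key]

arr = LazyFileArray([[1, 2], [3, 4]])
print(arr.shape)   # (2, 2)
print(arr[0, 1])   # 2
```

Because the backend only promises `shape`, `dtype`, and `__getitem__`, the dask/cache wrapping can happen uniformly in `dataset_from_backend_dataset`.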
