Comment 405293927 on pydata/xarray PR #2287 (2018-07-16): https://github.com/pydata/xarray/pull/2287#issuecomment-405293927

> if xarray's rasterio backend wouldn't be a better template for an imageio backend instead.

I started with rasterio in the beginning: since TIFF-like containers are not modifiable, caching is crucial. However, it is my understanding (please correct me if I am wrong) that the rasterio backend is aimed at being read-only I/O for xarray, without extensibility toward write support. I'm really eager to keep the ability to aggregate multiple files through dask, as well as to save data through imageio transparently (if plausible) :p
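
To make "aggregate multiple files through dask" concrete, here is a rough sketch of the kind of lazy reading I have in mind; the file names, frame shape, and dtype are placeholders, and wrapping `imageio.imread` in `dask.delayed` is just one way to do it:

```python
# Minimal sketch: wrap imageio reads in dask.delayed so frames are only
# loaded at compute time, then expose the stack as a lazy DataArray.
# File names, shape, and dtype below are made up for illustration.
import dask
import dask.array as da
import imageio
import xarray as xr

files = ["frame_000.tif", "frame_001.tif", "frame_002.tif"]
shape, dtype = (512, 512), "uint16"

lazy_frames = [
    da.from_delayed(dask.delayed(imageio.imread)(f), shape=shape, dtype=dtype)
    for f in files
]
stack = xr.DataArray(
    da.stack(lazy_frames, axis=0),
    dims=("frame", "y", "x"),
    name="images",
)
```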

On second thought, maybe an open_rasterio-style approach with to_zarr for saving is preferable? But what are the possible approaches for open_mfdataset (and the potential of using dask for out-of-memory file I/O)?
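
For the saving side, something like the existing rasterio workflow but with zarr as the write path is what I am picturing; `"example.tif"` and `"out.zarr"` are placeholders:

```python
# Sketch of an open_rasterio-style read followed by a zarr write.
# Paths are placeholders; chunks make the array dask-backed (lazy).
import xarray as xr

arr = xr.open_rasterio("example.tif", chunks={"band": 1})
arr.to_dataset(name="image").to_zarr("out.zarr")
```

Whether open_mfdataset could grow an analogous multi-file entry point for image containers is exactly the part I am unsure about.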

> For example, can imageio open any file which resembles a Dataset (i.e. more than one variable with different datatypes), or would a DataArray be enough?

Generally, image containers opened by imageio represent a single Dataset only, which is why I default the names to their readout sequences. However, there are indeed cases where multiple variables exist, such as pyramids of image sequences, where each pyramid layer represents a different resolution of the sequence.

I must admit that a single DataArray is more applicable to the general usage.
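
For reference, this is roughly how I picture the two shapes of the problem; the dimension names and sizes are arbitrary:

```python
# A plain image sequence maps naturally onto a single DataArray,
# while a pyramid needs a Dataset with one variable per resolution level
# (each level gets its own spatial dimensions since the sizes differ).
import numpy as np
import xarray as xr

sequence = xr.DataArray(np.zeros((10, 256, 256)), dims=("frame", "y", "x"))

pyramid = xr.Dataset(
    {
        "level_0": (("frame", "y0", "x0"), np.zeros((10, 256, 256))),
        "level_1": (("frame", "y1", "x1"), np.zeros((10, 128, 128))),
    }
)
```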
