html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/1482#issuecomment-316376598,https://api.github.com/repos/pydata/xarray/issues/1482,316376598,MDEyOklzc3VlQ29tbWVudDMxNjM3NjU5OA==,4992424,2017-07-19T12:54:30Z,2017-07-19T12:54:30Z,NONE,"@mitar it depends on your data/application, right? But that information would also be helpful in figuring out alternative pathways. If you're always going to process the images individually or sequentially, then what advantage is there (aside from convenience) to dumping them into one giant array with a forced shape per slice?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,243964948
https://github.com/pydata/xarray/issues/1482#issuecomment-316371416,https://api.github.com/repos/pydata/xarray/issues/1482,316371416,MDEyOklzc3VlQ29tbWVudDMxNjM3MTQxNg==,4992424,2017-07-19T12:34:32Z,2017-07-19T12:34:32Z,NONE,"The problem is that these sorts of arrays break the [common data model](http://www.unidata.ucar.edu/software/thredds/current/netcdf-java/CDM/) on top of which xarray (and NetCDF) is built.
> If I understand correctly, I could batch all images of the same size into their own dimension? That might also be acceptable.
Yes, if you can pre-process all the images and align them on some common set of dimensions (maybe just **xi** and **yi**, denoting the integer index in the x and y directions), and pad the unused space in each image with NaNs, then you can concatenate everything into a `Dataset`; see the sketch below.
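
A rough, untested sketch of that approach (the `yi`/`xi` dimension names, the `img` variable name, and the new `image` dimension are placeholders, not anything xarray mandates):

```python
import numpy as np
import xarray as xr

images = [np.random.rand(4, 6), np.random.rand(3, 8)]  # differently sized 2-D images

# Common target shape: the largest extent along each axis.
ny = max(im.shape[0] for im in images)
nx = max(im.shape[1] for im in images)

padded = []
for im in images:
    buf = np.full((ny, nx), np.nan)           # NaN-pad the unused space
    buf[: im.shape[0], : im.shape[1]] = im    # place each image at the origin
    padded.append(xr.DataArray(buf, dims=('yi', 'xi'), name='img'))

# Stack along a new 'image' dimension; every slice now shares one shape.
ds = xr.concat(padded, dim='image').to_dataset()
```
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,243964948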