html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/pull/446#issuecomment-118097399,https://api.github.com/repos/pydata/xarray/issues/446,118097399,MDEyOklzc3VlQ29tbWVudDExODA5NzM5OQ==,1177508,2015-07-02T17:16:41Z,2015-07-02T17:16:41Z,NONE,"I need to get my head around this... I know that a list comprehension isn't lazy, so it basically goes through the loop and evaluates the expression for each iteration... so I thought that:

```
if preprocess is not None:
    datasets = [preprocess(ds) for ds in datasets]
```

translates to forcing the application of the `preprocess` function to each dataset, effectively loading it into memory... anyway, this is really cool, I'll definitely try it out :+1:
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,91547750
https://github.com/pydata/xarray/pull/446#issuecomment-118093087,https://api.github.com/repos/pydata/xarray/issues/446,118093087,MDEyOklzc3VlQ29tbWVudDExODA5MzA4Nw==,1177508,2015-07-02T17:00:47Z,2015-07-02T17:00:47Z,NONE,"I have a question about this preprocess step. Would it mean that `xray` will now load all the data into memory because of the preprocessing step? Whereas before... or at least that's what I understood from the documentation... `xray` would access the data on a need-only basis.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,91547750
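The eagerness point debated in the comments above can be demonstrated in plain Python, independent of xarray: a list comprehension invokes the function immediately for every element, while a generator expression defers each call. (Note the invocation being eager does not by itself mean data is loaded into memory; that depends on whether the function's operations on a lazily backed dataset force a read. The sketch below uses hypothetical stand-in names — `preprocess`, `datasets` — and only illustrates the call-timing difference.)

```python
calls = []

def preprocess(ds):
    # Hypothetical stand-in for a user-supplied preprocess function;
    # it just records that it was invoked and returns its input.
    calls.append(ds)
    return ds

datasets = ["a.nc", "b.nc", "c.nc"]  # hypothetical dataset handles

# List comprehension: preprocess runs for every element right now.
eager = [preprocess(ds) for ds in datasets]
print(len(calls))  # -> 3

calls.clear()

# Generator expression: nothing runs until the generator is consumed.
lazy = (preprocess(ds) for ds in datasets)
print(len(calls))  # -> 0
first = next(lazy)  # runs preprocess exactly once
print(len(calls))  # -> 1
```

So the snippet quoted in the first comment does call `preprocess` once per dataset at that point in the code; whether that "loads everything" depends on what `preprocess` does to each (possibly lazily loaded) dataset.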