id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,active_lock_reason,draft,pull_request,body,reactions,performed_via_github_app,state_reason,repo,type
88339814,MDExOlB1bGxSZXF1ZXN0Mzc2NjMwMDc=,434,One less copy when reading big-endian data with engine='scipy',1217238,closed,0,,1143506,0,2015-06-15T06:59:55Z,2015-06-15T07:51:44Z,2015-06-15T07:51:41Z,MEMBER,,0,pydata/xarray/pulls/434,,"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/434/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
88240870,MDExOlB1bGxSZXF1ZXN0Mzc2NDc1NDQ=,433,Assign order,1217238,closed,0,,1143506,0,2015-06-14T20:09:04Z,2015-06-15T01:16:45Z,2015-06-15T01:16:31Z,MEMBER,,0,pydata/xarray/pulls/433,"`xray.Dataset.assign` and `xray.Dataset.assign_coords` now assign new variables in sorted (alphabetical) order, mirroring the behavior in pandas. Previously, the order was arbitrary.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/433/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
87025092,MDExOlB1bGxSZXF1ZXN0MzczNzY5Njk=,429,Add pipe method copied from pandas,1217238,closed,0,,1143506,0,2015-06-10T16:19:52Z,2015-06-11T16:45:57Z,2015-06-11T16:45:56Z,MEMBER,,0,pydata/xarray/pulls/429,"The implementation here is directly copied from pandas: https://github.com/pydata/pandas/pull/10253","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/429/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
85692656,MDExOlB1bGxSZXF1ZXN0MzcwODQ0MjM=,427,Fix concat for identical index variables,1217238,closed,0,,1143506,0,2015-06-06T04:05:10Z,2015-06-07T06:03:23Z,2015-06-07T06:03:16Z,MEMBER,,0,pydata/xarray/pulls/427,"Fixes #425","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/427/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
85670978,MDExOlB1bGxSZXF1ZXN0MzcwODIzNjk=,426,Decode non-native endianness,1217238,closed,0,,1143506,0,2015-06-06T01:31:14Z,2015-06-06T03:51:14Z,2015-06-06T03:51:13Z,MEMBER,,0,pydata/xarray/pulls/426,"Fixes #416 By the way, it turns out the simple work around for this was to install netCDF4 -- only scipy.io.netcdf returns the big-endian arrays directly. CC @bareid","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/426/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull