html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/2191#issuecomment-465294992,https://api.github.com/repos/pydata/xarray/issues/2191,465294992,MDEyOklzc3VlQ29tbWVudDQ2NTI5NDk5Mg==,23510121,2019-02-19T20:22:28Z,2019-02-19T20:22:28Z,NONE,"@spencerkclark
Very helpful!!! Thanks a million! :) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,327089588
https://github.com/pydata/xarray/issues/2191#issuecomment-464953041,https://api.github.com/repos/pydata/xarray/issues/2191,464953041,MDEyOklzc3VlQ29tbWVudDQ2NDk1MzA0MQ==,23510121,2019-02-19T02:22:22Z,2019-02-19T02:22:58Z,NONE,"@spencerkclark Thank you very much for your help! I will install the development version on my local machine.
Currently I am using NCAR Cheyenne to manipulate the climate data. What I am doing on Cheyenne as a workaround is:
```python
# ds is the Dataset (renamed from xarray to avoid shadowing the module).
# Swap the CFTimeIndex for a pandas DatetimeIndex, then resample daily.
ds = ds.assign_coords(time=ds.indexes['time'].to_datetimeindex())
ds = ds.resample(time='D').mean('time')
```
I hope NCAR will support the next release of xarray.
A follow-up question: when we use xarray to manipulate a large dataset and want to save the results for further machine learning applications (e.g., with sklearn or XGBoost, or even deep learning), what would be a **good format** for storing the data on a server or local machine so that it can be easily loaded by sklearn or XGBoost?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,327089588
https://github.com/pydata/xarray/issues/2191#issuecomment-464923777,https://api.github.com/repos/pydata/xarray/issues/2191,464923777,MDEyOklzc3VlQ29tbWVudDQ2NDkyMzc3Nw==,23510121,2019-02-18T23:46:46Z,2019-02-18T23:46:59Z,NONE,"> @zzheng93 this will be possible in the next release of xarray, so not quite yet, but soon. If you're in a hurry you could install the development version.
@spencerkclark Thank you very much :)
I am new to the xarray community. I am wondering if there are any instructions for installing the latest development version, and for how to use the daily resampling function.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,327089588
https://github.com/pydata/xarray/issues/2191#issuecomment-464875401,https://api.github.com/repos/pydata/xarray/issues/2191,464875401,MDEyOklzc3VlQ29tbWVudDQ2NDg3NTQwMQ==,23510121,2019-02-18T20:56:02Z,2019-02-18T20:56:02Z,NONE,"Hi folks,
I have some data like
2000-01-01 00:00:00, 2000-01-01 12:00:00,
2000-01-02 00:00:00, 2000-01-02 12:00:00.
The index is a CFTimeIndex, and I want to take the average within each date and save the results.
I am wondering if it is possible to resample them at a daily level (e.g., the results will be 2000-01-01 00:00:00 and 2000-01-02 00:00:00)?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,327089588
https://github.com/pydata/xarray/issues/2191#issuecomment-395067197,https://api.github.com/repos/pydata/xarray/issues/2191,395067197,MDEyOklzc3VlQ29tbWVudDM5NTA2NzE5Nw==,31460695,2018-06-06T13:25:11Z,2018-06-06T13:25:11Z,NONE,"Yes, when open_mfdataset decides to convert to CFTime this is much faster. When time is in datetime64, I get:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input> in <module>()
9 dss = xr.open_mfdataset(files,decode_times=True,autoclose=True)
10 #month_start = [DatetimeNoLeap(date.dt.year, date.dt.month, 1) for date in dss.time]
---> 11 month_start = [DatetimeNoLeap(date.year, date.month, 1) for date in dss.time.values]
12 #month_start = [DatetimeNoLeap(yr, mon, 1) for yr,mon in zip(dss.time.dt.year,dss.time.dt.month)]
13 #break
<ipython-input> in <listcomp>(.0)
9 dss = xr.open_mfdataset(files,decode_times=True,autoclose=True)
10 #month_start = [DatetimeNoLeap(date.dt.year, date.dt.month, 1) for date in dss.time]
---> 11 month_start = [DatetimeNoLeap(date.year, date.month, 1) for date in dss.time.values]
12 #month_start = [DatetimeNoLeap(yr, mon, 1) for yr,mon in zip(dss.time.dt.year,dss.time.dt.month)]
13 #break
AttributeError: 'numpy.datetime64' object has no attribute 'year'
```
You can see I made a feeble attempt to fix it to work for all the CMIP5 calendars, but it is just as slow. Any suggestions?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,327089588
https://github.com/pydata/xarray/issues/2191#issuecomment-394890878,https://api.github.com/repos/pydata/xarray/issues/2191,394890878,MDEyOklzc3VlQ29tbWVudDM5NDg5MDg3OA==,31460695,2018-06-05T23:20:00Z,2018-06-05T23:20:00Z,NONE,"@spencerkclark thanks! I hadn't figured out that particular workaround, but it works, albeit quite slow. For now it will get me to the next step, but just changing to first-of-the-month takes longer than regridding all models to a common grid!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,327089588
https://github.com/pydata/xarray/issues/2191#issuecomment-394827475,https://api.github.com/repos/pydata/xarray/issues/2191,394827475,MDEyOklzc3VlQ29tbWVudDM5NDgyNzQ3NQ==,31460695,2018-06-05T19:15:09Z,2018-06-05T19:15:09Z,NONE,"I am trying to combine the monthly CMIP5 rcp85 ts datasets (which go past 2064 AD) with their myriad calendars, so I love the new CFTimeIndex! But I need `resample(time='MS')` in order to force them all to start on the first of each month.
thanks!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,327089588