id,node_id,number,state,locked,title,user,body,created_at,updated_at,closed_at,merged_at,merge_commit_sha,assignee,milestone,draft,head,base,author_association,auto_merge,repo,url,merged_by 104726723,MDExOlB1bGxSZXF1ZXN0MTA0NzI2NzIz,1252,closed,0,CFTimeIndex,6628425," - [x] closes #1084 - [x] passes ``git diff upstream/master | flake8 --diff`` - [x] tests added / passed - [x] whatsnew entry This work in progress PR is a start on implementing a ```NetCDFTimeIndex```, a subclass of pandas.Index, which closely mimics pandas.DatetimeIndex, but uses ```netcdftime._netcdftime.datetime``` objects. Currently implemented in the new index are: - Partial datetime-string indexing (using strictly [ISO8601-format strings](https://en.wikipedia.org/wiki/ISO_8601), using a date parser implemented by @shoyer in https://github.com/pydata/xarray/issues/1084#issuecomment-274372547) - Field-accessors for year, month, day, hour, minute, second, and microsecond, to enable ```groupby``` operations on attributes of date objects This index is meant as a step towards improving the handling of non-standard calendars and dates outside the range ```Timestamp('1677-09-21 00:12:43.145225')``` to ```Timestamp('2262-04-11 23:47:16.854775807')```. -------------- For now I have pushed only the code and some tests for the new index; I want to make sure the index is solid and well-tested before we consider integrating it into any of xarray's existing logic or writing any documentation. Regarding the index, there are a couple remaining outstanding issues (that at least I'm aware of): 1. Currently one can create non-sensical datetimes using ```netcdftime._netcdftime.datetime``` objects. This means one can attempt to index with an out-of-bounds string or datetime without raising an error. Could this possibly be addressed upstream? For example: ``` In [1]: from netcdftime import DatetimeNoLeap In [2]: DatetimeNoLeap(2000, 45, 45) Out[2]: netcdftime._netcdftime.DatetimeNoLeap(2000, 45, 45, 0, 0, 0, 0, -1, 1) ``` 2. I am looking to enable this index to be used in pandas.Series and pandas.DataFrame objects as well; this requires implementing a ```get_value``` method. I have taken @shoyer's suggested simplified approach from https://github.com/pydata/xarray/issues/1084#issuecomment-275963433, and tweaked it to also allow for slice indexing, so I think this is most of the way there. A remaining to-do for me, however, is to implement something to allow for integer-indexing outside of ```iloc```, e.g. if you have a pandas.Series ```series```, indexing with the syntax ```series[1]``` or ```series[1:3]```. Hopefully this is a decent start; in particular I'm not an expert in writing tests so please let me know if there are improvements I can make to the structure and / or style I've used so far. I'm happy to make changes. I appreciate your help.",2017-02-06T02:10:47Z,2019-02-18T20:54:03Z,2018-05-13T05:19:11Z,2018-05-13T05:19:10Z,ebe0dd03187a5c3138ea12ca4beb13643679fe21,,,0,c318755b51c5dab4008a6f48d0afdc80bbd6bea6,39bd2076e87090ef3130f55f472f3138abad3558,MEMBER,,13221727,https://github.com/pydata/xarray/pull/1252, 106726464,MDExOlB1bGxSZXF1ZXN0MTA2NzI2NDY0,1274,closed,0,Switch AppVeyor CI to use conda env / requirements.yml,6628425," - [x] closes #1127 - [x] tests added / passed - [x] passes ``git diff upstream/master | flake8 --diff`` - [ ] whatsnew entry @shoyer here I'm reusing existing requirements files. Is this along the lines of what you were looking for in #1127? 
I think this should solve the AppVeyor test failures in #1252, as it should install version 1.2.7 of netCDF4, rather than version 1.2.4.",2017-02-17T12:46:03Z,2017-02-21T13:24:29Z,2017-02-20T21:30:12Z,2017-02-20T21:30:12Z,94342d5c0dd86c32a8b8e2970da39efa1feb5549,,,0,340a7396d70ef4fbceb05ee7f5b2de6aea1f9a68,62333208fc2a80c05848a12de67a10f00a6610a1,MEMBER,,13221727,https://github.com/pydata/xarray/pull/1274, 170312811,MDExOlB1bGxSZXF1ZXN0MTcwMzEyODEx,1929,closed,0,Use requires_netcdftime decorators in test_coding_times.py,6628425,"@jhamman I'm sorry I missed this in #1920. The time decoding tests in the temporary Travis build with the new `netcdftime` library are all skipped because they are tagged with `@requires_netCDF4` decorators rather than `@requires_netcdftime` ones. This PR fixes that. In a local environment (after swapping these decorators) with the new `netcdftime`, I'm actually getting a failure, so there may be a bug we need to sort out upstream.",2018-02-20T21:26:06Z,2018-02-21T13:57:10Z,2018-02-21T06:18:35Z,2018-02-21T06:18:35Z,697cc74b9af5fbfedadd54fd07019ce7684553ec,,,0,93f2e03aeb18e79488026464798b95328856c8a1,97f5778261e48391ba6772ca518cd2a51ff0ec83,MEMBER,,13221727,https://github.com/pydata/xarray/pull/1929, 181521266,MDExOlB1bGxSZXF1ZXN0MTgxNTIxMjY2,2054,closed,0,Updates for the renaming of netcdftime to cftime,6628425,"Addresses https://github.com/pydata/xarray/pull/1252#issuecomment-381131366 Perhaps I should have waited until `cftime` was up on conda-forge, but once that happens I can update this PR to use that in setting up the CI environments rather than pip. I made updates to the installing and time series pages of the docs. Does this need a what's new entry? I'm not sure which heading I would classify it under.",2018-04-13T15:25:50Z,2018-04-16T01:21:54Z,2018-04-16T01:07:59Z,2018-04-16T01:07:59Z,a0bdbfbe5e2333d150930807e3c31f33ab455d26,,,0,39ec37962643a00a31e6a9c041b5825cb74d86c7,a9d1f3a36229636f0d519eb36a8d4a7c91f6e1cd,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2054, 187665622,MDExOlB1bGxSZXF1ZXN0MTg3NjY1NjIy,2126,closed,0,Add cftime to doc/environment.yml,6628425,"cftime is now needed to build the documentation: http://xarray.pydata.org/en/latest/time-series.html#non-standard-calendars-and-dates-outside-the-timestamp-valid-range Sorry I neglected this in #1252! 
",2018-05-13T11:44:09Z,2018-05-13T13:10:15Z,2018-05-13T11:56:54Z,2018-05-13T11:56:54Z,91ac573e00538e0372cf9e5f2fdc1528a4ee8cb8,,,0,7d0c3c899c4cb81ba5776e0d17dc7ed69278e2b7,ebe0dd03187a5c3138ea12ca4beb13643679fe21,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2126, 187824650,MDExOlB1bGxSZXF1ZXN0MTg3ODI0NjUw,2128,closed,0,Fix datetime.timedelta casting bug in coding.times.infer_datetime_units,6628425," - [x] Closes #2127 - [x] Tests added - [x] Tests passed I can confirm the docs now build properly locally: ",2018-05-14T13:20:03Z,2018-05-14T19:18:05Z,2018-05-14T19:17:37Z,2018-05-14T19:17:37Z,188141fe97a5effacf32f2508fd05b644c720e5d,,,0,383ac07f10e0e146d3ea53dfeb6553de7ef13c71,f861186cbd11bdbfb2aab8289118a59283a2d7af,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2128, 189263277,MDExOlB1bGxSZXF1ZXN0MTg5MjYzMjc3,2166,closed,0,Fix string slice indexing for a length-1 CFTimeIndex,6628425," - [x] Closes #2165 - [x] Tests added (for all bug fixes or enhancements) - [x] Tests passed (for all non-documentation changes) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) The issue is that both `is_monotonic_decreasing` and `is_monotonic_increasing` return `True` for a length-1 index; therefore an additional check is needed to make sure the length of the index is greater than 1 in `CFTimeIndex._maybe_cast_slice_bound`. This is similar to how things are done in [`DatetimeIndex._maybe_cast_slice_bound`](https://github.com/pandas-dev/pandas/blob/master/pandas/core/indexes/datetimes.py#L1666) in pandas.",2018-05-21T01:04:01Z,2018-05-21T10:51:16Z,2018-05-21T08:02:35Z,2018-05-21T08:02:35Z,48d55eea052fec204b843babdc81c258f3ed5ce1,,,0,98061ea8813ac0d701174e5d9ebc7bbfaa24b655,585b9a7913d98e26c28b4f1da599c1c6db551362,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2166, 202609913,MDExOlB1bGxSZXF1ZXN0MjAyNjA5OTEz,2301,closed,0,WIP Add a CFTimeIndex-enabled xr.cftime_range function,6628425," - [x] Closes #2142 - [x] Tests added (for all bug fixes or enhancements) - [x] Tests passed (for all non-documentation changes) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) I took the approach first discussed [here](https://github.com/pydata/xarray/pull/1252#issuecomment-380593243) by @shoyer and followed pandas by creating simplified offset classes for use with cftime objects to implement a `CFTimeIndex`-enabled `cftime_range` function. I still may clean things up a bit and add a few more tests, but I wanted to post this in its current state to show some progress, as I think it is more or less working. I will try to ping folks when it is ready for a more detailed review. 
Here are a few examples: ``` In [1]: import xarray as xr In [2]: xr.cftime_range('2000-02-01', '2002-05-05', freq='3M', calendar='noleap') Out[2]: CFTimeIndex([2000-02-28 00:00:00, 2000-05-31 00:00:00, 2000-08-31 00:00:00, 2000-11-30 00:00:00, 2001-02-28 00:00:00, 2001-05-31 00:00:00, 2001-08-31 00:00:00, 2001-11-30 00:00:00, 2002-02-28 00:00:00], dtype='object') In [3]: xr.cftime_range('2000-02-01', periods=4, freq='3A-JUN', calendar='noleap') Out[3]: CFTimeIndex([2000-06-30 00:00:00, 2003-06-30 00:00:00, 2006-06-30 00:00:00, 2009-06-30 00:00:00], dtype='object') In [4]: xr.cftime_range(end='2000-02-01', periods=4, freq='3A-JUN') Out[4]: CFTimeIndex([1990-06-30 00:00:00, 1993-06-30 00:00:00, 1996-06-30 00:00:00, 1999-06-30 00:00:00], dtype='object') ``` Hopefully the offset classes defined here would also be useful for implementing things like `resample` for `CFTimeIndex` objects (#2191) and `CFTimeIndex.shift` (#2244).",2018-07-19T16:04:10Z,2018-09-19T20:24:51Z,2018-09-19T20:24:40Z,2018-09-19T20:24:40Z,5b87b6e2f159b827f739e12d4faae57a0b6f6178,,,0,19c1dfe6c243d6d52ff381fe3b1111729dd1cc2d,e5ae4088f3512eb805b13ea138087350b8180d69,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2301, 217487623,MDExOlB1bGxSZXF1ZXN0MjE3NDg3NjIz,2431,closed,0,Add CFTimeIndex.shift,6628425," - [x] Closes #2244 - [x] Tests added (for all bug fixes or enhancements) - [x] Tests passed (for all non-documentation changes) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) ",2018-09-23T01:42:25Z,2018-10-02T15:34:49Z,2018-10-02T14:44:30Z,2018-10-02T14:44:30Z,8fb57f7b9ff683225650a928b8d7d287d8954e79,,,0,5e70b3bdf28dd933eb9bb61560d01a139787b579,f9c4169150286fa1aac020ab965380ed21fe1148,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2431, 217531049,MDExOlB1bGxSZXF1ZXN0MjE3NTMxMDQ5,2434,closed,0,Enable use of cftime.datetime coordinates with differentiate and interp,6628425," - [x] Tests added (for all bug fixes or enhancements) - [x] Tests passed (for all non-documentation changes) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) As discussed in https://github.com/pydata/xarray/pull/2398#pullrequestreview-156804917, this enables the use of `differentiate` and `interp` on DataArrays/Datasets with `cftime.datetime` coordinates.",2018-09-23T21:02:36Z,2018-09-28T13:45:44Z,2018-09-28T13:44:55Z,2018-09-28T13:44:55Z,c2b09d697c741b5d6ddede0ba01076c0cb09cf19,,,0,fd8f92f0080cfd954bb9d03faabde3790c323d8c,96dde664eda26a76f934151dd10dc02f6cb0000b,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2434, 219154499,MDExOlB1bGxSZXF1ZXN0MjE5MTU0NDk5,2448,closed,0,Fix FutureWarning resulting from CFTimeIndex.date_type,6628425,"With the latest version of pandas, checking the `date_type` of a CFTimeIndex produces a FutureWarning: ``` In [1]: import xarray as xr In [2]: times = xr.cftime_range('2000', periods=5) In [3]: times.date_type /Users/spencerclark/xarray-dev/xarray/xarray/coding/cftimeindex.py:161: FutureWarning: CFTimeIndex.data is deprecated and will be removed in a future version if self.data: Out[3]: cftime._cftime.DatetimeProlepticGregorian ``` I think it was a typo to begin with to use `self.data` in 
`cftimeindex.get_date_type` (my mistake). Here I switch to using `self._data`, which is used elsewhere when internally referencing values of the index.",2018-09-29T15:48:16Z,2018-09-30T13:17:11Z,2018-09-30T13:16:49Z,2018-09-30T13:16:49Z,f9c4169150286fa1aac020ab965380ed21fe1148,,,0,89d190b910c8d526a234ea2f9d1972a156e1a848,23d1cda3b7da5c73a5f561a5c953b50beaa2bfe6,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2448, 220516773,MDExOlB1bGxSZXF1ZXN0MjIwNTE2Nzcz,2464,closed,0,Clean up _parse_array_of_cftime_strings,6628425,"Per @shoyer's comment, https://github.com/pydata/xarray/pull/2431#discussion_r221976257, this cleans up `_parse_array_of_cftime_strings`, making it robust to multi-dimensional arrays in the process.",2018-10-04T21:02:29Z,2018-10-05T11:10:46Z,2018-10-05T08:02:18Z,2018-10-05T08:02:18Z,3cef8d730d5bbd699a393fa15266064ebb9849e2,,,0,346d2e1b0d9851cf25519bec97fcd621eea81292,0f70a876759197388d32d6d9f0317f0fe63e0336,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2464, 222650815,MDExOlB1bGxSZXF1ZXN0MjIyNjUwODE1,2485,closed,0,Improve arithmetic operations involving CFTimeIndexes and TimedeltaIndexes,6628425," - [x] Closes #2484 - [x] Tests added (for all bug fixes or enhancements) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API",2018-10-13T13:41:18Z,2018-10-18T18:22:44Z,2018-10-17T04:00:57Z,2018-10-17T04:00:57Z,7cab33a1335cc2cbeb93090145a7f6d4c25a1692,,,0,dee6fe87c526280dac21ca09002780f06bac7253,4bad455a801e91b329794895afa0040c868ff128,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2485, 226098736,MDExOlB1bGxSZXF1ZXN0MjI2MDk4NzM2,2515,closed,0,Remove Dataset.T from api-hidden.rst,6628425,"Just a minor followup to #2509 to remove `Dataset.T` from the documentation. ",2018-10-26T13:29:46Z,2018-10-26T14:52:22Z,2018-10-26T14:50:35Z,2018-10-26T14:50:35Z,b622c5e7da928524ef949d9e389f6c7f38644494,,,0,46955427c2b8fecdaf8d469dce46b0e2b767d59c,5940100761478604080523ebb1291ecff90e779e,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2515, 226137875,MDExOlB1bGxSZXF1ZXN0MjI2MTM3ODc1,2516,closed,0,Switch enable_cftimeindex to True by default,6628425,"As discussed in #2437 and #2505, this sets the option `enable_cftimeindex` to `True` by default. - [x] Fully documented, including `whats-new.rst` for all changes. 
",2018-10-26T15:26:31Z,2018-11-01T17:52:45Z,2018-11-01T05:04:25Z,2018-11-01T05:04:25Z,656f8bd05e44880c21c1ad56a03cfd1b4d0f38ee,,,0,6d08d3b5fa5337bc1080b518544bfc1306b232ac,6d55f99905d664ef73cb708cfe8c52c2c651e8dc,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2516, 226328649,MDExOlB1bGxSZXF1ZXN0MjI2MzI4NjQ5,2519,closed,0,Fix bug in encode_cf_datetime,6628425," - [x] Closes #2272 - [x] Tests added (for all bug fixes or enhancements) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) ",2018-10-27T22:28:37Z,2018-10-28T01:30:00Z,2018-10-28T00:39:00Z,2018-10-28T00:38:59Z,c2a6902f090e063692c53e1dacd6c20e584d8e80,,,0,50042a83dbde8a7a59167b6b474520149cd2d1f3,2f0096cfab62523f26232bedf3debaba5f58d337,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2519, 226363179,MDExOlB1bGxSZXF1ZXN0MjI2MzYzMTc5,2522,closed,0,Remove tests where results change in cftime 1.0.2.1,6628425," - [x] Closes #2521 (remove if there is no corresponding issue, which should only be the case for minor changes) `cftime` version 1.0.2.1 (currently only installed on Windows, because it hasn't appeared on conda-forge yet) includes some changes that improve the precision of datetime arithmetic, which causes some results of `infer_datetime_units` to change. These changes aren't really a concern, because it doesn't impact our ability to round-trip dates; it just changes the units dates are encoded with in some cases. For that reason I've just deleted the tests where the answers change across versions. ",2018-10-28T12:25:38Z,2018-10-30T01:58:15Z,2018-10-30T01:00:43Z,2018-10-30T01:00:43Z,3176d8a241ff2bcfaa93536a59497c637358b022,,,0,aefbdbcf644443d60a23e353713c186e1a64d08a,c2a6902f090e063692c53e1dacd6c20e584d8e80,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2522, 228322873,MDExOlB1bGxSZXF1ZXN0MjI4MzIyODcz,2543,closed,0,Remove old-style resample example in documentation,6628425,Minor follow-up to #2541,2018-11-05T11:37:47Z,2018-11-05T17:22:52Z,2018-11-05T16:46:30Z,2018-11-05T16:46:30Z,70f3b1cb251798335099ccdcca27ac85c70e6449,,,0,cc9ab3a6b4ba27d7764d1615e4fdde13edabfc75,421be442041e6dbaa47934cb223cb28dd2b37e53,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2543, 237620828,MDExOlB1bGxSZXF1ZXN0MjM3NjIwODI4,2599,closed,0,Add dayofyear and dayofweek accessors to CFTimeIndex,6628425," - [x] Closes #2597 - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API ",2018-12-11T10:17:04Z,2018-12-11T19:29:13Z,2018-12-11T19:28:31Z,2018-12-11T19:28:31Z,5d8ef5f885f7dc1cff5a34ab0e0aec1b4c2e3798,,,0,cd2238eb0e217fce8b10be6787fa4eb614e08238,53746c962701a864255f15e69e5ab5fec4cf908c,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2599, 238361903,MDExOlB1bGxSZXF1ZXN0MjM4MzYxOTAz,2604,closed,0,Update cftime version in doc environment,6628425,"As mentioned in https://github.com/pydata/xarray/issues/2597#issuecomment-446151329, the `dayofyr` and `dayofwk` attributes of `cftime.datetime` objects do not always work in versions of cftime prior to 1.0.2. 
This issue comes up in the [latest doc build](http://xarray.pydata.org/en/latest/time-series.html#non-standard-calendars-and-dates-outside-the-timestamp-valid-range): This updates the documentation environment to use the most recent version (1.0.3.4), which should fix things.",2018-12-13T11:52:02Z,2018-12-13T17:12:38Z,2018-12-13T17:12:38Z,2018-12-13T17:12:38Z,cbb32e16079ad56555ffa816cd880fb2ef803315,,,0,3ed7c9015be2a6b02691dc628427077b0b1a803b,82789bc6f72a76d69ace4bbabd00601e28e808da,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2604, 239112553,MDExOlB1bGxSZXF1ZXN0MjM5MTEyNTUz,2613,closed,0,Remove tz argument in cftime_range,6628425,"This was caught by @jwenfai in #2593. I hope no one was inadvertently trying to use this argument before. Should this need a what's new entry?",2018-12-17T11:32:10Z,2018-12-18T19:21:57Z,2018-12-18T17:21:36Z,2018-12-18T17:21:35Z,a4c9ab5b5044801d2656e6e5527dcf21bd2dc356,,,0,9db3a2443a4f282e7d68e9e24ecf3b07cdc05c03,f8cced75f718ca0ad278224cf4b09bd42f5cd999,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2613, 240772434,MDExOlB1bGxSZXF1ZXN0MjQwNzcyNDM0,2630,closed,0,Fix failure in time encoding for pandas < 0.21.1,6628425," - [x] Closes #2623 - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API This is related to a bug fixed in https://github.com/pandas-dev/pandas/pull/18020#issuecomment-340477318 (this should return a `TimedeltaIndex`): ``` In [2]: times = pd.date_range('2000', periods=3) In [3]: times - np.datetime64('2000-01-01') Out[3]: DatetimeIndex(['1970-01-01', '1970-01-02', '1970-01-03'], dtype='datetime64[ns]', freq='D') ``` Subtracting a `Timestamp` object seems to work in all versions: ``` In [4]: times - pd.Timestamp('2000-01-01') Out[4]: TimedeltaIndex(['0 days', '1 days', '2 days'], dtype='timedelta64[ns]', freq=None) ```",2018-12-24T13:03:42Z,2018-12-24T15:58:21Z,2018-12-24T15:58:03Z,2018-12-24T15:58:03Z,7fcb80f9865a7ade1b9c2f3d48bf0d31d6672bdb,,,0,81288daeecb2e5bf7bd9979bb00047e3d72304bd,b5059a538ee2efda4d753cc9a49f8c09cd026c19,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2630, 240872035,MDExOlB1bGxSZXF1ZXN0MjQwODcyMDM1,2633,closed,0,Fix dayofweek and dayofyear attributes from dates generated by cftime_range,6628425," - [x] Tests added It turns out there was a remaining bug in cftime (https://github.com/Unidata/cftime/issues/106) that impacted the results of the `dayofwk` and `dayofyr` attributes of cftime objects generated by their `replace` method, which we use when parsing dates from strings, and in some offset arithmetic. A workaround is to add a `dayofwk=-1` argument to each `replace` call where the `dayofwk` or `dayofyr` would be expected to change. I've fixed this bug upstream in cftime (https://github.com/Unidata/cftime/pull/108), but it will only be available in a future version. Would it be appropriate to use this workaround in xarray? 
This would fix [this doc page](http://xarray.pydata.org/en/latest/time-series.html#non-standard-calendars-and-dates-outside-the-timestamp-valid-range) for instance: ",2018-12-25T12:57:13Z,2018-12-28T22:55:55Z,2018-12-28T19:04:50Z,2018-12-28T19:04:50Z,a8e5002ab616e43f2e1b19a5963475a8275b0220,,,0,30d9d074d7cc83c51e4b118757cc4dd45b11812a,2667deb74a30dc3bd88752a3ce5da590cf7ddd48,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2633, 241653161,MDExOlB1bGxSZXF1ZXN0MjQxNjUzMTYx,2640,closed,0, Use built-in interp for interpolation with resample,6628425," - [x] Closes #2197 - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API My main goal with this was to help out with #2593 (xarray's built-in interpolation method is compatible with cftime coordinates, so this refactor would simplify things there). While doing this I realized that I could also add the simple bug-fix for #2197. cc: @jwenfai",2019-01-01T22:09:44Z,2019-01-03T01:18:06Z,2019-01-03T01:18:06Z,2019-01-03T01:18:06Z,49731d438e261073ddd71269e829c77418e465e9,,,0,dfae861e76e9c49d493102208213258dbad7efda,11e6aac859a12a9ffda66bbf5963e545314257e0,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2640, 242386020,MDExOlB1bGxSZXF1ZXN0MjQyMzg2MDIw,2651,closed,0,Convert ref_date to UTC in encode_cf_datetime,6628425," - [x] Closes #2649 - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API I *think* this should be an appropriate fix for #2649, but I'd appreciate input from those who are more experienced dealing with timezones in NumPy/pandas. My understanding is that NumPy dates are stored as UTC and do not carry any timezone information. Therefore converting the `ref_date` with `tz_convert(None)` here, which converts it to UTC and removes the timezone information, should be appropriate for encoding.",2019-01-04T22:10:21Z,2019-01-15T18:55:50Z,2019-01-05T19:06:55Z,2019-01-05T19:06:54Z,85f88e7ac363c55b77375af93ebfc8c15b75c129,,,0,1e1ddb299a267c5a810de27f8f26ddae3daada36,06244df57cd910af4e85506fe067291888035155,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2651, 242458104,MDExOlB1bGxSZXF1ZXN0MjQyNDU4MTA0,2654,closed,0,Improve test for #2649,6628425,"Currently, while we indeed do always decode to UTC, I'm not sure how well we test that. In addition this tests both the `np.datetime64` and `cftime.datetime` pathways.",2019-01-05T20:07:36Z,2019-01-06T00:56:00Z,2019-01-06T00:55:22Z,2019-01-06T00:55:22Z,dba299befbdf19b02612573b218bcc1e97d4e010,,,0,a953dfb2afd9e27643fea7a9a1532c7e7fd00935,85f88e7ac363c55b77375af93ebfc8c15b75c129,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2654, 244469227,MDExOlB1bGxSZXF1ZXN0MjQ0NDY5MjI3,2672,closed,0,Enable subtracting a scalar cftime.datetime object from a CFTimeIndex,6628425," - [x] Closes #2671 - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API ",2019-01-14T14:47:50Z,2019-01-30T16:45:10Z,2019-01-30T16:45:10Z,2019-01-30T16:45:10Z,fd2552a0f2d837c43085bc0c5d5da428771b8989,,,0,f73de88781d8409ff2168ebcf56a45d6c13f71a3,e8bf4bf9a744148f1f6586cabe7f5c5ef6e9bf26,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2672, 249603702,MDExOlB1bGxSZXF1ZXN0MjQ5NjAzNzAy,2734,closed,0,dropna() for a Series indexed by a CFTimeIndex,6628425,"Thanks for the suggestion, @shoyer. 
- [x] Closes #2688 - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API cc: @jwenfai",2019-02-01T13:29:40Z,2019-02-16T02:17:05Z,2019-02-02T06:56:12Z,2019-02-02T06:56:12Z,a1ff90be63667ac4384ec74e82406dbcd1e05165,,,0,655b2c26c4fd1c32e481432375c696c519c1985e,d634f64c818d84dfc6fcc0f7fef81e4bb2094540,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2734, 251786295,MDExOlB1bGxSZXF1ZXN0MjUxNzg2Mjk1,2759,closed,0,Add use_cftime option to open_dataset,6628425," Based on @shoyer's suggestion in https://github.com/pydata/xarray/issues/2754#issuecomment-461983092. - [x] Closes #1263; Closes #2754 - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API ",2019-02-11T02:05:18Z,2019-02-19T20:47:30Z,2019-02-19T20:47:26Z,2019-02-19T20:47:26Z,612d390f925e5490314c363e5e368b2a8bd5daf0,,,0,7b0911bd26b00c71d855121e315f26c019e1834c,57cd76d7521526a39a6e94eeacf1e40ef7b974b6,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2759, 252873333,MDExOlB1bGxSZXF1ZXN0MjUyODczMzMz,2771,closed,0,Use DatetimeGregorian when calendar='standard' in cftime_range instead of DatetimeProlepticGregorian,6628425," - [x] Closes #2761 - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API ",2019-02-13T22:37:55Z,2019-02-15T21:58:56Z,2019-02-15T21:58:16Z,2019-02-15T21:58:16Z,cd8e370e63f82deeaf4fc190f5c1d90463067368,,,0,9c93f14bd1639dbcb605eca3ec6308804b9cf9bc,17fa64f5314aa898f262a73fdc00d228ec380968,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2771, 254372718,MDExOlB1bGxSZXF1ZXN0MjU0MzcyNzE4,2778,closed,0,Add support for cftime.datetime coordinates with coarsen,6628425," - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API For now I've held off on making these changes dask-compatible (I could do it, but I'm not sure it is worth the extra complexity).",2019-02-19T19:06:17Z,2019-03-06T19:48:10Z,2019-03-06T19:47:47Z,2019-03-06T19:47:47Z,c770eec39c401d49d01ec87c5c8499893da08cb5,,,0,04949d0910e6061e35939a18c870acdd1c685457,57cd76d7521526a39a6e94eeacf1e40ef7b974b6,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2778, 268343135,MDExOlB1bGxSZXF1ZXN0MjY4MzQzMTM1,2879,closed,0,Reduce length of cftime resample tests,6628425,"The main issue is that we were resampling the same time indexes across a large range of frequencies, in some cases producing very long results, e.g. resampling an index that spans 27 years to a frequency of 12 hours. This modifies the primary test so that it constructs time indexes whose ranges are based on the frequencies we resample to. Now in total the tests in `test_cftimeindex_resample.py` take around 6 seconds. @jwenfai I did some coverage analysis offline, and these tests produce the same coverage that we had before (I found it necessary to be sure to test cases where the reference index had either a shorter or longer frequency than the resample frequency). Do you think what I have here is sufficient? I think we could potentially shorten things even more, but I'm not sure if it's worth the effort. - [x] Closes #2874 See below for the new profiling results; now the longest cftime tests are no longer associated with resample. ``` $ pytest -k cftime --durations=50 ... 
0.18s call xarray/tests/test_backends.py::TestScipyInMemoryData::test_roundtrip_cftime_datetime_data 0.11s call xarray/tests/test_backends.py::TestScipyFilePath::test_roundtrip_cftime_datetime_data 0.10s call xarray/tests/test_backends.py::TestNetCDF4Data::test_roundtrip_cftime_datetime_data 0.09s call xarray/tests/test_backends.py::TestNetCDF4ClassicViaNetCDF4Data::test_roundtrip_cftime_datetime_data 0.09s call xarray/tests/test_backends.py::TestNetCDF4ViaDaskData::test_roundtrip_cftime_datetime_data 0.08s teardown xarray/tests/test_cftime_offsets.py::test_add_year_end_onOffset[julian-(2, 12)-()--(1, 12)-()] 0.06s call xarray/tests/test_backends.py::TestNetCDF3ViaNetCDF4Data::test_roundtrip_cftime_datetime_data 0.06s call xarray/tests/test_backends.py::TestGenericNetCDFData::test_roundtrip_cftime_datetime_data 0.05s call xarray/tests/test_backends.py::TestScipyFileObject::test_roundtrip_cftime_datetime_data 0.04s call xarray/tests/test_conventions.py::TestCFEncodedDataStore::test_roundtrip_cftime_datetime_data 0.03s call xarray/tests/test_dataset.py::test_differentiate_cftime[True] 0.03s call xarray/tests/test_dataset.py::test_trapz_datetime[cftime-True] 0.02s call xarray/tests/test_coding_times.py::test_contains_cftime_datetimes_dask_3d[standard] 0.02s call xarray/tests/test_backends.py::test_use_cftime_standard_calendar_default_out_of_range[2500-gregorian] 0.02s call xarray/tests/test_dataset.py::test_differentiate_cftime[False] 0.02s call xarray/tests/test_cftimeindex_resample.py::test_resample[longer_da_freq-24-right-None-4A-MAY] 0.02s call xarray/tests/test_backends.py::test_use_cftime_standard_calendar_default_in_range[gregorian] 0.02s call xarray/tests/test_cftimeindex_resample.py::test_resample[longer_da_freq-24-None-right-11Q-JUN] 0.02s call xarray/tests/test_backends.py::test_use_cftime_standard_calendar_default_out_of_range[2500-proleptic_gregorian] 0.02s call xarray/tests/test_backends.py::test_use_cftime_true[1500-gregorian] 0.02s call xarray/tests/test_backends.py::test_use_cftime_true[2500-proleptic_gregorian] 0.01s call xarray/tests/test_backends.py::test_use_cftime_true[2000-gregorian] 0.01s call xarray/tests/test_cftimeindex_resample.py::test_resample[longer_da_freq-31-None-right-4A-MAY] 0.01s call xarray/tests/test_backends.py::test_use_cftime_standard_calendar_default_out_of_range[2500-standard] 0.01s call xarray/tests/test_backends.py::test_use_cftime_true[1500-julian] 0.01s call xarray/tests/test_cftimeindex_resample.py::test_resample[longer_da_freq-31-left-right-4A-MAY] 0.01s call xarray/tests/test_cftimeindex_resample.py::test_resample[longer_da_freq-24-right-None-11Q-JUN] 0.01s call xarray/tests/test_cftimeindex_resample.py::test_resample[longer_da_freq-24-None-right-4A-MAY] 0.01s call xarray/tests/test_cftimeindex_resample.py::test_resample[longer_da_freq-31-left-None-7M] 0.01s call xarray/tests/test_cftimeindex_resample.py::test_resample[longer_da_freq-24-left-None-4A-MAY] 0.01s call xarray/tests/test_cftimeindex_resample.py::test_resample[longer_da_freq-24-left-right-4A-MAY] 0.01s call xarray/tests/test_cftimeindex_resample.py::test_resample[longer_da_freq-31-None-None-11Q-JUN] 0.01s call xarray/tests/test_cftimeindex_resample.py::test_resample[longer_da_freq-24-right-right-4A-MAY] 0.01s call xarray/tests/test_backends.py::test_use_cftime_standard_calendar_default_out_of_range[1500-proleptic_gregorian] 0.01s call xarray/tests/test_cftimeindex_resample.py::test_resample[longer_da_freq-24-left-None-11Q-JUN] 0.01s call 
xarray/tests/test_backends.py::test_use_cftime_true[2500-julian] 0.01s call xarray/tests/test_backends.py::test_use_cftime_standard_calendar_default_out_of_range[1500-gregorian] 0.01s call xarray/tests/test_backends.py::test_use_cftime_true[1500-proleptic_gregorian] 0.01s call xarray/tests/test_cftimeindex_resample.py::test_resample[longer_da_freq-31-left-right-11Q-JUN] 0.01s call xarray/tests/test_backends.py::test_use_cftime_true[2000-standard] 0.01s call xarray/tests/test_backends.py::test_use_cftime_true[2500-standard] 0.01s call xarray/tests/test_cftimeindex_resample.py::test_resample[longer_da_freq-31-None-None-4A-MAY] 0.01s call xarray/tests/test_cftimeindex_resample.py::test_resample[longer_da_freq-31-right-right-11Q-JUN] 0.01s call xarray/tests/test_cftimeindex_resample.py::test_resample[longer_da_freq-24-right-right-7M] 0.01s call xarray/tests/test_backends.py::test_use_cftime_true[2500-gregorian] 0.01s call xarray/tests/test_cftimeindex_resample.py::test_resample[longer_da_freq-24-left-right-7M] 0.01s call xarray/tests/test_backends.py::test_use_cftime_true[2000-proleptic_gregorian] 0.01s call xarray/tests/test_cftimeindex_resample.py::test_resample[longer_da_freq-31-right-None-7M] 0.01s call xarray/tests/test_cftimeindex_resample.py::test_resample[longer_da_freq-31-None-right-7M] 0.01s call xarray/tests/test_cftimeindex_resample.py::test_resample[longer_da_freq-31-left-None-11Q-JUN] ```",2019-04-08T13:44:50Z,2019-04-11T11:42:16Z,2019-04-11T11:42:09Z,2019-04-11T11:42:09Z,b9a920e1a9012e88719cc96e8113bb877279c854,,,0,1e556222d407ba15d84500df9a1886505c1c5a06,3435b03de218f54a55eb72dff597bb47b0f407cb,MEMBER,,13221727,https://github.com/pydata/xarray/pull/2879, 332766751,MDExOlB1bGxSZXF1ZXN0MzMyNzY2NzUx,3450,closed,0,Remove outdated code related to compatibility with netcdftime,6628425," Per https://github.com/pydata/xarray/pull/3431#discussion_r337620810, this removes outdated code leftover from the netcdftime -> cftime transition. Currently the [minimum version of netCDF4 that xarray tests against](https://github.com/dcherian/xarray/blob/fa9b644dd3d41d5bedb4b040d71f101590e48d11/ci/requirements/py36-min-all-deps.yml#L30) is 1.4, which does not include netcdftime, and instead specifies cftime as a required dependency. - [x] Passes `black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API ",2019-10-26T13:24:38Z,2019-10-29T15:30:55Z,2019-10-29T15:30:55Z,2019-10-29T15:30:55Z,cb5eef1ad17e36626e2556bc2cfaf5c74aedf807,,,0,c820b2e5a89e5db63c8b87e7137ba9e63c1f24e8,fb0cf7b5fe56519a933ffcecbce9e9327fe236a6,MEMBER,,13221727,https://github.com/pydata/xarray/pull/3450, 341779110,MDExOlB1bGxSZXF1ZXN0MzQxNzc5MTEw,3543,closed,0,Minor fix to combine_by_coords to allow for the combination of CFTimeIndexes separated by large time intervals,6628425," This is a possible fix for the issue @mathause described in https://github.com/pydata/xarray/issues/3535#issuecomment-554317768. @TomNicholas does this seem like a safe change to make in `combine_by_coords`? - [x] Closes #3535 - [x] Tests added - [x] Passes `black . && mypy . 
&& flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API ",2019-11-16T18:20:57Z,2019-12-07T20:38:01Z,2019-12-07T20:38:00Z,2019-12-07T20:38:00Z,1c446d374e81afcd174a6a2badda9121d2d776c0,,,0,ed43f21deac39108e0a986ebb2cb0648a0e1c78c,cafcaeea897894e3a2f44a38bd33c50a48c86215,MEMBER,,13221727,https://github.com/pydata/xarray/pull/3543, 356084487,MDExOlB1bGxSZXF1ZXN0MzU2MDg0NDg3,3652,closed,0,Use encoding['dtype'] over data.dtype when possible within CFMaskCoder.encode,6628425," This uses `encoding['dtype']` over `data.dtype` when possible within `CFMaskCoder.encode` to decide what type to cast `encoding['missing_value']` or `encoding['_FillValue']` to; this is one way to fix #3624. Another possible way would be to ensure the times have the proper `dtype` coming from `CFDatetimeCoder.encode`. I'm not sure what is the preferred solution. cc: @andersy005, @spencerahill - [x] Closes #3624 - [x] Tests added - [x] Passes `black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API ",2019-12-22T13:05:18Z,2020-01-15T15:23:41Z,2020-01-15T15:22:30Z,2020-01-15T15:22:30Z,99594051ef591f12b4b78a8b24136da46d0bf28f,,,0,a46efa0787b74475b2fb0a4fb445cf0c327b705c,e0fd48052dbda34ee35d2491e4fe856495c9621b,MEMBER,,13221727,https://github.com/pydata/xarray/pull/3652, 361832970,MDExOlB1bGxSZXF1ZXN0MzYxODMyOTcw,3688,closed,0,Fix test_cf_datetime_nan under pandas master,6628425," This fixes `test_cf_datetime_nan` for upcoming releases of pandas. See failure class (2) reported in #3673. - [x] Tests added - [x] Passes `black . && mypy . && flake8` ",2020-01-12T14:01:50Z,2020-01-13T16:36:33Z,2020-01-13T16:31:38Z,2020-01-13T16:31:38Z,59d3ba5e938bafb4a1981c1a56d42aa31041df0a,,,0,a4ea1455307f992154b97c4072c7a4eec915d0a5,1689db493f10262555196f658c52e370aacb4a33,MEMBER,,13221727,https://github.com/pydata/xarray/pull/3688, 373655388,MDExOlB1bGxSZXF1ZXN0MzczNjU1Mzg4,3764,closed,0,Fix CFTimeIndex-related errors stemming from updates in pandas,6628425," - [x] Closes #3751 - [x] Tests added - [x] Passes `isort -rc . && black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API This fixes the errors identified when #3751 was created by allowing one to subtract a `pd.Index` of `cftime.datetime` objects from a `CFTimeIndex`. 
Some new errors have come up too (not associated with any updates I made here), which I still need to work on identifying the source of: ``` ____________________________ test_indexing_in_series_getitem[365_day] _____________________________ series = 0001-01-01 00:00:00 1 0001-02-01 00:00:00 2 0002-01-01 00:00:00 3 0002-02-01 00:00:00 4 dtype: int64 index = CFTimeIndex([0001-01-01 00:00:00, 0001-02-01 00:00:00, 0002-01-01 00:00:00, 0002-02-01 00:00:00], dtype='object') scalar_args = [cftime.DatetimeNoLeap(0001-01-01 00:00:00)] range_args = ['0001', slice('0001-01-01', '0001-12-30', None), slice(None, '0001-12-30', None), slice(cftime.DatetimeNoLeap(0001-01...:00), cftime.DatetimeNoLeap(0001-12-30 00:00:00), None), slice(None, cftime.DatetimeNoLeap(0001-12-30 00:00:00), None)] @requires_cftime def test_indexing_in_series_getitem(series, index, scalar_args, range_args): for arg in scalar_args: > assert series[arg] == 1 test_cftimeindex.py:597: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../../../pandas/pandas/core/series.py:884: in __getitem__ return self._get_with(key) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = 0001-01-01 00:00:00 1 0001-02-01 00:00:00 2 0002-01-01 00:00:00 3 0002-02-01 00:00:00 4 dtype: int64 key = cftime.DatetimeNoLeap(0001-01-01 00:00:00) def _get_with(self, key): # other: fancy integer or otherwise if isinstance(key, slice): # _convert_slice_indexer to determing if this slice is positional # or label based, and if the latter, convert to positional slobj = self.index._convert_slice_indexer(key, kind=""getitem"") return self._slice(slobj) elif isinstance(key, ABCDataFrame): raise TypeError( ""Indexing a Series with DataFrame is not "" ""supported, use the appropriate DataFrame column"" ) elif isinstance(key, tuple): try: return self._get_values_tuple(key) except ValueError: # if we don't have a MultiIndex, we may still be able to handle # a 1-tuple. see test_1tuple_without_multiindex if len(key) == 1: key = key[0] if isinstance(key, slice): return self._get_values(key) raise if not isinstance(key, (list, np.ndarray, ExtensionArray, Series, Index)): > key = list(key) E TypeError: 'cftime._cftime.DatetimeNoLeap' object is not iterable ../../../pandas/pandas/core/series.py:911: TypeError ```",2020-02-11T13:22:04Z,2020-03-15T14:58:26Z,2020-03-13T06:14:41Z,2020-03-13T06:14:41Z,650a981734ce3291f5aaa68648ebde451339f28a,,,0,ec4e19f44ff587628bea3d9f7b1d2b7166d8cb80,f4ebbfef8f317205fba9edecadaac843dfa131f7,MEMBER,,13221727,https://github.com/pydata/xarray/pull/3764, 378642119,MDExOlB1bGxSZXF1ZXN0Mzc4NjQyMTE5,3792,closed,0,Enable pandas-style rounding of cftime.datetime objects,6628425," - [x] Tests added - [x] Passes `isort -rc . && black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API This is particularly useful for removing microsecond noise that can sometimes be added from decoding times via `cftime.num2date`, though also applies more generally. The methods used here for rounding dates in the integer domain are copied from pandas. On a somewhat more internal note, this adds an `asi8` property to `CFTimeIndex`, which encodes the dates as integer values representing microseconds since 1970-01-01; this encoding is made exact via the `exact_cftime_datetime_difference` function. It's possible this could be useful in other contexts. 
Some examples: ``` In [1]: import xarray as xr In [2]: times = xr.cftime_range(""2000"", periods=5, freq=""17D"") In [3]: time = xr.DataArray(times, dims=[""time""], name=""time"") In [4]: time.dt.floor(""11D"") Out[4]: array([cftime.DatetimeGregorian(1999-12-31 00:00:00), cftime.DatetimeGregorian(2000-01-11 00:00:00), cftime.DatetimeGregorian(2000-02-02 00:00:00), cftime.DatetimeGregorian(2000-02-13 00:00:00), cftime.DatetimeGregorian(2000-03-06 00:00:00)], dtype=object) Coordinates: * time (time) object 2000-01-01 00:00:00 ... 2000-03-09 00:00:00 In [5]: time.dt.ceil(""11D"") Out[5]: array([cftime.DatetimeGregorian(2000-01-11 00:00:00), cftime.DatetimeGregorian(2000-01-22 00:00:00), cftime.DatetimeGregorian(2000-02-13 00:00:00), cftime.DatetimeGregorian(2000-02-24 00:00:00), cftime.DatetimeGregorian(2000-03-17 00:00:00)], dtype=object) Coordinates: * time (time) object 2000-01-01 00:00:00 ... 2000-03-09 00:00:00 In [6]: time.dt.round(""11D"") Out[6]: array([cftime.DatetimeGregorian(1999-12-31 00:00:00), cftime.DatetimeGregorian(2000-01-22 00:00:00), cftime.DatetimeGregorian(2000-02-02 00:00:00), cftime.DatetimeGregorian(2000-02-24 00:00:00), cftime.DatetimeGregorian(2000-03-06 00:00:00)], dtype=object) Coordinates: * time (time) object 2000-01-01 00:00:00 ... 2000-03-09 00:00:00 ```",2020-02-22T23:26:50Z,2020-03-02T12:03:47Z,2020-03-02T09:41:20Z,2020-03-02T09:41:20Z,45d88fc4b2524ecb0c1236cd31767d00f72b0ea1,,,0,bc28dd21d3f94cd2a357e5848af6bc79ded3f6c0,20e6236f250d1507d22daf06d38b283a83c12e44,MEMBER,,13221727,https://github.com/pydata/xarray/pull/3792, 381351819,MDExOlB1bGxSZXF1ZXN0MzgxMzUxODE5,3808,closed,0,xfail tests due to #3751,6628425," @max-sixty @shoyer -- I agree we've let these linger far too long. This should hopefully get things back to being green. - [x] Passes `isort -rc . && black . && mypy . && flake8` ",2020-02-28T11:52:15Z,2020-02-28T13:45:33Z,2020-02-28T13:39:58Z,2020-02-28T13:39:58Z,fd08842e81576f5ea6b826e31bc2031bcca79de2,,,0,a7cb2b6106bf044a275262b214e93e45ea106de7,b6c8162724b4f828361204a8c0759b8437d80290,MEMBER,,13221727,https://github.com/pydata/xarray/pull/3808, 391855803,MDExOlB1bGxSZXF1ZXN0MzkxODU1ODAz,3874,closed,0,Re-enable tests xfailed in #3808 and fix new CFTimeIndex failures due to upstream changes,6628425," xref: #3869 ",2020-03-21T12:57:49Z,2020-03-23T00:29:58Z,2020-03-22T22:19:42Z,2020-03-22T22:19:42Z,2d0b85e84fa1d3d540ead8be04fc27703041b2cb,,,0,cbf0e11cee8c6e428215e3655f2c63962847ab25,564a291b13db73a31c15c4cf2a9ff5ec1ad2498c,MEMBER,,13221727,https://github.com/pydata/xarray/pull/3874, 395099116,MDExOlB1bGxSZXF1ZXN0Mzk1MDk5MTE2,3907,closed,0,Un-xfail test_dayofyear_after_cftime_range,6628425,"With Unidata/cftime#163 merged, this test, [which we temporarily xfailed in #3885](https://github.com/pydata/xarray/pull/3885#issuecomment-603406294), should pass with cftime master.",2020-03-28T13:55:50Z,2020-03-28T14:26:49Z,2020-03-28T14:26:46Z,2020-03-28T14:26:45Z,b084064fa62d3dedc3706c2f6c2dff90940fec27,,,0,6b991447dade5f66a34680644a65c3fca25a10e3,acf7d4157ca44f05c85a92d1b914b68738988773,MEMBER,,13221727,https://github.com/pydata/xarray/pull/3907, 398149869,MDExOlB1bGxSZXF1ZXN0Mzk4MTQ5ODY5,3930,closed,0,Only fail certain use_cftime backend tests if a specific warning occurs,6628425," - [x] Closes #3928 - [x] Passes `isort -rc . && black . && mypy . 
&& flake8` The warning we want to avoid in these tests is: ``` SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime) ``` Other warnings could occur, but shouldn't cause the tests to fail. This modifies these tests to only fail if a warning with this message occurs. The warning that is occurring seems to be stemming from [within the netcdf4-python library](https://github.com/Unidata/netcdf4-python/blob/06e58422204cc77946fa21effd31ffb9421bd139/netCDF4/_netCDF4.pyx#L1416-L1419): ``` DeprecationWarning: tostring() is deprecated. Use tobytes() instead. attributes = {k: var.getncattr(k) for k in var.ncattrs()} ```",2020-04-03T12:39:47Z,2020-04-03T23:22:29Z,2020-04-03T19:35:18Z,2020-04-03T19:35:18Z,6bccbff975d59530a8c9cb1979cfcd5c8327254e,,,0,67c0d933d5860088ec905a2491e952560f37e476,1ed4f4d6d967d8b9435368444d9af6247748a047,MEMBER,,13221727,https://github.com/pydata/xarray/pull/3930, 399063445,MDExOlB1bGxSZXF1ZXN0Mzk5MDYzNDQ1,3935,closed,0,Add a days_in_month accessor to CFTimeIndex,6628425," - [x] Tests added - [x] Passes `isort -rc . && black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API This adds a `days_in_month` accessor to CFTimeIndex, which allows for easy computation of monthly time weights for non-standard calendars: ``` In [1]: import xarray as xr In [2]: times = xr.cftime_range(""2000"", periods=24, freq=""MS"", calendar=""noleap"") In [3]: da = xr.DataArray(times, dims=[""time""]) In [4]: da.dt.days_in_month Out[4]: array([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]) Coordinates: * time (time) object 2000-01-01 00:00:00 ... 2001-12-01 00:00:00 ``` This simplifies the [""Calculating Seasonal Averages from Timeseries of Monthly Means"" example](http://xarray.pydata.org/en/stable/examples/monthly-means.html) @jhamman wrote for the docs a while back, which I've taken the liberty of updating. The ability to add this feature to xarray is thanks in large part to @huard, who added a `daysinmonth` attribute to `cftime.datetime` objects late last year: https://github.com/Unidata/cftime/pull/138.",2020-04-05T12:38:50Z,2020-04-06T14:02:58Z,2020-04-06T14:02:11Z,2020-04-06T14:02:11Z,604835603c83618dbe101331813cc6ae428d8be1,,,0,86faba51a3f047fa42c46106ab3bee7c8e7a985a,8d280cd7b1d80567cfdc6ae55165c522a5d4c2ce,MEMBER,,13221727,https://github.com/pydata/xarray/pull/3935, 433355535,MDExOlB1bGxSZXF1ZXN0NDMzMzU1NTM1,4148,closed,0,Remove outdated note from DatetimeAccessor docstring,6628425,Noticed this today. 
This note in the `DatetimeAccessor` docstring is no longer relevant; these fields have been calendar-aware for some time.,2020-06-11T22:02:43Z,2020-06-11T23:24:03Z,2020-06-11T23:23:28Z,2020-06-11T23:23:28Z,8f688ea92ae8416ecc3e18f6e060dad16960e9ac,,,0,e96f29fd47fe02002cd9fe5bb7791ad7784ff705,4071125feedee690364272e8fde9b94866f85bc7,MEMBER,,13221727,https://github.com/pydata/xarray/pull/4148, 456749560,MDExOlB1bGxSZXF1ZXN0NDU2NzQ5NTYw,4272,closed,0,Un-xfail cftime plotting tests,6628425,"Closes #4265 The change that broke these tests in NumPy master has now been relaxed to trigger a DeprecationWarning (https://github.com/numpy/numpy/pull/16943).",2020-07-26T13:23:07Z,2020-07-27T19:19:38Z,2020-07-26T19:04:55Z,2020-07-26T19:04:55Z,50dcdacc98906f5f5721bb6bbe1b9cef2425dc1e,,,0,5952e33350e13ae639daedd89fecec4d5ccf3ed9,83987b78a90c24731755d5fe7dc8c38ef2182aab,MEMBER,,13221727,https://github.com/pydata/xarray/pull/4272, 468297777,MDExOlB1bGxSZXF1ZXN0NDY4Mjk3Nzc3,4343,closed,0,Allow for datetime strings formatted following the default cftime format in cftime_range and partial datetime string indexing,6628425,"This PR adds support for datetime strings formatted following the default cftime format (YYYY-MM-DD hh:mm:ss) in `cftime_range` and partial datetime string indexing. - [x] Closes #4337 - [x] Tests added - [x] Passes `isort . && black . && mypy . && flake8` - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ",2020-08-15T11:55:11Z,2020-08-17T23:27:10Z,2020-08-17T23:27:07Z,2020-08-17T23:27:06Z,5198360c0bc28dd7528e909c6b6ccffe731474ad,,,0,f832c74e11c847dea082a33eaec74abbe941eacb,e6c111355137a123488c8dad48d473b32e9e5366,MEMBER,,13221727,https://github.com/pydata/xarray/pull/4343, 468311097,MDExOlB1bGxSZXF1ZXN0NDY4MzExMDk3,4344,closed,0,Fix overflow-related bug in computing means of cftime.datetime arrays,6628425,"Going through `pandas.TimedeltaIndex` within `duck_array_ops._to_pytimedelta` leads to overflow problems (presumably it casts to a `""timedelta64[ns]""` type internally). This PR updates the logic to directly use NumPy to do the casting, first to `""timedelta64[us]""`, then to `datetime.timedelta`. - [x] Closes #4341 - [x] Tests added - [x] Passes `isort . && black . && mypy . && flake8` - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ",2020-08-15T13:08:33Z,2020-08-15T20:05:29Z,2020-08-15T20:05:23Z,2020-08-15T20:05:23Z,26547d19d477cc77461c09b3aadd55f7eb8b4dbf,,,0,6c66b05acd2b309322aa5c6d3c2664299f872e79,e6c111355137a123488c8dad48d473b32e9e5366,MEMBER,,13221727,https://github.com/pydata/xarray/pull/4344, 486197524,MDExOlB1bGxSZXF1ZXN0NDg2MTk3NTI0,4418,closed,0,Add try/except logic to handle renaming of cftime datetime base class,6628425,"`cftime` is planning on renaming the base class for its datetime objects from `cftime.datetime` to `cftime.datetime_base`. See discussion in https://github.com/Unidata/cftime/issues/198 and https://github.com/Unidata/cftime/pull/199. This PR adds the appropriate logic in xarray to handle this in a backwards-compatible way. In the documentation in places where we refer to `` :py:class:`cftime.datetime` `` objects, I have modified things to read ``` ``cftime`` datetime ```. Being more generic is probably better in any case, as in most instances we do not explicitly mean that the base class can be used, only subclasses of the base class. cc: @jswhit - [x] Passes `isort . && black . && mypy . 
&& flake8` - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ",2020-09-13T15:25:53Z,2020-09-19T13:30:02Z,2020-09-19T13:29:14Z,,ff68600f14c23b8b5e88f1cdbea268769e2e57e3,,,0,62c993e9a93ce0d897e068fe671121eebf71c975,66ab0ae4f3aa3c461357a5a895405e81357796b1,MEMBER,,13221727,https://github.com/pydata/xarray/pull/4418, 505367432,MDExOlB1bGxSZXF1ZXN0NTA1MzY3NDMy,4517,closed,0,Eliminate use of calendar-naive cftime objects,6628425,"This is a minor cleanup to remove our use of calendar-naive cftime datetime objects (it just occurs in one test). The behavior of the `cftime.datetime` constructor is set to change in Unidata/cftime#202. By default it will create a calendar-aware datetime with a Gregorian calendar, instead of a calendar-naive datetime. In xarray we don't have a real need to use calendar-naive datetimes, so I think it's just best to remove our use of them. - [x] Passes `isort . && black . && mypy . && flake8` ",2020-10-18T00:24:22Z,2020-10-19T15:21:12Z,2020-10-19T15:20:37Z,2020-10-19T15:20:37Z,0f0a5ed8521172bd1e9e217c6fd6db8e23d5be56,,,0,00797c010f6903a4802460e706c7de34562dbd62,15537497136345ed67e9e8b089bcd4573df0b2ea,MEMBER,,13221727,https://github.com/pydata/xarray/pull/4517, 538503497,MDExOlB1bGxSZXF1ZXN0NTM4NTAzNDk3,4684,closed,0,Ensure maximum accuracy when encoding and decoding np.datetime64[ns] values,6628425," - [x] Closes #4045 - [x] Tests added - [x] Passes `isort . && black . && mypy . && flake8` - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` This PR cleans up the logic used to encode and decode times with pandas so that by default we use `int64` values in both directions for all precisions down to nanosecond. If a user specifies an encoding (or a file is read in) such that `float` values would be required, things still work as they did before. I do this mainly by following the approach I described here: https://github.com/pydata/xarray/issues/4045#issuecomment-626257580. In the process of doing this I made a few changes to `coding.times._decode_datetime_with_pandas`: - I removed the checks on the minimum and maximum dates to decode, as the issue those checks were imposed for (#975) was fixed in pandas way back in 2016 (https://github.com/pandas-dev/pandas/issues/14068). - I used an alternate approach for fixing #2002, which allows us to continue to use the optimization made in #1414 without having to cast the input array to a `float` dtype first. Note this will change the default units that are chosen for encoding times in some instances -- previously we would never default to anything more precise than seconds -- but I think this change is for the better. cc: @aldanor @hmaarrfk this overlaps a little with your work in #4400, so I'm giving you credit here too (I hope you don't mind!).",2020-12-12T21:43:57Z,2021-02-07T23:30:41Z,2021-01-03T23:39:04Z,2021-01-03T23:39:04Z,ed255736664f8f0b4ea199c8f91bffaa89522d03,,,0,2775a609edcc356cf0f5744e7c449e5aa1bd343c,0f1eb96c924bad60ea87edd9139325adabfefa33,MEMBER,,13221727,https://github.com/pydata/xarray/pull/4684, 547961795,MDExOlB1bGxSZXF1ZXN0NTQ3OTYxNzk1,4758,closed,0,Ensure maximum accuracy when encoding and decoding cftime.datetime values,6628425," - [x] Closes #4097 - [x] Tests added - [x] Passes `isort . && black . && mypy . 
&& flake8` - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` Following up on #4684, this PR makes changes to our encoding / decoding process such that `cftime.datetime` objects can be roundtripped exactly. In the process, because it made the tests cleaner to define, I added cftime offsets for millisecond and microsecond frequency as well. As I note in the what's new, exact roundtripping requires cftime of at least version 1.4.1, which included improvements to `cftime.num2date` (https://github.com/Unidata/cftime/pull/176, https://github.com/Unidata/cftime/pull/188) and `cftime.date2num` (https://github.com/Unidata/cftime/pull/178, https://github.com/Unidata/cftime/pull/225).",2021-01-04T00:47:32Z,2021-02-10T21:52:16Z,2021-02-10T21:44:26Z,2021-02-10T21:44:25Z,10f0227a1667c5ab3c88465ff1572065322cde77,,,0,725bcabb8c965f8829f2b82a245789eec0cbc0a6,46591d28d9fbbfc184aaf4075d330b1c8f070627,MEMBER,,13221727,https://github.com/pydata/xarray/pull/4758, 568814469,MDExOlB1bGxSZXF1ZXN0NTY4ODE0NDY5,4871,closed,0,Modify _encode_datetime_with_cftime for compatibility with cftime > 1.4.0,6628425,"- [x] Closes #4870 - [x] Tests added - [x] Passes `pre-commit run --all-files` - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ",2021-02-06T16:34:02Z,2021-02-07T23:12:33Z,2021-02-07T23:12:30Z,2021-02-07T23:12:30Z,46591d28d9fbbfc184aaf4075d330b1c8f070627,,,0,a27deddd7366bec64770381a6f9dd09b48105a91,ec7f628bf38b37df213fe3b5ad68d3f70824b864,MEMBER,,13221727,https://github.com/pydata/xarray/pull/4871, 577159967,MDExOlB1bGxSZXF1ZXN0NTc3MTU5OTY3,4939,closed,0,Add DataArrayCoarsen.reduce and DatasetCoarsen.reduce methods,6628425,"As suggested by @dcherian, this was quite similar to `rolling`; it was useful in particular to follow how the tests were implemented there. - [x] Closes #3741 - [x] Tests added - [x] Passes `pre-commit run --all-files` - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [x] New functions/methods are listed in `api.rst`",2021-02-21T18:49:47Z,2021-02-23T16:01:30Z,2021-02-23T16:01:27Z,2021-02-23T16:01:27Z,f554d0a487d8ee286c96002a09f00379c80bd7f4,,,0,8416383a1b427804f46a7b3c076e4b7503c3bafb,eb7e112d45a9edebd8e5fb4f873e3e6adb18824a,MEMBER,,13221727,https://github.com/pydata/xarray/pull/4939, 586244249,MDExOlB1bGxSZXF1ZXN0NTg2MjQ0MjQ5,5006,closed,0,Adapt exception handling logic in CFTimeIndex.__sub__ and __rsub__,6628425,"The exception that was raised in pandas when a `datetime.timedelta` object outside the range that could be expressed in units of nanoseconds was passed to the `pandas.TimedeltaIndex` constructor changed from an `OverflowError` to an `OutOfBoundsTimedelta` error in the development version of pandas. This PR adjusts our exception handling logic in `CFTimeIndex.__sub__` and `CFTimeIndex.__rsub__` to account for this. - [x] closes #4947
Previous versions of pandas: ```python >>> import pandas as pd; from datetime import timedelta >>> pd.TimedeltaIndex([timedelta(days=300 * 365)]) Traceback (most recent call last): File ""pandas/_libs/tslibs/timedeltas.pyx"", line 263, in pandas._libs.tslibs.timedeltas.array_to_timedelta64 TypeError: Expected unicode, got datetime.timedelta During handling of the above exception, another exception occurred: Traceback (most recent call last): File """", line 1, in File ""/Users/spencer/Software/miniconda3/envs/xarray-tests/lib/python3.7/site-packages/pandas/core/indexes/timedeltas.py"", line 157, in __new__ data, freq=freq, unit=unit, dtype=dtype, copy=copy File ""/Users/spencer/Software/miniconda3/envs/xarray-tests/lib/python3.7/site-packages/pandas/core/arrays/timedeltas.py"", line 216, in _from_sequence data, inferred_freq = sequence_to_td64ns(data, copy=copy, unit=unit) File ""/Users/spencer/Software/miniconda3/envs/xarray-tests/lib/python3.7/site-packages/pandas/core/arrays/timedeltas.py"", line 926, in sequence_to_td64ns data = objects_to_td64ns(data, unit=unit, errors=errors) File ""/Users/spencer/Software/miniconda3/envs/xarray-tests/lib/python3.7/site-packages/pandas/core/arrays/timedeltas.py"", line 1036, in objects_to_td64ns result = array_to_timedelta64(values, unit=unit, errors=errors) File ""pandas/_libs/tslibs/timedeltas.pyx"", line 268, in pandas._libs.tslibs.timedeltas.array_to_timedelta64 File ""pandas/_libs/tslibs/timedeltas.pyx"", line 221, in pandas._libs.tslibs.timedeltas.convert_to_timedelta64 File ""pandas/_libs/tslibs/timedeltas.pyx"", line 166, in pandas._libs.tslibs.timedeltas.delta_to_nanoseconds OverflowError: Python int too large to convert to C long ``` Development version of pandas: ```python >>> import pandas as pd; from datetime import timedelta >>> pd.TimedeltaIndex([timedelta(days=300 * 365)]) Traceback (most recent call last): File ""pandas/_libs/tslibs/timedeltas.pyx"", line 348, in pandas._libs.tslibs.timedeltas.array_to_timedelta64 TypeError: Expected unicode, got datetime.timedelta During handling of the above exception, another exception occurred: Traceback (most recent call last): File ""pandas/_libs/tslibs/timedeltas.pyx"", line 186, in pandas._libs.tslibs.timedeltas.delta_to_nanoseconds OverflowError: Python int too large to convert to C long The above exception was the direct cause of the following exception: Traceback (most recent call last): File """", line 1, in File ""/Users/spencer/software/pandas/pandas/core/indexes/timedeltas.py"", line 161, in __new__ tdarr = TimedeltaArray._from_sequence_not_strict( File ""/Users/spencer/software/pandas/pandas/core/arrays/timedeltas.py"", line 270, in _from_sequence_not_strict data, inferred_freq = sequence_to_td64ns(data, copy=copy, unit=unit) File ""/Users/spencer/software/pandas/pandas/core/arrays/timedeltas.py"", line 970, in sequence_to_td64ns data = objects_to_td64ns(data, unit=unit, errors=errors) File ""/Users/spencer/software/pandas/pandas/core/arrays/timedeltas.py"", line 1079, in objects_to_td64ns result = array_to_timedelta64(values, unit=unit, errors=errors) File ""pandas/_libs/tslibs/timedeltas.pyx"", line 362, in pandas._libs.tslibs.timedeltas.array_to_timedelta64 File ""pandas/_libs/tslibs/timedeltas.pyx"", line 353, in pandas._libs.tslibs.timedeltas.array_to_timedelta64 File ""pandas/_libs/tslibs/timedeltas.pyx"", line 306, in pandas._libs.tslibs.timedeltas.convert_to_timedelta64 File ""pandas/_libs/tslibs/timedeltas.pyx"", line 189, in 
pandas._libs.tslibs.timedeltas.delta_to_nanoseconds pandas._libs.tslibs.conversion.OutOfBoundsTimedelta: Python int too large to convert to C long ```
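For reference, the kind of guard involved ends up looking roughly like the sketch below (illustrative only, not the exact diff in this PR; it assumes `OutOfBoundsTimedelta` can be imported from `pandas.errors`, as it can in recent pandas versions):

```python
import pandas as pd
from pandas.errors import OutOfBoundsTimedelta


def to_timedelta_index(deltas):
    # Translate either the old OverflowError or the new OutOfBoundsTimedelta
    # into a single, more informative error when building the TimedeltaIndex.
    try:
        return pd.TimedeltaIndex(deltas)
    except (OverflowError, OutOfBoundsTimedelta) as error:
        raise ValueError(
            'The time difference exceeds the range of values that can be '
            'expressed at nanosecond resolution.'
        ) from error
```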
",2021-03-07T12:28:25Z,2021-03-07T13:22:06Z,2021-03-07T13:22:03Z,2021-03-07T13:22:03Z,b610a3c4317474b4b999c23cf66d1dc55c9b3cd6,,,0,a62bd7b23ed0410a16dace8f9193a4659d1733c5,67903ff08ec9ea1b5c259df634dc65444ae97eb6,MEMBER,,13221727,https://github.com/pydata/xarray/pull/5006, 614839003,MDExOlB1bGxSZXF1ZXN0NjE0ODM5MDAz,5154,closed,0,Catch either OutOfBoundsTimedelta or OverflowError in CFTimeIndex.__sub__ and CFTimeIndex.__rsub__,6628425,"It seems that pandas did not include the change that led to #5006 in their latest release. Perhaps it is safer to just catch either error regardless of the pandas version. - [x] Closes #5147 ",2021-04-14T00:24:34Z,2021-04-14T15:44:17Z,2021-04-14T13:27:10Z,2021-04-14T13:27:10Z,9b60f01066c1209b719ab3a3b111aa66b5fc3e26,,,0,af898e1243e656414e065530913d3ac785047397,f94de6b4504482ab206f93ec800608f2e1f47b19,MEMBER,,13221727,https://github.com/pydata/xarray/pull/5154, 617378253,MDExOlB1bGxSZXF1ZXN0NjE3Mzc4MjUz,5180,closed,0,Convert calendar to lowercase in standard calendar checks,6628425,"This fixes the issue in #5093, by ensuring that we always convert the calendar to lowercase before checking if it is one of the standard calendars in the decoding and encoding process. I've been careful to test that the calendar attribute is faithfully roundtripped despite this, uppercase letters and all. ~~I think part of the reason this went unnoticed for a while was that we could still decode times like this if cftime was installed; it is only in the case when cftime was not installed that our logic failed. This is because `cftime.num2date` already converts the calendar to lowercase internally.~~ Upon re-reading @pont-us's issue description, while it didn't cause an error, the behavior was incorrect with cftime installed too. I updated the test to check the dtype is `np.datetime64` as well. - [x] Closes #5093 - [x] Tests added - [x] Passes `pre-commit run --all-files` - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ",2021-04-17T20:44:57Z,2021-04-18T10:17:11Z,2021-04-18T10:17:08Z,2021-04-18T10:17:08Z,44f4ae11019ca9c9e7280c41d9d2fd86cf86ccce,,,0,87cf204615e5fbeebcc9f3ba4793f918ee74dfc4,c54ec94a6e4c3276eac3e2bbea3c77a040d5674a,MEMBER,,13221727,https://github.com/pydata/xarray/pull/5180, 649887380,MDExOlB1bGxSZXF1ZXN0NjQ5ODg3Mzgw,5359,closed,0,Make `kind` argument in `CFTimeIndex._maybe_cast_slice_bound` optional,6628425,"Pandas recently deprecated the `kind` argument in `Index._maybe_cast_slice_bound`, and removed its use in several internal calls: https://github.com/pandas-dev/pandas/pull/41378. This led to some errors in the CFTimeIndex tests in our upstream build. We never made use of it in `CFTimeIndex._maybe_cast_slice_bound` so the simplest fix for backwards compatibility seems to be to make it optional for now -- in previous versions of pandas it was required -- and remove it when our minimum version of pandas is at least 1.3.0. 
 - [x] Closes #5356 - [x] Passes `pre-commit run --all-files` - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ",2021-05-21T11:25:46Z,2021-05-23T09:47:03Z,2021-05-23T00:13:20Z,2021-05-23T00:13:20Z,ca72d56c213a1c47e54b12ee559f412e60fbf9b1,,,0,d2c1c0e0fef151737c15bf16f6647c2f8b59dfa2,84429bfa0856bf633011d3da671d2149d5db34bf,MEMBER,,13221727,https://github.com/pydata/xarray/pull/5359, 668619291,MDExOlB1bGxSZXF1ZXN0NjY4NjE5Mjkx,5461,closed,0,Remove `xfail` decorator from tests that depend on nc-time-axis,6628425,"nc-time-axis [version 1.3.0](https://github.com/SciTools/nc-time-axis/releases/tag/v1.3.0) was released today (thanks @bjlittle!), which includes various fixes for incompatibilities with the latest version of cftime. This means that our tests that depend on nc-time-axis should now pass. - [x] Closes #5344 ",2021-06-11T22:44:46Z,2021-06-12T12:57:55Z,2021-06-12T12:57:53Z,2021-06-12T12:57:52Z,2290a5fd8e1b2ae49a1276364c0f1c0524abbf60,,,0,9c49f45b9cd840b9b4385f11017fc8e44563594b,4434f034a36886609ac0492d3307954163ecbea6,MEMBER,,13221727,https://github.com/pydata/xarray/pull/5461, 668850436,MDExOlB1bGxSZXF1ZXN0NjY4ODUwNDM2,5463,closed,0,Explicitly state datetime units in array constructors in `test_datetime_mean`,6628425," This addresses the `test_datetime_mean` failures reported in #5366. Pandas now requires that we make sure the units of datetime arrays are specified explicitly in array constructors: https://github.com/pandas-dev/pandas/issues/36615#issuecomment-860040013. - [x] Passes `pre-commit run --all-files` ",2021-06-12T11:48:22Z,2021-06-12T13:20:33Z,2021-06-12T12:58:43Z,2021-06-12T12:58:43Z,5a14d7d398be7e0efc6d5c8920dc8886212c3b2a,,,0,e3d485978522e536e8b883dece24fdafb40ab801,4434f034a36886609ac0492d3307954163ecbea6,MEMBER,,13221727,https://github.com/pydata/xarray/pull/5463, 717134170,MDExOlB1bGxSZXF1ZXN0NzE3MTM0MTcw,5723,closed,0,Remove use of deprecated `kind` argument in `CFTimeIndex` tests,6628425," On the topic of FutureWarnings related to indexing in pandas (#5721), I noticed another kind of warning in the `CFTimeIndex` tests: ``` /Users/spencer/software/xarray/xarray/tests/test_cftimeindex.py:350: FutureWarning: 'kind' argument in get_slice_bound is deprecated and will be removed in a future version. Do not pass it. result = index.get_slice_bound(""0001"", ""left"", kind) ``` I think it's safe to silence these by removing the `kind` argument from these tests. We never used it anyway in `CFTimeIndex`. This is sort of a follow-up to #5359. - [x] Passes `pre-commit run --all-files` - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ",2021-08-21T10:49:53Z,2021-10-24T11:37:02Z,2021-10-24T09:55:33Z,2021-10-24T09:55:33Z,69dec51cfca065f2abdc9933c938c8c03e694184,,,0,a3cc075fc0f0e8690cc452c6c4adaab0f82c3f5e,214bbe09fb34496eadb4f266d3bb8c943cdae85d,MEMBER,,13221727,https://github.com/pydata/xarray/pull/5723, 720939818,MDExOlB1bGxSZXF1ZXN0NzIwOTM5ODE4,5744,closed,0,Install development version of nc-time-axis in upstream build,6628425,"I think this would be good to do anyway, but I'm also curious to see if it fixes the cftime plotting tests in #5743. 
",2021-08-27T00:49:55Z,2021-08-27T13:16:25Z,2021-08-27T12:49:33Z,2021-08-27T12:49:33Z,b34f92b1bbe0d85ac51db7b7eb5ff02431242edc,,,0,eca23315661c45bb2654a8c1ba86c7c60ae94106,4fd81b51101aceaad08570f1368ad4b50a946da5,MEMBER,,13221727,https://github.com/pydata/xarray/pull/5744, 911548008,PR_kwDOAMm_X842VR5o,6489,closed,0,Ensure datetime-like variables are left unmodified by `decode_cf_variable`,6628425," It seems rare that `decode_cf_variable` would be called on variables that contain datetime-like objects already, but in the case that it is, it seems best to let those variables pass through unmodified. - [x] Closes #6453 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst`",2022-04-17T20:45:53Z,2022-04-18T18:00:49Z,2022-04-18T15:29:19Z,2022-04-18T15:29:19Z,4b18065af6acff72f479a17bda23b1401285732f,,,0,31e47e168853c5a571607c83816cdb2d7c3d3a54,586992e8d2998751cb97b1cab4d3caa9dca116e0,MEMBER,,13221727,https://github.com/pydata/xarray/pull/6489, 934696832,PR_kwDOAMm_X843tleA,6598,closed,0,Fix overflow issue in decode_cf_datetime for dtypes <= np.uint32,6628425,"- [x] Closes #6589 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ",2022-05-12T11:14:15Z,2022-05-15T15:00:44Z,2022-05-15T14:42:32Z,2022-05-15T14:42:32Z,8de706151e183f448e1af9115770713d18e229f1,,,0,cbb72aadeba1fb267eb623a9e55647906a6668b4,6bb2b855498b5c68d7cca8cceb710365d58e6048,MEMBER,,13221727,https://github.com/pydata/xarray/pull/6598, 976982660,PR_kwDOAMm_X846O5KE,6717,closed,0,Accommodate `OutOfBoundsTimedelta` error when decoding times,6628425,"The development version of pandas raises an `OutOfBoundsTimedelta` error instead of an `OverflowError` in `pd.to_timedelta` if the timedelta cannot be represented with nanosecond precision. Therefore we must also be ready to catch that when decoding times. The `OutOfBoundsTimedelta` exception [was added](https://github.com/pandas-dev/pandas/pull/34448) in pandas version 1.1, which is prior to [our current minimum version (1.2)](https://github.com/pydata/xarray/blob/main/ci/requirements/min-all-deps.yml#L37), so it should be safe to import without a version check. - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ",2022-06-23T10:53:22Z,2022-06-24T18:48:54Z,2022-06-24T18:48:18Z,2022-06-24T18:48:18Z,6c8db5ed005e000b35ad8b6ea9080105e608e976,,,0,176f00b37ad40f55ba875f03d1d6beaafefdab2d,abad670098a48ab8f876117c6b2cf3db8aff05dc,MEMBER,,13221727,https://github.com/pydata/xarray/pull/6717, 1032111288,PR_kwDOAMm_X849hMS4,6940,closed,0,Enable taking the mean of dask-backed cftime arrays,6628425,"This was essentially enabled by @dcherian in #6556, but we did not remove the error that prevented computing the mean of a dask-backed cftime array. This PR removes that error, and adds some tests. One minor modification in `_timedelta_to_seconds` was needed for compatibility with scalar cftime arrays. This happens to address the second part of #5897, so I added a regression test for that. It seems like we decided to simply document the behavior in the first part (https://github.com/pydata/xarray/issues/5898, https://github.com/dcherian/xarray/commit/99bfe128066ec3ef1b297650a47e2dd0a45801a8), but I'm not sure if we intend to change that behavior eventually or not. 
- [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ",2022-08-21T19:24:37Z,2022-09-10T12:28:16Z,2022-09-09T16:48:19Z,2022-09-09T16:48:19Z,25537623edafd4a2f99a011ebb91ae55bccb96a2,,,0,caf11162e0b9c99948cd2e33694adf84d7896fc3,abe1e613a96b000ae603c53d135828df532b952e,MEMBER,"{""enabled_by"": {""login"": ""dcherian"", ""id"": 2448579, ""node_id"": ""MDQ6VXNlcjI0NDg1Nzk="", ""avatar_url"": ""https://avatars.githubusercontent.com/u/2448579?v=4"", ""gravatar_id"": """", ""url"": ""https://api.github.com/users/dcherian"", ""html_url"": ""https://github.com/dcherian"", ""followers_url"": ""https://api.github.com/users/dcherian/followers"", ""following_url"": ""https://api.github.com/users/dcherian/following{/other_user}"", ""gists_url"": ""https://api.github.com/users/dcherian/gists{/gist_id}"", ""starred_url"": ""https://api.github.com/users/dcherian/starred{/owner}{/repo}"", ""subscriptions_url"": ""https://api.github.com/users/dcherian/subscriptions"", ""organizations_url"": ""https://api.github.com/users/dcherian/orgs"", ""repos_url"": ""https://api.github.com/users/dcherian/repos"", ""events_url"": ""https://api.github.com/users/dcherian/events{/privacy}"", ""received_events_url"": ""https://api.github.com/users/dcherian/received_events"", ""type"": ""User"", ""site_admin"": false}, ""merge_method"": ""squash"", ""commit_title"": ""Enable taking the mean of dask-backed cftime arrays (#6940)"", ""commit_message"": ""Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>\r\nCo-authored-by: Deepak Cherian ""}",13221727,https://github.com/pydata/xarray/pull/6940, 1046326317,PR_kwDOAMm_X84-Xawt,6988,closed,0,Simplify datetime64 `dt.calendar` tests,6628425,"This PR simplifies the tests for the calendar attribute on the `dt` accessor when using a `datetime64[ns]`-dtype DataArray. Instead of creating random-valued datetime arrays, we can use arrays of zeros (i.e. 1970-01-01), since the values of the datetimes should not be relevant to these tests (only their type matters). I suspect this should address #6906, because it eliminates the need to convert to `datetime64[ns]`, though I still feel as though there is a more fundamental pandas issue lurking there. ",2022-09-05T12:39:30Z,2022-09-09T09:50:55Z,2022-09-08T23:34:44Z,2022-09-08T23:34:43Z,77d961a8c43444e16b51e9700e7805a9e7e0d190,,,0,0f9566c4f28372e4dc5ecd77196fe0a929e513c3,18454c218002e48e1643ce8e25654262e5f592ad,MEMBER,,13221727,https://github.com/pydata/xarray/pull/6988, 1081069771,PR_kwDOAMm_X85Ab9DL,7147,closed,0,Include variable name in message if `decode_cf_variable` raises an error,6628425,"- [x] Closes #7145 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` I'm not sure if there is a better way to do this, but this is one way to address #7145. The error message for the example now looks like: ``` >>> xr.decode_cf(ds) Traceback (most recent call last): File ""/Users/spencer/software/xarray/xarray/coding/times.py"", line 275, in decode_cf_datetime dates = _decode_datetime_with_pandas(flat_num_dates, units, calendar) File ""/Users/spencer/software/xarray/xarray/coding/times.py"", line 210, in _decode_datetime_with_pandas raise OutOfBoundsDatetime( pandas._libs.tslibs.np_datetime.OutOfBoundsDatetime: Cannot decode times from a non-standard calendar, 'noleap', using pandas. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File ""/Users/spencer/software/xarray/xarray/coding/times.py"", line 180, in _decode_cf_datetime_dtype result = decode_cf_datetime(example_value, units, calendar, use_cftime) File ""/Users/spencer/software/xarray/xarray/coding/times.py"", line 277, in decode_cf_datetime dates = _decode_datetime_with_cftime( File ""/Users/spencer/software/xarray/xarray/coding/times.py"", line 202, in _decode_datetime_with_cftime cftime.num2date(num_dates, units, calendar, only_use_cftime_datetimes=True) File ""src/cftime/_cftime.pyx"", line 605, in cftime._cftime.num2date File ""src/cftime/_cftime.pyx"", line 404, in cftime._cftime.cast_to_int OverflowError: time values outside range of 64 bit signed integers During handling of the above exception, another exception occurred: Traceback (most recent call last): File ""/Users/spencer/software/xarray/xarray/conventions.py"", line 523, in decode_cf_variables new_vars[k] = decode_cf_variable( File ""/Users/spencer/software/xarray/xarray/conventions.py"", line 369, in decode_cf_variable var = times.CFDatetimeCoder(use_cftime=use_cftime).decode(var, name=name) File ""/Users/spencer/software/xarray/xarray/coding/times.py"", line 688, in decode dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime) File ""/Users/spencer/software/xarray/xarray/coding/times.py"", line 190, in _decode_cf_datetime_dtype raise ValueError(msg) ValueError: unable to decode time units 'days since 0001-01-01' with ""calendar 'noleap'"". Try opening your dataset with decode_times=False or installing cftime if it is not installed. During handling of the above exception, another exception occurred: Traceback (most recent call last): File """", line 1, in File ""/Users/spencer/software/xarray/xarray/conventions.py"", line 659, in decode_cf vars, attrs, coord_names = decode_cf_variables( File ""/Users/spencer/software/xarray/xarray/conventions.py"", line 534, in decode_cf_variables raise type(e)(f""Failed to decode variable {k!r}: {e}"") ValueError: Failed to decode variable 'invalid_times': unable to decode time units 'days since 0001-01-01' with ""calendar 'noleap'"". Try opening your dataset with decode_times=False or installing cftime if it is not installed. ```",2022-10-08T17:53:23Z,2022-10-12T16:24:45Z,2022-10-12T15:25:42Z,2022-10-12T15:25:42Z,96db9f804cf6bb3ed5e333237b69cb7c47b527e3,,,0,d2e2b7f93ac18513c23c4c45796cd7c5c63743d9,9f390f50718ee94237084cbc1badb66f9a8083d6,MEMBER,,13221727,https://github.com/pydata/xarray/pull/7147, 1088466399,PR_kwDOAMm_X85A4K3f,7171,closed,0,Set `longdouble=False` in `cftime.date2num` within the date encoding context,6628425,"Currently, the default behavior of `cftime.date2num` is to return integer values when possible (i.e. when the encoding units allow), and fall back to returning float64 values when that is not possible. Recently, [cftime added the option to use float128 as the fallback dtype](https://github.com/Unidata/cftime/pull/284#issuecomment-1176098280), which enables greater potential roundtrip precision. This is through the `longdouble` flag to `cftime.date2num`, which currently defaults to `False`. It was intentionally set to `False` by default, because netCDF does not support storing float128 values in files, and so, without any changes, would otherwise break xarray's encoding procedure. The desire in cftime, however, is to eventually set this flag to `True` by default (https://github.com/Unidata/cftime/issues/297). 
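For concreteness, the kind of guard this implies looks roughly like the following (a sketch, not necessarily the exact implementation in this PR):

```python
import cftime


def encode_with_float64_fallback(dates, units, calendar):
    # Keep the float64 fallback behavior by opting out of longdouble when the
    # installed cftime accepts that keyword; older versions raise TypeError,
    # in which case we simply call date2num without it.
    try:
        return cftime.date2num(dates, units, calendar, longdouble=False)
    except TypeError:
        return cftime.date2num(dates, units, calendar)
```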
This PR makes the necessary changes in xarray to adapt to this eventual new default. Essentially if the `longdouble` argument is allowed in the user's version of `cftime.date2num`, we explicitly set it to `False` to preserve the current float64 fallback behavior within the context of encoding times. There are a few more places where `date2num` is used (some additional places in the tests, and [in `calendar_ops.py`](https://github.com/pydata/xarray/blob/93f1ba226086d5a916f54653e870a2943fe09ab7/xarray/coding/calendar_ops.py#L277)), but in those places using float128 values would not present a problem. At some point we might consider relaxing this behavior in xarray, since it is possible to store float128 values in zarr stores for example, but for the time being the simplest approach seems to be to stick with float64 for all backends (it would be complicated to have backend-specific defaults). cc: @jswhit",2022-10-16T18:20:58Z,2022-10-18T16:38:24Z,2022-10-18T16:37:57Z,2022-10-18T16:37:57Z,9df2dfca57e1c672f6faf0f7945d2f38921a4bb2,,,0,05853187cc1bb1d25cecad6c7de9c633a3ac4ec8,93f1ba226086d5a916f54653e870a2943fe09ab7,MEMBER,,13221727,https://github.com/pydata/xarray/pull/7171, 1096720529,PR_kwDOAMm_X85BXqCR,7201,closed,0,Emit a warning when converting datetime or timedelta values to nanosecond precision,6628425,"This PR addresses #7175 by converting datetime or timedelta values to nanosecond precision even if pandas does not. For the time being we emit a warning when pandas does not do the conversion, but we do (right now this is only in the development version of pandas). When things stabilize in pandas we can consider relaxing this constraint in xarray as well. This got a little bit more complicated due to the presence of timezone-aware datetimes in pandas, but hopefully the tests cover those cases now. - [x] Closes #7175 - [x] Closes #7197 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ",2022-10-23T23:17:07Z,2022-10-26T16:07:16Z,2022-10-26T16:00:33Z,2022-10-26T16:00:33Z,be6594e6e327c95cfa64c8c6c06143022d0b6743,,,0,27593b9be8fdc63e4c28304016189891337b0056,519abb7bde020f2b27cb7e3dfddec8c6eecb7722,MEMBER,,13221727,https://github.com/pydata/xarray/pull/7201, 1104395386,PR_kwDOAMm_X85B07x6,7238,closed,0,Improve non-nanosecond warning,6628425," Thanks for the feedback @hmaarrfk. Is this what you had in mind? - [x] Closes #7237 For example running this script: ```python import numpy as np import xarray as xr times = [np.datetime64(""2000-01-01"", ""us"")] var = xr.Variable([""time""], times) da = xr.DataArray(times) ``` leads to the following warnings: ``` $ python test_warning.py test_warning.py:6: UserWarning: Converting non-nanosecond precision datetime values to nanosecond precision. This behavior can eventually be relaxed in xarray, as it is an artifact from pandas which is now beginning to support non-nanosecond precision values. This warning is caused by passing non-nanosecond np.datetime64 or np.timedelta64 values to the DataArray or Variable constructor; it can be silenced by converting the values to nanosecond precision ahead of time. var = xr.Variable([""time""], times) test_warning.py:7: UserWarning: Converting non-nanosecond precision datetime values to nanosecond precision. This behavior can eventually be relaxed in xarray, as it is an artifact from pandas which is now beginning to support non-nanosecond precision values. 
This warning is caused by passing non-nanosecond np.datetime64 or np.timedelta64 values to the DataArray or Variable constructor; it can be silenced by converting the values to nanosecond precision ahead of time. da = xr.DataArray(times) ```",2022-10-30T11:44:56Z,2022-11-04T20:37:27Z,2022-11-04T20:13:19Z,2022-11-04T20:13:19Z,a744e63642e066b2c25778f40fec63fc47d15a7b,,,0,7823d04d142abbe8ba96290fbdb0cc77c1dcde73,6179d8e881947e71ec9528c65d05159ed3921563,MEMBER,,13221727,https://github.com/pydata/xarray/pull/7238, 1120402737,PR_kwDOAMm_X85Cx_0x,7284,closed,0,Enable `origin` and `offset` arguments in `resample`,6628425,"This PR enables the `origin` and `offset` arguments in `resample`. This was simple to do in the case of data indexed by a `DatetimeIndex`, but naturally required changes to our internal implementation of `resample` for data indexed by a `CFTimeIndex`. Fortunately those changes were fairly straightforward to port over from pandas. This does not do anything to address the deprecation of `base` noted in #7266, but is an important first step toward getting up to speed with the latest version of pandas, both on the `DatetimeIndex` side and the `CFTimeIndex` side. This way we will at least be able to handle that deprecation in the same way for each. - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` I think things are fairly comprehensively implemented and tested here, but I'm marking this as a draft for now as I want to see if I can reduce the number of cftime resampling tests some, which have multiplied with the addition of these new arguments. ",2022-11-13T15:23:01Z,2022-11-29T00:06:46Z,2022-11-28T23:38:52Z,2022-11-28T23:38:52Z,1083c9d3f9ff7b5b03ffb65fa0cf7876c2e73a1a,,,0,4dbf69482e3f647e7e64f8c17a816ce3970c279e,78b27ecce58c5fe74a75a11c69fd48b5a7a8da61,MEMBER,"{""enabled_by"": {""login"": ""dcherian"", ""id"": 2448579, ""node_id"": ""MDQ6VXNlcjI0NDg1Nzk="", ""avatar_url"": ""https://avatars.githubusercontent.com/u/2448579?v=4"", ""gravatar_id"": """", ""url"": ""https://api.github.com/users/dcherian"", ""html_url"": ""https://github.com/dcherian"", ""followers_url"": ""https://api.github.com/users/dcherian/followers"", ""following_url"": ""https://api.github.com/users/dcherian/following{/other_user}"", ""gists_url"": ""https://api.github.com/users/dcherian/gists{/gist_id}"", ""starred_url"": ""https://api.github.com/users/dcherian/starred{/owner}{/repo}"", ""subscriptions_url"": ""https://api.github.com/users/dcherian/subscriptions"", ""organizations_url"": ""https://api.github.com/users/dcherian/orgs"", ""repos_url"": ""https://api.github.com/users/dcherian/repos"", ""events_url"": ""https://api.github.com/users/dcherian/events{/privacy}"", ""received_events_url"": ""https://api.github.com/users/dcherian/received_events"", ""type"": ""User"", ""site_admin"": false}, ""merge_method"": ""squash"", ""commit_title"": ""Enable `origin` and `offset` arguments in `resample` (#7284)"", ""commit_message"": ""* Initial work toward enabling origin and offset arguments in resample\r\n\r\n* [pre-commit.ci] auto fixes from pre-commit.com hooks\r\n\r\nfor more information, see https://pre-commit.ci\r\n\r\n* Fix _convert_offset_to_timedelta\r\n\r\n* Reduce number of tests\r\n\r\n* Address initial review comments\r\n\r\n* Add more typing information\r\n\r\n* Make cftime import lazy\r\n\r\n* Fix module_available import and test\r\n\r\n* Remove old origin argument\r\n\r\n* Add type annotations for resample_cftime.py\r\n\r\n* Add None as a 
possibility for closed and label\r\n\r\n* Add what's new entry\r\n\r\n* Add missing type annotation\r\n\r\n* Delete added line\r\n\r\n* Fix typing errors\r\n\r\n* Add comment and test for as_timedelta stub\r\n\r\n* Remove old code\r\n\r\n* [test-upstream]\r\n\r\nCo-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>\r\nCo-authored-by: Deepak Cherian ""}",13221727,https://github.com/pydata/xarray/pull/7284, 1138343103,PR_kwDOAMm_X85D2by_,7331,closed,0,Fix PR number in what’s new,6628425,I noticed the PR number was off in my what’s new entry in #7284. This fixes that.,2022-11-29T02:20:18Z,2022-11-29T07:37:06Z,2022-11-29T07:37:05Z,2022-11-29T07:37:05Z,1581fe84c3946928839d643ccb36d53a54ca475e,,,0,d819612e49d098b01cf17b816a41b6592f6820cc,1083c9d3f9ff7b5b03ffb65fa0cf7876c2e73a1a,MEMBER,,13221727,https://github.com/pydata/xarray/pull/7331, 1158271947,PR_kwDOAMm_X85FCdPL,7373,closed,0,Add `inclusive` argument to `cftime_range` and `date_range` and deprecate `closed` argument,6628425," Following pandas, this PR adds an `inclusive` argument to `xarray.cftime_range` and `xarray.date_range` and deprecates the `closed` argument. Pandas will be removing the `closed` argument soon in their `date_range` implementation, but we will continue supporting it to allow for our own deprecation cycle. I think we may also need to update our minimum pandas version to 1.4 for this, since earlier versions of pandas do not support the `inclusive` argument. - [x] Closes #6985 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ",2022-12-10T23:40:47Z,2023-02-06T17:51:47Z,2023-02-06T17:51:46Z,2023-02-06T17:51:46Z,fb748be127c88b4bbd0c7f654e0a0d2ebd154ef8,,,0,3b1cba02fa383c9a40278edcd9b9d3e904225fea,f46cd708f8e220272173e2fc3e66c7688df45c39,MEMBER,,13221727,https://github.com/pydata/xarray/pull/7373, 1198165657,PR_kwDOAMm_X85Hao6Z,7441,closed,0,Preserve formatting of reference time units under pandas 2.0.0,6628425,"As suggested by @keewis, to preserve existing behavior in xarray, this PR forces any object passed to `format_timestamp` to be converted to a string using `strftime` with a constant format. This addresses the failing tests related to the units encoding in #7420. - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ",2023-01-15T20:09:24Z,2023-04-01T12:41:44Z,2023-04-01T12:36:56Z,2023-04-01T12:36:56Z,84607c3b1d61e3bc2d4b07b4f12f41a40b027f6f,,,0,839881f8d35abfa3fa16b9467e0e6059ac33d5f0,1c81162755457b3f4dc1f551f0321c75ec9daf6c,MEMBER,,13221727,https://github.com/pydata/xarray/pull/7441, 1199410428,PR_kwDOAMm_X85HfYz8,7444,closed,0,Preserve `base` and `loffset` arguments in `resample`,6628425,"While pandas is getting set to remove the `base` and `loffset` arguments in `resample`, we have not had a chance to emit a deprecation warning for them yet in xarray (https://github.com/pydata/xarray/issues/7420). This PR preserves their functionality in xarray and should hopefully give users some extra time to adapt. Deprecation warnings for each are added so that we can eventually remove them. I've taken the liberty to define a `TimeResampleGrouper` object, since we need some way to carry the `loffset` argument through the `resample` chain, even though it will no longer be allowed on the `pd.Grouper` object. Currently it is not particularly complicated, so hopefully it would be straightforward to adapt to what is envisioned in https://github.com/pydata/xarray/issues/6610#issuecomment-1341296800. 
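Roughly speaking, the grouper is just a lightweight container for the resample parameters that can no longer live on `pd.Grouper`; a sketch along these lines (the field names here are illustrative and not necessarily the PR's exact definition):

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import Any


@dataclass
class TimeResampleGrouper:
    freq: str
    closed: str | None = None
    label: str | None = None
    origin: Any = 'start_day'
    offset: str | None = None
    base: int | None = None
    loffset: str | None = None
```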
- [x] closes #7266 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ",2023-01-16T19:16:39Z,2023-03-08T18:16:12Z,2023-03-08T16:55:22Z,2023-03-08T16:55:22Z,6d771fc82228bdaf8a4b77d0ceec1cc444ebd090,,,0,d922d713b0e671747fbd55b5d7f51e037d0c59da,821dc24b5f3ed91b843a634bf8513a26046269ef,MEMBER,,13221727,https://github.com/pydata/xarray/pull/7444, 1304731317,PR_kwDOAMm_X85NxJ61,7731,closed,0,Continue to use nanosecond-precision Timestamps in precision-sensitive areas,6628425," This addresses the remaining cftime-related test failures in #7707 by introducing a function that always returns a nanosecond-precision Timestamp object. Despite no corresponding test failures, for safety I grepped and went ahead and replaced the `pd.Timestamp` constructor with this function in a few other areas. I also updated our documentation to replace any mentions of the ""Timestamp-valid range"" with ""nanosecond-precision range"" since Timestamps are now more flexible, and included a note that we have an issue open for relaxing this nanosecond-precision assumption in xarray eventually. While in principle I think it would be fine if `CFTimeIndex.to_datetimeindex` returned a `DatetimeIndex` with non-nanosecond-precision values, since we don't use `to_datetimeindex` anywhere outside of tests, in its current state it was returning nonsense values: ``` >>> import pandas as pd >>> import xarray as xr >>> times = xr.cftime_range(""0001"", periods=5) >>> times.to_datetimeindex() DatetimeIndex(['1754-08-30 22:43:41.128654848', '1754-08-31 22:43:41.128654848', '1754-09-01 22:43:41.128654848', '1754-09-02 22:43:41.128654848', '1754-09-03 22:43:41.128654848'], dtype='datetime64[ns]', freq=None) ``` This is due to the assumption in `cftime_to_nptime` that the resulting array will have nanosecond-precision values. We can (and should) address this eventually, but for the sake of quickly supporting pandas version two I decided to be conservative and punt this off to be part of #7493. `cftime_to_nptime` is used in places other than `to_datetimeindex`, so modifying it has other impacts downstream.",2023-04-06T13:06:50Z,2023-04-13T15:17:14Z,2023-04-13T14:58:34Z,2023-04-13T14:58:34Z,c9c1c6d681b68d36c3145da3223f16d649fcf9ab,,,0,4e24ca83c650a144de356e52d532e5d2a05238a6,13a47fdb6b1a49d510e088113b5a86788d29eafb,MEMBER,,13221727,https://github.com/pydata/xarray/pull/7731, 1541626039,PR_kwDOAMm_X85b41i3,8272,closed,0,Fix datetime encoding precision loss regression for units requiring floating point values,6628425," This PR proposes a fix to #8271. I think the basic issue is that the only time we need to update the `needed_units` is if the `data_delta` does not evenly divide the `ref_delta`. If it does evenly divide it--as it does in the example in #8271--and we try to update the `needed_units` solely according to the value of the `ref_delta`, we run the risk of resetting them to something that would be coarser than the data requires. If it does not evenly divide it, we are safe to reset the `needed_units` because they will be guaranteed to be finer-grained than the data requires. I modified `test_roundtrip_float_times` to reflect the example given by @larsbuntemeyer in #8271. @kmuehlbauer let me know if this fix makes sense to you. 
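In pseudocode form, the condition described above amounts to the following (an illustration, not the PR's exact code):

```python
import numpy as np


def ref_delta_requires_finer_units(data_delta, ref_delta):
    # True only when data_delta does not evenly divide ref_delta, i.e. the one
    # case in which resetting needed_units is both necessary and safe.
    return ref_delta % data_delta != np.timedelta64(0, 'ns')


# Example: hourly data with a reference offset of 90 minutes needs finer units.
print(ref_delta_requires_finer_units(np.timedelta64(1, 'h'), np.timedelta64(90, 'm')))  # True
```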
- [x] Closes #8271 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ",2023-10-04T11:12:59Z,2023-10-06T14:09:34Z,2023-10-06T14:08:51Z,2023-10-06T14:08:51Z,1b0012a44aa45c67858489bc815928e1712dbd00,,,0,8f271a3548e9de650b8a8d2ef4ad2646788ab7e9,d5f17858e5739c986bfb52e7f2ad106bb4489364,MEMBER,,13221727,https://github.com/pydata/xarray/pull/8272, 1580738343,PR_kwDOAMm_X85eOCcn,8393,closed,0,Port fix from pandas-dev/pandas#55283 to cftime resample,6628425," The remaining failing cftime resample tests in https://github.com/pydata/xarray/issues/8091 happen to be related to a bug that was fixed in the pandas implementation, https://github.com/pandas-dev/pandas/pull/55283, leading answers to change in some circumstances. This PR ports that bug fix to xarray's implementation of resample for data indexed by a `CFTimeIndex`. - [x] Fixes remaining failing cftime resample tests in https://github.com/pydata/xarray/issues/8091 - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` A simple example where answers change in pandas is the following: #### Previously ``` >>> import numpy as np; import pandas as pd >>> index = pd.date_range(""2000"", periods=5, freq=""5D"") >>> series = pd.Series(np.arange(index.size), index=index) >>> series.resample(""2D"", closed=""right"", label=""right"", offset=""1s"").mean() 2000-01-01 00:00:01 0.0 2000-01-03 00:00:01 NaN 2000-01-05 00:00:01 1.0 2000-01-07 00:00:01 NaN 2000-01-09 00:00:01 NaN 2000-01-11 00:00:01 2.0 2000-01-13 00:00:01 NaN 2000-01-15 00:00:01 3.0 2000-01-17 00:00:01 NaN 2000-01-19 00:00:01 NaN 2000-01-21 00:00:01 4.0 Freq: 2D, dtype: float64 ``` #### Currently ``` >>> import numpy as np; import pandas as pd >>> index = pd.date_range(""2000"", periods=5, freq=""5D"") >>> series = pd.Series(np.arange(index.size), index=index) >>> series.resample(""2D"", closed=""right"", label=""right"", offset=""1s"").mean() 2000-01-01 00:00:01 0.0 2000-01-03 00:00:01 NaN 2000-01-05 00:00:01 NaN 2000-01-07 00:00:01 1.0 2000-01-09 00:00:01 NaN 2000-01-11 00:00:01 2.0 2000-01-13 00:00:01 NaN 2000-01-15 00:00:01 NaN 2000-01-17 00:00:01 3.0 2000-01-19 00:00:01 NaN 2000-01-21 00:00:01 4.0 Freq: 2D, dtype: float64 ``` This PR allows us to reproduce this change in xarray for data indexed by a `CFTimeIndex`. The bin edges were incorrect in the previous case; see https://github.com/pandas-dev/pandas/pull/52064#issuecomment-1785893752 for @MarcoGorelli's nice explanation as to why.",2023-10-31T11:12:09Z,2023-11-02T09:40:46Z,2023-11-02T04:12:51Z,2023-11-02T04:12:51Z,d933578ebdc4105a456bada4864f8ffffd7a2ced,,,0,264c41108ffe770bb5a182e2f6c882596fb9cdad,cfe4d71fae70930ac6776bd53fe2a93875a84515,MEMBER,,13221727,https://github.com/pydata/xarray/pull/8393, 1587405002,PR_kwDOAMm_X85eneDK,8415,closed,0,Deprecate certain cftime frequency strings following pandas,6628425,"Following several upstream PRs in pandas, this PR deprecates cftime frequency strings `""A""`, `""AS""`, `""Q""`, `""M""`, `""H""`, `""T""`, `""S""`, `""L""`, and `""U""` in favor of `""Y""`, `""YS""`, `""QE""`, `""ME""`, `""h""`, `""min""`, `""s""`, `""ms""`, and `""us""`. Similarly following pandas, it makes a breaking change to have `infer_freq` return the latter frequencies instead of the former. There are a few places in the tests and one place in the code where we need some version-specific logic to retain support for older pandas versions. 
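In essence (a simplified sketch, not the exact code in this PR, and ignoring anchored offsets), the deprecation amounts to a translation table plus a warning for the old spellings:

```python
import warnings

_DEPRECATED_FREQS = {
    'A': 'Y', 'AS': 'YS', 'Q': 'QE', 'M': 'ME',
    'H': 'h', 'T': 'min', 'S': 's', 'L': 'ms', 'U': 'us',
}


def _translate_deprecated_freq(freq):
    if freq in _DEPRECATED_FREQS:
        new_freq = _DEPRECATED_FREQS[freq]
        warnings.warn(
            f'{freq!r} is deprecated and will be removed in a future version. '
            f'Please use {new_freq!r} instead of {freq!r}.',
            FutureWarning,
        )
        return new_freq
    return freq
```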
@aulemahal it would be great if you could take a look to make sure that I handled this breaking change properly / fully in the `date_range_like` case. I also took the liberty to transition to using `""Y""`, `""YS""`, `""h""`, `""min""`, `""s""`, `""ms""`, `""us""`, and `""ns""` within our code, tests, and documentation to reduce the amount of warnings emitted. I have held off on switching to `""QE""`, `""ME""`, and anchored offsets involving `""Y""` or `""YS""` in pandas-related code since those are not supported in older versions of pandas. The deprecation warning looks like this: ``` >>> xr.cftime_range(""2000"", periods=5, freq=""M"") :1: FutureWarning: 'M' is deprecated and will be removed in a future version. Please use 'ME' instead of 'M'. CFTimeIndex([2000-01-31 00:00:00, 2000-02-29 00:00:00, 2000-03-31 00:00:00, 2000-04-30 00:00:00, 2000-05-31 00:00:00], dtype='object', length=5, calendar='standard', freq='ME') ``` - [x] Closes #8394 - [x] Addresses the `convert_calendar` and `date_range_like` test failures in #8091 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst`",2023-11-05T12:27:59Z,2023-11-16T15:37:27Z,2023-11-16T15:19:40Z,2023-11-16T15:19:40Z,dfe6435c270957b0322e0d31db4f59a257f2d54d,,,0,6c1995e4c5e36a5b21e568bb286295b453da5110,141147434cb1f4547ffff5e28900eeb487704f08,MEMBER,,13221727,https://github.com/pydata/xarray/pull/8415, 1660231411,PR_kwDOAMm_X85i9R7z,8575,closed,0,Add chunk-friendly code path to `encode_cf_datetime` and `encode_cf_timedelta`,6628425," I finally had a moment to think about this some more following discussion in https://github.com/pydata/xarray/pull/8253. This PR adds a chunk-friendly code path to `encode_cf_datetime` and `encode_cf_timedelta`, which enables lazy encoding of time-like values, and by extension, preservation of chunks when writing time-like values to zarr. With these changes, the test added by @malmans2 in #8253 passes. Though it largely reuses existing code, the lazy encoding implemented in this PR is stricter than eager encoding in a couple ways: 1. It requires either both the encoding units and dtype be prescribed, or neither be prescribed; prescribing one or the other is not supported, since it requires inferring one or the other from the data. In the case that neither is specified, the dtype is set to `np.int64` and the units are either `""nanoseconds since 1970-01-01""` or `""microseconds since 1970-01-01""` depending on whether we are encoding `np.datetime64[ns]` values or `cftime.datetime` objects. In the case of `timedelta64[ns]` values, the units are set to `""nanoseconds""`. 2. In addition, if an integer dtype is prescribed, but the units are set such that floating point values would be required, it raises instead of modifying the units to enable integer encoding. This is a requirement since the data units may differ between chunks, so overriding could result in inconsistent units. As part of this PR, since dask requires we know the dtype of the array returned by the function passed to `map_blocks`, I also added logic to handle casting to the specified encoding dtype in an overflow-and-integer safe manner. This means an informative error message would be raised in the situation described in #8542: ``` OverflowError: Not possible to cast encoded times from dtype('int64') to dtype('int16') without overflow. Consider removing the dtype encoding, at which point xarray will make an appropriate choice, or explicitly switching to a larger integer dtype. 
``` I eventually want to think about this on the decoding side as well, but that can wait for another PR. - [x] Closes #7132 - [x] Closes #8230 - [x] Closes #8432 - [x] Closes #8253 - [x] Addresses #8542 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ",2023-12-30T01:25:17Z,2024-01-30T02:17:58Z,2024-01-29T19:12:30Z,2024-01-29T19:12:30Z,d8c3b1ac591914998ce608159a15b4b41cc53c73,,,0,d9d9701545c330075184e9bf30fb54fb2db46aee,e22b47511f4188e2203c5753de4a0a36094c2e83,MEMBER,,13221727,https://github.com/pydata/xarray/pull/8575, 1741855895,PR_kwDOAMm_X85n0pyX,8782,closed,0,Fix non-nanosecond casting behavior for `expand_dims`,6628425," This PR fixes the issue noted in https://github.com/pydata/xarray/issues/7493#issuecomment-1953091000 that non-nanosecond precision datetime or timedelta values passed to `expand_dims` would not be cast to nanosecond precision. The underlying issue was that the `_possibly_convert_datetime_or_timedelta_index` function did not appropriately handle being passed `PandasIndexingAdapter` objects. - [x] Fixes https://github.com/pydata/xarray/issues/7493#issuecomment-1953091000 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ",2024-02-24T15:38:41Z,2024-02-27T18:52:58Z,2024-02-27T18:51:49Z,2024-02-27T18:51:49Z,2983c5326c085334ed3e262db1ac3faa0e784586,,,0,4a0808ff990d8156174135303be9463acd1ba1f6,f63ec87476db065a58d423670b8829abc8d1e746,MEMBER,,13221727,https://github.com/pydata/xarray/pull/8782, 1821982214,PR_kwDOAMm_X85smT4G,8942,open,0,WIP: Support calendar-specific `cftime.datetime` instances,6628425," Since cftime version 1.3.0, the base `cftime.datetime` object can be calendar-aware, obviating the need for calendar-specific subclasses like `cftime.DatetimeNoLeap`. This PR aims to finally enable the use of these objects in xarray. We can also use this moment to remove cruft around accommodating inexact cftime datetime arithmetic, since that has been fixed since cftime version 1.2.0. The idea will be to support both for a period of time and eventually drop support for the calendar-specific subclasses. I do not think too much should need to change within xarray—the main challenge will be to see if we can maintain adequate test coverage without multiplying the number of cftime tests by two. This draft PR is at least a start towards that. - [ ] Closes #4336 - [ ] Closes #4853 - [ ] Closes #5551 - [ ] Closes #8298 - [ ] Closes #8941 - [ ] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` ",2024-04-14T14:33:06Z,2024-04-14T15:41:08Z,,,276893d4870bd56f0cd9ee9e60c55ac30c8f9902,,,1,73f35b182724503c04be99d29e691ead2e51b768,b004af5174a4b0e32519df792a4f625d5548a9f0,MEMBER,,13221727,https://github.com/pydata/xarray/pull/8942, 1852603663,PR_kwDOAMm_X85ubH0P,8996,closed,0,Mark `test_use_cftime_false_standard_calendar_in_range` as an expected failure,6628425," Per https://github.com/pydata/xarray/issues/8844#issuecomment-2089427222, for the time being this marks `test_use_cftime_false_standard_calendar_in_range` as an expected failure under NumPy 2. 
Hopefully we'll be able to fix the upstream issue in pandas eventually.",2024-05-03T01:05:21Z,2024-05-03T15:21:48Z,2024-05-03T15:21:48Z,2024-05-03T15:21:48Z,c2cd1dd27fa0723f498c9cbe758cce413f6d91bd,,,0,6c34e5f027a37a04fcf6366813c4eab70646fd78,f5ae623f892af6c8bc6e14b8796d84e3b978eb5f,MEMBER,,13221727,https://github.com/pydata/xarray/pull/8996, 1854627268,PR_kwDOAMm_X85ui13E,8999,open,0,Port negative frequency fix for `pandas.date_range` to `cftime_range`,6628425," Like `pandas.date_range`, `cftime_range` would previously return dates outside the range of the specified start and end dates if provided a negative frequency: ``` >>> start = cftime.DatetimeGregorian(2023, 10, 31) >>> end = cftime.DatetimeGregorian(2021, 11, 1) >>> xr.cftime_range(start, end, freq=""-1YE"") CFTimeIndex([2023-12-31 00:00:00, 2022-12-31 00:00:00, 2021-12-31 00:00:00], dtype='object', length=3, calendar='standard', freq='-1YE-DEC') ``` This PR ports a bug fix from pandas (https://github.com/pandas-dev/pandas/issues/56147) to prevent this from happening. The above example now produces: ``` >>> start = cftime.DatetimeGregorian(2023, 10, 31) >>> end = cftime.DatetimeGregorian(2021, 11, 1) >>> xr.cftime_range(start, end, freq=""-1YE"") CFTimeIndex([2022-12-31 00:00:00, 2021-12-31 00:00:00], dtype='object', length=2, calendar='standard', freq=None) ``` Since this is a bug fix, we do not make any attempt to preserve the old behavior if an earlier version of pandas is installed. In the testing context this means we skip some tests for pandas versions less than 3.0. - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ",2024-05-04T14:48:08Z,2024-05-04T14:51:26Z,,,20c06e4c2916fb1a8732ab57b787f8e9c17975f9,,,0,a74c12d229310d05c339dc35fb211e3e4961af40,aaa778cffb89baaece31882e03a7f4af0adfe798,MEMBER,,13221727,https://github.com/pydata/xarray/pull/8999,