
issue_comments


25 rows where user = 8708062 sorted by updated_at descending


issue 4

  • CFTimeIndex Resampling 18
  • Quarter offset implemented (base is now latest pydata-master). 4
  • Reduce length of cftime resample tests 2
  • Quarter offset support for cftime 1

user 1

  • jwenfai · 25

author_association 1

  • CONTRIBUTOR 25
Columns: id, html_url, issue_url, node_id, user, created_at, updated_at, author_association, body, reactions, performed_via_github_app, issue
481229888 https://github.com/pydata/xarray/pull/2879#issuecomment-481229888 https://api.github.com/repos/pydata/xarray/issues/2879 MDEyOklzc3VlQ29tbWVudDQ4MTIyOTg4OA== jwenfai 8708062 2019-04-09T12:26:43Z 2019-04-09T12:26:43Z CONTRIBUTOR

Wow, that's quick. The updated tests look fine to me so go ahead and merge it.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Reduce length of cftime resample tests 430460404
481195895 https://github.com/pydata/xarray/pull/2879#issuecomment-481195895 https://api.github.com/repos/pydata/xarray/issues/2879 MDEyOklzc3VlQ29tbWVudDQ4MTE5NTg5NQ== jwenfai 8708062 2019-04-09T10:29:05Z 2019-04-09T10:29:05Z CONTRIBUTOR

Thanks for taking on the task of shortening test times! If the coverage is the same, I think the rewritten tests should be good.

Just two things I feel I should mention:

  • Testing even and odd multiples of resampling frequencies for a frequency class (e.g., '11MS' and '12M' for monthlies). I don't quite remember what the issue was, but there were tests that passed for even/odd resampling frequencies and failed for the other. Perhaps the tests could be rewritten to (1) switch da_freq and freq and (2) use odd frequencies for constructing the DataArray, so that when * 2 and // 2 operations are performed, even and odd resampling frequencies are obtained (a rough sketch follows below). There were some issues with resampling to 12H, so maybe use a frequency that multiplies or divides to 12H.
  • Testing resampling from one frequency type to another (e.g., from '3Q' to '11MS'). Again, I don't remember the details other than that a problem exists/existed. Perhaps something to do with _get_range_edges, since that's the part of the code where the original index of a DataArray interacts with the resampling frequency.
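A minimal sketch of the even/odd-multiple idea, assuming an xarray version with CFTimeIndex resampling available; the base frequency, test name, and parametrization are illustrative, not the ones in the actual test suite:

```python
import numpy as np
import pandas as pd
import pytest
import xarray as xr

BASE_HOURS = 7  # odd on purpose, so * 2 gives an even multiple and // 2 an odd one


@pytest.mark.parametrize('resample_hours', [BASE_HOURS * 2, BASE_HOURS // 2])
def test_even_and_odd_resample_freqs(resample_hours):
    # Build the same hourly series on a CFTimeIndex and on a pandas DatetimeIndex.
    times_cf = xr.cftime_range('2000-01-01', periods=48, freq=f'{BASE_HOURS}H')
    times_pd = pd.date_range('2000-01-01', periods=48, freq=f'{BASE_HOURS}H')
    da_cf = xr.DataArray(np.arange(48.0), [('time', times_cf)])
    da_pd = xr.DataArray(np.arange(48.0), [('time', times_pd)])
    # Resampling to the even (14H) and odd (3H) frequencies should both match pandas.
    freq = f'{resample_hours}H'
    np.testing.assert_allclose(
        da_cf.resample(time=freq).mean().values,
        da_pd.resample(time=freq).mean().values,
    )
```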

Both of the problems I mentioned might have been from the earliest iteration of CFTimeIndex resampling so they might have no relevance now.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Reduce length of cftime resample tests 430460404
467248139 https://github.com/pydata/xarray/pull/2721#issuecomment-467248139 https://api.github.com/repos/pydata/xarray/issues/2721 MDEyOklzc3VlQ29tbWVudDQ2NzI0ODEzOQ== jwenfai 8708062 2019-02-26T00:58:25Z 2019-02-26T00:58:25Z CONTRIBUTOR

@spencerkclark Thanks for improving the docstring and cleaning up the extra whitespaces. They look ok to me.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Quarter offset implemented (base is now latest pydata-master). 403601219
460078355 https://github.com/pydata/xarray/pull/2721#issuecomment-460078355 https://api.github.com/repos/pydata/xarray/issues/2721 MDEyOklzc3VlQ29tbWVudDQ2MDA3ODM1NQ== jwenfai 8708062 2019-02-03T19:01:04Z 2019-02-03T19:01:04Z CONTRIBUTOR

I didn't realize support for normalization was more involved; normalization-related code should all be removed now. Duplicate entry on whats-new.rst is gone as well.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Quarter offset implemented (base is now latest pydata-master). 403601219
460009606 https://github.com/pydata/xarray/pull/2593#issuecomment-460009606 https://api.github.com/repos/pydata/xarray/issues/2593 MDEyOklzc3VlQ29tbWVudDQ2MDAwOTYwNg== jwenfai 8708062 2019-02-02T23:48:09Z 2019-02-02T23:48:09Z CONTRIBUTOR

All tests passed. Thanks, @spencerkclark and @shoyer, for all the help!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex Resampling 387924616
460009536 https://github.com/pydata/xarray/pull/2721#issuecomment-460009536 https://api.github.com/repos/pydata/xarray/issues/2721 MDEyOklzc3VlQ29tbWVudDQ2MDAwOTUzNg== jwenfai 8708062 2019-02-02T23:46:43Z 2019-02-02T23:46:43Z CONTRIBUTOR

Sure, I'll add resample support back in once #2593 is merged.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Quarter offset implemented (base is now latest pydata-master). 403601219
459999349 https://github.com/pydata/xarray/pull/2721#issuecomment-459999349 https://api.github.com/repos/pydata/xarray/issues/2721 MDEyOklzc3VlQ29tbWVudDQ1OTk5OTM0OQ== jwenfai 8708062 2019-02-02T21:09:25Z 2019-02-02T21:09:25Z CONTRIBUTOR

Fixes implemented and tests are all passing. I had to modify the tests in test_cftime_offsets.py now that _default_month = 3 for QuarterBegin and QuarterEnd.
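For reference, a minimal check of that new default, assuming an xarray build that includes this PR's changes (the expected output is an assumption based on the comment above):

```python
from xarray.coding.cftime_offsets import QuarterBegin, QuarterEnd

# With the change described above, the default anchor month for quarterly
# offsets is March; this is expected to print "3 3".
print(QuarterBegin().month, QuarterEnd().month)
```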

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Quarter offset implemented (base is now latest pydata-master). 403601219
457963032 https://github.com/pydata/xarray/pull/2593#issuecomment-457963032 https://api.github.com/repos/pydata/xarray/issues/2593 MDEyOklzc3VlQ29tbWVudDQ1Nzk2MzAzMg== jwenfai 8708062 2019-01-27T23:08:06Z 2019-01-27T23:08:06Z CONTRIBUTOR

> I just merged a fix for the test failures with pandas 0.24. If you merge in master that should fix your issues here too.

Fix works, all tests are passing now. Thanks!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex Resampling 387924616
457953081 https://github.com/pydata/xarray/pull/2702#issuecomment-457953081 https://api.github.com/repos/pydata/xarray/issues/2702 MDEyOklzc3VlQ29tbWVudDQ1Nzk1MzA4MQ== jwenfai 8708062 2019-01-27T20:50:58Z 2019-01-27T20:50:58Z CONTRIBUTOR

I rebased; it seems like the failing checks are the same as the ones affecting the resample pull request.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Quarter offset support for cftime 402678943
456581862 https://github.com/pydata/xarray/pull/2593#issuecomment-456581862 https://api.github.com/repos/pydata/xarray/issues/2593 MDEyOklzc3VlQ29tbWVudDQ1NjU4MTg2Mg== jwenfai 8708062 2019-01-22T22:07:24Z 2019-01-22T22:07:24Z CONTRIBUTOR

The pandas-dev build job is still failing but everything else is passing.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex Resampling 387924616
456176524 https://github.com/pydata/xarray/pull/2593#issuecomment-456176524 https://api.github.com/repos/pydata/xarray/issues/2593 MDEyOklzc3VlQ29tbWVudDQ1NjE3NjUyNA== jwenfai 8708062 2019-01-21T19:22:38Z 2019-01-21T19:22:38Z CONTRIBUTOR

I made the changes. Did not preview the .rst files but they should render correctly.

For the non-standard calendars, I can't think of any way to test them except to compare them against output from pandas. I think it's fine as long as the original and resampled indices do not go over 28 days.
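A minimal sketch of that comparison idea, assuming an xarray version where CFTimeIndex resampling is available; the calendar, dates, and frequency are illustrative and keep day-of-month values at or below 28:

```python
import numpy as np
import pandas as pd
import xarray as xr

# Same daily series on a non-standard (noleap) CFTimeIndex and on a pandas
# DatetimeIndex; over Jan 1-28 the two calendars line up exactly.
times_cf = xr.cftime_range('2000-01-01', periods=28, freq='D', calendar='noleap')
times_pd = pd.date_range('2000-01-01', periods=28, freq='D')

da_cf = xr.DataArray(np.arange(28.0), [('time', times_cf)])
da_pd = xr.DataArray(np.arange(28.0), [('time', times_pd)])

# Resample both and compare the binned values.
np.testing.assert_allclose(
    da_cf.resample(time='7D').mean().values,
    da_pd.resample(time='7D').mean().values,
)
```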

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex Resampling 387924616
455907774 https://github.com/pydata/xarray/pull/2593#issuecomment-455907774 https://api.github.com/repos/pydata/xarray/issues/2593 MDEyOklzc3VlQ29tbWVudDQ1NTkwNzc3NA== jwenfai 8708062 2019-01-20T22:17:51Z 2019-01-20T22:17:51Z CONTRIBUTOR

Thanks for showing me how the code could be improved @spencerkclark. I've pushed the changes.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex Resampling 387924616
455816654 https://github.com/pydata/xarray/pull/2593#issuecomment-455816654 https://api.github.com/repos/pydata/xarray/issues/2593 MDEyOklzc3VlQ29tbWVudDQ1NTgxNjY1NA== jwenfai 8708062 2019-01-19T21:28:10Z 2019-01-19T21:28:10Z CONTRIBUTOR

I've looked at the failed tests on CONDA_ENV=py36-pandas-dev and made necessary fixes on my end so that all tests pass. I am not sure if that conda env is using cftime 1.0.3.4 but it needs to be if the tests are to pass. Also, needless print statements from my last commit have been removed.

  1. Failures seem to be due to cftime 1.0.0; tests pass on cftime 1.0.3.4:
     xarray/tests/test_cftime_offsets.py::test_dayofweek_after_cftime_range[A] FAILED [ 33%]
     xarray/tests/test_cftime_offsets.py::test_dayofweek_after_cftime_range[M] FAILED [ 33%]
     xarray/tests/test_cftime_offsets.py::test_dayofweek_after_cftime_range[D] FAILED [ 33%]
     xarray/tests/test_cftime_offsets.py::test_dayofyear_after_cftime_range[A] FAILED [ 33%]
     xarray/tests/test_cftime_offsets.py::test_dayofyear_after_cftime_range[M] FAILED [ 33%]
     xarray/tests/test_cftime_offsets.py::test_dayofyear_after_cftime_range[D] FAILED [ 33%]
  2. Failures due to 3 reasons: (1) I did not set defaults for closed and label, (2) year values used in the test are too low, making it possible for year 0 to appear when resampling, which is invalid for the julian, gregorian, and proleptic_gregorian calendars, and (3) NotImplementedError being raised due to the assumption that cftime resampling hasn't been implemented:
     xarray/tests/test_cftimeindex.py::test_resample_error[365_day] FAILED [ 37%]
     xarray/tests/test_cftimeindex.py::test_resample_error[360_day] FAILED [ 37%]
     xarray/tests/test_cftimeindex.py::test_resample_error[julian] FAILED [ 37%]
     xarray/tests/test_cftimeindex.py::test_resample_error[all_leap] FAILED [ 37%]
     xarray/tests/test_cftimeindex.py::test_resample_error[366_day] FAILED [ 37%]
     xarray/tests/test_cftimeindex.py::test_resample_error[gregorian] FAILED [ 37%]
     xarray/tests/test_cftimeindex.py::test_resample_error[proleptic_gregorian] FAILED [ 37%]
  3. Failure seems to be due to cftime 1.0.0; the test passes on cftime 1.0.3.4:
     xarray/tests/test_coding_times.py::test_cf_timedelta[timedeltas7-days-nan] FAILED [ 53%]
  4. Since cftime resampling is implemented, there's no need to raise NotImplementedError:
     xarray/tests/test_dataarray.py::TestDataArray::test_resample_cftimeindex FAILED [ 58%]
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex Resampling 387924616
455807650 https://github.com/pydata/xarray/pull/2593#issuecomment-455807650 https://api.github.com/repos/pydata/xarray/issues/2593 MDEyOklzc3VlQ29tbWVudDQ1NTgwNzY1MA== jwenfai 8708062 2019-01-19T19:20:28Z 2019-01-19T19:20:28Z CONTRIBUTOR

Just saw your comment. I've actually resolved the merge conflicts on my end and haven't pushed them yet. Checks for index[0] < datetime_bins[0] and index[lenidx - 1] > datetime_bins[lenbin - 1] have also been added.
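A hypothetical illustration of that kind of bounds check, using plain numpy datetime64 values rather than the actual cftime objects and helper names in resample_cftime.py:

```python
import numpy as np

index = np.array(['2000-01-05', '2000-01-10', '2000-01-20'], dtype='datetime64[D]')
datetime_bins = np.array(['2000-01-01', '2000-01-15', '2000-01-31'], dtype='datetime64[D]')

# Reject values that fall outside the bin edges, mirroring the checks in
# pandas' generate_bins_dt64.
if index[0] < datetime_bins[0]:
    raise ValueError('value falls before first bin')
if index[-1] > datetime_bins[-1]:
    raise ValueError('value falls after last bin')

# With the bounds validated, each index value can be assigned to a bin.
print(np.searchsorted(datetime_bins, index, side='right'))
```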

The test_resampler runtime has been reduced to 1 minute, and it seems like it has decent coverage since ValueErrors are still being raised. I wrote another test for _get_range_edges that might help cut down the number of cases that need to be tested with the slower test_resampler function.

Going to push my changes and see what CI says.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex Resampling 387924616
455788936 https://github.com/pydata/xarray/pull/2593#issuecomment-455788936 https://api.github.com/repos/pydata/xarray/issues/2593 MDEyOklzc3VlQ29tbWVudDQ1NTc4ODkzNg== jwenfai 8708062 2019-01-19T15:19:01Z 2019-01-19T15:19:01Z CONTRIBUTOR

Ignore my last comment; I made a really silly mistake: I assumed it was a - b instead of b - a for the exact cftime function. 5808 of 5920 tests now pass, and the remaining 112 are ignored due to ValueError: "value falls before first bin".

I think writing targeted unit tests is the last thing on the agenda, so I'll get right on that.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex Resampling 387924616
455734854 https://github.com/pydata/xarray/pull/2593#issuecomment-455734854 https://api.github.com/repos/pydata/xarray/issues/2593 MDEyOklzc3VlQ29tbWVudDQ1NTczNDg1NA== jwenfai 8708062 2019-01-19T01:11:59Z 2019-01-19T01:12:08Z CONTRIBUTOR

Thanks for raising the issue with the cftime devs. I've tested the function and it produces exact datetime values but there's still a problem with extra bins and nan location mismatch. I'll report back once I have a clearer idea of what's going on.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex Resampling 387924616
455584540 https://github.com/pydata/xarray/pull/2593#issuecomment-455584540 https://api.github.com/repos/pydata/xarray/issues/2593 MDEyOklzc3VlQ29tbWVudDQ1NTU4NDU0MA== jwenfai 8708062 2019-01-18T15:28:36Z 2019-01-18T15:28:36Z CONTRIBUTOR

Your impressions are correct; sorry if my earlier comments confused you. I was just guessing that imprecise first and last values might be the cause of the extra bins; I haven't actually tested whether that's true.

The first and last values are returned by _adjust_bin_anchored when isinstance(offset, CFTIME_TICKS). Since date subtraction happens within _adjust_bin_anchored, some test cases have imprecise first and last values.

I'll provide examples of extra bins later in the day.

For now, here's an example of the datetime imprecision in _adjust_bin_anchored:

```python
import xarray as xr
import numpy as np
from xarray.coding.cftime_offsets import normalize_date, to_offset
from xarray.core.utils import safe_cast_to_index

freq = '600003T'
closed = 'right'
label = 'left'
base = 12
time_range_kwargs = dict(start='2004-01-01T12:07:01', periods=37, freq='A')
cftime_index = xr.cftime_range(**time_range_kwargs)
da_cftime = xr.DataArray(np.arange(100., 100. + cftime_index.size),
                         [('time', cftime_index)])
group = da_cftime['time']
index = safe_cast_to_index(group)
offset = to_offset(freq)
first = index.min()
last = index.max()
base = base % offset.n
start_day = normalize_date(first)
base_td = type(offset)(n=base).as_timedelta()
start_day += base_td
foffset = (first - start_day) % offset.as_timedelta()
loffset = (last - start_day) % offset.as_timedelta()
print(first, '\n', start_day, '\n', base_td, '\n', foffset, '\n', loffset, '\n',
      first - start_day, '\n', last - start_day)
```

which gave me

```
2004-12-31 12:07:01
2004-12-31 00:12:00
0:12:00
11:55:01.000008
232 days, 18:22:01.000008
11:55:01.000008
13149 days, 11:55:01.000008
```

The extra 8 microseconds shouldn't be there.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex Resampling 387924616
455392167 https://github.com/pydata/xarray/pull/2593#issuecomment-455392167 https://api.github.com/repos/pydata/xarray/issues/2593 MDEyOklzc3VlQ29tbWVudDQ1NTM5MjE2Nw== jwenfai 8708062 2019-01-18T01:17:38Z 2019-01-18T01:17:38Z CONTRIBUTOR

> I think the "values falls before first bin" errors are all from pandas, where datetime arithmetic is exact, so they could not be due to cftime, right? I'll take a look at the 6AS-JUN tests.

Oh no, I meant that except for all the "values falls before first bin" errors, most (if not all) of the errors are due to shape mismatch between the resampled cftime and pandas arrays. Of the ones I've inspected, the resampled cftime array always has 1 more bin than pandas, e.g.:

```
E   (shapes (175204,), (175205,) mismatch)
E   x: array([100., nan, nan, ..., nan, nan, 114.])
E   y: array([100., nan, nan, ..., nan, nan, 114.])
```

I was thinking that imprecise datetime arithmetic by cftime might cause an extra bin to be generated.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex Resampling 387924616
455386887 https://github.com/pydata/xarray/pull/2593#issuecomment-455386887 https://api.github.com/repos/pydata/xarray/issues/2593 MDEyOklzc3VlQ29tbWVudDQ1NTM4Njg4Nw== jwenfai 8708062 2019-01-18T00:50:29Z 2019-01-18T00:50:29Z CONTRIBUTOR

> Interesting, I'm not sure what's leading to the difference in behavior between platforms (I'm on a Mac). If you can distill the precision discrepancy to a minimal example, it might be worth reporting upstream to cftime. Though I agree, from xarray's perspective this is probably not a major concern.

Could this issue be related to the linear algebra library? A minimal example would be the earlier example you gave. For both cftime 1.0.0 and 1.0.3.4, I get "mismatch 60.0%".

> The logic I wrote in CFTimeGrouper.first_items was in fact based in part on lib.generate_bins_dt64. Indeed I omitted these checks related to whether the time series values were all within the bin edges or not. I think it would actually be straightforward to add these in just before the call to np.searchsorted (in my code datetime_bins corresponds to binner in pandas and index corresponds to values). I would be inclined to do this rather than try to deviate from pandas' behavior here for now.

So I keep pandas' logic (the first bin has 1 day minus 1 microsecond added to it) and raise a ValueError when either index[0] < datetime_bins[0] or index[lenidx - 1] > datetime_bins[lenbin - 1].

> Which version of pandas are you testing against? I'm testing against the dev version and with your branch I can't seem to reproduce any failures that are not related to the "values falls before first bin" error in the XT case. I have not tried any of the 6AS_JUN cases yet.

I'm testing against the dev version, 11 commits behind. Could the errors for XT that I get but you don't be due to the cftime/linear algebra library issue? There may be enough accumulated error for hourly frequencies over 140 years that cftime_range generates an extra bin compared to pandas date_range (I haven't checked them all manually, but I believe the majority of the non-"values falls before first bin" errors are due to extra bin(s)). 6AS_JUN only has 8 failed tests, all due to "x and y nan location mismatch".

> Yeah that's too long :). I agree we'll eventually need to pare down the tests. A big reason for this is the length of the resampled time series in some of the cases (e.g. hourly frequencies over a 140 year period produce very long arrays, e.g. in the XT case). There's also probably a lot of overlap between the existing test cases as far as the logic they exercise (e.g. do we need to test against frequencies '2H', '5H', '7H', '12H', and '8001H' or would just '7H' suffice etc.?).

We probably don't. I forget the reason, but early on in development resampling tests failed for some time ranges when using purely odd frequencies, while others failed with purely even ones. Resampling tests for 12H/24H frequencies might not be needed now that `_adjust_bin_edges` is being used.

> While it's definitely good to test directly against pandas for some examples, in place of some of those tests, we might want to consider writing some targeted unit tests for the methods in resample_cftime.py, like _get_time_bins, CFTimeGrouper.first_items, etc. With those it would probably be easier to write some tests that use minimal data but exercise all the conditional logic.

I'll look into writing targeted unit tests.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex Resampling 387924616
454940009 https://github.com/pydata/xarray/pull/2593#issuecomment-454940009 https://api.github.com/repos/pydata/xarray/issues/2593 MDEyOklzc3VlQ29tbWVudDQ1NDk0MDAwOQ== jwenfai 8708062 2019-01-16T21:02:18Z 2019-01-16T21:02:18Z CONTRIBUTOR

Hi @spencerkclark, sorry it took so long to get back to you. I've implemented your simplified resampling logic. Some of the logic had to be altered since pandas has made updates.

It's great not having to delineate between upsampling/downsampling cases! I ran into some issues though and I thought maybe an extra pair of eyes could help me diagnose them:

  1. cftime: Not really important, but I cannot reproduce the results you obtained for cftime 1.0.3.4. I've tried Python 2.7 and 3.6, conda packages as well as building from source, and a Windows machine and the Windows Ubuntu shell; the datetime arithmetic precision problem persists. To work around this issue, I'm using assert_allclose with default tolerances in the tests, as suggested.

  2. pandas: The pandas library refuses to resample certain indices and throws a "values falls before first bin" error. The error comes from bins=lib.generate_bins_dt64(...) around line 1400 of pandas/core/resample.py and is a direct consequence of the _adjust_bin_edges operation adding 1 extra day minus 1 nanosecond, causing the first value of the sorted bin_edges to be larger than the first sorted ax_values. My current workaround is to use pytest.mark.xfail(raises=ValueError); a rough sketch of this workaround appears after this list.

CFTimeIndex resampling does not encounter the same error. Nevertheless, I've changed the CFTimeIndex resampling logic so that the first bin value does not have 1 day minus 1 microsecond added to it, in order to (hopefully) rectify the error. Testing against pandas resampling results does not show any difference between the corrected and uncorrected CFTimeIndex resampling code.

  3. xarray: Ignoring the aforementioned issue with pandas, xarray resampling results for certain time ranges do not match pandas', specifically these two: dict(start='1892-01-01T12:00:00', periods=15, freq='5256113T'), labeled XT, and dict(start='1892', periods=10, freq='6AS-JUN'), labeled 6AS_JUN. XT seems to be causing the most problems, which might be due to its rather challenging freq specification.
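A hypothetical sketch of the xfail workaround from item 2; resample_and_compare is a stand-in stub (not the real helper) that only emulates the pandas error path, so the pytest.param/xfail mechanics are visible in isolation:

```python
import pytest


def resample_and_compare(freq):
    # Stand-in for the real cftime-vs-pandas comparison; the XT frequency is
    # the one reported above as tripping pandas' error.
    if freq == '5256113T':
        raise ValueError('values falls before first bin')


@pytest.mark.parametrize(
    'freq',
    ['7H', pytest.param('5256113T', marks=pytest.mark.xfail(raises=ValueError))],
)
def test_resample_matches_pandas(freq):
    resample_and_compare(freq)
```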

Since I've rewritten test_cftimeindex_resample.py based on your gists, a lot more test cases are being generated. Without XT and 6AS_JUN, the tests take about 40 minutes to run on my machine; including them bumps that time up to 3 hours. The number of tests should be pared down prior to merging but I think they're helpful right now for identifying problems. I've included test results in XML for you and other collaborators to compare against. One file contains the results with the 1 day minus 1 microsecond fix applied and the other is without the fix. They can be imported into PyCharm, but I'm not sure if they can be read any other way. Test Results - pytest_in_test_cftimeindex_resample_py.zip

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex Resampling 387924616
450544589 https://github.com/pydata/xarray/pull/2593#issuecomment-450544589 https://api.github.com/repos/pydata/xarray/issues/2593 MDEyOklzc3VlQ29tbWVudDQ1MDU0NDU4OQ== jwenfai 8708062 2018-12-30T07:45:24Z 2018-12-30T07:45:24Z CONTRIBUTOR

I'm on a break right now and I'll look more closely at the alternative solution when I'm back, but from what I've read in your comment the solution makes sense. Also, nice job catching why _adjust_bin_edges was needed and writing a comprehensive explanation in the code.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex Resampling 387924616
449555459 https://github.com/pydata/xarray/pull/2593#issuecomment-449555459 https://api.github.com/repos/pydata/xarray/issues/2593 MDEyOklzc3VlQ29tbWVudDQ0OTU1NTQ1OQ== jwenfai 8708062 2018-12-22T08:39:59Z 2018-12-22T08:39:59Z CONTRIBUTOR

Glad to see that I'm not the only one getting different results. And I agree (biased as I am) that the additional bin at the end of the resampled time series is superfluous.

If pandas master with the altered resampling logic will be the definitive version going forward, should development of CFTimeIndex resampling be suspended until this version of pandas master is released and xarray uses it as a dependency?

On a somewhat related note, looking over the latest pandas master resample.py made me realize that https://github.com/pydata/xarray/blob/4317c697900c80604dee793ffc1186e5c57a03fd/xarray/core/resample_cftime.py#L114-L118 is now wrong due to changes made 16 days ago (https://github.com/pandas-dev/pandas/issues/24127). Since non-integer offset frequencies are not supported by BaseCFTimeOffset, is_day and offset.n == 1 should just be is_day, which can be further simplified to:

```python
if isinstance(offset, CFTIME_TICKS):
    return _adjust_dates_anchored(first, last, offset,
                                  closed=closed, base=base)
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex Resampling 387924616
449529506 https://github.com/pydata/xarray/pull/2593#issuecomment-449529506 https://api.github.com/repos/pydata/xarray/issues/2593 MDEyOklzc3VlQ29tbWVudDQ0OTUyOTUwNg== jwenfai 8708062 2018-12-22T00:23:34Z 2018-12-22T00:23:34Z CONTRIBUTOR

Can anyone confirm that the latest unreleased build of pandas gives resample results that are different from pandas 0.23.4?

I get results that match cftime resampling for downsampling and upsampling without having to rely on .dropna().

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex Resampling 387924616
448448286 https://github.com/pydata/xarray/pull/2593#issuecomment-448448286 https://api.github.com/repos/pydata/xarray/issues/2593 MDEyOklzc3VlQ29tbWVudDQ0ODQ0ODI4Ng== jwenfai 8708062 2018-12-19T02:36:38Z 2018-12-19T02:36:38Z CONTRIBUTOR

@spencerkclark Sorry about that, must be a couple of commits behind. Should be resolved now.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex Resampling 387924616
447718468 https://github.com/pydata/xarray/pull/2593#issuecomment-447718468 https://api.github.com/repos/pydata/xarray/issues/2593 MDEyOklzc3VlQ29tbWVudDQ0NzcxODQ2OA== jwenfai 8708062 2018-12-17T04:20:51Z 2018-12-17T04:20:51Z CONTRIBUTOR

@spencerkclark Thanks for the detailed review! I'll fix up my code over the next few days.

I haven't completely solved the upsampling issue yet, but I think I might have some clues as to what's happening. Timedelta operations on cftime.datetime do not always return correct values. Sometimes they are a few microseconds or one second off.

The issue can be sidestepped by shifting the bins 1 second forward for closed=='right' and 1 second back for closed=='left' in groupby.py, but this obviously introduces issues for resampling operations at second and microsecond resolution. This workaround doesn't pass all the tests; an extra time bin is still sometimes created. You'll see what I mean when I make a new commit sometime next week.
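A hypothetical illustration of that 1-second shift, with made-up bin values; the real change lives in groupby.py and operates on the bin edges computed during resampling:

```python
import datetime
import cftime

# Made-up bin edges standing in for the ones computed during resampling.
datetime_bins = [cftime.DatetimeGregorian(2000, m, 1) for m in (1, 2, 3)]
closed = 'right'

shift = datetime.timedelta(seconds=1)
if closed == 'right':
    datetime_bins = [edge + shift for edge in datetime_bins]  # nudge edges forward
else:
    datetime_bins = [edge - shift for edge in datetime_bins]  # nudge edges backward

print(datetime_bins)
```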

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CFTimeIndex Resampling 387924616


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);