issue_comments: 377642905
| html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| https://github.com/pydata/xarray/pull/2031#issuecomment-377642905 | https://api.github.com/repos/pydata/xarray/issues/2031 | 377642905 | MDEyOklzc3VlQ29tbWVudDM3NzY0MjkwNQ== | 5635139 | 2018-03-30T23:08:12Z | 2018-03-30T23:08:12Z | MEMBER | Any thoughts on this approach of writing out the result on a slice of a sample dataset / dataarray? I've been thinking about expect tests, as described by @yminsky here. That would be something like: - Have some example datasets (similar to what we do now, though with a well-known seed) - Run our functions and save the output to a file as the known good output - During tests, compare the result to the known good output - Where they differ, raise and show the diff. That's a bit harder with numerical data than with small lists of words (the example in the link), but also helpful: we don't have to manually construct the result in Python, just check it the first time and commit the result. It would also enable tests across moderately sized data, rather than only 'toy' examples. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | 309976469 |
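The comment above outlines an expect-test (snapshot-test) workflow for xarray. Below is a minimal sketch of that flow, assuming pytest; the `snapshots/` directory, the `check_against_snapshot` helper, and the example test are all hypothetical, not part of xarray or the linked PR.

```python
# Sketch of the expect-test flow described in the comment above.
# Assumptions: pytest discovers this file; a snapshots/ directory next to
# the test file holds the known good outputs as NetCDF files.
from pathlib import Path

import numpy as np
import xarray as xr

SNAPSHOT_DIR = Path(__file__).parent / "snapshots"  # assumed location


def check_against_snapshot(result: xr.DataArray, name: str) -> None:
    """Compare `result` to a saved known-good output, creating it on first run."""
    path = SNAPSHOT_DIR / f"{name}.nc"
    if not path.exists():
        # First run: write the result as the known good output, to be
        # reviewed and committed ("just check the first time & commit").
        SNAPSHOT_DIR.mkdir(exist_ok=True)
        result.to_netcdf(path)
        raise AssertionError(f"wrote new snapshot {path}; review and commit it")
    expected = xr.open_dataarray(path)
    # Where the values differ, this raises and shows the mismatch.
    xr.testing.assert_allclose(result, expected)


def test_rolling_mean_snapshot() -> None:
    # Example dataset with a well-known seed, as suggested in the comment.
    rng = np.random.default_rng(0)
    da = xr.DataArray(rng.standard_normal((10, 5)), dims=("time", "x"))
    result = da.rolling(time=3).mean()
    check_against_snapshot(result, "rolling_mean")
```

Storing snapshots as NetCDF rather than inline Python literals is one way to handle the "harder with numerical data" point: the comparison uses `xarray.testing.assert_allclose`, so small floating-point differences within tolerance do not invalidate a snapshot.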