issue_comments
20 rows where issue = 337267315 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
428055468 | https://github.com/pydata/xarray/pull/2261#issuecomment-428055468 | https://api.github.com/repos/pydata/xarray/issues/2261 | MDEyOklzc3VlQ29tbWVudDQyODA1NTQ2OA== | jhamman 2443309 | 2018-10-09T04:26:34Z | 2018-10-09T04:26:34Z | MEMBER | Nice work on this @shoyer. Really excited to set this free. |
{ "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 3, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.backends refactor 337267315 | |
428039712 | https://github.com/pydata/xarray/pull/2261#issuecomment-428039712 | https://api.github.com/repos/pydata/xarray/issues/2261 | MDEyOklzc3VlQ29tbWVudDQyODAzOTcxMg== | shoyer 1217238 | 2018-10-09T02:34:23Z | 2018-10-09T02:34:23Z | MEMBER | Yep, that's my plan. I just did a read through the code again and identified a few unreachable lines, which I removed. I'll merge when CI passes. |
{ "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
xarray.backends refactor 337267315 | |
427932443 | https://github.com/pydata/xarray/pull/2261#issuecomment-427932443 | https://api.github.com/repos/pydata/xarray/issues/2261 | MDEyOklzc3VlQ29tbWVudDQyNzkzMjQ0Mw== | pep8speaks 24736507 | 2018-10-08T18:21:06Z | 2018-10-09T02:31:16Z | NONE | Hello @shoyer! Thanks for updating the PR.
Comment last updated on October 09, 2018 at 02:31 Hours UTC |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.backends refactor 337267315 | |
428038801 | https://github.com/pydata/xarray/pull/2261#issuecomment-428038801 | https://api.github.com/repos/pydata/xarray/issues/2261 | MDEyOklzc3VlQ29tbWVudDQyODAzODgwMQ== | jhamman 2443309 | 2018-10-09T02:28:41Z | 2018-10-09T02:28:41Z | MEMBER | Based on the arrival of #2476 (!), I suggest we merge this. I think we've had enough review to justify this being put into a release candidate in the relatively near future. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.backends refactor 337267315 | |
425047446 | https://github.com/pydata/xarray/pull/2261#issuecomment-425047446 | https://api.github.com/repos/pydata/xarray/issues/2261 | MDEyOklzc3VlQ29tbWVudDQyNTA0NzQ0Ng== | shoyer 1217238 | 2018-09-27T10:54:32Z | 2018-09-27T10:54:32Z | MEMBER | At some point soon I'm just going to merge this, more review or not! Hopefully a release candidate will catch any major issues. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.backends refactor 337267315 | |
424946237 | https://github.com/pydata/xarray/pull/2261#issuecomment-424946237 | https://api.github.com/repos/pydata/xarray/issues/2261 | MDEyOklzc3VlQ29tbWVudDQyNDk0NjIzNw== | jhamman 2443309 | 2018-09-27T03:23:41Z | 2018-09-27T03:23:41Z | MEMBER | I'd also be happy to see this go in. We could use a review from someone other than me. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.backends refactor 337267315 | |
424945880 | https://github.com/pydata/xarray/pull/2261#issuecomment-424945880 | https://api.github.com/repos/pydata/xarray/issues/2261 | MDEyOklzc3VlQ29tbWVudDQyNDk0NTg4MA== | shoyer 1217238 | 2018-09-27T03:21:10Z | 2018-09-27T03:21:10Z | MEMBER | I'd love to move this forward. I think it will fix some serious usability and performance issues with distributed reads/writes of netCDF files. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.backends refactor 337267315 | |
419190187 | https://github.com/pydata/xarray/pull/2261#issuecomment-419190187 | https://api.github.com/repos/pydata/xarray/issues/2261 | MDEyOklzc3VlQ29tbWVudDQxOTE5MDE4Nw== | shoyer 1217238 | 2018-09-06T18:10:24Z | 2018-09-06T18:10:24Z | MEMBER | Here are the latest benchmarking numbers. I added a netCDF4 write benchmark based on https://github.com/pydata/xarray/issues/2389, both with and without dask-distributed. (The benchmark table itself was not preserved in this export.) |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.backends refactor 337267315 | |
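The benchmark table from this comment did not survive the export. As a rough, hypothetical reconstruction of the setup it describes, a with/without dask-distributed write benchmark might look like this (array sizes, chunking, worker counts, and paths are all placeholders, not the actual benchmark parameters):

```python
# Hypothetical reconstruction of the kind of netCDF4 write benchmark
# described above; all sizes and settings here are illustrative.
import os
import tempfile
import time

import numpy as np
import xarray as xr
from dask.distributed import Client

# A dask-backed dataset, so that to_netcdf() writes chunks in parallel.
ds = xr.Dataset(
    {"foo": (("x", "y"), np.random.rand(2000, 2000))}
).chunk({"x": 200})

tmpdir = tempfile.mkdtemp()

def timed_write(name):
    path = os.path.join(tmpdir, name)
    start = time.perf_counter()
    ds.to_netcdf(path, engine="netcdf4")
    return time.perf_counter() - start

print("default (threaded) scheduler:", timed_write("threads.nc"))

# With an active Client, dask uses the distributed scheduler instead.
with Client(n_workers=4, threads_per_worker=1):
    print("dask-distributed:", timed_write("distributed.nc"))
```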
416748490 | https://github.com/pydata/xarray/pull/2261#issuecomment-416748490 | https://api.github.com/repos/pydata/xarray/issues/2261 | MDEyOklzc3VlQ29tbWVudDQxNjc0ODQ5MA== | shoyer 1217238 | 2018-08-28T21:34:21Z | 2018-08-28T21:34:21Z | MEMBER | +1 on a release candidate. That's part of why I was thinking of using this as an excuse for the 0.11 release. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.backends refactor 337267315 | |
416443277 | https://github.com/pydata/xarray/pull/2261#issuecomment-416443277 | https://api.github.com/repos/pydata/xarray/issues/2261 | MDEyOklzc3VlQ29tbWVudDQxNjQ0MzI3Nw== | shoyer 1217238 | 2018-08-28T03:57:54Z | 2018-08-28T03:57:54Z | MEMBER | I just ran the benchmark suite again and now see improvement across the board. (The benchmark numbers themselves were not preserved in this export.)
I considered adding another benchmark with dask-distributed, but the numbers look very similar to those for multi-processing or threads. Adding the distributed tests doesn't seem to provide a useful additional signal and makes the whole IO benchmarking suite run about 30% slower. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.backends refactor 337267315 | |
415082921 | https://github.com/pydata/xarray/pull/2261#issuecomment-415082921 | https://api.github.com/repos/pydata/xarray/issues/2261 | MDEyOklzc3VlQ29tbWVudDQxNTA4MjkyMQ== | shoyer 1217238 | 2018-08-22T15:54:09Z | 2018-08-22T15:54:09Z | MEMBER | ASV benchmark results for the dataset io tests (created with …). Most of the changed benchmarks have improved by ~20%, with the exceptions of … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.backends refactor 337267315 | |
414389492 | https://github.com/pydata/xarray/pull/2261#issuecomment-414389492 | https://api.github.com/repos/pydata/xarray/issues/2261 | MDEyOklzc3VlQ29tbWVudDQxNDM4OTQ5Mg== | shoyer 1217238 | 2018-08-20T17:01:38Z | 2018-08-20T17:01:38Z | MEMBER | This is ready for further review and testing. Things are working for writes with dask-distributed, including with h5netcdf (requires the 0.6.2 release of h5netcdf) and on Windows (https://github.com/pydata/xarray/issues/1738). Follow-ups for future work:
- I managed to work around the need for a reentrant lock (https://github.com/dask/dask/issues/3832), but using a reentrant lock would be a nice clean-up.
- Currently I'm using the "close after each write" strategy with dask-distributed (https://github.com/dask/distributed/issues/2163); see the sketch after this row. This works OK for netCDF4 and h5netcdf, but for the SciPy netCDF writer it's basically a non-starter, because SciPy only writes complete files (https://github.com/scipy/scipy/issues/9157) -- so I'm still having SciPy raise an error. It would be nice to also support the "write complete files" strategy, which could have significantly better performance at the cost of memory usage. We might need some new API for this. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.backends refactor 337267315 | |
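To make the "close after each write" strategy concrete, here is a minimal sketch using the plain netCDF4 library, assuming a per-process lock and a module-level helper. This is illustrative only, not xarray's internal API:

```python
# Minimal sketch of the "close after each write" strategy: each write
# task reopens the file, writes its block, and closes again, so no open
# handle ever has to be shared between dask-distributed workers.
from threading import Lock

import netCDF4

_write_lock = Lock()  # serialize writers within a single process

def write_block(path, variable, slices, data):
    # assumes `path` already exists with `variable` defined in it
    with _write_lock:
        nc = netCDF4.Dataset(path, mode="a")  # reopen for every write
        try:
            nc.variables[variable][slices] = data
        finally:
            nc.close()  # flush so other workers see a consistent file
```

As the comment notes, this pattern is a non-starter for the SciPy writer, since SciPy only writes complete files.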
409032803 | https://github.com/pydata/xarray/pull/2261#issuecomment-409032803 | https://api.github.com/repos/pydata/xarray/issues/2261 | MDEyOklzc3VlQ29tbWVudDQwOTAzMjgwMw== | shoyer 1217238 | 2018-07-30T22:29:30Z | 2018-07-30T22:29:30Z | MEMBER | I think it's a matter of missing some of the required locks and/or not syncing files before pickling the FileManager object. I'm currently working through the locking logic again. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.backends refactor 337267315 | |
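The pickling half of the problem described above is straightforward to sketch: a manager must flush pending writes and drop its (unpicklable) file handle before being sent to a worker. The class below is an illustrative pattern, not xarray's actual FileManager:

```python
# Sketch of a picklable file manager: sync before pickling, drop the
# handle, and reopen lazily on the receiving worker.
class PicklableFileManager:
    def __init__(self, opener, path, mode="a"):
        self._opener = opener  # e.g. netCDF4.Dataset
        self._path = path
        self._mode = mode
        self._file = None

    def acquire(self):
        if self._file is None:
            self._file = self._opener(self._path, self._mode)
        return self._file

    def __getstate__(self):
        if self._file is not None:
            self._file.sync()  # flush pending writes before pickling
        # the open handle itself is not picklable, so it is dropped
        return {"opener": self._opener, "path": self._path,
                "mode": self._mode}

    def __setstate__(self, state):
        self._opener = state["opener"]
        self._path = state["path"]
        self._mode = state["mode"]
        self._file = None  # reopened lazily via acquire() on the worker
```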
409012504 | https://github.com/pydata/xarray/pull/2261#issuecomment-409012504 | https://api.github.com/repos/pydata/xarray/issues/2261 | MDEyOklzc3VlQ29tbWVudDQwOTAxMjUwNA== | jhamman 2443309 | 2018-07-30T21:13:37Z | 2018-07-30T21:13:37Z | MEMBER |
Any ideas of what is not working yet? I spent a fair bit of time wrestling with the distributed write problem earlier this year, so I can perhaps be of help here. Also cc @pwolfram, who was an early interested party in this LRU cache idea. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.backends refactor 337267315 | |
408894526 | https://github.com/pydata/xarray/pull/2261#issuecomment-408894526 | https://api.github.com/repos/pydata/xarray/issues/2261 | MDEyOklzc3VlQ29tbWVudDQwODg5NDUyNg== | shoyer 1217238 | 2018-07-30T15:01:10Z | 2018-07-30T15:01:10Z | MEMBER | Note that this isn't quite working for Dask distributed yet. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.backends refactor 337267315 | |
408830242 | https://github.com/pydata/xarray/pull/2261#issuecomment-408830242 | https://api.github.com/repos/pydata/xarray/issues/2261 | MDEyOklzc3VlQ29tbWVudDQwODgzMDI0Mg== | fmaussion 10050469 | 2018-07-30T11:19:29Z | 2018-07-30T11:19:29Z | MEMBER | This is great! I like it, this simplifies the internals a lot. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.backends refactor 337267315 | |
408714992 | https://github.com/pydata/xarray/pull/2261#issuecomment-408714992 | https://api.github.com/repos/pydata/xarray/issues/2261 | MDEyOklzc3VlQ29tbWVudDQwODcxNDk5Mg== | shoyer 1217238 | 2018-07-29T23:53:31Z | 2018-07-29T23:53:31Z | MEMBER | I finished porting this to the other backends and have now officially deprecated the …
I'm tentatively marking this for the 0.11 release, since there's a decent chance that this will cause some breakage (we will definitely want to test this on some real workloads before the release). It's been about 9 months since the 0.10 release, so this is probably a good time to make another major release anyway. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.backends refactor 337267315 | |
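The name of the deprecated keyword was an inline code span that was lost in this export. The mechanism is the standard one; a sketch with a hypothetical argument name:

```python
import warnings

def open_example(filename, legacy_option=None):  # placeholder name
    """Sketch of deprecating a keyword argument with a FutureWarning."""
    if legacy_option is not None:
        warnings.warn(
            "this argument is deprecated; file handles are now managed "
            "automatically and it will be removed in a future release",
            FutureWarning,
            stacklevel=2,
        )
    # ... normal open logic would follow here ...
```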
408642640 | https://github.com/pydata/xarray/pull/2261#issuecomment-408642640 | https://api.github.com/repos/pydata/xarray/issues/2261 | MDEyOklzc3VlQ29tbWVudDQwODY0MjY0MA== | shoyer 1217238 | 2018-07-29T00:03:57Z | 2018-07-29T00:03:57Z | MEMBER | As an experiment, I rewrote the SciPy netCDF backend to use FileManager:
- The code is now significantly simpler -- all the ensure_open() business could simply be removed.
- We used to see a bunch of warnings about not closing memory mapped files ("RuntimeWarning: Cannot close a netcdf_file opened with mmap=True, when netcdf_variables or arrays referring to its data still exist."). These have all gone away!
- … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.backends refactor 337267315 | |
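For intuition, here is a sketch of what a store method looks like once file handling is delegated to a manager, in the spirit of the simplification described above; the class and method names are hypothetical, not the actual xarray.backends code:

```python
# Once a manager owns the file, store methods no longer need the
# defensive ensure_open() bookkeeping: they just ask for a live handle.
class SketchStore:
    def __init__(self, manager):
        # manager: anything with an acquire() returning an open file,
        # e.g. the PicklableFileManager sketched earlier
        self._manager = manager

    def get_variables(self):
        ds = self._manager.acquire()  # always returns an open file
        return dict(ds.variables)
```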
404216747 | https://github.com/pydata/xarray/pull/2261#issuecomment-404216747 | https://api.github.com/repos/pydata/xarray/issues/2261 | MDEyOklzc3VlQ29tbWVudDQwNDIxNjc0Nw== | shoyer 1217238 | 2018-07-11T15:42:53Z | 2018-07-11T15:42:53Z | MEMBER | OK, this is ready for review. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.backends refactor 337267315 | |
403543994 | https://github.com/pydata/xarray/pull/2261#issuecomment-403543994 | https://api.github.com/repos/pydata/xarray/issues/2261 | MDEyOklzc3VlQ29tbWVudDQwMzU0Mzk5NA== | shoyer 1217238 | 2018-07-09T16:46:05Z | 2018-07-09T16:46:05Z | MEMBER | @jhamman thanks for taking a look. I'm going to push another iteration of this shortly (OK, a major rewrite) where there is only a single FileManager object which uses an LRU cache. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.backends refactor 337267315 |
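A minimal sketch of the LRU-cache idea mentioned above: a bounded mapping from keys to open files that closes the least-recently-used handle on eviction. This is an illustrative pattern, not xarray's actual implementation:

```python
# Bounded LRU cache of open files; evicted handles are closed so the
# number of simultaneously open files stays under a fixed limit.
from collections import OrderedDict

class FileCache:
    def __init__(self, maxsize=128):
        self._cache = OrderedDict()
        self._maxsize = maxsize

    def acquire(self, key, opener):
        """Return an open file for `key`, opening via `opener()` if needed."""
        if key in self._cache:
            self._cache.move_to_end(key)  # mark as most recently used
            return self._cache[key]
        file = opener()
        self._cache[key] = file
        if len(self._cache) > self._maxsize:
            _, oldest = self._cache.popitem(last=False)
            oldest.close()  # evicting closes the file
        return file

# Usage: cache = FileCache(maxsize=2)
#        f = cache.acquire("a.nc", lambda: open("a.nc", "rb"))
```

Closing evicted handles is the key design point: it keeps the number of simultaneously open files bounded no matter how many datasets are touched.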
CREATE TABLE [issue_comments] (
  [html_url] TEXT,
  [issue_url] TEXT,
  [id] INTEGER PRIMARY KEY,
  [node_id] TEXT,
  [user] INTEGER REFERENCES [users]([id]),
  [created_at] TEXT,
  [updated_at] TEXT,
  [author_association] TEXT,
  [body] TEXT,
  [reactions] TEXT,
  [performed_via_github_app] TEXT,
  [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);