issue_comments
8 rows where author_association = "MEMBER", issue = 283388962, and user = 306380, sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
362773103 | https://github.com/pydata/xarray/pull/1793#issuecomment-362773103 | https://api.github.com/repos/pydata/xarray/issues/1793 | MDEyOklzc3VlQ29tbWVudDM2Mjc3MzEwMw== | mrocklin 306380 | 2018-02-03T03:13:04Z | 2018-02-03T03:13:04Z | MEMBER | Honestly we don't have a very clean mechanism for this. Probably you want to look at | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | fix distributed writes 283388962 |
362698024 | https://github.com/pydata/xarray/pull/1793#issuecomment-362698024 | https://api.github.com/repos/pydata/xarray/issues/1793 | MDEyOklzc3VlQ29tbWVudDM2MjY5ODAyNA== | mrocklin 306380 | 2018-02-02T20:28:55Z | 2018-02-02T20:28:55Z | MEMBER | Performance-wise Dask locks will probably add 1-10ms of communication overhead (probably on the lower end of that), plus whatever contention there will be from locking. You can make these locks as fine-grained as you want, for example by defining a lock-per-filename with | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | fix distributed writes 283388962 |
362673511 | https://github.com/pydata/xarray/pull/1793#issuecomment-362673511 | https://api.github.com/repos/pydata/xarray/issues/1793 | MDEyOklzc3VlQ29tbWVudDM2MjY3MzUxMQ== | mrocklin 306380 | 2018-02-02T18:56:16Z | 2018-02-02T18:56:16Z | MEMBER | SerializableLock isn't appropriate here if you want inter process locking. Dask's lock is probably better here if you're running with the distributed scheduler. On Feb 2, 2018 1:38 PM, "Joe Hamman" notifications@github.com wrote: | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | fix distributed writes 283388962 |
362590407 | https://github.com/pydata/xarray/pull/1793#issuecomment-362590407 | https://api.github.com/repos/pydata/xarray/issues/1793 | MDEyOklzc3VlQ29tbWVudDM2MjU5MDQwNw== | mrocklin 306380 | 2018-02-02T13:46:18Z | 2018-02-02T13:46:18Z | MEMBER | For reference, the line<br>would have to be replaced with<br>To get the same result. However there were a few more calls to compute hidden in various functions (like | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | fix distributed writes 283388962 |
362589762 | https://github.com/pydata/xarray/pull/1793#issuecomment-362589762 | https://api.github.com/repos/pydata/xarray/issues/1793 | MDEyOklzc3VlQ29tbWVudDM2MjU4OTc2Mg== | mrocklin 306380 | 2018-02-02T13:43:33Z | 2018-02-02T13:43:33Z | MEMBER | I've pushed a fix for the<br>In the future I suspect that the | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | fix distributed writes 283388962 |
360548130 | https://github.com/pydata/xarray/pull/1793#issuecomment-360548130 | https://api.github.com/repos/pydata/xarray/issues/1793 | MDEyOklzc3VlQ29tbWVudDM2MDU0ODEzMA== | mrocklin 306380 | 2018-01-25T17:59:34Z | 2018-01-25T17:59:34Z | MEMBER | I can take a look at the future not iterable issue sometime tomorrow.<br>My guess is that this would be easy with a friendly storage target. I'm not sure though. cc @jakirkham who has been active on this topic recently. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | fix distributed writes 283388962 |
357105359 | https://github.com/pydata/xarray/pull/1793#issuecomment-357105359 | https://api.github.com/repos/pydata/xarray/issues/1793 | MDEyOklzc3VlQ29tbWVudDM1NzEwNTM1OQ== | mrocklin 306380 | 2018-01-12T00:23:09Z | 2018-01-12T00:23:09Z | MEMBER | I don't know. I would want to look at the fail case locally. I can try to do this near term, no promises though :/ | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | fix distributed writes 283388962 |
352908509 | https://github.com/pydata/xarray/pull/1793#issuecomment-352908509 | https://api.github.com/repos/pydata/xarray/issues/1793 | MDEyOklzc3VlQ29tbWVudDM1MjkwODUwOQ== | mrocklin 306380 | 2017-12-19T22:39:43Z | 2017-12-19T22:39:43Z | MEMBER | The zarr test seems a bit different. I think your issue here is that you are trying to use synchronous API with the async test harness. I've changed your test and pushed to your branch (hope you don't mind). Relevant docs are here: http://distributed.readthedocs.io/en/latest/develop.html#writing-tests Async testing is nicer in many ways, but does require you to be a bit familiar with the async/tornado API. I also suspect that operations like | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | fix distributed writes 283388962 |
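The lock-per-filename idea in comment 362698024 can be sketched with stdlib threading locks. With the distributed scheduler the analogous object is `distributed.Lock(filename)`, which coordinates across workers rather than threads; the registry and the `write_chunk` helper below are illustrative stand-ins, not xarray's API.

```python
import threading
from collections import defaultdict

# One lock per filename, created lazily on first use. This only
# demonstrates the granularity idea in a single process; a
# distributed.Lock(filename) would play the same role across workers.
_file_locks = defaultdict(threading.Lock)

def write_chunk(filename, chunk, store):
    # Hypothetical writer: only writes targeting the SAME file
    # contend for a lock, so writes to different files proceed freely.
    with _file_locks[filename]:
        store.setdefault(filename, []).append(chunk)

store = {}
write_chunk("a.nc", [1, 2], store)
write_chunk("b.nc", [3], store)
write_chunk("a.nc", [4], store)
```

Finer-grained locks trade a little bookkeeping for less contention, at the cost of the 1-10ms communication overhead per acquisition mentioned in the comment.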
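Comment 362673511 hinges on why `SerializableLock` cannot give inter-process exclusion: it pickles as a token, and deserialized copies share an underlying lock only within one process. A minimal sketch of that token mechanism (the `TokenLock` class is hypothetical, not dask's implementation):

```python
import pickle
import threading
import uuid

# In-process registry mapping tokens to real locks. A copy deserialized
# in ANOTHER process would see a fresh registry and thus get a fresh
# lock -- which is why this pattern cannot lock across processes and a
# distributed.Lock is needed under the distributed scheduler.
_registry = {}

class TokenLock:
    def __init__(self, token=None):
        self.token = token or str(uuid.uuid4())
        self.lock = _registry.setdefault(self.token, threading.Lock())

    def __getstate__(self):
        # Pickle only the token, never the OS-level lock object.
        return self.token

    def __setstate__(self, token):
        self.__init__(token)

original = TokenLock()
copy = pickle.loads(pickle.dumps(original))
```

Within one process the round-tripped copy resolves its token back to the same underlying lock.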
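The replacement discussed in comment 362590407 (the exact lines are truncated above) turns an eager, blocking compute into one that returns a future and blocks only when the result is needed. A hedged stdlib sketch of that distinction, with `simulate_write` standing in for the real kernel:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_write(chunk):
    # Hypothetical stand-in for the real write computation.
    return sum(chunk)

with ThreadPoolExecutor(max_workers=2) as pool:
    eager = simulate_write([1, 2, 3])                # blocks immediately
    future = pool.submit(simulate_write, [1, 2, 3])  # returns at once
    # ... graph construction or other work can continue here ...
    deferred = future.result()                       # block only now
```

The comment's point is that hidden eager calls elsewhere in the code defeat this: every one of them must be converted for the pipeline to stay non-blocking end to end.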
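Comment 352908509 points at distributed's async test harness (see the linked testing docs); the pitfall it describes is calling synchronous API inside an async test. A stdlib asyncio sketch of the shape such a test takes (the names are illustrative, not the harness's API):

```python
import asyncio

async def store_async(value, target):
    # Stand-in for an awaitable write. A blocking, synchronous call here
    # would stall the event loop -- the failure mode the comment describes.
    await asyncio.sleep(0)
    target.append(value)

async def test_store():
    # The whole test body is a coroutine, so every operation on the
    # cluster-under-test must be awaited rather than called directly.
    target = []
    await store_async(42, target)
    return target

result = asyncio.run(test_store())
```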
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```