issue_comments
284 rows where user = 306380, sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
1510444389 | https://github.com/pydata/xarray/issues/7716#issuecomment-1510444389 | https://api.github.com/repos/pydata/xarray/issues/7716 | IC_kwDOAMm_X85aB41l | mrocklin 306380 | 2023-04-16T17:57:26Z | 2023-04-16T17:57:26Z | MEMBER | That makes sense. Just following up, but this fails today:
It sounds like this will work itself out though and no further work here needs to be done (unless someone wants to go press some green buttons on conda-forge somewhere) |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
bad conda solve with pandas 2 1654022522 | |
1510434421 | https://github.com/pydata/xarray/issues/7716#issuecomment-1510434421 | https://api.github.com/repos/pydata/xarray/issues/7716 | IC_kwDOAMm_X85aB2Z1 | mrocklin 306380 | 2023-04-16T17:10:12Z | 2023-04-16T17:10:12Z | MEMBER | This was the environment, solved on M1 Mac
I can try to minify this in a bit, although I'm on airport wifi right now, and it has started to kick me off, I suspect due to these sorts of activities. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
bad conda solve with pandas 2 1654022522 | |
1510432559 | https://github.com/pydata/xarray/issues/7716#issuecomment-1510432559 | https://api.github.com/repos/pydata/xarray/issues/7716 | IC_kwDOAMm_X85aB18v | mrocklin 306380 | 2023-04-16T17:01:50Z | 2023-04-16T17:01:50Z | MEMBER | I'm still running into this today when using only conda-forge
When I add defaults the problem goes away
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
bad conda solve with pandas 2 1654022522 | |
925015359 | https://github.com/pydata/xarray/issues/5648#issuecomment-925015359 | https://api.github.com/repos/pydata/xarray/issues/5648 | IC_kwDOAMm_X843Ip0_ | mrocklin 306380 | 2021-09-22T15:01:06Z | 2021-09-22T15:01:06Z | MEMBER | It looks like there are some other Dask folks participating. I'll step back and let them take over on our end. On Wed, Sep 22, 2021 at 9:53 AM Hameer Abbasi @.***> wrote:
|
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Duck array compatibility meeting 956103236 | |
924104833 | https://github.com/pydata/xarray/issues/5648#issuecomment-924104833 | https://api.github.com/repos/pydata/xarray/issues/5648 | IC_kwDOAMm_X843FLiB | mrocklin 306380 | 2021-09-21T15:32:58Z | 2021-09-21T15:32:58Z | MEMBER | Surprisingly I happen to be free tomorrow at exactly that time. I've blocked it off. If you want to send a calendar invite to mrocklin at coiled that would be welcome. On Tue, Sep 21, 2021 at 10:27 AM Tom Nicholas @.***> wrote:
|
{ "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
Duck array compatibility meeting 956103236 | |
889468552 | https://github.com/pydata/xarray/issues/5648#issuecomment-889468552 | https://api.github.com/repos/pydata/xarray/issues/5648 | IC_kwDOAMm_X841BDaI | mrocklin 306380 | 2021-07-29T21:21:38Z | 2021-07-29T21:21:38Z | MEMBER | I would be happy to attend and look forward to what I'm sure will be a vigorous discussion :) Thank you for providing convenient links to reading materials ahead of time. As a warning, my responsiveness to github comments these days is not what it used to be. If I miss something here then please forgive me. |
{ "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
Duck array compatibility meeting 956103236 | |
856124510 | https://github.com/pydata/xarray/issues/5426#issuecomment-856124510 | https://api.github.com/repos/pydata/xarray/issues/5426 | MDEyOklzc3VlQ29tbWVudDg1NjEyNDUxMA== | mrocklin 306380 | 2021-06-07T17:31:00Z | 2021-06-07T17:31:00Z | MEMBER | Also cc'ing @gjoseph92 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement dask.sizeof for xarray.core.indexing.ImplicitToExplicitIndexingAdapter 908971901 | |
852685733 | https://github.com/pydata/xarray/issues/5426#issuecomment-852685733 | https://api.github.com/repos/pydata/xarray/issues/5426 | MDEyOklzc3VlQ29tbWVudDg1MjY4NTczMw== | mrocklin 306380 | 2021-06-02T03:23:35Z | 2021-06-02T03:23:35Z | MEMBER | I think that the next thing to do here is to try to replicate this locally and watch the stealing logic to figure out why these tasks aren't moving. At this point we're just guessing. @jrbourbeau can I ask you to add this to the stack of issues to have folks look into? |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement dask.sizeof for xarray.core.indexing.ImplicitToExplicitIndexingAdapter 908971901 | |
852683916 | https://github.com/pydata/xarray/issues/5426#issuecomment-852683916 | https://api.github.com/repos/pydata/xarray/issues/5426 | MDEyOklzc3VlQ29tbWVudDg1MjY4MzkxNg== | mrocklin 306380 | 2021-06-02T03:18:37Z | 2021-06-02T03:18:37Z | MEMBER | Yeah, that size being very small shouldn't be a problem |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement dask.sizeof for xarray.core.indexing.ImplicitToExplicitIndexingAdapter 908971901 | |
852675828 | https://github.com/pydata/xarray/issues/5426#issuecomment-852675828 | https://api.github.com/repos/pydata/xarray/issues/5426 | MDEyOklzc3VlQ29tbWVudDg1MjY3NTgyOA== | mrocklin 306380 | 2021-06-02T02:58:13Z | 2021-06-02T02:58:13Z | MEMBER | Hrm, the root dependency does appear to be of type
I'm not sure what's going on with it |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement dask.sizeof for xarray.core.indexing.ImplicitToExplicitIndexingAdapter 908971901 | |
852672930 | https://github.com/pydata/xarray/issues/5426#issuecomment-852672930 | https://api.github.com/repos/pydata/xarray/issues/5426 | MDEyOklzc3VlQ29tbWVudDg1MjY3MjkzMA== | mrocklin 306380 | 2021-06-02T02:50:28Z | 2021-06-02T02:50:28Z | MEMBER | This is what it looks like in practice for me FWIW |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement dask.sizeof for xarray.core.indexing.ImplicitToExplicitIndexingAdapter 908971901 | |
852671075 | https://github.com/pydata/xarray/issues/5426#issuecomment-852671075 | https://api.github.com/repos/pydata/xarray/issues/5426 | MDEyOklzc3VlQ29tbWVudDg1MjY3MTA3NQ== | mrocklin 306380 | 2021-06-02T02:45:48Z | 2021-06-02T02:45:48Z | MEMBER | Ideally Dask would be able to be robust to this kind of mis-assignment of object size, but it's particularly hard in this situation. We can't try to serialize these things because if we're wrong and the size actually is massive then we blow out the worker. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement dask.sizeof for xarray.core.indexing.ImplicitToExplicitIndexingAdapter 908971901 | |
852670723 | https://github.com/pydata/xarray/issues/5426#issuecomment-852670723 | https://api.github.com/repos/pydata/xarray/issues/5426 | MDEyOklzc3VlQ29tbWVudDg1MjY3MDcyMw== | mrocklin 306380 | 2021-06-02T02:44:55Z | 2021-06-02T02:44:55Z | MEMBER | It may also be that we don't want to inline zarr objects (The graph is likely to be cheaper to move if we don't inline them). However we may want Zarr objects to report themselves as easy to move by defining their approximate size with |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement dask.sizeof for xarray.core.indexing.ImplicitToExplicitIndexingAdapter 908971901 | |
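The comments in this thread revolve around dask's `sizeof` dispatch, which the scheduler uses to estimate how expensive a piece of data is to serialize and move. A minimal sketch of registering a custom size for a wrapper type follows; `ZarrLikeStore` is a hypothetical stand-in for illustration, not xarray's actual `ImplicitToExplicitIndexingAdapter`:

```python
from dask.sizeof import sizeof

class ZarrLikeStore:
    """Hypothetical stand-in for a store/adapter object whose default
    measured size wildly overstates the cost of moving it."""

@sizeof.register(ZarrLikeStore)
def sizeof_zarr_like(obj):
    # Report a small constant size so the scheduler treats the object
    # as cheap to move between workers.
    return 120

small = sizeof(ZarrLikeStore())  # 120, rather than a multi-TB estimate
```

With a registration like this in place, tasks carrying such objects stop looking "highly resistant to being moved", which is the load-balancing symptom described above.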
852666752 | https://github.com/pydata/xarray/issues/5426#issuecomment-852666752 | https://api.github.com/repos/pydata/xarray/issues/5426 | MDEyOklzc3VlQ29tbWVudDg1MjY2Njc1Mg== | mrocklin 306380 | 2021-06-02T02:34:48Z | 2021-06-02T02:34:48Z | MEMBER | Do you run into poor load balancing as well when using Zarr with Xarray? My guess here is that there are a few tasks in the graph that report multi-TB sizes and so are highly resistant to being moved around. I haven't verified that though |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement dask.sizeof for xarray.core.indexing.ImplicitToExplicitIndexingAdapter 908971901 | |
852656740 | https://github.com/pydata/xarray/issues/5426#issuecomment-852656740 | https://api.github.com/repos/pydata/xarray/issues/5426 | MDEyOklzc3VlQ29tbWVudDg1MjY1Njc0MA== | mrocklin 306380 | 2021-06-02T02:09:50Z | 2021-06-02T02:09:50Z | MEMBER | Thinking about this some more, it might be some other object, like a Zarr store, that is on only a couple of these machines. I recall that recently we switched Zarr from being in every task to being in only a few tasks. The problem here might be reversed, that we actually want to view Zarr stores in this case as quite cheap. cc @TomAugspurger who I think was actively making decisions around that time. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Implement dask.sizeof for xarray.core.indexing.ImplicitToExplicitIndexingAdapter 908971901 | |
755421422 | https://github.com/pydata/xarray/pull/4746#issuecomment-755421422 | https://api.github.com/repos/pydata/xarray/issues/4746 | MDEyOklzc3VlQ29tbWVudDc1NTQyMTQyMg== | mrocklin 306380 | 2021-01-06T16:50:12Z | 2021-01-06T16:50:12Z | MEMBER | If anyone here has time to review https://github.com/dask/dask/pull/7033 that would be greatly appreciated :) |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Faster unstacking 777153550 | |
663148752 | https://github.com/pydata/xarray/issues/4208#issuecomment-663148752 | https://api.github.com/repos/pydata/xarray/issues/4208 | MDEyOklzc3VlQ29tbWVudDY2MzE0ODc1Mg== | mrocklin 306380 | 2020-07-23T17:57:55Z | 2020-07-23T17:57:55Z | MEMBER | Dask collections tokenize quickly. We just use the name I think. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support for duck Dask Arrays 653430454 | |
663123118 | https://github.com/pydata/xarray/issues/4208#issuecomment-663123118 | https://api.github.com/repos/pydata/xarray/issues/4208 | MDEyOklzc3VlQ29tbWVudDY2MzEyMzExOA== | mrocklin 306380 | 2020-07-23T17:05:30Z | 2020-07-23T17:05:30Z | MEMBER |
Ah, great. My bad.
I think that you would want to make a pint array rechunk method that called down to the dask array rechunk method. My guess is that this might come up in other situations as well.
I think that implementing the It's also possible that we could look at the |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support for duck Dask Arrays 653430454 | |
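The delegation suggested above (a pint-style array exposing `rechunk` by calling down to the wrapped dask array's `rechunk`) can be sketched as follows. Both classes here are hypothetical illustrations, not pint's or dask's real classes; `FakeDaskArray` stands in for `dask.array.Array` so the sketch is self-contained:

```python
class FakeDaskArray:
    """Stand-in for dask.array.Array; only the chunks/rechunk bits we need."""
    def __init__(self, chunks):
        self.chunks = chunks

    def rechunk(self, chunks):
        return FakeDaskArray(chunks)

class UnitArray:
    """Hypothetical pint-style wrapper: a chunked magnitude plus a unit."""
    def __init__(self, magnitude, units):
        self.magnitude = magnitude
        self.units = units

    def rechunk(self, chunks):
        # Call down to the wrapped array's rechunk, keeping the units attached
        return UnitArray(self.magnitude.rechunk(chunks), self.units)

arr = UnitArray(FakeDaskArray((100,)), "meter")
out = arr.rechunk((10,))  # out.units == "meter", out.magnitude.chunks == (10,)
```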
663119539 | https://github.com/pydata/xarray/issues/4208#issuecomment-663119539 | https://api.github.com/repos/pydata/xarray/issues/4208 | MDEyOklzc3VlQ29tbWVudDY2MzExOTUzOQ== | mrocklin 306380 | 2020-07-23T16:58:27Z | 2020-07-23T16:58:27Z | MEMBER | My guess is that we could steal the xarray.DataArray implementations over to Pint without causing harm. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support for duck Dask Arrays 653430454 | |
663119334 | https://github.com/pydata/xarray/issues/4208#issuecomment-663119334 | https://api.github.com/repos/pydata/xarray/issues/4208 | MDEyOklzc3VlQ29tbWVudDY2MzExOTMzNA== | mrocklin 306380 | 2020-07-23T16:58:06Z | 2020-07-23T16:58:06Z | MEMBER | In Xarray we implemented the Dask collection spec. https://docs.dask.org/en/latest/custom-collections.html#the-dask-collection-interface We might want to do that with Pint as well, if they're going to contain Dask things. That way Dask operations like |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support for duck Dask Arrays 653430454 | |
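The dask collection interface linked above amounts to a handful of dunder methods. A minimal hypothetical collection (enough of the protocol for `dask.compute` to work; real implementations also define `__dask_postpersist__` and `__dask_tokenize__`):

```python
import dask

class Doubled:
    """Hypothetical lazy collection: doubles a list of numbers."""

    def __init__(self, numbers):
        self._keys = [("doubled", i) for i in range(len(numbers))]
        self._graph = {k: (lambda x: 2 * x, n)
                       for k, n in zip(self._keys, numbers)}

    def __dask_graph__(self):
        return self._graph

    def __dask_keys__(self):
        return self._keys

    @staticmethod
    def __dask_optimize__(dsk, keys, **kwargs):
        return dsk  # no custom graph optimization

    # Use dask's synchronous scheduler by default
    __dask_scheduler__ = staticmethod(dask.get)

    def __dask_postcompute__(self):
        # (finalize_function, extra_args): assemble chunk results into a list
        return list, ()

(result,) = dask.compute(Doubled([1, 2, 3]))
```

Once these methods exist, generic operations like `dask.persist` and `dask.visualize` work on the wrapper for free, which is the benefit being suggested for Pint here.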
617198555 | https://github.com/pydata/xarray/pull/3989#issuecomment-617198555 | https://api.github.com/repos/pydata/xarray/issues/3989 | MDEyOklzc3VlQ29tbWVudDYxNzE5ODU1NQ== | mrocklin 306380 | 2020-04-21T13:59:49Z | 2020-04-21T13:59:49Z | MEMBER | Yeah, my sense here is that it probably makes sense to relax the assertion that only async |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Fix distributed tests on upstream-dev 603937718 | |
615501070 | https://github.com/pydata/xarray/issues/3213#issuecomment-615501070 | https://api.github.com/repos/pydata/xarray/issues/3213 | MDEyOklzc3VlQ29tbWVudDYxNTUwMTA3MA== | mrocklin 306380 | 2020-04-17T23:08:18Z | 2020-04-17T23:08:18Z | MEMBER | @amueller have you all connected with @hameerabbasi ? I'm not surprised to hear that there are performance issues with pydata/sparse relative to scipy.sparse, but Hameer has historically been pretty open to working to resolve issues quickly. I'm not sure if there is already an ongoing conversation between the two groups, but I'd recommend replacing "we've chosen not to use pydata/sparse because it isn't feature complete enough for us" with "in order for us to use pydata/sparse we would need the following features". |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
How should xarray use/support sparse arrays? 479942077 | |
603635112 | https://github.com/pydata/xarray/issues/2692#issuecomment-603635112 | https://api.github.com/repos/pydata/xarray/issues/2692 | MDEyOklzc3VlQ29tbWVudDYwMzYzNTExMg== | mrocklin 306380 | 2020-03-25T04:34:26Z | 2020-03-25T04:34:26Z | MEMBER | Gah! On Tue, Mar 24, 2020 at 8:17 PM Joe Hamman notifications@github.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Xarray tutorial at SciPy 2019? 400948664 | |
598800439 | https://github.com/pydata/xarray/issues/3791#issuecomment-598800439 | https://api.github.com/repos/pydata/xarray/issues/3791 | MDEyOklzc3VlQ29tbWVudDU5ODgwMDQzOQ== | mrocklin 306380 | 2020-03-13T16:12:53Z | 2020-03-13T16:12:53Z | MEMBER | I wonder if there are multi-dimensional analogs that might be interesting. @eric-czech , if you have time to say a bit more about the data and operation that you're trying to do I think it would be an interesting exercise to see how to do that operation with Xarray's current functionality. I wouldn't be surprised to learn that there was some way to do what you wanted that went under a different name here. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Self joins with non-unique indexes 569176457 | |
561921753 | https://github.com/pydata/xarray/pull/3584#issuecomment-561921753 | https://api.github.com/repos/pydata/xarray/issues/3584 | MDEyOklzc3VlQ29tbWVudDU2MTkyMTc1Mw== | mrocklin 306380 | 2019-12-05T01:16:03Z | 2019-12-05T01:16:03Z | MEMBER |
That sounds like a reasonable expectation, but honestly it's been a while, so I don't fully trust my knowledge here. It might be worth adding some runtime checks into the |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Make dask names change when chunking Variables by different amounts. 530657789 | |
557615479 | https://github.com/pydata/xarray/issues/3563#issuecomment-557615479 | https://api.github.com/repos/pydata/xarray/issues/3563 | MDEyOklzc3VlQ29tbWVudDU1NzYxNTQ3OQ== | mrocklin 306380 | 2019-11-22T17:12:07Z | 2019-11-22T17:12:07Z | MEMBER | You're probably already aware, but https://examples.dask.org and https://github.com/dask/dask-examples might be a nice model to look at. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
environment file for binderized examples 527296094 | |
546053879 | https://github.com/pydata/xarray/pull/3425#issuecomment-546053879 | https://api.github.com/repos/pydata/xarray/issues/3425 | MDEyOklzc3VlQ29tbWVudDU0NjA1Mzg3OQ== | mrocklin 306380 | 2019-10-24T18:52:08Z | 2019-10-24T18:52:08Z | MEMBER | Thanks @jsignell (and all). I'm really jazzed about this. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Html repr 510294810 | |
540843642 | https://github.com/pydata/xarray/pull/3276#issuecomment-540843642 | https://api.github.com/repos/pydata/xarray/issues/3276 | MDEyOklzc3VlQ29tbWVudDU0MDg0MzY0Mg== | mrocklin 306380 | 2019-10-10T23:49:12Z | 2019-10-10T23:49:12Z | MEMBER | Woo! On Thu, Oct 10, 2019 at 4:44 PM crusaderky notifications@github.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
map_blocks 488243328 | |
527187603 | https://github.com/pydata/xarray/pull/3258#issuecomment-527187603 | https://api.github.com/repos/pydata/xarray/issues/3258 | MDEyOklzc3VlQ29tbWVudDUyNzE4NzYwMw== | mrocklin 306380 | 2019-09-02T15:37:18Z | 2019-09-02T15:37:18Z | MEMBER | I'm glad to see progress here. FWIW, I think that many people would be quite happy with a version that just worked for DataArrays, in case that's faster to get in than the full solution with DataSets. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
[WIP] Add map_blocks. 484752930 | |
526756738 | https://github.com/pydata/xarray/pull/3258#issuecomment-526756738 | https://api.github.com/repos/pydata/xarray/issues/3258 | MDEyOklzc3VlQ29tbWVudDUyNjc1NjczOA== | mrocklin 306380 | 2019-08-30T21:31:49Z | 2019-08-30T21:32:02Z | MEMBER | Then you can construct a tuple as a task |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
[WIP] Add map_blocks. 484752930 | |
525966384 | https://github.com/pydata/xarray/pull/3258#issuecomment-525966384 | https://api.github.com/repos/pydata/xarray/issues/3258 | MDEyOklzc3VlQ29tbWVudDUyNTk2NjM4NA== | mrocklin 306380 | 2019-08-28T23:54:48Z | 2019-08-28T23:54:48Z | MEMBER | Dask doesn't traverse through tuples to find possible keys, so the keys here are hidden from view:
I recommend wrapping the tuples in lists:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
[WIP] Add map_blocks. 484752930 | |
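The traversal rule described above can be illustrated with a small stand-alone sketch of dask's dependency search. This is a simplified imitation for illustration, not dask's actual implementation: it recurses into tasks (tuples with a callable head) and lists, but treats any other tuple as an opaque literal, so keys inside it are invisible:

```python
def find_keys(arg, keys):
    """Simplified imitation of dask's dependency search."""
    found = set()
    work = [arg]
    while work:
        w = work.pop()
        if type(w) is tuple and w and callable(w[0]):  # a task
            work.extend(w[1:])
        elif type(w) is list:
            work.extend(w)
        else:
            try:
                if w in keys:
                    found.add(w)
            except TypeError:
                pass  # unhashable literal
    return found

find_keys((sum, ("x", "y")), {"x", "y"})  # set(): the tuple hides the keys
find_keys((sum, ["x", "y"]), {"x", "y"})  # {"x", "y"}: the list is traversed
```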
518355147 | https://github.com/pydata/xarray/pull/3117#issuecomment-518355147 | https://api.github.com/repos/pydata/xarray/issues/3117 | MDEyOklzc3VlQ29tbWVudDUxODM1NTE0Nw== | mrocklin 306380 | 2019-08-05T18:53:39Z | 2019-08-05T18:53:39Z | MEMBER | Woot! Thanks @nvictus ! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support for __array_function__ implementers (sparse arrays) [WIP] 467771005 | |
517311370 | https://github.com/pydata/xarray/pull/3117#issuecomment-517311370 | https://api.github.com/repos/pydata/xarray/issues/3117 | MDEyOklzc3VlQ29tbWVudDUxNzMxMTM3MA== | mrocklin 306380 | 2019-08-01T14:27:13Z | 2019-08-01T14:27:13Z | MEMBER | Checking in here. This was a fun project during SciPy Sprints that both showed a lot of potential and generated a lot of excitement. But of course as we all returned home other things came up and this has lingered for a while. How can we best preserve this work? Two specific questions:
|
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support for __array_function__ implementers (sparse arrays) [WIP] 467771005 | |
514687031 | https://github.com/pydata/xarray/pull/2255#issuecomment-514687031 | https://api.github.com/repos/pydata/xarray/issues/2255 | MDEyOklzc3VlQ29tbWVudDUxNDY4NzAzMQ== | mrocklin 306380 | 2019-07-24T15:43:04Z | 2019-07-24T15:43:04Z | MEMBER | I'm glad to hear it! I'm curious, are there features in rioxarray that could be pushed upstream? On Wed, Jul 24, 2019 at 8:39 AM Alan D. Snow notifications@github.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add automatic chunking to open_rasterio 336371511 | |
514682988 | https://github.com/pydata/xarray/pull/2255#issuecomment-514682988 | https://api.github.com/repos/pydata/xarray/issues/2255 | MDEyOklzc3VlQ29tbWVudDUxNDY4Mjk4OA== | mrocklin 306380 | 2019-07-24T15:33:10Z | 2019-07-24T15:33:10Z | MEMBER | I've abandoned this PR. If anyone has time to pick it up, that would be welcome. I think that it would have positive impact. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add automatic chunking to open_rasterio 336371511 | |
513504716 | https://github.com/pydata/xarray/pull/1820#issuecomment-513504716 | https://api.github.com/repos/pydata/xarray/issues/1820 | MDEyOklzc3VlQ29tbWVudDUxMzUwNDcxNg== | mrocklin 306380 | 2019-07-20T22:48:30Z | 2019-07-20T22:48:30Z | MEMBER | I'll say that I'm looking forward to this getting in, mostly so that I can raise an issue about adding Dask's chunked array images :) |
{ "total_count": 2, "+1": 0, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
WIP: html repr 287844110 | |
513504690 | https://github.com/pydata/xarray/pull/1820#issuecomment-513504690 | https://api.github.com/repos/pydata/xarray/issues/1820 | MDEyOklzc3VlQ29tbWVudDUxMzUwNDY5MA== | mrocklin 306380 | 2019-07-20T22:47:57Z | 2019-07-20T22:47:57Z | MEMBER |
Yeah, we just use raw HTML |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
WIP: html repr 287844110 | |
511209094 | https://github.com/pydata/xarray/issues/1375#issuecomment-511209094 | https://api.github.com/repos/pydata/xarray/issues/1375 | MDEyOklzc3VlQ29tbWVudDUxMTIwOTA5NA== | mrocklin 306380 | 2019-07-14T14:50:45Z | 2019-07-14T14:50:45Z | MEMBER | @nvictus has been working on this at #3117 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Sparse arrays 221858543 | |
510947988 | https://github.com/pydata/xarray/issues/1938#issuecomment-510947988 | https://api.github.com/repos/pydata/xarray/issues/1938 | MDEyOklzc3VlQ29tbWVudDUxMDk0Nzk4OA== | mrocklin 306380 | 2019-07-12T16:23:08Z | 2019-07-12T16:23:08Z | MEMBER | @jacobtomlinson got things sorta-working with NEP-18 and CuPy in an afternoon in Iris (with a strong emphasis on "kinda"). On the CuPy side you're fine. If you're on NumPy 1.16 you'll need to enable the
If you're using Numpy 1.17 then this is on by default. I think that most of the work here is on the Xarray side. We'll need to remove things like explicit type checks. |
{ "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Hooks for XArray operations 299668148 | |
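For context on the flag mentioned above: NumPy 1.16 shipped the NEP-18 protocol behind the `NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=1` environment variable, and NumPy 1.17 turned it on by default. A minimal hypothetical `__array_function__` implementer (not CuPy's or xarray's actual code) looks like this:

```python
import numpy as np

class WrappedArray:
    """Hypothetical NEP-18 duck array wrapping an ndarray."""

    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_function__(self, func, types, args, kwargs):
        # Unwrap any WrappedArray arguments, let NumPy do the work,
        # then rewrap ndarray results so the type survives round trips.
        unwrap = lambda a: a.data if isinstance(a, WrappedArray) else a
        result = func(*[unwrap(a) for a in args],
                      **{k: unwrap(v) for k, v in kwargs.items()})
        return WrappedArray(result) if isinstance(result, np.ndarray) else result

a = WrappedArray([1.0, 2.0, 3.0])
out = np.mean(a)  # dispatches through WrappedArray.__array_function__
```

The remaining work described ("remove things like explicit type checks") is exactly what lets calls such as `np.mean` reach the duck array instead of failing an `isinstance(x, np.ndarray)` guard.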
510943157 | https://github.com/pydata/xarray/issues/1375#issuecomment-510943157 | https://api.github.com/repos/pydata/xarray/issues/1375 | MDEyOklzc3VlQ29tbWVudDUxMDk0MzE1Nw== | mrocklin 306380 | 2019-07-12T16:07:42Z | 2019-07-12T16:07:42Z | MEMBER | @rgommers might be able to recommend someone |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Sparse arrays 221858543 | |
507046745 | https://github.com/pydata/xarray/issues/1627#issuecomment-507046745 | https://api.github.com/repos/pydata/xarray/issues/1627 | MDEyOklzc3VlQ29tbWVudDUwNzA0Njc0NQ== | mrocklin 306380 | 2019-06-30T15:49:08Z | 2019-06-30T15:49:08Z | MEMBER | Thought I'd bump this (hopefully no one minds). I think that this is great! |
{ "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
html repr of xarray object (for the notebook) 264747372 | |
504758362 | https://github.com/pydata/xarray/pull/3027#issuecomment-504758362 | https://api.github.com/repos/pydata/xarray/issues/3027 | MDEyOklzc3VlQ29tbWVudDUwNDc1ODM2Mg== | mrocklin 306380 | 2019-06-23T14:39:46Z | 2019-06-23T14:39:46Z | MEMBER | Does the green check mark here mean that we're all good @shoyer ? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Ensure explicitly indexed arrays are preserved 456963929 | |
502749637 | https://github.com/pydata/xarray/pull/3027#issuecomment-502749637 | https://api.github.com/repos/pydata/xarray/issues/3027 | MDEyOklzc3VlQ29tbWVudDUwMjc0OTYzNw== | mrocklin 306380 | 2019-06-17T16:11:44Z | 2019-06-17T16:11:44Z | MEMBER | I think that relaxing the astype constrain seems quite reasonable. I'll clean this up on the Dask side. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Ensure explicitly indexed arrays are preserved 456963929 | |
502550448 | https://github.com/pydata/xarray/issues/3009#issuecomment-502550448 | https://api.github.com/repos/pydata/xarray/issues/3009 | MDEyOklzc3VlQ29tbWVudDUwMjU1MDQ0OA== | mrocklin 306380 | 2019-06-17T06:24:12Z | 2019-06-17T06:24:12Z | MEMBER | OK, reproduced. I'll take a look later today. Thanks for pointing me to that @max-sixty . |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Xarray test suite failing with dask-master 454168102 | |
502432894 | https://github.com/pydata/xarray/issues/3009#issuecomment-502432894 | https://api.github.com/repos/pydata/xarray/issues/3009 | MDEyOklzc3VlQ29tbWVudDUwMjQzMjg5NA== | mrocklin 306380 | 2019-06-16T08:42:36Z | 2019-06-16T08:42:36Z | MEMBER | I believe that this is now resolved. Please let me know |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Xarray test suite failing with dask-master 454168102 | |
502290814 | https://github.com/pydata/xarray/issues/3022#issuecomment-502290814 | https://api.github.com/repos/pydata/xarray/issues/3022 | MDEyOklzc3VlQ29tbWVudDUwMjI5MDgxNA== | mrocklin 306380 | 2019-06-14T21:48:12Z | 2019-06-14T21:48:12Z | MEMBER |
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
LazilyOuterIndexedArray doesn't support slicing with slice objects 456239422 | |
501707221 | https://github.com/pydata/xarray/issues/3009#issuecomment-501707221 | https://api.github.com/repos/pydata/xarray/issues/3009 | MDEyOklzc3VlQ29tbWVudDUwMTcwNzIyMQ== | mrocklin 306380 | 2019-06-13T13:40:10Z | 2019-06-13T13:40:10Z | MEMBER | cc @pentschev |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Xarray test suite failing with dask-master 454168102 | |
484178174 | https://github.com/pydata/xarray/issues/2692#issuecomment-484178174 | https://api.github.com/repos/pydata/xarray/issues/2692 | MDEyOklzc3VlQ29tbWVudDQ4NDE3ODE3NA== | mrocklin 306380 | 2019-04-17T17:06:37Z | 2019-04-17T17:06:37Z | MEMBER | There is usually a BoF at the end of the conference around planning for the next conference. I suggest that a few of us show up and see if we can get engaged in the process for next year. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Xarray tutorial at SciPy 2019? 400948664 | |
480862757 | https://github.com/pydata/xarray/issues/2873#issuecomment-480862757 | https://api.github.com/repos/pydata/xarray/issues/2873 | MDEyOklzc3VlQ29tbWVudDQ4MDg2Mjc1Nw== | mrocklin 306380 | 2019-04-08T14:45:50Z | 2019-04-08T14:45:50Z | MEMBER | I'm also unable to reproduce this on my local MacBook Pro, though I haven't tried with the same versions as you have here. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Dask distributed tests fail locally 430188626 | |
480835950 | https://github.com/pydata/xarray/issues/2873#issuecomment-480835950 | https://api.github.com/repos/pydata/xarray/issues/2873 | MDEyOklzc3VlQ29tbWVudDQ4MDgzNTk1MA== | mrocklin 306380 | 2019-04-08T13:40:04Z | 2019-04-08T13:40:04Z | MEMBER | That does not look familiar to me, no. Two questions:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Dask distributed tests fail locally 430188626 | |
478290795 | https://github.com/pydata/xarray/issues/2692#issuecomment-478290795 | https://api.github.com/repos/pydata/xarray/issues/2692 | MDEyOklzc3VlQ29tbWVudDQ3ODI5MDc5NQ== | mrocklin 306380 | 2019-03-30T21:28:28Z | 2019-03-30T21:28:28Z | MEMBER | Looking at the tutorial schedule it looks like it was not accepted, but that there is a TBA slot. Any information here @jhamman ? Did you all receive a rejection response? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Xarray tutorial at SciPy 2019? 400948664 | |
472141327 | https://github.com/pydata/xarray/issues/2807#issuecomment-472141327 | https://api.github.com/repos/pydata/xarray/issues/2807 | MDEyOklzc3VlQ29tbWVudDQ3MjE0MTMyNw== | mrocklin 306380 | 2019-03-12T19:09:58Z | 2019-03-12T19:09:58Z | MEMBER |
Typically in Dask we run the user defined function on an empty version of the data and hope that it provides an appropriately shaped output. If it fails during this process, we ask the user to provide sufficient information for us to populate metadata. Maybe something similar would work here? Xarray would construct a dummy Xarray chunk, apply the user defined function onto that chunk, and then extrapolate metadata out from there somehow. I'm likely glossing over several important details, but hopefully the general gist of what I'm trying to convey above is somewhat sensible, even if not doable. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
can the callables of apply_ufunc + dask get a typed/labeled array 420139027 | |
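The approach described above (running the user-defined function on an empty version of the data to discover output metadata) can be sketched like so; `infer_meta` is illustrative, not dask's actual `map_blocks` machinery:

```python
import numpy as np

def infer_meta(func, dtype, ndim):
    """Call func on an empty probe array to learn the output's
    dtype and dimensionality without touching real data."""
    probe = np.empty((0,) * ndim, dtype=dtype)
    try:
        return func(probe)
    except Exception:
        # The probe failed; a real system would ask the user to
        # supply explicit metadata at this point.
        return None

meta = infer_meta(lambda x: x * 2.5, np.int64, 2)
# meta carries dtype float64 and shape (0, 0): enough to construct
# the lazy output container before any chunk is computed
```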
465770077 | https://github.com/pydata/xarray/pull/2782#issuecomment-465770077 | https://api.github.com/repos/pydata/xarray/issues/2782 | MDEyOklzc3VlQ29tbWVudDQ2NTc3MDA3Nw== | mrocklin 306380 | 2019-02-20T21:54:15Z | 2019-02-20T21:54:15Z | MEMBER | I'm glad to see this. I'll also be curious to see what the performance will look like. cc @llllllllll |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
enable loading remote hdf5 files 412645481 | |
449531351 | https://github.com/pydata/xarray/pull/2589#issuecomment-449531351 | https://api.github.com/repos/pydata/xarray/issues/2589 | MDEyOklzc3VlQ29tbWVudDQ0OTUzMTM1MQ== | mrocklin 306380 | 2018-12-22T00:43:16Z | 2018-12-22T00:43:16Z | MEMBER |
```
mrocklin@carbon:~$ conda search rasterio=1
Loading channels: done
# Name     Version   Build            Channel
rasterio   1.0.13    py27hc38cc03_0   pkgs/main
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
added some logic to deal with rasterio objects in addition to filepaths 387123860 | |
449530939 | https://github.com/pydata/xarray/pull/2589#issuecomment-449530939 | https://api.github.com/repos/pydata/xarray/issues/2589 | MDEyOklzc3VlQ29tbWVudDQ0OTUzMDkzOQ== | mrocklin 306380 | 2018-12-22T00:38:34Z | 2018-12-22T00:38:34Z | MEMBER |
It looks like @jjhelmus resolved this upstream . It seems like https://github.com/ContinuumIO/anaconda-issues is a good issue tracker to know :) |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
added some logic to deal with rasterio objects in addition to filepaths 387123860 | |
448747075 | https://github.com/pydata/xarray/pull/2589#issuecomment-448747075 | https://api.github.com/repos/pydata/xarray/issues/2589 | MDEyOklzc3VlQ29tbWVudDQ0ODc0NzA3NQ== | mrocklin 306380 | 2018-12-19T21:17:52Z | 2018-12-19T21:17:52Z | MEMBER | https://github.com/ContinuumIO/anaconda-issues/issues/10443 On Wed, Dec 19, 2018 at 4:14 PM Jonathan J. Helmus notifications@github.com wrote:
|
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
added some logic to deal with rasterio objects in addition to filepaths 387123860 | |
448699184 | https://github.com/pydata/xarray/pull/2589#issuecomment-448699184 | https://api.github.com/repos/pydata/xarray/issues/2589 | MDEyOklzc3VlQ29tbWVudDQ0ODY5OTE4NA== | mrocklin 306380 | 2018-12-19T18:34:47Z | 2018-12-19T18:34:47Z | MEMBER |
@jjhelmus is there a good way to report things like this other than pinging you directly? (which I'm more than happy to continue doing :)) |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
added some logic to deal with rasterio objects in addition to filepaths 387123860 | |
440064660 | https://github.com/pydata/xarray/issues/1815#issuecomment-440064660 | https://api.github.com/repos/pydata/xarray/issues/1815 | MDEyOklzc3VlQ29tbWVudDQ0MDA2NDY2MA== | mrocklin 306380 | 2018-11-19T22:27:31Z | 2018-11-19T22:27:31Z | MEMBER | FYI @magonser |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
apply_ufunc(dask='parallelized') with multiple outputs 287223508 | |
432016733 | https://github.com/pydata/xarray/pull/2500#issuecomment-432016733 | https://api.github.com/repos/pydata/xarray/issues/2500 | MDEyOklzc3VlQ29tbWVudDQzMjAxNjczMw== | mrocklin 306380 | 2018-10-22T22:42:11Z | 2018-10-22T22:42:11Z | MEMBER | I'm not sure that I understand the failure here. Can someone verify that this is related to these changes?

```
=================================== FAILURES ===================================
____________________________ TestCfGrib.test_read ______________________________

self = <xarray.tests.test_backends.TestCfGrib object at 0x7fd47fc30b00>

    def test_read(self):
        expected = {'number': 2, 'time': 3, 'air_pressure': 2, 'latitude': 3,
                    'longitude': 4}
        with open_example_dataset('example.grib', engine='cfgrib') as ds:
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Avoid use of deprecated get= parameter in tests 372640063 | |
431934087 | https://github.com/pydata/xarray/pull/2500#issuecomment-431934087 | https://api.github.com/repos/pydata/xarray/issues/2500 | MDEyOklzc3VlQ29tbWVudDQzMTkzNDA4Nw== | mrocklin 306380 | 2018-10-22T18:49:11Z | 2018-10-22T18:49:11Z | MEMBER | Yes, github is still having issues. https://status.github.com/messages We are working through the backlogs of webhook deliveries and Pages builds. We continue to monitor as the site recovers. On Mon, Oct 22, 2018 at 2:48 PM Maximilian Roos notifications@github.com wrote:
|
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Avoid use of deprecated get= parameter in tests 372640063 | |
429309135 | https://github.com/pydata/xarray/issues/2480#issuecomment-429309135 | https://api.github.com/repos/pydata/xarray/issues/2480 | MDEyOklzc3VlQ29tbWVudDQyOTMwOTEzNQ== | mrocklin 306380 | 2018-10-12T12:29:47Z | 2018-10-12T12:29:47Z | MEMBER | This should be fixed with https://github.com/dask/dask/pull/4081 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
test_apply_dask_new_output_dimension is broken on master with dask-dev 369310993 | |
429156168 | https://github.com/pydata/xarray/issues/2480#issuecomment-429156168 | https://api.github.com/repos/pydata/xarray/issues/2480 | MDEyOklzc3VlQ29tbWVudDQyOTE1NjE2OA== | mrocklin 306380 | 2018-10-11T23:34:31Z | 2018-10-11T23:34:31Z | MEMBER | No need to bother with the reproducible example. As a warning, there might be some increased churn like this if we move forward with some of the proposed dask array changes. On Thu, Oct 11, 2018, 7:32 PM Matthew Rocklin mrocklin@gmail.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
test_apply_dask_new_output_dimension is broken on master with dask-dev 369310993 | |
429155894 | https://github.com/pydata/xarray/issues/2480#issuecomment-429155894 | https://api.github.com/repos/pydata/xarray/issues/2480 | MDEyOklzc3VlQ29tbWVudDQyOTE1NTg5NA== | mrocklin 306380 | 2018-10-11T23:32:59Z | 2018-10-11T23:32:59Z | MEMBER | Yeah, I noticed this too. I have a fix already in a PR On Thu, Oct 11, 2018, 5:24 PM Stephan Hoyer notifications@github.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
test_apply_dask_new_output_dimension is broken on master with dask-dev 369310993 | |
417084035 | https://github.com/pydata/xarray/issues/2390#issuecomment-417084035 | https://api.github.com/repos/pydata/xarray/issues/2390 | MDEyOklzc3VlQ29tbWVudDQxNzA4NDAzNQ== | mrocklin 306380 | 2018-08-29T19:54:49Z | 2018-08-29T19:54:49Z | MEMBER | Sorry, I meant |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Why are there two compute calls for plot? 355308699 | |
417076999 | https://github.com/pydata/xarray/issues/2389#issuecomment-417076999 | https://api.github.com/repos/pydata/xarray/issues/2389 | MDEyOklzc3VlQ29tbWVudDQxNzA3Njk5OQ== | mrocklin 306380 | 2018-08-29T19:32:17Z | 2018-08-29T19:32:17Z | MEMBER | I wouldn't expect this to sway things too much, but yes, there is a chance that that would happen. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Large pickle overhead in ds.to_netcdf() involving dask.delayed functions 355264812 | |
417072024 | https://github.com/pydata/xarray/issues/2389#issuecomment-417072024 | https://api.github.com/repos/pydata/xarray/issues/2389 | MDEyOklzc3VlQ29tbWVudDQxNzA3MjAyNA== | mrocklin 306380 | 2018-08-29T19:15:10Z | 2018-08-29T19:15:10Z | MEMBER |
You can make it a separate task (often done by wrapping with dask.delayed) and then use that key within other objects. This does create a data dependency though, which can make the graph somewhat more complex. In normal use of Pickle these things are cached and reused. Unfortunately we can't do that here because we're sending the tasks to different machines, each of which will need to deserialize independently. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Large pickle overhead in ds.to_netcdf() involving dask.delayed functions 355264812 | |
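The pickle-overhead point above can be illustrated with plain stdlib pickle (this is an illustrative sketch, not xarray's or dask's actual graph code): embedding a large object in every task duplicates its serialized bytes, whereas hoisting it into its own task and referencing it by key keeps the other tasks small.

```python
import pickle

payload = b"x" * 100_000          # stands in for a large delayed object

# Each task carries its own copy of the payload.
embedded = [pickle.dumps(("process", payload, i)) for i in range(10)]
embedded_total = sum(len(t) for t in embedded)

# Hoist the payload into one task; the others reference it by key.
hoisted = [pickle.dumps(("process", "payload-key", i)) for i in range(10)]
hoisted_total = len(pickle.dumps(payload)) + sum(len(t) for t in hoisted)
```

The hoisted form pays the serialization cost once instead of once per task, at the price of the extra data dependency mentioned above.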
406572020 | https://github.com/pydata/xarray/issues/2298#issuecomment-406572020 | https://api.github.com/repos/pydata/xarray/issues/2298 | MDEyOklzc3VlQ29tbWVudDQwNjU3MjAyMA== | mrocklin 306380 | 2018-07-20T11:20:59Z | 2018-07-20T11:20:59Z | MEMBER | Two thoughts:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Making xarray math lazy 342180429 | |
401028473 | https://github.com/pydata/xarray/pull/2255#issuecomment-401028473 | https://api.github.com/repos/pydata/xarray/issues/2255 | MDEyOklzc3VlQ29tbWVudDQwMTAyODQ3Mw== | mrocklin 306380 | 2018-06-28T13:08:58Z | 2018-06-29T14:00:19Z | MEMBER |
```python
import os

if not os.path.exists('myfile.tif'):
    import requests
    response = requests.get('https://oin-hotosm.s3.amazonaws.com/5abae68e65bd8f00110f3e42/0/5abae68e65bd8f00110f3e43.tif')
    with open('myfile.tif', 'wb') as f:
        f.write(response.content)

import dask
dask.config.set({'array.chunk-size': '1MiB'})

import xarray as xr
ds = xr.open_rasterio('myfile.tif', chunks=True)  # this only reads metadata to start
```
Also depends on https://github.com/dask/dask/pull/3679 . Without that PR it will use values that are similar, but don't precisely align with 1024. Oh, I should point out that the image has tiles of size (512, 512) |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add automatic chunking to open_rasterio 336371511 | |
398838600 | https://github.com/pydata/xarray/issues/2237#issuecomment-398838600 | https://api.github.com/repos/pydata/xarray/issues/2237 | MDEyOklzc3VlQ29tbWVudDM5ODgzODYwMA== | mrocklin 306380 | 2018-06-20T17:48:49Z | 2018-06-20T17:48:49Z | MEMBER | I've implemented something here: https://github.com/dask/dask/pull/3648 Playing with it would be welcome. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
why time grouping doesn't preserve chunks 333312849 | |
398586226 | https://github.com/pydata/xarray/issues/2237#issuecomment-398586226 | https://api.github.com/repos/pydata/xarray/issues/2237 | MDEyOklzc3VlQ29tbWVudDM5ODU4NjIyNg== | mrocklin 306380 | 2018-06-20T00:26:39Z | 2018-06-20T00:26:39Z | MEMBER | Thanks. This example helps.
I'm not sure I understand this. The situation on the whole does seem sensible to me though. This starts to look a little bit like a proper shuffle situation (using dataframe terminology). Each of your 365 output partitions would presumably touch 1/12th of your input partitions, leading to a quadratic number of tasks. If after doing something you then wanted to rearrange your data back then presumably that would require an equivalent number of extra tasks. Am I understanding the situation correctly? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
why time grouping doesn't preserve chunks 333312849 | |
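The quadratic-task concern above is just arithmetic; a back-of-envelope sketch (numbers illustrative, not taken from a real graph):

```python
def shuffle_task_count(n_inputs, n_outputs, touch_fraction):
    # Each output partition reads from some fraction of the input partitions;
    # every (output, touched-input) pair becomes a task.
    return int(n_outputs * n_inputs * touch_fraction)

# 365 day-of-year outputs, each touching 1/12th of ~120 monthly input chunks:
n = shuffle_task_count(n_inputs=120, n_outputs=365, touch_fraction=1 / 12)
```

As both partition counts grow, the task count grows with their product, which is the fail case discussed below for the scheduler.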
398582100 | https://github.com/pydata/xarray/issues/2237#issuecomment-398582100 | https://api.github.com/repos/pydata/xarray/issues/2237 | MDEyOklzc3VlQ29tbWVudDM5ODU4MjEwMA== | mrocklin 306380 | 2018-06-19T23:59:58Z | 2018-06-19T23:59:58Z | MEMBER | So if you're willing to humor me for a moment with dask.array examples, if you have an array that's currently partitioned by month:
And you do something by |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
why time grouping doesn't preserve chunks 333312849 | |
398581508 | https://github.com/pydata/xarray/issues/2237#issuecomment-398581508 | https://api.github.com/repos/pydata/xarray/issues/2237 | MDEyOklzc3VlQ29tbWVudDM5ODU4MTUwOA== | mrocklin 306380 | 2018-06-19T23:56:22Z | 2018-06-19T23:56:22Z | MEMBER | So my question was "if you're grouping data by month, and it's already partitioned by month, then why are the indices out of order?" However it may be that you've answer this in your most recent comment, I'm not sure. It may also be that I'm not understanding the situation. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
why time grouping doesn't preserve chunks 333312849 | |
398577207 | https://github.com/pydata/xarray/issues/2237#issuecomment-398577207 | https://api.github.com/repos/pydata/xarray/issues/2237 | MDEyOklzc3VlQ29tbWVudDM5ODU3NzIwNw== | mrocklin 306380 | 2018-06-19T23:29:37Z | 2018-06-19T23:29:37Z | MEMBER |
Maybe. We'll blow out the scheduler with too many tasks. With one large task we'll probably just start losing workers from memory errors. In your example what does the chunking of the indexed array likely to look like? How is the interaction between contiguous regions of the index and the chunk structure of the indexed array? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
why time grouping doesn't preserve chunks 333312849 | |
398575620 | https://github.com/pydata/xarray/issues/2237#issuecomment-398575620 | https://api.github.com/repos/pydata/xarray/issues/2237 | MDEyOklzc3VlQ29tbWVudDM5ODU3NTYyMA== | mrocklin 306380 | 2018-06-19T23:20:23Z | 2018-06-19T23:20:23Z | MEMBER | It's also probably worth thinking about the kind of operations you're trying to do, and how streamable they are. For example, if you were to take a dataset that was partitioned chronologically by month and then do some sort of day-of-month grouping then that would require the full dataset to be in memory at once. If you're doing something like grouping on every month (keeping months of different years separate) then presumably your index is already sorted, and so you should be fine with the current behavior. It might be useful to take a look at how the various XArray cases you care about convert to dask array slicing operations. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
why time grouping doesn't preserve chunks 333312849 | |
398573000 | https://github.com/pydata/xarray/issues/2237#issuecomment-398573000 | https://api.github.com/repos/pydata/xarray/issues/2237 | MDEyOklzc3VlQ29tbWVudDM5ODU3MzAwMA== | mrocklin 306380 | 2018-06-19T23:03:53Z | 2018-06-19T23:03:53Z | MEMBER | OK, so lowering down to a dask array conversation, let's look at a couple of examples. First, the behavior of a sorted index:

```python
>>> import dask.array as da
>>> x = da.ones((20, 20), chunks=(4, 5))
>>> x.chunks
((4, 4, 4, 4, 4), (5, 5, 5, 5))
```

If we index that array with a sorted index, we are able to efficiently preserve chunking:

```python
>>> import numpy as np
>>> x[np.arange(20), :].chunks
((4, 4, 4, 4, 4), (5, 5, 5, 5))

>>> x[np.arange(20) // 2, :].chunks
((8, 8, 4), (5, 5, 5, 5))
```

However if the index isn't sorted then everything goes into one big chunk:

```python
>>> x[np.arange(20) % 3, :].chunks
((20,), (5, 5, 5, 5))
```

We could imagine a few alternatives here:
I don't really have a strong intuition for how the xarray operations transform into dask array operations (my brain is a bit tired right now, so thinking is hard) but my guess is that they would benefit from the second case. (A pure dask.array example would be welcome). Now we have to consider how enacting a policy like "put contiguous index regions into the same chunk" might go wrong, and how we might defend against it generally.
In the example above we have a hundred input chunks and a hundred contiguous regions in our index. Seems good. However each output chunk touches each input chunk, so this will likely create 10,000 tasks, which we should probably consider a fail case here. So we learn that we need to look pretty carefully at how the values within the index interact with the chunk structure in order to know if we can do this well. This isn't an insurmountable problem, but isn't trivial either. In principle we're looking for a function that takes in two inputs:
And outputs a bunch of smaller indexes to pass on to various chunks. However, it hopefully does this in a way that is efficient, and fails early if it's going to emit a bunch of very small slices. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
why time grouping doesn't preserve chunks 333312849 | |
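The kind of function described above — chunk sizes along an axis plus a positional index in, per-chunk index pieces out — can be roughed out in pure Python. This is an illustrative sketch with hypothetical names (`split_index_by_chunk` is not a dask function); a real version would also need the early-failure check for many tiny slices.

```python
import itertools

def split_index_by_chunk(chunks, index):
    # Chunk boundaries: chunks=(4, 4) -> starts [0, 4, 8]
    starts = [0]
    for c in chunks:
        starts.append(starts[-1] + c)

    def owner(i):
        # Which chunk holds global position i
        for k in range(len(chunks)):
            if starts[k] <= i < starts[k + 1]:
                return k

    # Keep contiguous runs that land in the same chunk together,
    # translating global positions to chunk-local positions.
    pieces = []
    for k, group in itertools.groupby(index, key=owner):
        pieces.append((k, [i - starts[k] for i in group]))
    return pieces

pieces = split_index_by_chunk((4, 4), [0, 1, 5, 6, 2])
```

A long list of single-element pieces coming out of this function is exactly the "many very small slices" fail case, so an implementation could count pieces against the index length and bail out early.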
398500088 | https://github.com/pydata/xarray/issues/2238#issuecomment-398500088 | https://api.github.com/repos/pydata/xarray/issues/2238 | MDEyOklzc3VlQ29tbWVudDM5ODUwMDA4OA== | mrocklin 306380 | 2018-06-19T18:31:04Z | 2018-06-19T18:31:04Z | MEMBER | We can add this back in. I anticipate having to do a bugfix release within a week or two. Long term you probably want to do the following:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Failing test with dask_distributed 333480301 | |
398218407 | https://github.com/pydata/xarray/issues/2237#issuecomment-398218407 | https://api.github.com/repos/pydata/xarray/issues/2237 | MDEyOklzc3VlQ29tbWVudDM5ODIxODQwNw== | mrocklin 306380 | 2018-06-18T22:43:25Z | 2018-06-18T22:43:25Z | MEMBER | I think that it would be useful to consider many possible cases of how people might want to chunk dask arrays with out-of-order indices, and the desired chunking outputs. XArray users like those here can provide some of those use cases. We'll have to gather others from other communities. Maybe once we have enough use cases gathered then rules for what correct behavior should be will emerge? On Mon, Jun 18, 2018 at 5:16 PM Stephan Hoyer notifications@github.com wrote:
|
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
why time grouping doesn't preserve chunks 333312849 | |
397615169 | https://github.com/pydata/xarray/issues/2234#issuecomment-397615169 | https://api.github.com/repos/pydata/xarray/issues/2234 | MDEyOklzc3VlQ29tbWVudDM5NzYxNTE2OQ== | mrocklin 306380 | 2018-06-15T13:10:56Z | 2018-06-15T13:10:56Z | MEMBER | Replicated on pangeo.pydata.org. I created a local cluster on pangeo and found that things worked fine, suggesting that it was due to a version mismatch between the client and workers. I then ran client.get_versions(check=True) and found that many things were very far out of sync, which made me curious to see if we were using the right image. Looking at my worker_template.yaml file and at the cluster.pod_template everything looks fine. I think that the next step is to verify the contents of the worker image. I'm headed out the door at the moment though. I can try to take another look at this in a bit. Alternatively I would welcome others carrying this on. On Fri, Jun 15, 2018 at 8:59 AM Ryan Abernathey notifications@github.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
fillna error with distributed 332762756 | |
385501221 | https://github.com/pydata/xarray/issues/2042#issuecomment-385501221 | https://api.github.com/repos/pydata/xarray/issues/2042 | MDEyOklzc3VlQ29tbWVudDM4NTUwMTIyMQ== | mrocklin 306380 | 2018-04-30T19:20:04Z | 2018-04-30T19:20:04Z | MEMBER |
I'm aware. See this doc listed above for rasterio: https://rasterio.readthedocs.io/en/latest/topics/windowed-rw.html#writing Background here is that rasterio more-or-less wraps around GDAL, but with interfaces that are somewhat more idiomatic to this community.
We've run into these issues before as well. Typically we handle them with locks of various types. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Anyone working on a to_tiff? Alternatively, how do you write an xarray to a geotiff? 312203596 | |
385488636 | https://github.com/pydata/xarray/issues/2042#issuecomment-385488636 | https://api.github.com/repos/pydata/xarray/issues/2042 | MDEyOklzc3VlQ29tbWVudDM4NTQ4ODYzNg== | mrocklin 306380 | 2018-04-30T18:34:21Z | 2018-04-30T18:34:21Z | MEMBER | My first attempt would be to use this API: https://rasterio.readthedocs.io/en/latest/topics/windowed-rw.html#writing |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Anyone working on a to_tiff? Alternatively, how do you write an xarray to a geotiff? 312203596 | |
385488169 | https://github.com/pydata/xarray/issues/2042#issuecomment-385488169 | https://api.github.com/repos/pydata/xarray/issues/2042 | MDEyOklzc3VlQ29tbWVudDM4NTQ4ODE2OQ== | mrocklin 306380 | 2018-04-30T18:32:44Z | 2018-04-30T18:32:44Z | MEMBER |
If you're able to expand on this that would be welcome.
My hope would be that rasterio/GDAL would handle the many-file-format issue for us if they support writing in chunks. I also lack experience here though. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Anyone working on a to_tiff? Alternatively, how do you write an xarray to a geotiff? 312203596 | |
385452187 | https://github.com/pydata/xarray/issues/2093#issuecomment-385452187 | https://api.github.com/repos/pydata/xarray/issues/2093 | MDEyOklzc3VlQ29tbWVudDM4NTQ1MjE4Nw== | mrocklin 306380 | 2018-04-30T16:27:37Z | 2018-04-30T16:27:37Z | MEMBER | My guess is that geotiff chunks will be much smaller than is ideal for dask.array. We might want to expand those chunk sizes by some multiple. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Default chunking in GeoTIFF images 318950038 | |
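Expanding native GeoTIFF tiles "by some multiple" toward a dask-friendly chunk size might look like the following sketch (a hypothetical helper, not xarray's or dask's implementation): pick the largest whole-tile multiple whose square chunk stays near a target byte size.

```python
import math

def expanded_chunk(tile, itemsize, target_bytes):
    # How many native tiles fit along each side of a square chunk
    # of roughly target_bytes.
    per_side = math.sqrt(target_bytes / (tile * tile * itemsize))
    multiple = max(1, int(per_side))
    return tile * multiple

# 512x512 uint8 tiles (256 KiB each) expanded toward ~16 MiB chunks:
chunk = expanded_chunk(tile=512, itemsize=1, target_bytes=16 * 2**20)
```

Keeping the chunk edge a whole multiple of the tile edge means each dask chunk reads complete tiles, avoiding partial-tile reads.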
385451826 | https://github.com/pydata/xarray/issues/2042#issuecomment-385451826 | https://api.github.com/repos/pydata/xarray/issues/2042 | MDEyOklzc3VlQ29tbWVudDM4NTQ1MTgyNg== | mrocklin 306380 | 2018-04-30T16:26:13Z | 2018-04-30T16:26:13Z | MEMBER | When writing https://github.com/pydata/xarray/issues/2093 I came across this issue and thought I'd weigh in. The GIS community seems like a fairly close neighbor to XArray's current community. Some API compatibility here might be a good way to expand the community. I definitely agree that GeoTiff does not implement the full XArray model, but it might be useful to support the subset of datasets that do, just so that round-trip operations can occur. For example, it might be nice if the following worked:

```python
dset = xr.open_rasterio(...)
# do modest modifications to dset
dset.to_rasterio(...)
```

My hope would be that the rasterio/GDAL data model would be consistent enough so that we could detect and err early if the dataset was not well-formed. |
{ "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Anyone working on a to_tiff? Alternatively, how do you write an xarray to a geotiff? 312203596 | |
383817119 | https://github.com/pydata/xarray/issues/2074#issuecomment-383817119 | https://api.github.com/repos/pydata/xarray/issues/2074 | MDEyOklzc3VlQ29tbWVudDM4MzgxNzExOQ== | mrocklin 306380 | 2018-04-24T06:22:39Z | 2018-04-24T06:22:39Z | MEMBER | When doing benchmarks with things that might call BLAS operations in multiple threads I recommend setting the OMP_NUM_THREADS environment variable to 1. This will avoid oversubscription. On Mon, Apr 23, 2018 at 7:32 PM, Keisuke Fujii notifications@github.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.dot() dask problems 316618290 | |
383651390 | https://github.com/pydata/xarray/issues/2074#issuecomment-383651390 | https://api.github.com/repos/pydata/xarray/issues/2074 | MDEyOklzc3VlQ29tbWVudDM4MzY1MTM5MA== | mrocklin 306380 | 2018-04-23T17:12:04Z | 2018-04-23T17:12:04Z | MEMBER | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.dot() dask problems 316618290 | ||
383109977 | https://github.com/pydata/xarray/issues/1938#issuecomment-383109977 | https://api.github.com/repos/pydata/xarray/issues/1938 | MDEyOklzc3VlQ29tbWVudDM4MzEwOTk3Nw== | mrocklin 306380 | 2018-04-20T14:15:38Z | 2018-04-20T14:15:38Z | MEMBER | Thanks for taking the initiative here @hameerabbasi ! It's good to see something up already. Here is a link to the discussion that I think @hameerabbasi is referring to: http://numpy-discussion.10968.n7.nabble.com/new-NEP-np-AbstractArray-and-np-asabstractarray-tt45282.html#none I haven't read through that entirely yet, was arrayish decided on by the community or was the term still up for discussion? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Hooks for XArray operations 299668148 | |
383104966 | https://github.com/pydata/xarray/issues/1938#issuecomment-383104966 | https://api.github.com/repos/pydata/xarray/issues/1938 | MDEyOklzc3VlQ29tbWVudDM4MzEwNDk2Ng== | mrocklin 306380 | 2018-04-20T13:59:23Z | 2018-04-20T13:59:23Z | MEMBER | Happy with arrayish too On Fri, Apr 20, 2018 at 9:59 AM, Matthew Rocklin mrocklin@gmail.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Hooks for XArray operations 299668148 | |
383104907 | https://github.com/pydata/xarray/issues/1938#issuecomment-383104907 | https://api.github.com/repos/pydata/xarray/issues/1938 | MDEyOklzc3VlQ29tbWVudDM4MzEwNDkwNw== | mrocklin 306380 | 2018-04-20T13:59:09Z | 2018-04-20T13:59:09Z | MEMBER | What name should we go with? I have a slight preference for duckarray over arrayish but happy with whatever the group decides. On Fri, Apr 20, 2018 at 1:51 AM, Hameer Abbasi notifications@github.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Hooks for XArray operations 299668148 | |
382901777 | https://github.com/pydata/xarray/issues/1938#issuecomment-382901777 | https://api.github.com/repos/pydata/xarray/issues/1938 | MDEyOklzc3VlQ29tbWVudDM4MjkwMTc3Nw== | mrocklin 306380 | 2018-04-19T22:36:48Z | 2018-04-19T22:36:48Z | MEMBER | Doing this externally sounds sensible to me. Thoughts on a good name? duck_array seems to be free on PyPI On Thu, Apr 19, 2018 at 4:23 PM, Stephan Hoyer notifications@github.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Hooks for XArray operations 299668148 | |
382709490 | https://github.com/pydata/xarray/issues/1938#issuecomment-382709490 | https://api.github.com/repos/pydata/xarray/issues/1938 | MDEyOklzc3VlQ29tbWVudDM4MjcwOTQ5MA== | mrocklin 306380 | 2018-04-19T12:05:22Z | 2018-04-19T12:05:22Z | MEMBER | In https://github.com/pydata/sparse/issues/1#issuecomment-370248174 @shoyer mentions that some work could likely progress in XArray before deciding on the VarArgs in multipledispatch. If XArray maintainers have time it might be valuable to lay out how that would look so that other devs can try it out. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Hooks for XArray operations 299668148 | |
371813468 | https://github.com/pydata/xarray/issues/1895#issuecomment-371813468 | https://api.github.com/repos/pydata/xarray/issues/1895 | MDEyOklzc3VlQ29tbWVudDM3MTgxMzQ2OA== | mrocklin 306380 | 2018-03-09T13:35:38Z | 2018-03-09T13:35:38Z | MEMBER | If things are operational then we're fine. It may be that a lot of this cost was due to other serialization things in gcsfs, zarr, or other. On Fri, Mar 9, 2018 at 12:33 AM, Joe Hamman notifications@github.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Avoid Adapters in task graphs? 295270362 | |
371561783 | https://github.com/pydata/xarray/issues/1974#issuecomment-371561783 | https://api.github.com/repos/pydata/xarray/issues/1974 | MDEyOklzc3VlQ29tbWVudDM3MTU2MTc4Mw== | mrocklin 306380 | 2018-03-08T17:32:08Z | 2018-03-08T17:32:08Z | MEMBER | Seeing a good thing twice never hurts. The audience is likely not entirely the same. It's also probably the motivation for their interest. It might be useful as an introduction. On Thu, Mar 8, 2018 at 12:30 PM, Alistair Miles notifications@github.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray/zarr cloud demo 303270676 | |
371548028 | https://github.com/pydata/xarray/issues/1974#issuecomment-371548028 | https://api.github.com/repos/pydata/xarray/issues/1974 | MDEyOklzc3VlQ29tbWVudDM3MTU0ODAyOA== | mrocklin 306380 | 2018-03-08T16:49:38Z | 2018-03-08T16:49:38Z | MEMBER | Recorded video if you want: https://youtu.be/rSOJKbfNBNk On Thu, Mar 8, 2018 at 11:38 AM, Alistair Miles notifications@github.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray/zarr cloud demo 303270676 | |
371462262 | https://github.com/pydata/xarray/issues/1971#issuecomment-371462262 | https://api.github.com/repos/pydata/xarray/issues/1971 | MDEyOklzc3VlQ29tbWVudDM3MTQ2MjI2Mg== | mrocklin 306380 | 2018-03-08T11:35:25Z | 2018-03-08T11:35:25Z | MEMBER | FWIW most of the logic within the dask collections (array, dataframe, delayed) is only tested with Obviously though for things like writing to disk it's useful to check different schedulers. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Should we be testing against multiple dask schedulers? 302930480 | |
368575548 | https://github.com/pydata/xarray/issues/1873#issuecomment-368575548 | https://api.github.com/repos/pydata/xarray/issues/1873 | MDEyOklzc3VlQ29tbWVudDM2ODU3NTU0OA== | mrocklin 306380 | 2018-02-26T17:11:32Z | 2018-02-26T17:11:32Z | MEMBER | From Anaconda's David Mason (I don't know his github handle):
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Documentation is inaccessible via HTTPS 293272998 | |
368569053 | https://github.com/pydata/xarray/issues/1873#issuecomment-368569053 | https://api.github.com/repos/pydata/xarray/issues/1873 | MDEyOklzc3VlQ29tbWVudDM2ODU2OTA1Mw== | mrocklin 306380 | 2018-02-26T16:52:21Z | 2018-02-26T16:52:21Z | MEMBER | Not really, no. I tend to push these upstream to either Anaconda's IT or NumFOCUS. cc @aterrel |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Documentation is inaccessible via HTTPS 293272998 | |
368267730 | https://github.com/pydata/xarray/issues/1938#issuecomment-368267730 | https://api.github.com/repos/pydata/xarray/issues/1938 | MDEyOklzc3VlQ29tbWVudDM2ODI2NzczMA== | mrocklin 306380 | 2018-02-24T23:11:28Z | 2018-02-24T23:11:28Z | MEMBER | cc @jcrist , who has historically been interested in how we solve this problem within dask.array |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Hooks for XArray operations 299668148 | |
368159542 | https://github.com/pydata/xarray/issues/1938#issuecomment-368159542 | https://api.github.com/repos/pydata/xarray/issues/1938 | MDEyOklzc3VlQ29tbWVudDM2ODE1OTU0Mg== | mrocklin 306380 | 2018-02-23T22:41:54Z | 2018-02-23T22:41:54Z | MEMBER | I would want to see how magical it was. @llllllllll 's calibration of "mild metaprogramming" may differ slightly from my own :) Eventually if multipledispatch becomes a dependency of xarray then we should consider changing the decision-making process away from being just me though. Relatedly, SymPy also just adopted it (by vendoring) as a dependency. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Hooks for XArray operations 299668148 | |
368068500 | https://github.com/pydata/xarray/issues/1938#issuecomment-368068500 | https://api.github.com/repos/pydata/xarray/issues/1938 | MDEyOklzc3VlQ29tbWVudDM2ODA2ODUwMA== | mrocklin 306380 | 2018-02-23T16:54:37Z | 2018-02-23T16:54:37Z | MEMBER | Import times on multipledispatch have improved thanks to work by @llllllllll . They could probably be further improved if people wanted to invest modest intellectual effort here. Costs scale with the number of type signatures on each operation. In blaze this was very high, well into the hundreds, in our case it would be, I think, more modest around 2-10. (also, historical note, multipledispatch predates my involvement in Blaze). When possible it would be useful to upstream these concerns to NumPy, even if we have to move faster than NumPy is able to support. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Hooks for XArray operations 299668148 | |
367802779 | https://github.com/pydata/xarray/issues/1935#issuecomment-367802779 | https://api.github.com/repos/pydata/xarray/issues/1935 | MDEyOklzc3VlQ29tbWVudDM2NzgwMjc3OQ== | mrocklin 306380 | 2018-02-22T19:58:55Z | 2018-02-22T19:58:55Z | MEMBER | +1 on reporting upstream if convenient |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Not compatible with PyPy and dask.array. 299346082 |