issue_comments: 153258925
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/643#issuecomment-153258925 | https://api.github.com/repos/pydata/xarray/issues/643 | 153258925 | MDEyOklzc3VlQ29tbWVudDE1MzI1ODkyNQ== | 1217238 | 2015-11-03T06:39:39Z | 2015-11-03T06:39:39Z | MEMBER | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 114732169 |

body:

A useful point of reference here is to compare xray's performance to pandas:

```
In [20]: %%timeit t = pd.DataFrame({'x': np.arange(10000)})
   ....: for _ in t.iterrows(): pass
   ....:
1 loops, best of 3: 558 ms per loop

In [21]: %%timeit t = DataArray(np.arange(10000))
   ....: for _ in t: pass
   ....:
1 loops, best of 3: 1.49 s per loop
```

So we're about 2.5-3x slower than pandas. We might be able to catch up with some careful optimization, but I doubt we could do much better. Basically, the issue is that DataArray (like pandas's Series) is not an extension type (written in C/Cython), which means it is much slower to construct: iterating over it entails a lot of Python logic and a lot of memory allocation (e.g., a new dictionary for every instance). We could imagine converting xray's core data objects to Cython extension types, but I doubt the tradeoff in ease of use would be worth it. I would rather encourage users to use xray's easy interface to NumPy when entering something performance-critical like a loop.
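The workaround the comment recommends — dropping to the underlying NumPy array inside a hot loop instead of iterating over the DataArray itself — can be sketched as below. This is an illustrative sketch, assuming xarray (the project's current name; "xray" at the time of the comment) and NumPy are installed; the helper names `slow_sum` and `fast_sum` are hypothetical, not part of either library, while `DataArray.values` is the real accessor for the wrapped NumPy array.

```python
# Sketch: avoid per-element DataArray construction by iterating over
# the raw NumPy data via .values. Helper names are hypothetical.
import numpy as np
import xarray as xr


def slow_sum(da: xr.DataArray) -> float:
    # Iterating over a DataArray yields a new 0-d DataArray wrapper
    # per element -- the pure-Python overhead the comment describes.
    total = 0.0
    for item in da:
        total += float(item)
    return total


def fast_sum(da: xr.DataArray) -> float:
    # Iterating over da.values yields plain NumPy scalars instead,
    # skipping the wrapper-object construction entirely.
    total = 0.0
    for item in da.values:
        total += float(item)
    return total


da = xr.DataArray(np.arange(10000))
assert slow_sum(da) == fast_sum(da) == 49995000.0
```

In practice a vectorized call such as `float(da.sum())` would be faster still; the loop form is shown only because the comment is specifically about iteration overhead.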