issue_comments: 1340806519
| html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | performed_via_github_app | issue |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| https://github.com/pydata/xarray/issues/7363#issuecomment-1340806519 | https://api.github.com/repos/pydata/xarray/issues/7363 | 1340806519 | IC_kwDOAMm_X85P6xV3 | 8382834 | 2022-12-07T11:06:21Z | 2022-12-07T11:06:21Z | CONTRIBUTOR | | 1479121713 |

body:

Yes, this is representative of my dataset :) . OK, interesting. I ran this on my machine (Ubuntu 20.04, with 16 GB of RAM; the system reports 15.3 GB as the maximum available for memory).
```
[ins] In [1]: import numpy as np
         ...: import xarray as xr
         ...: import datetime
         ...:
         ...: # create two timeseries; the second is for the reindex
         ...: itime = np.arange(0, 3208464).astype("<M8[s]")
         ...: itime2 = np.arange(0, 4000000).astype("<M8[s]")
         ...:
         ...: # create two datasets with the time only
         ...: ds1 = xr.Dataset({"time": itime})
         ...: ds2 = xr.Dataset({"time": itime2})
         ...:
         ...: # add random data to ds1
         ...: ds1 = ds1.expand_dims("station")
         ...: ds1 = ds1.assign({"test": (["station", "time"], np.random.rand(106, 3208464))})

[ins] In [2]: %%time
         ...: ds3 = ds1.reindex(time=ds2.time)
         ...:
Killed
```

I will try again later with fewer things open, so that I start from lower RAM usage / more available RAM, and see if that helps. Could this be a difference in performance due to different versions? What kind of machine are you running on? Still, not being able to do this with over 9 GB of RAM available feels a bit limiting :) .
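For scale (not part of the original comment): the `test` variable here is 106 × 3,208,464 float64 values, roughly 2.7 GB, and the reindexed result would be 106 × 4,000,000 values, roughly 3.4 GB, so the kill with ~9 GB free suggests large temporaries inside the reindex rather than the final arrays themselves. A minimal sketch of one possible workaround, assuming dask is installed (the chunk size of 500,000 is an arbitrary illustration, not a value from the thread): chunking the dataset first makes the reindex lazy, so the result can be computed or written out piece by piece.

```python
import numpy as np
import xarray as xr

# same setup as in the session above
itime = np.arange(0, 3208464).astype("<M8[s]")
itime2 = np.arange(0, 4000000).astype("<M8[s]")

ds1 = xr.Dataset({"time": itime})
ds2 = xr.Dataset({"time": itime2})

ds1 = ds1.expand_dims("station")
ds1 = ds1.assign({"test": (["station", "time"], np.random.rand(106, 3208464))})

# chunk along time so the data variable is dask-backed
# (chunk size is a guess; tune it to the available RAM)
ds1 = ds1.chunk({"time": 500_000})

# with dask-backed data, reindex builds a lazy task graph instead of
# allocating the full reindexed array in memory
ds3 = ds1.reindex(time=ds2.time)

# writing to disk triggers the computation one chunk at a time
ds3.to_netcdf("reindexed.nc")
```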
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1479121713 |