issue_comments: 662806773

html_url: https://github.com/pydata/xarray/issues/4236#issuecomment-662806773
issue_url: https://api.github.com/repos/pydata/xarray/issues/4236
id: 662806773
node_id: MDEyOklzc3VlQ29tbWVudDY2MjgwNjc3Mw==
user: 8098361
created_at: 2020-07-23T03:56:09Z
updated_at: 2020-07-23T03:56:09Z
author_association: NONE
issue: 659142789

Thanks for the suggestion of functools.partial. I have (amazingly) never used it before, so it's great to learn new things. If it's a way of 'fixing' existing args to a function that requires more arguments than you want to pass it -- the `sum(x, y)` => `sum2 = partial(sum, y=2)` => `sum2(x)` sort of example -- then at first glance isn't this the opposite of what I want to do, i.e. to pass more args to the callback? I suspect I'm approaching this the wrong way, though, going from your last paragraph above. I'm just playing with a minimal sample now.
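
For example, something like this is what I'm trying now, if I've understood it right (the variable name and file pattern are just placeholders):

```
from functools import partial

import xarray as xr

# hypothetical preprocess that needs an extra argument
def trim(ds, varname):
    # keep only the variable of interest before the files are combined
    return ds[[varname]]

# partial fixes varname up front, leaving a one-argument callable
# with the signature that open_mfdataset expects
ds = xr.open_mfdataset("data_*.nc", preprocess=partial(trim, varname="temperature"))
```

So the 'fixing' goes the other way round from how I first read it: partial absorbs the extra args, rather than stopping me from passing them.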

Otherwise, I do agree with you about when args would need to be passed, i.e. individual file processing that can't be done outside. Obviously, if you don't need args, don't pass any. While I see now that my use case doesn't need that, there still might be others that do, though this might be rare (later I'll need to add a dimension for each file with a value that varies between files, but luckily I can extract that from the filename -- see the sketch further below). I was imagining additional args working something like the way the schedule module handles Job callbacks:

```
import schedule

schedule.Job.do?
Signature: schedule.Job.do(self, job_func, *args, **kwargs)
Docstring:
Specifies the job_func that should be called every time the job runs.

Any additional arguments are passed on to job_func when the job runs.

:param job_func: The function to be scheduled
:return: The invoked job instance
File:      d:\anaconda3\lib\site-packages\schedule\__init__.py
Type:      function
```
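
i.e. something along these lines (a toy example, not my actual code):

```
import schedule

def job(name, greeting="hello"):
    print(greeting, name)

# the extra args after the callable are stored on the Job and
# forwarded to job() every time it fires
schedule.every(10).seconds.do(job, "world", greeting="hi")
```
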
My original intent was cutting down the data I was loading from large files by managing that through the preprocess callback. But this is where I readily admit to not knowing how xarray handles things under the covers, which means I do things the wrong (sub-optimal?) way. I'm not the only one struggling with what is optimal, though: see "Unexpected behaviour when chunking with multiple netcdf files in xarray/dask".
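
Concretely, the sort of thing I had in mind looks roughly like this (untested; the run number, file names, and the ds.encoding["source"] trick are my assumptions):

```
import os

import xarray as xr

def preprocess(ds):
    # assumes the source path is exposed via ds.encoding["source"]
    fname = os.path.basename(ds.encoding["source"])
    # e.g. pull a run number out of a made-up name like "run_003.nc"
    run = int(fname.split("_")[1].split(".")[0])
    # cut the data down before the files are combined...
    ds = ds.isel(time=slice(0, 100))
    # ...and tag each file with its run number as a new dimension
    return ds.expand_dims(run=[run])

ds = xr.open_mfdataset("run_*.nc", preprocess=preprocess,
                       combine="nested", concat_dim="run")
```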
