
issue_comments


12 rows where user = 1277781 sorted by updated_at descending


issue 7

  • xr.cov() and xr.corr() 3
  • xarray.map 3
  • Stack + to_array before to_xarray is much faster that a simple to_xarray 2
  • MultiIndex serialization to NetCDF 1
  • 0.8.2 incompatible with pandas 0.20.1 ? 1
  • bfill behavior dask arrays with small chunk size 1
  • warn when updating coord.values : indexes are not updated 1

user 1

  • kefirbandi · 12 ✖

author_association 1

  • CONTRIBUTOR 12
id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
706937046 https://github.com/pydata/xarray/pull/4484#issuecomment-706937046 https://api.github.com/repos/pydata/xarray/issues/4484 MDEyOklzc3VlQ29tbWVudDcwNjkzNzA0Ng== kefirbandi 1277781 2020-10-12T07:36:34Z 2020-10-12T07:36:34Z CONTRIBUTOR

If we're going to do this, I would suggest that the right signature is `xarray.map(func, *datasets, **optional_kwargs)`, matching Python's builtin `map`.

What I'd like to ensure is a clean separation between the arguments of `xarray.map` and those of `func`. Map has three parameters of its own: `func`, `datasets`, and `keep_attrs`. With the `**kwargs` approach we exclude those names from `func`'s keyword arguments. I'm not saying it's likely that anyone would apply a function with such parameter names, but it's not impossible either. Also, having a real dict for keyword arguments (and maybe a list for positional arguments of `func`) is more explicit.

In my implementation the order of parameters is datasets and then func, to match that of `Dataset.map` with its implicit `self`. But `func` followed by the datasets is probably more intuitive.
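A minimal pure-Python sketch of the trade-off described above (hypothetical function names, not xarray's actual API), contrasting a `**kwargs`-based signature with an explicit dict for `func`'s keyword arguments:

```python
# Hypothetical sketch, NOT xarray's real API.

def map_kwargs_style(func, *datasets, keep_attrs=True, **kwargs):
    # 'func', 'datasets' and 'keep_attrs' are claimed by map itself, so a
    # func taking keyword arguments with those names can never receive them.
    return [func(ds, **kwargs) for ds in datasets]

def map_explicit_style(func, *datasets, func_kwargs=None, keep_attrs=True):
    # A real dict keeps the two argument namespaces cleanly separated:
    # func may use ANY keyword name, even 'keep_attrs'.
    func_kwargs = {} if func_kwargs is None else func_kwargs
    return [func(ds, **func_kwargs) for ds in datasets]

# With the explicit style, even a clashing name reaches func untouched:
result = map_explicit_style(
    lambda ds, keep_attrs: (ds, keep_attrs), 1, 2,
    func_kwargs={"keep_attrs": False},
)
```

Here `result` is `[(1, False), (2, False)]`; with the `**kwargs` style, `keep_attrs=False` would have been consumed by map itself.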

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray.map 714228717
705066436 https://github.com/pydata/xarray/pull/4484#issuecomment-705066436 https://api.github.com/repos/pydata/xarray/issues/4484 MDEyOklzc3VlQ29tbWVudDcwNTA2NjQzNg== kefirbandi 1277781 2020-10-07T16:56:04Z 2020-10-07T16:56:04Z CONTRIBUTOR
* Are there many other cases outside of `xr.dot` which only operate on `DataArray`s? If not, we could update that function to take a `Dataset`

I think it would be a good idea to extend `dot` to Datasets. However, a user may wish to map a custom DataArray function over a Dataset.

* Maybe jumping ahead — are there functions where the result of `func(ds1, ds2)` shouldn't be that function mapped over the matching variables?

I'm not sure of the context here. In the most general case one can certainly implement any function of `ds1` and `ds2`. Or are you referring to the built-ins such as `.dot`?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray.map 714228717
704037491 https://github.com/pydata/xarray/pull/4484#issuecomment-704037491 https://api.github.com/repos/pydata/xarray/issues/4484 MDEyOklzc3VlQ29tbWVudDcwNDAzNzQ5MQ== kefirbandi 1277781 2020-10-06T05:32:04Z 2020-10-06T05:32:04Z CONTRIBUTOR

Could I ask what the common use cases for this would be? If I understand correctly, running `map(x, y, lambda x: x + y)` is equivalent to `x + y`.

The motivating use case was that I wanted to compute the dot product of two Datasets (i.e., of all their matching variables). But in general any function that is not as simple as `x + y` could be used here.
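A pure-Python sketch of the semantics this use case implies (plain dicts and lists stand in for Datasets and DataArrays; all names here are illustrative, not xarray API):

```python
def map_over_matching(func, ds1, ds2):
    # Apply a binary function to every variable name present in both
    # "datasets" (modeled here as plain dicts of name -> values).
    return {name: func(ds1[name], ds2[name]) for name in ds1.keys() & ds2.keys()}

def dot(a, b):
    # Toy stand-in for a dot product over two matching variables.
    return sum(x * y for x, y in zip(a, b))

ds1 = {"u": [1, 2], "v": [3, 4]}
ds2 = {"u": [5, 6], "v": [7, 8], "w": [9]}
result = map_over_matching(dot, ds1, ds2)
# "w" is skipped because it has no counterpart in ds1
```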

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray.map 714228717
652009055 https://github.com/pydata/xarray/issues/2459#issuecomment-652009055 https://api.github.com/repos/pydata/xarray/issues/2459 MDEyOklzc3VlQ29tbWVudDY1MjAwOTA1NQ== kefirbandi 1277781 2020-06-30T19:53:46Z 2020-06-30T19:53:46Z CONTRIBUTOR

I've reimplemented `from_dataframe` to make use of this in #4184, and it indeed makes things much, much faster! The original example in this thread is now 40x faster.

Very good news! Thanks for implementing it!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Stack + to_array before to_xarray is much faster that a simple to_xarray 365973662
634200431 https://github.com/pydata/xarray/pull/4089#issuecomment-634200431 https://api.github.com/repos/pydata/xarray/issues/4089 MDEyOklzc3VlQ29tbWVudDYzNDIwMDQzMQ== kefirbandi 1277781 2020-05-26T18:31:31Z 2020-05-26T18:31:31Z CONTRIBUTOR

@AndrewWilliams3142 I see. Thanks.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xr.cov() and xr.corr() 623751213
634157768 https://github.com/pydata/xarray/pull/4089#issuecomment-634157768 https://api.github.com/repos/pydata/xarray/issues/4089 MDEyOklzc3VlQ29tbWVudDYzNDE1Nzc2OA== kefirbandi 1277781 2020-05-26T17:12:41Z 2020-05-26T17:12:41Z CONTRIBUTOR

Well, actually I was thinking that correcting it takes ~30 seconds for someone who works on the code on a daily basis. For me, I think, it would be quite a bit of overhead for a single character...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xr.cov() and xr.corr() 623751213
633921230 https://github.com/pydata/xarray/pull/4089#issuecomment-633921230 https://api.github.com/repos/pydata/xarray/issues/4089 MDEyOklzc3VlQ29tbWVudDYzMzkyMTIzMA== kefirbandi 1277781 2020-05-26T09:40:12Z 2020-05-26T09:40:12Z CONTRIBUTOR

Just a small comment: in the docs (http://xarray.pydata.org/en/latest/generated/xarray.cov.html#xarray.cov) there is a typo: `da_a` is declared twice; the second should really be `da_b`.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xr.cov() and xr.corr() 623751213
611687777 https://github.com/pydata/xarray/issues/2699#issuecomment-611687777 https://api.github.com/repos/pydata/xarray/issues/2699 MDEyOklzc3VlQ29tbWVudDYxMTY4Nzc3Nw== kefirbandi 1277781 2020-04-09T18:36:36Z 2020-04-09T18:36:36Z CONTRIBUTOR

I encountered this bug a few days ago. I understand it isn't trivial to fix, but would it be possible to check and throw an exception? That would still be better than having it go unnoticed. Thanks

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bfill behavior dask arrays with small chunk size 402413097
592991059 https://github.com/pydata/xarray/issues/2459#issuecomment-592991059 https://api.github.com/repos/pydata/xarray/issues/2459 MDEyOklzc3VlQ29tbWVudDU5Mjk5MTA1OQ== kefirbandi 1277781 2020-02-29T20:27:20Z 2020-02-29T20:27:20Z CONTRIBUTOR

I know this is not a recent thread, but I found no resolution, and we ran into the same issue recently. In our case we had a pandas series of roughly 15 million entries with a 3-level multi-index, which had to be converted to an xarray.DataArray. The `.to_xarray` took almost 2 minutes. Unstack + `to_array` took it down to roughly 3 seconds, provided the last level of the multi-index was unstacked.

However, a much faster solution was to go through a plain numpy array. The code below is based on the idea of Igor Raush (here `df` is a dataframe with a single column, or a series):

```python
arr = np.full(df.index.levshape, np.nan)
arr[tuple(df.index.codes)] = df.values.flat
da = xr.DataArray(arr, dims=df.index.names,
                  coords=dict(zip(df.index.names, df.index.levels)))
```
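The same scatter idea can be sketched without pandas, writing out by hand what `df.index.levshape` and `df.index.codes` would provide (level sizes, integer codes, and values below are made up for illustration):

```python
import numpy as np

# Stand-ins for the MultiIndex internals of a 3-row series
# with a 2-level index:
levshape = (2, 3)                 # sizes of the two index levels
codes = ([0, 0, 1], [0, 2, 1])    # per-level integer codes of each row
values = [10.0, 20.0, 30.0]

arr = np.full(levshape, np.nan)   # dense array, NaN where no entry exists
arr[codes] = values               # scatter all rows in one vectorized step
```

This places `10.0` at `arr[0, 0]`, `20.0` at `arr[0, 2]`, and `30.0` at `arr[1, 1]`, leaving the remaining cells NaN, which is exactly what the snippet above does with the real index attributes.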

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Stack + to_array before to_xarray is much faster that a simple to_xarray 365973662
566208875 https://github.com/pydata/xarray/issues/3470#issuecomment-566208875 https://api.github.com/repos/pydata/xarray/issues/3470 MDEyOklzc3VlQ29tbWVudDU2NjIwODg3NQ== kefirbandi 1277781 2019-12-16T19:33:46Z 2019-12-16T19:33:46Z CONTRIBUTOR

Is it already decided what the resolution should be?

  • Giving a warning, as the title of this thread suggests?
  • Disabling setting .values directly for dimensions?
  • Or making sure that .indexes are updated when .values are set directly?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  warn when updating coord.values : indexes are not updated 514792972
478056621 https://github.com/pydata/xarray/issues/1077#issuecomment-478056621 https://api.github.com/repos/pydata/xarray/issues/1077 MDEyOklzc3VlQ29tbWVudDQ3ODA1NjYyMQ== kefirbandi 1277781 2019-03-29T16:10:24Z 2019-03-29T16:10:24Z CONTRIBUTOR

I now came across this issue, which still seems to be open. Are the statements made earlier still valid? Are there any concrete plans maybe to fix this in the near future?

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  MultiIndex serialization to NetCDF 187069161
328168623 https://github.com/pydata/xarray/issues/1563#issuecomment-328168623 https://api.github.com/repos/pydata/xarray/issues/1563 MDEyOklzc3VlQ29tbWVudDMyODE2ODYyMw== kefirbandi 1277781 2017-09-08T17:41:05Z 2017-09-08T17:41:05Z CONTRIBUTOR

Actually I just saw that the requirement for xarray 0.8.2 is `pandas >= 0.15.0`; I don't know whether it is possible to specify `0.19.1 >= pandas >= 0.15.0`.

Just ran into this issue when I wanted to install packages for some newcomers in our company.

But we actually solved the issue by adding strict version requirements for both xarray and pandas. (We need these versions because we have some pickled files using these formats which we need to read, until I find the time to get rid of them.)
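For illustration, the kind of bounded constraint the comment asks about can be written in pip requirements syntax; this is a hypothetical fragment using the versions mentioned in the comment, not a pin taken from any real project:

```
pandas>=0.15.0,<=0.19.1
xarray==0.8.2
```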

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  0.8.2 incompatible with pandas 0.20.1 ? 256251595


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 4963.622ms · About: xarray-datasette