issue_comments


3 rows where author_association = "CONTRIBUTOR" and issue = 955043280 sorted by updated_at descending

id: 1561328867 · node_id: IC_kwDOAMm_X85dD_zj · issue: 955043280
user: malmans2 (22245117) · author_association: CONTRIBUTOR
created_at: 2023-05-24T15:02:44Z · updated_at: 2023-05-24T15:02:44Z
html_url: https://github.com/pydata/xarray/issues/5644#issuecomment-1561328867
issue_url: https://api.github.com/repos/pydata/xarray/issues/5644

> Do you know where the in-place modification is happening? We could just copy there and fix this particular issue.

Not sure, but I'll take a look!
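The "just copy there" idea can be sketched outside xarray with plain NumPy (a hypothetical helper, not the actual patch): copy the caller's data up front so the in-place weighting cannot leak back out.

```python
import numpy as np

def weighted_polyfit(x, y, deg, w):
    # Hypothetical sketch of the "copy there" fix, not the actual xarray
    # patch: copy the caller's data before the in-place weighting below
    # so the original array is never altered.
    y = np.array(y, dtype=float, copy=True)
    lhs = np.vander(x, deg + 1)      # design matrix, highest power first
    lhs = lhs * w[:, np.newaxis]     # weight the design matrix (out of place)
    y *= w                           # in-place, but only on our copy
    coeffs, *_ = np.linalg.lstsq(lhs, y, rcond=None)
    return coeffs

x = np.arange(5.0)
y = 2.0 * x + 1.0
coeffs = weighted_polyfit(x, y, deg=1, w=np.ones_like(x))
print(coeffs)   # close to [2., 1.]
print(y)        # unchanged: [1., 3., 5., 7., 9.]
```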

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: `polyfit` with weights alters the DataArray in place (955043280)
id: 1557440032 · node_id: IC_kwDOAMm_X85c1KYg · issue: 955043280
user: malmans2 (22245117) · author_association: CONTRIBUTOR
created_at: 2023-05-22T15:35:54Z · updated_at: 2023-05-22T15:35:54Z
html_url: https://github.com/pydata/xarray/issues/5644#issuecomment-1557440032
issue_url: https://api.github.com/repos/pydata/xarray/issues/5644

Hi! I was about to open a new issue about this, but it looks like it's a known issue and there's a stale PR... Let me know if I can help to get this fixed!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: `polyfit` with weights alters the DataArray in place (955043280)
id: 888522258 · node_id: IC_kwDOAMm_X8409cYS · issue: 955043280
user: scottstanie (8291800) · author_association: CONTRIBUTOR
created_at: 2021-07-28T18:20:49Z · updated_at: 2021-07-28T18:20:49Z
html_url: https://github.com/pydata/xarray/issues/5644#issuecomment-888522258
issue_url: https://api.github.com/repos/pydata/xarray/issues/5644

As a temporary workaround for my case, I'm just going to do

```python
In [3]: pf = (da.copy(True)).polyfit("z", deg=2, w=np.arange(nz))

In [5]: da.max(), da.mean()
Out[5]: (<xarray.DataArray ()> array(0.99878237), <xarray.DataArray ()> array(0.50869358))
```

The thing I don't understand is that `_to_temp_dataset` seems to be trying to do a deep copy (based on the argument names):

https://github.com/pydata/xarray/blob/da99a5664df4f5013c2f6b0e758394bec5e0bc80/xarray/core/dataarray.py#L490-L491

But it is acting like a shallow copy:

```python
In [6]: pf = (da.copy(False)).polyfit("z", deg=2, w=np.arange(nz))

In [7]: da.max(), da.mean()
Out[7]: (<xarray.DataArray ()> array(8.92217147), <xarray.DataArray ()> array(2.29014397))
```
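The shallow-versus-deep distinction above can be reproduced with bare NumPy (a minimal sketch, not xarray's internals): a view shares the underlying buffer, so an in-place scaling writes through to the original, while a real copy is unaffected.

```python
import numpy as np

data = np.linspace(0.0, 1.0, 5)

shallow = data.view()   # shares data's buffer, like a shallow copy
shallow *= 2.0          # in-place scaling writes through to data
print(data.max())       # 2.0 -- the "original" was altered

data = np.linspace(0.0, 1.0, 5)
deep = data.copy()      # independent buffer, like copy(deep=True)
deep *= 2.0
print(data.max())       # 1.0 -- the original is untouched
```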

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: `polyfit` with weights alters the DataArray in place (955043280)

```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
```
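For reference, the row selection shown at the top of this page can be reproduced against this schema with Python's built-in `sqlite3` (a minimal sketch; foreign-key references are dropped and unused columns are left NULL):

```python
import sqlite3

# Build the issue_comments table in memory and insert only the columns
# the query touches, using the three rows shown on this page.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE issue_comments (
        html_url TEXT, issue_url TEXT, id INTEGER PRIMARY KEY,
        node_id TEXT, user INTEGER, created_at TEXT, updated_at TEXT,
        author_association TEXT, body TEXT, reactions TEXT,
        performed_via_github_app TEXT, issue INTEGER)"""
)
conn.executemany(
    "INSERT INTO issue_comments (id, user, updated_at, author_association, issue) "
    "VALUES (?, ?, ?, ?, ?)",
    [
        (1561328867, 22245117, "2023-05-24T15:02:44Z", "CONTRIBUTOR", 955043280),
        (1557440032, 22245117, "2023-05-22T15:35:54Z", "CONTRIBUTOR", 955043280),
        (888522258, 8291800, "2021-07-28T18:20:49Z", "CONTRIBUTOR", 955043280),
    ],
)
# The faceted view: CONTRIBUTOR comments on issue 955043280,
# sorted by updated_at descending.
rows = conn.execute(
    "SELECT id FROM issue_comments "
    "WHERE author_association = 'CONTRIBUTOR' AND issue = 955043280 "
    "ORDER BY updated_at DESC"
).fetchall()
print([r[0] for r in rows])  # most recently updated comment first
```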
Powered by Datasette · Queries took 14.23ms · About: xarray-datasette