
issue_comments


5 rows where issue = 305373563 sorted by updated_at descending



Issue: Inconsistent type conversion when doing numpy.sum gvies different results (#1989, id 305373563) · 5 comments
Comment 585535983 · stale[bot] (NONE) · created 2020-02-13T03:46:00Z · https://github.com/pydata/xarray/issues/1989#issuecomment-585535983

In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity.

If this issue remains relevant, please comment here or remove the stale label; otherwise it will be marked as closed automatically.

Comment 373226888 · fujiisoup (MEMBER) · created 2018-03-15T01:10:37Z, updated 2018-03-15T01:15:33Z · https://github.com/pydata/xarray/issues/1989#issuecomment-373226888

I noticed that bottleneck does the dtype conversion; I think bottleneck is installed in your environment.

```python
In [9]: np.sum(a)  # equivalent to a.sum(), using bottleneck
Out[9]:
<xarray.DataArray ()>
array(499943.21875)

In [10]: np.sum(a.data)  # numpy native
Out[10]: 499941.53

In [15]: bn.nansum(a.data)
Out[15]: 499943.21875

In [11]: a.sum(dim=('x', 'y'))  # multiple dims calls native np.sum
Out[11]:
<xarray.DataArray ()>
array(499941.53, dtype=float32)
```

It might be an upstream issue.
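The effect described above can be reproduced with NumPy alone (a sketch of mine, not from the thread): all that is needed is a reducer that upcasts the float32 accumulator, which is what the bottleneck path appears to be doing.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random(1_000_000).astype(np.float32)

# NumPy's default: reduce in the array's own dtype (float32 accumulator).
native = np.sum(data)

# Reduce with a float64 accumulator -- analogous to the upcasting that
# bottleneck.nansum reportedly performs on float32 input.
upcast = np.sum(data, dtype=np.float64)

print(native.dtype, upcast.dtype)  # float32 float64
```

The two sums print slightly different values for the same data, purely because of the accumulator dtype.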

Comment 373221321 · djhoese (CONTRIBUTOR) · created 2018-03-15T00:37:26Z, updated 2018-03-15T00:38:04Z · https://github.com/pydata/xarray/issues/1989#issuecomment-373221321

@shoyer In my examples rows = cols = 1000 (xarray 0.10.1).

Comment 373221066 · shoyer (MEMBER) · created 2018-03-15T00:36:02Z · https://github.com/pydata/xarray/issues/1989#issuecomment-373221066

> Strangely, it says: `AttributeError: module 'xarray' has no attribute 'show_versions'` Perhaps I am on a very old version?

Yes, you're using a version of xarray prior to 0.10.

What value are you using for rows/cols in your example?

Note that due to a quirk of NumPy, np.sum(a) actually corresponds to a.sum(). For xarray, a.sum() skips NaNs by default, so it's equivalent to np.nansum() or bottleneck.nansum() (if bottleneck is installed).
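The dispatch quirk mentioned above is easy to see without xarray: `np.sum` on a non-ndarray object calls that object's own `.sum` method rather than coercing it to an array first. A minimal sketch (the class name is mine, purely illustrative):

```python
import numpy as np

class MySummable:
    """Toy object demonstrating that np.sum defers to a .sum method."""
    def sum(self, axis=None, out=None, **kwargs):
        return "my own sum ran"

# np.sum dispatches to the object's .sum, just as np.sum(a)
# becomes a.sum() for an xarray DataArray.
print(np.sum(MySummable()))  # my own sum ran
```

This is why `np.sum(a)` picks up xarray's NaN-skipping (and possibly bottleneck-backed) reduction, while `np.sum(a.data)` runs NumPy's own reduction on the raw array.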

Comment 373219624 · djhoese (CONTRIBUTOR) · created 2018-03-15T00:27:35Z · https://github.com/pydata/xarray/issues/1989#issuecomment-373219624

Example:

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.random.random((rows, cols)).astype(np.float32), dims=('y', 'x'))

In [65]: np.sum(a).data
Out[65]: array(499858.0625)

In [66]: np.sum(a.data)
Out[66]: 499855.19

In [67]: np.sum(a.data.astype(np.float64))
Out[67]: 499855.21635645436

In [68]: np.sum(a.data.astype(np.float32))
Out[68]: 499855.19
```

I realized after making this example that nansum gives expected results:

```python
a = xr.DataArray(np.random.random((rows, cols)).astype(np.float32), dims=('y', 'x'))

In [83]: np.nansum(a.data)
Out[83]: 500027.81

In [84]: np.nansum(a)
Out[84]: 500027.81

In [85]: np.nansum(a.data.astype(np.float64))
Out[85]: 500027.77103802469

In [86]: np.nansum(a.astype(np.float64))
Out[86]: 500027.77103802469
```
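Since the differences in the thread come down to the accumulator dtype, one workaround (my suggestion, not from the thread) is to request a float64 accumulator explicitly via `sum(dtype=...)`, which matches upcasting the whole array first without materializing a float64 copy:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((1000, 1000)).astype(np.float32)

s32 = a.sum()                     # float32 data, float32 accumulator
s64 = a.astype(np.float64).sum()  # upcast the data, then sum
sfix = a.sum(dtype=np.float64)    # float32 data, float64 accumulator

print(s32.dtype, sfix.dtype)  # float32 float64
```

Here `sfix` agrees with `s64` to float64 precision, while `s32` differs at the float32 rounding level, which is the scale of the discrepancies shown above.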



```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
```
Powered by Datasette · Queries took 2240.381ms · About: xarray-datasette