issue_comments

2 rows where issue = 343659822 and user = 10050469 sorted by updated_at descending

id: 412495621
html_url: https://github.com/pydata/xarray/issues/2304#issuecomment-412495621
issue_url: https://api.github.com/repos/pydata/xarray/issues/2304
node_id: MDEyOklzc3VlQ29tbWVudDQxMjQ5NTYyMQ==
user: fmaussion (10050469)
created_at: 2018-08-13T12:04:10Z
updated_at: 2018-08-13T12:04:10Z
author_association: MEMBER

I think we are still talking about different things. In the example by @Thomas-Z above there is still a problem at this line:

```python
# Comparing both dataframes with float32 precision (1e-6)
mask = np.isclose(df_nc['var'], df_xr['var'], rtol=0, atol=1e-6)
```

As discussed several times above, this test is misleading: the assertion should use atol=0.01, which is the real accuracy of the underlying data. For that purpose, float32 is more than good enough.
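
To see why (a back-of-the-envelope check, not taken from the thread itself): the largest value an int16 can decode to with a scale factor of 0.01 is 327.67, and the float32 spacing at that magnitude is hundreds of times finer than the 0.01 data precision:

```python
import numpy as np

# Largest magnitude an int16 (max 32767) can decode to with scale_factor = 0.01
largest = np.float32(32767 * 0.01)  # 327.67

# float32 spacing (gap to the next representable number) at that magnitude
# is ~3.05e-05, hundreds of times finer than the 0.01 data precision.
print(np.spacing(largest))
```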

@shoyer said:

> I would be happy to add options for whether to default to float32 or float64 precision.

so we would welcome a PR in this direction! I don't think we need to change the default behavior, though, as there is a chance that some people are relying on the data being float32.
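
In the meantime, a manual float64 decode is possible by turning off automatic scaling. The sketch below assumes a local file data.nc containing the packed agc_40hz variable discussed in this thread; it is a workaround sketch, not an official xarray recipe:

```python
import numpy as np
import xarray as xr

# Open without automatic masking/scaling, so the packed int16 values and the
# _FillValue / scale_factor attributes come through untouched.
ds = xr.open_dataset("data.nc", mask_and_scale=False)  # hypothetical file name
raw = ds["agc_40hz"]

# Mask the fill value, then apply the scale factor in float64.
decoded = raw.where(raw != raw.attrs["_FillValue"]).astype(np.float64)
decoded = decoded * raw.attrs["scale_factor"]
```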

reactions: none
issue: float32 instead of float64 when decoding int16 with scale_factor netcdf var using xarray (343659822)

id: 410680371
html_url: https://github.com/pydata/xarray/issues/2304#issuecomment-410680371
issue_url: https://api.github.com/repos/pydata/xarray/issues/2304
node_id: MDEyOklzc3VlQ29tbWVudDQxMDY4MDM3MQ==
user: fmaussion (10050469)
created_at: 2018-08-06T11:41:38Z
updated_at: 2018-08-06T11:41:38Z
author_association: MEMBER

> As mentioned in the original issue the modification is straightforward. Any ideas if this could be integrated to xarray anytime soon?

Some people might prefer float32, so it is not as straightforward as it seems. It might be possible to add an option for this, but I haven't looked into the details.

> You'll have a float64 in the end but you won't get your precision back.

Note that this is a false sense of precision: in the example above the compression is lossy, i.e. precision was already lost when the data was packed, and the actual precision is now 0.01:

```
short agc_40hz(time, meas_ind) ;
    agc_40hz:_FillValue = 32767s ;
    agc_40hz:units = "dB" ;
    agc_40hz:scale_factor = 0.01 ;
```
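
To make the point concrete, here is a small sketch with made-up packed values (they are not taken from the file): a float32 decode and a float64 decode of the same int16 data agree to well within the 0.01 precision that survived the lossy packing.

```python
import numpy as np

# Made-up packed int16 values, standing in for what is stored on disk
packed = np.array([1234, 5678, 32000], dtype=np.int16)
scale_factor = 0.01  # from the agc_40hz attributes above

decoded32 = packed.astype(np.float32) * np.float32(scale_factor)
decoded64 = packed.astype(np.float64) * scale_factor

# float64 carries more digits, but both decodes agree to well within the
# 0.01 precision of the packed data.
assert np.allclose(decoded32, decoded64, rtol=0, atol=0.01)
```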

reactions: none
issue: float32 instead of float64 when decoding int16 with scale_factor netcdf var using xarray (343659822)


```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
```
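
For reference, the filter behind this page (issue = 343659822 and user = 10050469, sorted by updated_at descending) can be reproduced against a local copy of the database; the file name github.db below is an assumption:

```python
import sqlite3

# Comments by user 10050469 on issue 343659822, newest updates first
conn = sqlite3.connect("github.db")  # hypothetical local copy of the database
rows = conn.execute(
    """
    select id, created_at, updated_at, body
    from issue_comments
    where issue = 343659822 and [user] = 10050469
    order by updated_at desc
    """
).fetchall()
print(len(rows))  # 2
```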