

issue_comments


1 row where author_association = "NONE", issue = 343659822 and user = 18679148 sorted by updated_at descending




id: 852069023
html_url: https://github.com/pydata/xarray/issues/2304#issuecomment-852069023
issue_url: https://api.github.com/repos/pydata/xarray/issues/2304
node_id: MDEyOklzc3VlQ29tbWVudDg1MjA2OTAyMw==
user: ACHMartin (18679148)
created_at: 2021-06-01T12:03:55Z
updated_at: 2021-06-07T20:48:00Z
author_association: NONE

Dear all and thank you for your work on Xarray,

Following up on @magau's comment: I have a netcdf file with multiple variables in different formats (float, short, byte). Using open_mfdataset, the 'short' and 'byte' variables are converted to 'float64' (no scaling is applied, only some masking for the float data). This doesn't raise a major issue for me, but it takes up plenty of memory for nothing.
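To quantify the memory cost described above, here is a small pure-NumPy sketch (the array shape is taken from the dask chunk size shown later in the comment; the variable names are only for illustration):

```python
import numpy as np

# One chunk of the dataset: 48 x 584 x 1388 values
# (shape taken from the dask chunksize reported in the comment).
shape = (48, 584, 1388)

as_int16 = np.zeros(shape, dtype=np.int16)      # on-disk dtype of a 'short' variable
as_float64 = np.zeros(shape, dtype=np.float64)  # dtype after decoding

print(f"int16:   {as_int16.nbytes / 1e6:.0f} MB")
print(f"float64: {as_float64.nbytes / 1e6:.0f} MB")
# float64 uses 4x the memory of int16 for the same values.
```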

Below is an example of the three formats (from ncdump -h):

short total_nobs(time, lat, lon) ;
    total_nobs:long_name = "Number of SSS in the time interval" ;
    total_nobs:valid_min = 0s ;
    total_nobs:valid_max = 10000s ;
float pct_var(time, lat, lon) ;
    pct_var:_FillValue = NaNf ;
    pct_var:long_name = "Percentage of SSS_variability that is expected to be not explained by the products" ;
    pct_var:units = "%" ;
    pct_var:valid_min = 0. ;
    pct_var:valid_max = 100. ;
byte sss_qc(time, lat, lon) ;
    sss_qc:long_name = "Sea Surface Salinity Quality, 0=Good; 1=Bad" ;
    sss_qc:valid_min = 0b ;
    sss_qc:valid_max = 1b ;

And how they appear after opening as an xarray Dataset with open_mfdataset:

total_nobs  (time, lat, lon) float64 dask.array<chunksize=(48, 584, 1388), meta=np.ndarray>
pct_var     (time, lat, lon) float32 dask.array<chunksize=(48, 584, 1388), meta=np.ndarray>
sss_qc      (time, lat, lon) float64 dask.array<chunksize=(48, 584, 1388), meta=np.ndarray>

Is there any recommendation? Regards
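As a sketch of possible workarounds (not an official xarray recommendation): passing mask_and_scale=False at read time keeps the on-disk dtypes, or the variables can be explicitly downcast after opening. The dataset below is synthetic and tiny so it runs anywhere; in the real case the float64 dtypes come from open_mfdataset:

```python
import numpy as np
import xarray as xr

# Synthetic stand-in for the real file: same variable names as in the
# comment, small shapes, already decoded to float64 as xarray would do.
ds = xr.Dataset(
    {
        "total_nobs": (("time", "lat", "lon"), np.zeros((2, 3, 4), dtype=np.float64)),
        "sss_qc": (("time", "lat", "lon"), np.zeros((2, 3, 4), dtype=np.float64)),
    }
)

# Option 1 (at read time, sketch): xr.open_mfdataset(paths, mask_and_scale=False)
# skips _FillValue/scale_factor decoding, so 'short' stays int16 and 'byte'
# stays int8 -- at the cost of handling masking/scaling yourself.

# Option 2 (after opening): downcast variables known to fit a smaller dtype.
ds["total_nobs"] = ds["total_nobs"].astype(np.int16)  # valid range 0..10000 fits int16
ds["sss_qc"] = ds["sss_qc"].astype(np.int8)           # 0/1 flag fits int8

print({name: str(var.dtype) for name, var in ds.data_vars.items()})
```

Note that Option 2 only saves memory with dask-backed arrays if the cast happens before the data is computed, since the float64 chunks are still materialized during the conversion.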

reactions:
{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: float32 instead of float64 when decoding int16 with scale_factor netcdf var using xarray (343659822)


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 14.848ms · About: xarray-datasette