
issue_comments


3 rows where issue = 363299007 and user = 35968931 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
610008589 https://github.com/pydata/xarray/issues/2436#issuecomment-610008589 https://api.github.com/repos/pydata/xarray/issues/2436 MDEyOklzc3VlQ29tbWVudDYxMDAwODU4OQ== TomNicholas 35968931 2020-04-06T20:05:10Z 2020-04-06T20:05:10Z MEMBER

> @TomNicholas I forgot about this, sorry.

No worries!

> I just made a quick check with the latest xarray master and I still have the problem ... see code.

#3498 added a new keyword argument to open_mfdataset, to choose which file to load the attributes from; can you try using that?

> time.encoding is empty, while it is as expected when opening any of the files with open_dataset instead

If this is the case, then to solve your original problem you could also try using the preprocess argument to open_mfdataset to store the encoding somewhere it won't be lost, i.e.

```python
def store_encoding(ds):
    encoding = ds['time'].encoding
    ds.time.attrs['calendar_encoding'] = encoding
    return ds

snw = xr.open_mfdataset(l_f, combine='nested', concat_dim='time',
                        master_file=l_f[0], preprocess=store_encoding)['snw']
```
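To illustrate the round trip this trick relies on, here is a minimal stdlib-only sketch (plain dicts standing in for xarray's `.attrs`/`.encoding` mappings; only the `calendar_encoding` key comes from the snippet above, everything else is invented for illustration):

```python
# Toy model of the stash-and-restore idea, using plain dicts in place of
# xarray's .attrs/.encoding (no xarray required).
def store_encoding(attrs, encoding):
    # preprocess step: stash a copy of the encoding in the attrs,
    # which do survive combining
    attrs['calendar_encoding'] = dict(encoding)
    return attrs

# per-file state before combining
attrs = store_encoding({'long_name': 'time'}, {'calendar': 'noleap'})

# after open_mfdataset the encoding is empty (as reported above),
# but the stashed copy can be moved back out of the attrs:
encoding = {}
encoding.update(attrs.pop('calendar_encoding'))
print(encoding)  # {'calendar': 'noleap'}
```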

> Related question, but maybe out of line: is there any way to know that the snw.time type is cftime.DatetimeNoLeap (as it is visible in the overview of snw.time)?

I'm not familiar with these classes, but presumably you mean more than just checking with isinstance()? e.g.

```python
from cftime import DatetimeNoLeap

print(isinstance(snw.time.values[0], DatetimeNoLeap))
```
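Since snw.time.values is an array of objects rather than a single object, the isinstance check has to be applied to an element, not the container. The same pattern with the stdlib datetime class (cftime not required; the list here is invented for illustration):

```python
from datetime import datetime

# isinstance answers for one object, not a container of them,
# so check an element rather than the list/array itself.
times = [datetime(2000, 1, 1), datetime(2000, 1, 2)]

print(isinstance(times, datetime))     # False: the list is not a datetime
print(isinstance(times[0], datetime))  # True: each element is
```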

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  save "encoding" when using open_mfdataset 363299007
609479629 https://github.com/pydata/xarray/issues/2436#issuecomment-609479629 https://api.github.com/repos/pydata/xarray/issues/2436 MDEyOklzc3VlQ29tbWVudDYwOTQ3OTYyOQ== TomNicholas 35968931 2020-04-05T20:44:00Z 2020-04-05T20:44:00Z MEMBER

@sbiner I know it's been a while, but I expect that #3498 and #3877 probably resolve your issue?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  save "encoding" when using open_mfdataset 363299007
449737841 https://github.com/pydata/xarray/issues/2436#issuecomment-449737841 https://api.github.com/repos/pydata/xarray/issues/2436 MDEyOklzc3VlQ29tbWVudDQ0OTczNzg0MQ== TomNicholas 35968931 2018-12-24T14:02:57Z 2018-12-24T14:02:57Z MEMBER

If open_mfdataset() is actually dropping the encoding, then this is an issue related to #1614. In open_mfdataset() the attrs are explicitly set to those of the first supplied dataset, but I don't see any similar explicit treatment of the encoding. I think that means the encoding is being set by whatever happens inside the core of auto_combine(), and is presumably being lost in some of the concat or merge operations which happen there.

So I think to fix this, either open_mfdataset() should treat the encoding explicitly, or the rules for propagating the encoding through auto_combine() should be solidified.
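The behaviour described above can be modelled with a toy combine step (this is not real xarray code; `Var`, `combine`, and `combine_keeping_encoding` are invented names sketching the proposed explicit treatment):

```python
# Toy model of the reported behaviour: combining copies attrs from the
# first dataset but silently drops encoding.
class Var:
    def __init__(self, attrs=None, encoding=None):
        self.attrs = dict(attrs or {})
        self.encoding = dict(encoding or {})

def combine(variables):
    # mimics the current behaviour: attrs of the first variable survive,
    # encoding does not
    return Var(attrs=variables[0].attrs)

def combine_keeping_encoding(variables):
    out = combine(variables)
    # the explicit treatment proposed above: carry the first variable's
    # encoding through the combine step
    out.encoding = dict(variables[0].encoding)
    return out

v1 = Var(attrs={'units': 'K'}, encoding={'calendar': 'noleap'})
v2 = Var(attrs={'units': 'K'}, encoding={'calendar': 'noleap'})
print(combine([v1, v2]).encoding)                   # {}
print(combine_keeping_encoding([v1, v2]).encoding)  # {'calendar': 'noleap'}
```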

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  save "encoding" when using open_mfdataset 363299007


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · About: xarray-datasette