issue_comments


4 rows where issue = 173773358 and user = 1217238 sorted by updated_at descending

Comment 258877559 · shoyer (user 1217238) · MEMBER · 2016-11-07T16:05:04Z
https://github.com/pydata/xarray/issues/992#issuecomment-258877559

Agreed, it's awkward to have this information on variables.

I was somewhat opposed to adding more state to the Dataset object, but it seems like the necessary solution here. I'm not sure we need it in the Dataset constructor, though -- we could just have encoding as an attribute you modify directly. Honestly, we could probably do the same for DataArray.encoding -- it's pretty low level.
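A minimal pure-Python sketch of the idea above -- `encoding` as a plain mutable attribute rather than a constructor argument. The `MiniDataset` name and structure are illustrative only, not xarray's actual implementation:

```python
# Hypothetical sketch: 'encoding' lives on the object as a mutable dict,
# set after construction rather than passed to the constructor.
class MiniDataset:
    def __init__(self, variables):
        self.variables = dict(variables)
        # Not a constructor argument -- just an attribute you modify.
        self.encoding = {}

ds = MiniDataset({"time": [0, 1, 2]})
ds.encoding["unlimited_dims"] = {"time"}  # modified after construction
```

Keeping `encoding` out of the constructor keeps the low-level serialization state separate from the data model proper, which matches the "pretty low level" framing above.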

Reactions: none · Issue: Creating unlimited dimensions with xarray.Dataset.to_netcdf (173773358)
Comment 243546130 · shoyer (user 1217238) · MEMBER · 2016-08-30T19:07:19Z
https://github.com/pydata/xarray/issues/992#issuecomment-243546130

> However, when the dataset is indexed/subset/resampled along the unlimited dimension, it would make sense that its state is dropped. But that would require a lot of ifs and buts, so I suggest we leave that aside for now.

This is exactly how Variable.encoding currently works: any operation that creates a new variable from the original variable drops the encoding.

If we put this encoding information on the variable corresponding to the dimension, any time you save a Dataset using that exact same dimension variable, it would be saved as unlimited size. So if you only modify other dimensions (e.g., with resampling or indexing), the unlimited dimension would indeed persist, as you desire.
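The drop-on-derivation semantics described above can be sketched in a few lines of pure Python. The `MiniVariable` class is a hypothetical stand-in, not xarray's `Variable`:

```python
# Sketch of the described semantics: any operation that creates a new
# variable from the original drops the original's encoding.
class MiniVariable:
    def __init__(self, data, encoding=None):
        self.data = list(data)
        self.encoding = dict(encoding or {})

    def __getitem__(self, key):
        # Indexing returns a *new* variable; encoding is not propagated.
        return MiniVariable(self.data[key])

v = MiniVariable([1, 2, 3], encoding={"unlimited": True})
w = v[0:2]
# v.encoding == {"unlimited": True}; w.encoding == {}
```

So, as described above, only a Dataset that still holds the exact original dimension variable would carry the unlimited-size marker through to a save.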

Reactions: none · Issue: Creating unlimited dimensions with xarray.Dataset.to_netcdf (173773358)
Comment 243287795 · shoyer (user 1217238) · MEMBER · 2016-08-29T23:24:21Z
https://github.com/pydata/xarray/issues/992#issuecomment-243287795

Yes, we could put this in encoding if we want to preserve it through reading/writing files. NetCDF4 supports multiple unlimited dimensions; netCDF3 does not.

On Mon, Aug 29, 2016 at 1:56 PM, Jonas (notifications@github.com) wrote:

OK, I'd be up for taking a shot at it.

Since it is per-variable and specific to netCDF, I guess the perfect place to add this is the encoding dictionary that you can pass to to_netcdf (https://github.com/pydata/xarray/blob/606e1d9c7efd72e10b530a688d6ef870e8ec1843/xarray/backends/api.py#L316), right? Maybe under the key unlimited? E.g.

ds.to_netcdf(encoding={'time': dict(unlimited=True)})

I need to look up whether netCDF allows for defining more than one unlimited dimension, otherwise that must throw an error.

And then it is just about passing None as the length to createDimension, at least in netCDF4 and scipy.io.netcdf. But I did not look into how xarray handles that under the hood.

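The error check Jonas proposes (more than one unlimited dimension must fail on netCDF3, per the answer above) could be sketched as below. The function name, signature, and format strings are hypothetical, not xarray's API:

```python
# Hypothetical validation sketch for the proposed 'unlimited' encoding key:
# netCDF4 allows several unlimited dimensions; the netCDF3 formats allow
# at most one, so writing more than one there should raise.
def check_unlimited(encoding, fmt="NETCDF4"):
    unlimited = [dim for dim, enc in encoding.items() if enc.get("unlimited")]
    if fmt != "NETCDF4" and len(unlimited) > 1:
        raise ValueError(
            f"{fmt} supports at most one unlimited dimension, got {unlimited}"
        )
    return unlimited

check_unlimited({"time": {"unlimited": True}}, fmt="NETCDF3_CLASSIC")  # ok
```

A check like this would run once per write, before any dimensions are actually created in the output file.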

Reactions: none · Issue: Creating unlimited dimensions with xarray.Dataset.to_netcdf (173773358)
Comment 243167445 · shoyer (user 1217238) · MEMBER · 2016-08-29T15:57:27Z
https://github.com/pydata/xarray/issues/992#issuecomment-243167445

Currently it's not supported, but yes, we could absolutely add it as an option. I would be happy to add this functionality if someone makes a pull request. It won't be very useful for editing files with xarray, of course, because we don't support editing netCDF files in place -- writing always makes a complete copy.

Reactions: none · Issue: Creating unlimited dimensions with xarray.Dataset.to_netcdf (173773358)


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · About: xarray-datasette