issue_comments


9 rows where issue = 432019600 sorted by updated_at descending


user (4)

  • shoyer 3
  • dcherian 2
  • fujiisoup 2
  • dnowacki-usgs 2

author_association (2)

  • MEMBER 7
  • CONTRIBUTOR 2

issue (1)

  • Safely open / close netCDF files without resource locking · 9
id html_url issue_url node_id user created_at updated_at author_association body reactions performed_via_github_app issue
486511229 https://github.com/pydata/xarray/issues/2887#issuecomment-486511229 https://api.github.com/repos/pydata/xarray/issues/2887 MDEyOklzc3VlQ29tbWVudDQ4NjUxMTIyOQ== dnowacki-usgs 13837821 2019-04-25T03:57:59Z 2019-04-25T03:57:59Z CONTRIBUTOR

Took a stab at implementing these functions.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
485962269 https://github.com/pydata/xarray/issues/2887#issuecomment-485962269 https://api.github.com/repos/pydata/xarray/issues/2887 MDEyOklzc3VlQ29tbWVudDQ4NTk2MjI2OQ== fujiisoup 6815844 2019-04-23T20:30:02Z 2019-04-23T20:30:02Z MEMBER

> Just to clarify, load_dataset would be equivalent to:
>
> ```python
> def load_dataset(*args, **kwargs):
>     with xarray.open_dataset(*args, **kwargs) as ds:
>         return ds.load()
> ```

Yes, that is exactly what I have in mind.

I added the good first issue tag for this. I would love to implement this myself, but I don't think I'll have enough free time in the near future...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
482818859 https://github.com/pydata/xarray/issues/2887#issuecomment-482818859 https://api.github.com/repos/pydata/xarray/issues/2887 MDEyOklzc3VlQ29tbWVudDQ4MjgxODg1OQ== shoyer 1217238 2019-04-13T15:10:22Z 2019-04-14T21:12:17Z MEMBER

Just to clarify, load_dataset would be equivalent to:

```python
def load_dataset(*args, **kwargs):
    with xarray.open_dataset(*args, **kwargs) as ds:
        return ds.load()
```

This also seems pretty reasonable to me. I've written a version of this utility function a handful of times, so I at least would find it useful.

Would we also want load_dataarray?
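
A minimal sketch of what that analogue might look like, mirroring the load_dataset helper above and assuming it wraps the existing xarray.open_dataarray:

```python
def load_dataarray(*args, **kwargs):
    # Same pattern as load_dataset: open, eagerly load into memory,
    # then close the underlying file handle before returning.
    with xarray.open_dataarray(*args, **kwargs) as da:
        return da.load()
```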

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
483012030 https://github.com/pydata/xarray/issues/2887#issuecomment-483012030 https://api.github.com/repos/pydata/xarray/issues/2887 MDEyOklzc3VlQ29tbWVudDQ4MzAxMjAzMA== dnowacki-usgs 13837821 2019-04-14T16:32:21Z 2019-04-14T16:32:21Z CONTRIBUTOR

I used to use the autoclose=True argument to open_dataset() all the time to avoid this issue. Then it was deprecated when the LRU file cache was introduced, and I now do a two-line ds = open_dataset().load() followed by ds.close(). Just adding my use case, and what I found to be a regression in usability. Something like load_dataset() would be helpful, but I wonder whether adding an argument to the existing open_dataset() would be better.
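
For concreteness, a minimal sketch of the two-line workaround described above ('file.nc' is a placeholder path):

```python
import xarray as xr

ds = xr.open_dataset('file.nc').load()  # eagerly read everything into memory
ds.close()  # release the file handle so the same path can be written to later
```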

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
482783365 https://github.com/pydata/xarray/issues/2887#issuecomment-482783365 https://api.github.com/repos/pydata/xarray/issues/2887 MDEyOklzc3VlQ29tbWVudDQ4Mjc4MzM2NQ== fujiisoup 6815844 2019-04-13T07:05:24Z 2019-04-13T07:05:24Z MEMBER

I didn't notice that the with statement can be used with open_dataset. Thanks for the comments.

I personally want a simpler function for daily (not-so-big) data analysis, without having to care about opening and closing files (and probably new users would too). Would it be too much to add a load_dataset function that handles files similarly to np.loadtxt?
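
For comparison, a one-line sketch of the np.loadtxt behavior being referenced, which opens, reads, and closes the file entirely within a single call ('data.txt' is a placeholder):

```python
import numpy as np

arr = np.loadtxt('data.txt')  # the file handle never outlives the call
```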

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
482637696 https://github.com/pydata/xarray/issues/2887#issuecomment-482637696 https://api.github.com/repos/pydata/xarray/issues/2887 MDEyOklzc3VlQ29tbWVudDQ4MjYzNzY5Ng== shoyer 1217238 2019-04-12T16:26:17Z 2019-04-12T16:26:17Z MEMBER

I think this is more of a limitation of netCDF-C / HDF5 than xarray. For example, this works if you use SciPy's netCDF reader/writer:

```python
import xarray as xr

ds = xr.Dataset({'var': ('x', [0, 1, 2])})
ds.to_netcdf('test.nc', engine='scipy')

ds_read = xr.open_dataset('test.nc', engine='scipy')
ds.to_netcdf('test.nc', engine='scipy')
```

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
482620638 https://github.com/pydata/xarray/issues/2887#issuecomment-482620638 https://api.github.com/repos/pydata/xarray/issues/2887 MDEyOklzc3VlQ29tbWVudDQ4MjYyMDYzOA== dcherian 2448579 2019-04-12T15:36:39Z 2019-04-12T15:36:39Z MEMBER

OK. But what about the usual scientist workflow, where you work in multiple cells?

```python
ds.load()
# do computation
```

next cell:

```python
# do more computation
...
ds.close()  # is this right?
ds.to_netcdf('test.nc')
```

I wonder if we should add autoclose as a kwarg for to_netcdf. If autoclose=True, remove that file from the LRU cache if present, and then write to the file.
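
A rough user-level sketch of that idea; the autoclose keyword is hypothetical (it is not an actual to_netcdf argument), and evicting the file from the internal LRU cache is approximated here by an explicit close:

```python
import xarray as xr

def to_netcdf_autoclose(ds: xr.Dataset, path: str, **kwargs):
    # Hypothetical stand-in for ds.to_netcdf(path, autoclose=True):
    # make sure nothing still holds the target file open, then write.
    ds.load()  # materialize any lazily-loaded variables first
    ds.close()  # drop the cached handle on the source file
    ds.to_netcdf(path, **kwargs)
```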

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
482357656 https://github.com/pydata/xarray/issues/2887#issuecomment-482357656 https://api.github.com/repos/pydata/xarray/issues/2887 MDEyOklzc3VlQ29tbWVudDQ4MjM1NzY1Ng== shoyer 1217238 2019-04-11T22:58:37Z 2019-04-11T22:58:47Z MEMBER

This pattern should work:

```python
with xr.open_dataset('test.nc') as ds:
    ds.load()
ds.to_netcdf('test.nc')
```

{
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
482357619 https://github.com/pydata/xarray/issues/2887#issuecomment-482357619 https://api.github.com/repos/pydata/xarray/issues/2887 MDEyOklzc3VlQ29tbWVudDQ4MjM1NzYxOQ== dcherian 2448579 2019-04-11T22:58:28Z 2019-04-11T22:58:28Z MEMBER

What is the recommended code pattern for reading a file, adding a few variables, and then writing it back to the same file?

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);