
issue_comments


3 rows where issue = 269700511 and user = 90008 sorted by updated_at descending

735428830 · hmaarrfk (90008) · CONTRIBUTOR · created 2020-11-29T17:34:44Z · updated 2020-11-29T17:35:04Z
https://github.com/pydata/xarray/issues/1672#issuecomment-735428830

It isn't part of any library, and I don't have plans to make it into a public one. I think the discussion is really about the xarray API and which functions to implement first.

Then somebody can take the code and integrate it into the decided-upon API (one hypothetical shape is sketched below).

Reactions: +1 × 1 · Issue: Append along an unlimited dimension to an existing netCDF file (269700511)
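To make the API question above concrete, here is one purely hypothetical shape such an entry point could take. The name and signature simply mirror the prototype in the next comment; nothing here is a decided interface:

```python
import xarray as xr


# Hypothetical sketch only: neither this name nor these parameters are
# settled anywhere in the thread.
def append_to_netcdf(filename, ds_to_append, unlimited_dims):
    """Append ds_to_append to the existing netCDF file at `filename`,
    growing it along the given unlimited dimension(s)."""
    raise NotImplementedError
```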
685222909 · hmaarrfk (90008) · CONTRIBUTOR · created 2020-09-02T01:17:05Z · updated 2020-09-02T01:17:05Z
https://github.com/pydata/xarray/issues/1672#issuecomment-685222909

Here is a small prototype; maybe it can help move development along.

```python
import netCDF4
import xarray as xr


def _expand_variable(nc_variable, data, expanding_dim, nc_shape, added_size):
    # For time deltas, we must ensure that we use the same encoding as
    # what was previously stored.
    # We likely need to do this as well for variables that had custom
    # encodings too.
    if hasattr(nc_variable, 'calendar'):
        data.encoding = {
            'units': nc_variable.units,
            'calendar': nc_variable.calendar,
        }
    data_encoded = xr.conventions.encode_cf_variable(data)  # , name=name)
    left_slices = data.dims.index(expanding_dim)
    right_slices = data.ndim - left_slices - 1
    nc_slice = (
        (slice(None),) * left_slices
        + (slice(nc_shape, nc_shape + added_size),)
        + (slice(None),) * right_slices
    )
    nc_variable[nc_slice] = data_encoded.data


def append_to_netcdf(filename, ds_to_append, unlimited_dims):
    if isinstance(unlimited_dims, str):
        unlimited_dims = [unlimited_dims]

    if len(unlimited_dims) != 1:
        # TODO: change this so it can support multiple expanding dims
        raise ValueError(
            "We only support one unlimited dim for now, "
            f"got {len(unlimited_dims)}.")

    unlimited_dims = list(set(unlimited_dims))
    expanding_dim = unlimited_dims[0]

    with netCDF4.Dataset(filename, mode='a') as nc:
        nc_dims = set(nc.dimensions.keys())

        nc_coord = nc[expanding_dim]
        nc_shape = len(nc_coord)

        added_size = len(ds_to_append[expanding_dim])
        variables, attrs = xr.conventions.encode_dataset_coordinates(
            ds_to_append)

        for name, data in variables.items():
            if expanding_dim not in data.dims:
                # Nothing to do, data assumed to be identical
                continue

            nc_variable = nc[name]
            _expand_variable(nc_variable, data, expanding_dim, nc_shape,
                             added_size)


from xarray.tests.test_dataset import create_append_test_data
from xarray.testing import assert_equal

ds, ds_to_append, ds_with_new_var = create_append_test_data()

filename = 'test_dataset.nc'
ds.to_netcdf(filename, mode='w', unlimited_dims=['time'])
append_to_netcdf('test_dataset.nc', ds_to_append, unlimited_dims='time')

loaded = xr.load_dataset('test_dataset.nc')
assert_equal(xr.concat([ds, ds_to_append], dim="time"), loaded)
```
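A note on the prototype's design: re-encoding each variable with xr.conventions.encode_cf_variable before writing matters because appended values (datetimes in particular) must use the same units and calendar encoding already stored on disk; raw values written under a different encoding would be interpreted against the wrong reference units when the file is read back.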
Reactions: +1 × 2 · Issue: Append along an unlimited dimension to an existing netCDF file (269700511)
684833575 · hmaarrfk (90008) · CONTRIBUTOR · created 2020-09-01T12:58:52Z · updated 2020-09-01T12:58:52Z
https://github.com/pydata/xarray/issues/1672#issuecomment-684833575

I think I got a basic prototype working.

That said, I think a real challenge lies in supporting the numerous backends and lazy arrays.

For example, with the netCDF4 library I was only able to add data in peculiar ways, which may trigger the same expensive computations many times (see the chunked-writing sketch after this comment).

Is this a use case that we must optimize for now?

Reactions: none · Issue: Append along an unlimited dimension to an existing netCDF file (269700511)
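To make the lazy-array concern concrete, here is a minimal sketch of writing a dask-backed array into a netCDF variable chunk by chunk with dask.array.store, rather than materializing it in a single assignment. The file name and variable name are invented for illustration; this is one possible approach, not the thread's settled answer:

```python
import dask.array as da
import netCDF4

# Create a small demo file (names here are illustrative only).
with netCDF4.Dataset("lazy_demo.nc", mode="w") as nc:
    nc.createDimension("time", 4)
    nc.createDimension("x", 8)
    nc.createVariable("some_var", "f8", ("time", "x"))

lazy = da.random.random((4, 8), chunks=(2, 8))  # nothing computed yet

with netCDF4.Dataset("lazy_demo.nc", mode="a") as nc:
    target = nc["some_var"]
    # A plain `target[:] = numpy.asarray(lazy)` would evaluate the whole
    # graph in one shot; dask.array.store streams it chunk by chunk instead.
    da.store(lazy, target)
```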

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
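For reference, the query this page renders can be reproduced against a local copy of the database with the standard-library sqlite3 module; the sketch below assumes the SQLite file has been downloaded as github.db (the path is illustrative):

```python
import sqlite3

conn = sqlite3.connect("github.db")  # assumed local copy of the database
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 269700511 AND [user] = 90008
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, created, updated, association, body in rows:
    print(comment_id, updated, association)
```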