issues


4 rows where repo = 13221727 (pydata/xarray) and user = 4992424 (darothen), sorted by updated_at descending




Facets: type: issue (2), pull (2) · state: closed (4) · repo: xarray (4)
#1372 · decode_cf() loads chunked arrays
issue · opened by darothen (4992424) · state: closed (completed) · comments: 7 · author_association: NONE
created 2017-04-12T20:52:48Z · updated 2018-04-12T23:38:02Z · closed 2018-04-12T23:38:02Z
id 221387277 · node_id MDU6SXNzdWUyMjEzODcyNzc= · repo xarray (13221727)

Currently using xarray version 0.9.2 and dask version 0.14.0.

Suppose you load a NetCDF file with the chunks parameter:

```python
import xarray as xr

ds = xr.open_dataset("my_data.nc", decode_cf=False, chunks={'lon': 10, 'lat': 10})
```

The data is loaded as dask arrays, as expected. But if we then manually call xarray.decode_cf(), it'll eagerly load the data. Is this the expected behavior, or should decode_cf() preserve the laziness of the data?
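
A minimal sketch of the behavior in question, continuing from the snippet above (the variable name `temperature` is hypothetical):

```python
import dask.array as da
import xarray as xr

# Before decoding: the variables are lazy dask arrays.
assert isinstance(ds['temperature'].data, da.Array)

# Manually decoding CF conventions afterwards is where, per the
# report, laziness is lost: the variables come back in memory.
decoded = xr.decode_cf(ds)
print(type(decoded['temperature'].data))  # numpy.ndarray, per the report
```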

Reactions: none · https://api.github.com/repos/pydata/xarray/issues/1372/reactions

#678 · Save to netCDF with record dimension?
issue · opened by darothen (4992424) · state: closed (completed) · comments: 6 · author_association: NONE
created 2015-12-11T16:20:35Z · updated 2018-01-08T20:11:27Z · closed 2018-01-08T20:11:27Z
id 121740837 · node_id MDU6SXNzdWUxMjE3NDA4Mzc= · repo xarray (13221727)

Is it currently possible in xray to identify a coordinate as a record dimension when saving to netCDF? Saving a Dataset to disk - even when it's a Dataset that was directly read in from a netCDF file with a record dimension - seems to destroy any indication that there was a record dimension. For instance, reading in CESM output tapes and then immediately saving them to disk demotes "time" from being a record dimension.
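
For context, later xarray releases added direct support for this; a minimal sketch, assuming hypothetical file names, of marking a record (unlimited) dimension at write time:

```python
import xarray as xr

ds = xr.open_dataset("cesm_output.nc")  # hypothetical input file

# Newer xarray can mark dimensions as unlimited (record) dimensions
# when writing; this option did not exist when this issue was filed.
ds.to_netcdf("with_record_dim.nc", unlimited_dims=["time"])

# On read, unlimited dimensions are preserved in Dataset.encoding.
reopened = xr.open_dataset("with_record_dim.nc")
print(reopened.encoding.get("unlimited_dims"))  # e.g. {'time'}
```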

Reactions: none · https://api.github.com/repos/pydata/xarray/issues/678/reactions

#1272 · Groupby-like API for resampling
pull request (pydata/xarray/pulls/1272) · opened by darothen (4992424) · state: closed · milestone: 0.10 (2415632) · comments: 27 · author_association: NONE
created 2017-02-16T19:04:07Z · updated 2017-09-22T16:27:36Z · closed 2017-09-22T16:27:35Z
id 208215185 · node_id MDExOlB1bGxSZXF1ZXN0MTA2NTkyMjUx · repo xarray (13221727)

This is a work-in-progress to resolve #1269.

  • [x] Basic functionality
  • [x] Cleanly deprecate old API
  • [x] New test cases
  • [x] Documentation / examples
  • [x] "What's new"

Openly welcome feedback/critiques on how I approached this. Subclassing Data{Array/set}GroupBy may not be the best way, but it would be easy enough to re-write the necessary helper functions (just apply(), I think) so that we do not need to inherit from them directly. Additional issues I'm working to resolve:

  • [x] I tried to make sure that calls using the old API won't break by refactoring the old logic to _resample_immediately(). This may not be the best approach!
  • [x] Similarly, I copied all the original test cases and added the suffix ..._old_api; these could trivially be placed into their related test cases for the new API.
  • [x] BUG: keep_attrs is ignored when you call it on methods chained to Dataset.resample(). Oddly enough, if I hard-code keep_attrs=True inside reduce_array() in DatasetResample::reduce it works just fine. I haven't figured out where the kwarg is getting lost.
  • [x] BUG: Some of the test cases (for instance, test_resample_old_vs_new_api) fail because resampling via self.groupby_cls ends up not working - it crashes because the group sizes that get computed are not what it expects. This occurs with both the new and old APIs.
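
For illustration, a minimal sketch contrasting the old call style with the groupby-like API this PR introduces (the data and frequency here are made up):

```python
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range("2017-01-01", periods=48, freq="H")
ds = xr.Dataset({"temp": ("time", np.random.rand(48))},
                coords={"time": times})

# Old API, deprecated by this PR: the reduction is a keyword.
# daily = ds.resample("24H", dim="time", how="mean")

# New groupby-like API: resample() returns an intermediate object
# and the reduction is chained, mirroring ds.groupby(...).mean().
daily = ds.resample(time="24H").mean()
print(daily.dims)  # two daily bins along 'time'
```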
Reactions: none · https://api.github.com/repos/pydata/xarray/issues/1272/reactions

#1356 · Add DatetimeAccessor for accessing datetime fields via `.dt` attribute
pull request (pydata/xarray/pulls/1356) · opened by darothen (4992424) · state: closed · comments: 9 · author_association: NONE
created 2017-04-06T19:48:19Z · updated 2017-04-29T01:19:12Z · closed 2017-04-29T01:18:59Z
id 220011864 · node_id MDExOlB1bGxSZXF1ZXN0MTE0NjgwMzcy · repo xarray (13221727)
  • [x] Partially closes #358
  • [x] tests added / passed
  • [x] passes `git diff upstream/master | flake8 --diff`
  • [x] whatsnew entry

This uses the `register_dataarray_accessor` utility to add attributes similar to those in pandas' timeseries API, which let users quickly access datetime fields from an underlying array of datetime-like values. The referenced issue (#358) also asks about adding similar accessors for `.str`, but that is a more complex topic - I think a compelling use-case would help in figuring out what the critical functionality is.

Virtual time fields

Presumably this could be used to augment Dataset._get_virtual_variable(). A season field would need to be added as a special field to the accessor.
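
A minimal usage sketch of the accessor this PR adds (the field values shown are what one would expect for these inputs):

```python
import pandas as pd
import xarray as xr

times = xr.DataArray(pd.date_range("2017-01-01", periods=4, freq="6H"),
                     dims="time")

# Datetime components become available through the .dt namespace,
# mirroring pandas' Series.dt:
print(times.dt.hour.values)       # [ 0  6 12 18]
print(times.dt.dayofyear.values)  # [1 1 1 1]
print(times.dt.season.values)     # ['DJF' 'DJF' 'DJF' 'DJF']
```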

Reactions: none · https://api.github.com/repos/pydata/xarray/issues/1356/reactions

Table schema
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
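
A minimal sketch of reproducing this page's query against a local copy of the database (the file name github.db is an assumption):

```python
import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical local copy

# Same filter and ordering as the page: repo 13221727 (pydata/xarray),
# user 4992424 (darothen), newest updates first.
rows = conn.execute(
    """
    SELECT number, title, type, state, updated_at
    FROM issues
    WHERE repo = 13221727 AND [user] = 4992424
    ORDER BY updated_at DESC
    """
).fetchall()

for number, title, type_, state, updated_at in rows:
    print(f"#{number} [{type_}, {state}] {title} (updated {updated_at})")
```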