
issues


2 rows where state = "closed" and user = 17701232 sorted by updated_at descending


Issue #1019: groupby_bins: exclude bin or assign bin with nan when bin has no values
id: 179969119 · node_id: MDU6SXNzdWUxNzk5NjkxMTk= · user: byersiiasa (17701232)
state: closed · state_reason: completed · locked: 0 · comments: 10 · author_association: NONE
created_at: 2016-09-29T07:09:02Z · updated_at: 2016-10-03T21:54:38Z · closed_at: 2016-10-03T15:22:15Z
repo: xarray (13221727) · type: issue

When using groupby_bins, there are cases where no values fall into some of the specified bins. Currently, it appears that in these cases the bin is skipped: neither a value nor a bin entry is added to the output DataArray.

Is there a way to identify which bins have been skipped? Or, preferably, is it possible to have an option to include those bins, but with nan values? This would make comparing two DataArrays easier in cases where, despite the same bin intervals being used as input, the outputs are DataArrays with different variable and coordinate lengths.

```
import xarray as xr

var = xr.open_dataset(r'c:\users\saveMWE.nc')
pop = xr.open_dataset(r'c:\users\savePOP.nc')

# binns includes a very small bin to test this
binns = [-100, -50, 0, 50, 50.00001, 100]
binned = pop.p2010T.groupby_bins(var.EnsembleMean, binns).sum()
print(binned)
print(binned.EnsembleMean_bins)
```

In this case, no data falls in the 4th bin between 50 and 50.00001.

```
<xarray.DataArray 'p2010T' (EnsembleMean_bins: 4)>
array([  2.64352214e+09,   3.46869168e+09,   3.08998110e+08,   1.48247440e+07])
Coordinates:
  * EnsembleMean_bins  (EnsembleMean_bins) object '(0, 50]' '(-50, 0]' ...
<xarray.DataArray 'EnsembleMean_bins' (EnsembleMean_bins: 4)>
array(['(0, 50]', '(-50, 0]', '(51, 100]', '(-100, -50]'], dtype=object)
```

Obviously one can count the lengths, but this doesn't indicate which bin was skipped. An option to include the empty bin with a nan value would be useful! Thanks

bins_example.zip
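
One possible workaround, as a sketch rather than an existing xarray option: build the full set of interval labels with pandas.IntervalIndex.from_breaks and reindex the grouped result against it, so the empty bin shows up explicitly as nan. This assumes a recent xarray in which the bin labels are pandas.Interval objects, and uses synthetic data in place of the attached files:

```
import numpy as np
import pandas as pd
import xarray as xr

# Synthetic stand-ins for the attached saveMWE.nc / savePOP.nc data
pop = xr.DataArray(np.random.rand(1000), dims='x', name='p2010T')
var = xr.DataArray(np.random.uniform(-100, 100, 1000), dims='x', name='EnsembleMean')

binns = [-100, -50, 0, 50, 50.00001, 100]
binned = pop.groupby_bins(var, binns).sum()  # empty bins are dropped here

# Full set of right-closed intervals, including the empty (50, 50.00001] bin
all_bins = pd.IntervalIndex.from_breaks(binns)
binned_full = binned.reindex(EnsembleMean_bins=list(all_bins))
print(binned_full)  # the empty bin now appears with value nan
```

The reindex leaves the existing sums untouched and inserts nan only for bins that received no values, so two results built from the same bin edges always align.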

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1019/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Issue #851: xr.concat and xr.to_netcdf new filesize
id: 155741762 · node_id: MDU6SXNzdWUxNTU3NDE3NjI= · user: byersiiasa (17701232)
state: closed · state_reason: completed · locked: 0 · comments: 4 · author_association: NONE
created_at: 2016-05-19T13:51:17Z · updated_at: 2016-05-20T08:08:44Z · closed_at: 2016-05-19T21:13:04Z
repo: xarray (13221727) · type: issue

I am having an issue whereby I read in two very similar NetCDF files, concatenate them along one dimension (time), and write back to a new NetCDF file. However, the new file size is enormous, and I can't work out why.

More details in this Stack Overflow question: http://stackoverflow.com/questions/37324106/python-xarray-concat-new-file-size

Thanks
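
A likely cause, though this is an assumption rather than something confirmed in the issue: the source files store the data packed or compressed, but xr.concat builds in-memory variables whose on-disk encoding is not carried over, so to_netcdf writes plain uncompressed values. A minimal sketch for checking this and re-applying compression, with hypothetical file and variable names:

```
import xarray as xr

ds1 = xr.open_dataset('file1.nc')  # hypothetical input paths
ds2 = xr.open_dataset('file2.nc')
combined = xr.concat([ds1, ds2], dim='time')

# Compare how a variable was stored on disk with what would be written now;
# 'myvar' is a hypothetical variable name.
print(ds1['myvar'].encoding)    # e.g. dtype, scale_factor, zlib, chunksizes
print(combined['myvar'].dtype)  # often float64 after decoding/concat

# Re-apply compression explicitly when writing the concatenated file
encoding = {name: {'zlib': True, 'complevel': 4} for name in combined.data_vars}
combined.to_netcdf('combined.nc', encoding=encoding)
```

Comparing the .encoding of the inputs with the dtype of the concatenated variables usually shows where the size difference comes from.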

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/851/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
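
For reference, the filtered view above ("2 rows where state = "closed" and user = 17701232 sorted by updated_at descending") corresponds to a query like the following against this schema; a minimal sketch assuming the table lives in a local SQLite file (the github.db filename is hypothetical):

```
import sqlite3

conn = sqlite3.connect('github.db')  # hypothetical path to the database
rows = conn.execute(
    'select id, number, title, state, updated_at from issues '
    'where state = ? and [user] = ? order by updated_at desc',
    ('closed', 17701232),
).fetchall()
for row in rows:
    print(row)
```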