issues

12 rows where milestone = 650893, state = "closed" and type = "issue" sorted by updated_at descending

id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
33637243 MDU6SXNzdWUzMzYzNzI0Mw== 131 Dataset summary methods jhamman 2443309 closed 0   0.2 650893 10 2014-05-16T00:17:56Z 2023-09-28T12:42:34Z 2014-05-21T21:47:29Z MEMBER      

Add summary methods to the Dataset object. For example, it would be great if you could summarize an entire dataset in a single line.

(1) Mean of all variables in dataset.

```python
mean_ds = ds.mean()
```

(2) Mean of all variables in dataset along a dimension:

```python
time_mean_ds = ds.mean(dim='time')
```

In the case where a dimension is specified and there are variables that don't use that dimension, I'd imagine you would just pass that variable through unchanged.
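
As a rough illustration of that intended pass-through behaviour, using today's xarray API (the variable names here are made up for illustration):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {
        "temperature": (("time", "space"), np.random.rand(3, 4)),  # uses 'time'
        "elevation": ("space", np.random.rand(4)),                 # no 'time' dimension
    }
)

time_mean_ds = ds.mean(dim="time")
# 'temperature' is averaged over 'time'; 'elevation' is passed through unchanged.
```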

Related to #122.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/131/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
37841310 MDU6SXNzdWUzNzg0MTMxMA== 183 Checklist for v0.2 release shoyer 1217238 closed 0 shoyer 1217238 0.2 650893 1 2014-07-15T00:25:27Z 2014-08-14T20:01:17Z 2014-08-14T20:01:17Z MEMBER      

Requirements:

- [x] Better documentation:
  - [x] Tutorial introduces DataArray before and independently of Dataset
  - [x] Revise README to emphasize that xray generalizes pandas to N-dimensions
  - [x] New FAQ section to clarify relationship of xray to pandas, Iris and CDAT (#112)
  - [x] Update What's New
- [x] More consistent names:
  - [x] dimensions -> dims and coordinates -> coords (#190)
  - [x] noncoordinates -> noncoords
- [x] Lay groundwork for non-index coordinates (#197):
  - [x] Require specifying attrs with a keyword argument in Dataset.__init__ (to make room for coords)
  - [x] Don't allow indexing array.coords[0]
  - [x] Remove the to argument from Dataset.apply (it will be much less clearly useful when we have non-index coords)
  - [x] Add warning in the docs (clarify that "linked dataset variables" are going away)

Nice to have:

- [x] Support modifying DataArray dimensions/coordinates in place (#180)
- [ ] Automatic alignment in mathematical operations (#184)
- [ ] Revised interface for CF encoding/decoding (#155, #175)

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/183/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
38848839 MDU6SXNzdWUzODg0ODgzOQ== 190 Consistent use of abbreviations: attrs, dims, coords shoyer 1217238 closed 0   0.2 650893 3 2014-07-27T19:38:35Z 2014-08-14T07:24:29Z 2014-08-14T07:24:29Z MEMBER      

Right now, we use ds.attrs but the keyword argument is still attributes. We should resolve this inconsistency.

We also use dimensions and coordinates instead of the natural abbreviations dims and coords (although dims is used in the Variable constructor). Should we switch to the abbreviated versions for consistency with attrs?

Note that I switched to attrs in part because of its use in other packages (h5py, pytables and blz). There is no equally clear precedent for what to call dimensions and coordinates.
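
For reference, a short sketch of the abbreviated spellings under discussion, as they eventually settled in today's xarray (shown purely as an illustration):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"foo": (("time", "space"), np.zeros((3, 4)))},
    coords={"time": [1, 2, 3]},
    attrs={"title": "example"},
)

ds.attrs   # metadata dict; the constructor keyword and the property share the name
ds.dims    # dimension names and sizes, e.g. {'time': 3, 'space': 4}
ds.coords  # coordinate variables
```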

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/190/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
37840634 MDU6SXNzdWUzNzg0MDYzNA== 182 DataArray.loc should accept boolean arrays shoyer 1217238 closed 0   0.2 650893 0 2014-07-15T00:12:01Z 2014-07-31T06:52:41Z 2014-07-31T06:52:41Z MEMBER      

Allowing boolean arrays for .loc would make xray more consistent with pandas: http://pandas.pydata.org/pandas-docs/stable/indexing.html#different-choices-for-indexing-loc-iloc-and-ix

There is basically no ambiguity, since True and False are more or less useless as coordinate labels.
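
A rough sketch of the kind of indexing this would allow, mirroring the pandas behaviour (the data here is made up):

```python
import xarray as xr

da = xr.DataArray([1, 2, 3, 4], dims="x", coords={"x": [10, 20, 30, 40]})

# A boolean mask aligned with the 'x' dimension, used as a .loc key
mask = da.values > 2
subset = da.loc[mask]  # keeps only the points where the mask is True
```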

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/182/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
37211553 MDU6SXNzdWUzNzIxMTU1Mw== 180 Support modifying DataArray dimensions and coordinates in-place shoyer 1217238 closed 0   0.2 650893 0 2014-07-06T04:30:36Z 2014-07-31T04:46:16Z 2014-07-31T04:46:16Z MEMBER      

The key thing is to (shallow) copy the underlying dataset before making any changes. The shallow copy is probably OK here -- modifying dimensions and coordinates is unlikely to be done in any inner loops.
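
A small sketch of the in-place usage this would enable (the coordinate values below are arbitrary):

```python
import xarray as xr

da = xr.DataArray([[1, 2, 3], [4, 5, 6]], dims=("time", "space"))

# Assign or overwrite a coordinate on the DataArray in place; internally this
# can shallow-copy the underlying dataset before mutating, as described above.
da.coords["space"] = [10, 20, 30]
```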

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/180/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
34003882 MDU6SXNzdWUzNDAwMzg4Mg== 140 Dataset.apply method shoyer 1217238 closed 0   0.2 650893 0 2014-05-21T17:08:47Z 2014-07-31T04:45:29Z 2014-07-31T04:45:29Z MEMBER      

Dataset reduce methods (#131) suggested to me that it would be nice to support applying functions which map over all data arrays in a dataset. The signature of Dataset.apply could be modeled after GroupBy.apply and the implementation would be similar to #137 (but simpler).

For example, I should be able to write ds.apply(np.mean).

Note: It's still worth having #137 as a separate implementation because it can do some additional validation for dimensions and skip variables where the aggregation doesn't make sense.
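
A minimal sketch of the requested usage; note that in present-day xarray this method ended up as Dataset.map, with Dataset.apply later deprecated in its favour:

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {
        "a": ("x", [1.0, 2.0, 3.0]),
        "b": (("x", "y"), np.ones((3, 2))),
    }
)

# Apply a function over every data variable in the dataset
means = ds.map(np.mean)
```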

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/140/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
37031948 MDU6SXNzdWUzNzAzMTk0OA== 178 Use "XIndex" instead of "Index"? shoyer 1217238 closed 0   0.2 650893 1 2014-07-02T22:43:46Z 2014-07-14T23:58:46Z 2014-07-14T23:58:46Z MEMBER      

In #161, I renamed xray.Coordinate to xray.Index.

To better distinguish xray's Index from pandas.Index, let's call it XIndex instead.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/178/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
34352255 MDU6SXNzdWUzNDM1MjI1NQ== 142 Rename "coordinates" to "indices"? shoyer 1217238 closed 0   0.2 650893 2 2014-05-27T08:47:51Z 2014-07-10T09:38:26Z 2014-06-22T00:44:26Z MEMBER      

For users of pandas, the xray interface would be more obvious if we referred to what we currently call "coordinates" as "indices."

This would entail renaming the coordinates property to indices, xray.Coordinate to xray.Index and the xray.Coordinate.as_index property to as_pandas_index (all with deprecation warnings).

Possible downsides:

1. The xray data model would be less obvious to people familiar with NetCDF.
2. There is some potential for confusion between xray.Index and pandas.Index:
   - The only real difference is that xray's Index is an xray.Variable object, and thus is dimension aware and has attributes.
   - In principle, xray.Index should have all the necessary properties to act like an index (or rather, it already has most of these properties and should get the rest).
   - Unfortunately, pandas doesn't accept non-pandas.Index objects as indices, nor will it properly convert an xray.Index into a pandas.Index.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/142/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
32928338 MDU6SXNzdWUzMjkyODMzOA== 116 Allow DataArray objects without named dimensions? shoyer 1217238 closed 0   0.2 650893 2 2014-05-06T20:12:41Z 2014-07-06T03:38:48Z 2014-07-06T03:38:48Z MEMBER      

At PyData SV, @mrocklin suggested that by default, array broadcasting should fall back on numpy's shape-based broadcasting. This would also simplify directly constructing DataArray objects (#115).

The trick will be to make this work with xray's internals, which currently assume that dimensions are always named by strings.
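
For contrast, a brief sketch of the two broadcasting models being discussed (the arrays are arbitrary examples):

```python
import numpy as np
import xarray as xr

a = np.ones((3, 1))
b = np.ones((1, 4))
(a + b).shape  # (3, 4): numpy broadcasts purely by shape and axis position

xa = xr.DataArray(np.ones(3), dims="x")
xb = xr.DataArray(np.ones(4), dims="y")
(xa + xb).dims  # ('x', 'y'): xarray broadcasts by dimension name, not position
```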

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/116/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
36211623 MDU6SXNzdWUzNjIxMTYyMw== 167 Unable to load pickle Dataset that was pickled with cPickle rzlee 2382049 closed 0 shoyer 1217238 0.2 650893 1 2014-06-21T00:02:43Z 2014-06-22T01:40:58Z 2014-06-22T01:40:58Z NONE      

```
import cPickle as pickle
import xray
import numpy as np
import pandas as pd

foo_values = np.random.RandomState(0).rand(3,4)
times = pd.date_range('2001-02-03', periods=3)
ds = xray.Dataset({'time': ('time', times),
                   'foo': (['time', 'space'], foo_values)})

with open('mypickle.pkl', 'w') as f:
    pickle.dump(ds, f)

with open('mypickle.pkl') as f:
    myds = pickle.load(f)

myds
```

This code results in: `<repr(<xray.dataset.Dataset at 0x7f95a3290d90>) failed: AttributeError: mapping>`

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/167/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
35262649 MDU6SXNzdWUzNTI2MjY0OQ== 148 API: rename "labeled" and "indexed" shoyer 1217238 closed 0   0.2 650893 0 2014-06-09T06:17:09Z 2014-06-22T00:44:26Z 2014-06-22T00:44:26Z MEMBER      

I'd like to rename the Dataset/DataArray methods labeled and indexed so that they are more obviously variants on a theme, similar to how pandas distinguishes between the methods .loc and .iloc (and .at/.iat, etc.). Some options include:

1. Rename indexed to ilabeled.
2. Rename indexed/labeled to isel/sel.

I like option 2 (particularly because it's shorter), but to avoid confusion with the select method, we would need to also rename select/unselect to something else. I would suggest select_vars and drop_vars.
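
For context, a brief sketch of what option 2 looks like; these are the spellings that current xarray ultimately adopted (the data is illustrative):

```python
import pandas as pd
import xarray as xr

da = xr.DataArray([1, 2, 3], dims="time",
                  coords={"time": pd.date_range("2001-02-03", periods=3)})

da.sel(time="2001-02-04")   # label-based selection (formerly 'labeled')
da.isel(time=1)             # integer-position selection (formerly 'indexed')
```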

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/148/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
32928159 MDU6SXNzdWUzMjkyODE1OQ== 115 Direct constructor for DataArray objects shoyer 1217238 closed 0   0.2 650893 1 2014-05-06T20:10:19Z 2014-06-11T16:53:58Z 2014-06-11T16:53:58Z MEMBER      

It shouldn't be necessary to put arrays in a Dataset to make a DataArray.

See also: https://github.com/xray/xray/issues/85#issuecomment-38875079
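
A short sketch of the direct constructor this issue asks for, as it exists in xarray today (names and values are illustrative):

```python
import numpy as np
import xarray as xr

# Build a DataArray directly, without wrapping the data in a Dataset first
da = xr.DataArray(
    np.zeros((2, 3)),
    dims=("x", "y"),
    coords={"x": [10, 20], "y": ["a", "b", "c"]},
    name="example",
)
```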

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/115/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);