
issues


7 rows where milestone = 1143506 sorted by updated_at descending


Facets:
  • type: pull (5), issue (2)
  • state: closed (7)
  • repo: xarray (7)
pull #434 · One less copy when reading big-endian data with engine='scipy'
  • id 88339814 · node_id MDExOlB1bGxSZXF1ZXN0Mzc2NjMwMDc= · repo xarray (13221727)
  • user shoyer (1217238) · state closed · author_association MEMBER · comments 0
  • milestone 0.5.1 (1143506) · pull_request pydata/xarray/pulls/434 · draft 0
  • created 2015-06-15T06:59:55Z · updated 2015-06-15T07:51:44Z · closed 2015-06-15T07:51:41Z
  • reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/434/reactions)
pull #433 · Assign order
  • id 88240870 · node_id MDExOlB1bGxSZXF1ZXN0Mzc2NDc1NDQ= · repo xarray (13221727)
  • user shoyer (1217238) · state closed · author_association MEMBER · comments 0
  • milestone 0.5.1 (1143506) · pull_request pydata/xarray/pulls/433 · draft 0
  • created 2015-06-14T20:09:04Z · updated 2015-06-15T01:16:45Z · closed 2015-06-15T01:16:31Z
  • reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/433/reactions)

xray.Dataset.assign and xray.Dataset.assign_coords now assign new variables in sorted (alphabetical) order, mirroring the behavior in pandas. Previously, the order was arbitrary.
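The sorted-order assignment described in #433 can be sketched in plain Python. This is a hypothetical illustration of the ordering rule, not xray's actual implementation; the `assign` helper below is invented for this sketch.

```python
def assign(variables, **new_vars):
    """Return a copy of `variables` with new entries added in sorted
    (alphabetical) name order, mirroring pandas-style .assign."""
    result = dict(variables)
    for name in sorted(new_vars):  # deterministic, alphabetical order
        result[name] = new_vars[name]
    return result

ds = assign({'foo': 1.5}, zeta=0, alpha=1)
print(list(ds))  # ['foo', 'alpha', 'zeta'] -- new names in sorted order
```

Iterating over `sorted(new_vars)` is what makes the result deterministic regardless of how the keyword arguments were written at the call site.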
pull #429 · Add pipe method copied from pandas
  • id 87025092 · node_id MDExOlB1bGxSZXF1ZXN0MzczNzY5Njk= · repo xarray (13221727)
  • user shoyer (1217238) · state closed · author_association MEMBER · comments 0
  • milestone 0.5.1 (1143506) · pull_request pydata/xarray/pulls/429 · draft 0
  • created 2015-06-10T16:19:52Z · updated 2015-06-11T16:45:57Z · closed 2015-06-11T16:45:56Z
  • reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/429/reactions)

The implementation here is directly copied from pandas: https://github.com/pydata/pandas/pull/10253
issue #416 · Automatically decode netCDF data to native endianness
  • id 83700033 · node_id MDU6SXNzdWU4MzcwMDAzMw== · repo xarray (13221727)
  • user shoyer (1217238) · state closed (completed) · author_association MEMBER · comments 1
  • milestone 0.5.1 (1143506)
  • created 2015-06-01T21:23:52Z · updated 2015-06-10T16:01:00Z · closed 2015-06-06T03:51:13Z
  • reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/416/reactions)

Unfortunately, netCDF3 is big endian, but most modern CPUs are little endian.

Cython requires that data match native endianness in order to perform operations. This means that users can get strange errors when performing aggregations with bottleneck or after converting an xray dataset to pandas.

It would be nice to handle this automatically as part of the "decoding" process. I don't think there are any particular advantages to preserving non-native endianness (except, I suppose, for serialization back to another netCDF3 file). My understanding is that most calculations require native endianness, anyways.

CC @bareid
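The decoding step proposed in #416 can be sketched with NumPy. This shows the general technique (converting a big-endian array to native byte order), not xray's actual code; it assumes NumPy is available.

```python
import numpy as np

# netCDF3 stores data big-endian ('>f8'); most modern CPUs are little-endian.
big_endian = np.arange(3.0, dtype='>f8')

# Decode to native endianness so downstream tools (Cython, bottleneck,
# pandas) see data in the byte order they expect. '=' means "native order".
native = big_endian.astype(big_endian.dtype.newbyteorder('='))

assert native.dtype.isnative          # now safe for Cython-based code
assert (native == big_endian).all()   # values are unchanged, only layout
```

On a big-endian machine `'>f8'` is already native and the conversion is a no-op, which is exactly the behavior automatic decoding wants.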
pull #427 · Fix concat for identical index variables
  • id 85692656 · node_id MDExOlB1bGxSZXF1ZXN0MzcwODQ0MjM= · repo xarray (13221727)
  • user shoyer (1217238) · state closed · author_association MEMBER · comments 0
  • milestone 0.5.1 (1143506) · pull_request pydata/xarray/pulls/427 · draft 0
  • created 2015-06-06T04:05:10Z · updated 2015-06-07T06:03:23Z · closed 2015-06-07T06:03:16Z
  • reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/427/reactions)

Fixes #425
issue #425 · xray.concat fails in an edge case involving identical coordinate variables
  • id 85662203 · node_id MDU6SXNzdWU4NTY2MjIwMw== · repo xarray (13221727)
  • user shoyer (1217238) · state closed (completed) · author_association MEMBER · comments 0
  • milestone 0.5.1 (1143506)
  • created 2015-06-06T00:22:45Z · updated 2015-06-07T06:03:16Z · closed 2015-06-07T06:03:16Z
  • reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/425/reactions)

```python
ds1 = xray.Dataset({'foo': 1.5}, {'x': 0, 'y': 1})
ds2 = xray.Dataset({'foo': 2.5}, {'x': 1, 'y': 1})
xray.concat([ds1, ds2], 'y')
```

```
ValueError                                Traceback (most recent call last)
<ipython-input-235-69cea5440248> in <module>()
      1 ds1 = xray.Dataset({'foo': 1.5}, {'x': 0, 'y': 1})
      2 ds2 = xray.Dataset({'foo': 2.5}, {'x': 1, 'y': 1})
----> 3 xray.concat([ds1, ds2], 'y')

/Users/shoyer/dev/xray/xray/core/alignment.pyc in concat(objs, dim, indexers, mode, concat_over, compat)
    276         raise ValueError('must supply at least one object to concatenate')
    277     cls = type(first_obj)
--> 278     return cls._concat(objs, dim, indexers, mode, concat_over, compat)
    279
    280

/Users/shoyer/dev/xray/xray/core/dataset.pyc in _concat(cls, datasets, dim, indexers, mode, concat_over, compat)
   1732         for k in concat_over:
   1733             vars = ensure_common_dims([ds._variables[k] for ds in datasets])
-> 1734             concatenated[k] = Variable.concat(vars, dim, indexers)
   1735
   1736         concatenated._coord_names.update(datasets[0].coords)

/Users/shoyer/dev/xray/xray/core/dataset.pyc in __setitem__(self, key, value)
    637             raise NotImplementedError('cannot yet use a dictionary as a key '
    638                                       'to set Dataset values')
--> 639         self.update({key: value})
    640
    641     def __delitem__(self, key):

/Users/shoyer/dev/xray/xray/core/dataset.pyc in update(self, other, inplace)
   1224         """
   1225         return self.merge(
-> 1226             other, inplace=inplace, overwrite_vars=list(other), join='left')
   1227
   1228     def merge(self, other, inplace=False, overwrite_vars=set(),

/Users/shoyer/dev/xray/xray/core/dataset.pyc in merge(self, other, inplace, overwrite_vars, compat, join)
   1291             raise ValueError('cannot merge: the following variables are '
   1292                             'coordinates on one dataset but not the other: %s'
-> 1293                             % list(ambiguous_coords))
   1294
   1295         obj = self if inplace else self.copy()

ValueError: cannot merge: the following variables are coordinates on one dataset but not the other: ['y']
```
pull #426 · Decode non-native endianness
  • id 85670978 · node_id MDExOlB1bGxSZXF1ZXN0MzcwODIzNjk= · repo xarray (13221727)
  • user shoyer (1217238) · state closed · author_association MEMBER · comments 0
  • milestone 0.5.1 (1143506) · pull_request pydata/xarray/pulls/426 · draft 0
  • created 2015-06-06T01:31:14Z · updated 2015-06-06T03:51:14Z · closed 2015-06-06T03:51:13Z
  • reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/426/reactions)

Fixes #416

By the way, it turns out the simple workaround for this was to install netCDF4 -- only scipy.io.netcdf returns the big-endian arrays directly.

CC @bareid

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
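The filter behind this page ("7 rows where milestone = 1143506 sorted by updated_at descending") can be reproduced against this schema with Python's built-in sqlite3. This sketch uses an abridged copy of the [issues] table; the single inserted row is taken from the records above.

```python
import sqlite3

# Abridged copy of the [issues] schema, enough to run the page's query.
conn = sqlite3.connect(':memory:')
conn.execute("""CREATE TABLE issues (
    id INTEGER PRIMARY KEY,
    number INTEGER,
    title TEXT,
    milestone INTEGER,
    updated_at TEXT)""")
conn.execute(
    "INSERT INTO issues VALUES (88339814, 434, "
    "'One less copy when reading big-endian data', 1143506, "
    "'2015-06-15T07:51:44Z')")

# The page's underlying query: filter by milestone, newest update first.
rows = conn.execute(
    "SELECT number, title FROM issues "
    "WHERE milestone = 1143506 ORDER BY updated_at DESC").fetchall()
print(rows)  # [(434, 'One less copy when reading big-endian data')]
```

The `idx_issues_milestone` index defined above exists precisely so this `WHERE milestone = ?` filter does not require a full table scan.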
Powered by Datasette · Queries took 23.953ms · About: xarray-datasette