
pull_requests


5 rows where milestone = 664063




PR #113 · Most of Python 3 support
  id: 15556956 · node_id: MDExOlB1bGxSZXF1ZXN0MTU1NTY5NTY= · state: closed · locked: 0 · draft: 0
  user: takluyver (327925) · author_association: MEMBER
  created_at: 2014-05-06T18:31:56Z · updated_at: 2014-07-15T20:36:05Z · closed_at: 2014-05-09T01:39:01Z · merged_at: 2014-05-09T01:39:01Z
  merge_commit_sha: 184fd39c0fa1574a03439998138297bdb193674c
  head: 6dbd8910080e9210700501c0ea671cf0dc44d90f · base: 8d6fbd7f4469ce73ed94cf09602efa0498f9dab6
  milestone: 0.1.1 (664063) · repo: xarray (13221727) · assignee: (none) · auto_merge: (none) · merged_by: (none)
  url: https://github.com/pydata/xarray/pull/113
  body: This isn't entirely finished, but I need to stop working on it for a bit, and I think enough of it is ready to be reviewed. The core code is passing its tests; the remaining failures are all in talking to the Scipy and netCDF4 backends. I also have PRs open against Scipy (scipy/scipy#3617) and netCDF4 (Unidata/netcdf4-python#252) to fix bugs I've encountered there. Particular issues that came up:
    • There were quite a few circular imports. For now, I've fudged these to work rather than trying to reorganise the code.
    • `isinstance(x, int)` doesn't reliably catch numpy integer types - see e.g. numpy/numpy#2951. I changed several such cases to `isinstance(x, (int, np.integer))`.

PR #126 · Return numpy.datetime64 arrays for non-standard calendars
  id: 15798892 · node_id: MDExOlB1bGxSZXF1ZXN0MTU3OTg4OTI= · state: closed · locked: 0 · draft: 0
  user: jhamman (2443309) · author_association: MEMBER
  created_at: 2014-05-13T00:22:51Z · updated_at: 2015-07-27T05:38:06Z · closed_at: 2014-05-16T00:21:08Z · merged_at: 2014-05-16T00:21:08Z
  merge_commit_sha: e80836b9736fcfba1af500c08aab22bcda4e8912
  head: e07bc93589bbd23fe3bfa1ae1e1daf15eebf83f2 · base: ed3143e3082ba339d35dc4678ddabc7e175dd6b8
  milestone: 0.1.1 (664063) · repo: xarray (13221727) · assignee: (none) · auto_merge: (none) · merged_by: (none)
  url: https://github.com/pydata/xarray/pull/126
  body: Fixes issues in #118 and #121

PR #127 · initial implementation of support for NetCDF groups
  id: 15820652 · node_id: MDExOlB1bGxSZXF1ZXN0MTU4MjA2NTI= · state: closed · locked: 0 · draft: 0
  user: alimanfoo (703554) · author_association: CONTRIBUTOR
  created_at: 2014-05-13T13:12:53Z · updated_at: 2014-06-27T17:23:33Z · closed_at: 2014-05-16T01:46:09Z · merged_at: 2014-05-16T01:46:09Z
  merge_commit_sha: efece21b5fce99465a52c866b890e34f19d5bd37
  head: 28b0ba59b33f63dcd6f6cb05666b3cd98211f4b4 · base: ed3143e3082ba339d35dc4678ddabc7e175dd6b8
  milestone: 0.1.1 (664063) · repo: xarray (13221727) · assignee: (none) · auto_merge: (none) · merged_by: (none)
  url: https://github.com/pydata/xarray/pull/127
  body: Just to start getting familiar with xray, I've had a go at implementing support for opening a dataset from a specific group within a NetCDF file. I haven't tested on real data, but there are a couple of unit tests covering simple cases. Let me know if you'd like to take this forward; happy to work on it further.

PR #129 · Require only numpy 1.7 for the benefit of readthedocs
  id: 15862812 · node_id: MDExOlB1bGxSZXF1ZXN0MTU4NjI4MTI= · state: closed · locked: 0 · draft: 0
  user: shoyer (1217238) · author_association: MEMBER
  created_at: 2014-05-14T06:41:30Z · updated_at: 2014-06-25T23:40:31Z · closed_at: 2014-05-15T07:21:22Z · merged_at: 2014-05-15T07:21:22Z
  merge_commit_sha: b020100a03b394cc08b5cb504a08a64af1253ba7
  head: 0b33e2ab862f27b688d8ababa954265942720164 · base: ed3143e3082ba339d35dc4678ddabc7e175dd6b8
  milestone: 0.1.1 (664063) · repo: xarray (13221727) · assignee: (none) · auto_merge: (none) · merged_by: (none)
  url: https://github.com/pydata/xarray/pull/129
  body: ReadTheDocs comes with pre-built packages for the basic scientific Python stack, but some of these packages are old (e.g., numpy is 1.7.1). The only way to upgrade packages on readthedocs is to use a virtual environment and a requirements.txt. Unfortunately, this means we can't upgrade both numpy and pandas simultaneously, because pandas may get built first and link against the wrong version of numpy. We inadvertently stumbled upon a workaround to build the "latest" docs by first installing numpy in the (cached) virtual environment, and then later (in another commit) adding pandas to the requirements.txt file. However, this is a real hack and makes it impossible to maintain different versions of the docs, such as for tagged releases. Accordingly, this commit relaxes the numpy version requirement so we can use a version that readthedocs already has installed. (We don't actually need a newer version of numpy for any current functionality in xray, although it's nice to have for support for missing-value functions like nanmean.)

PR #134 · Fix concatenating Variables with dtype=datetime64
  id: 16037950 · node_id: MDExOlB1bGxSZXF1ZXN0MTYwMzc5NTA= · state: closed · locked: 0 · draft: 0
  user: shoyer (1217238) · author_association: MEMBER
  created_at: 2014-05-19T05:39:46Z · updated_at: 2014-06-28T01:08:03Z · closed_at: 2014-05-20T19:09:28Z · merged_at: 2014-05-20T19:09:28Z
  merge_commit_sha: 6e9268f01681c37a9603ef67a46aa96d29955fb8
  head: e9e1866dfdf13b9656c923c1d8f077e9bad225d8 · base: c425967c5f23f46ec1100ccdf472a3fbc0a51ade
  milestone: 0.1.1 (664063) · repo: xarray (13221727) · assignee: (none) · auto_merge: (none) · merged_by: (none)
  url: https://github.com/pydata/xarray/pull/134
  body: This is an alternative to #125, which I think is a little cleaner. Basically, there was a bug where `Variable.values` for datetime64 arrays always made a copy of the values. This made it impossible to edit variable values in place. @akleeman, I would appreciate your thoughts.
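The "5 rows where milestone = 664063" filter corresponds to a plain SQL query against this table. A minimal sketch with Python's built-in sqlite3, using the real table and column names from the schema below but only a handful of illustrative rows (the extra non-matching row is made up for contrast):

```python
import sqlite3

# In-memory sketch: a trimmed-down pull_requests table with only the
# columns needed to demonstrate the milestone filter.
conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE pull_requests (
        id INTEGER PRIMARY KEY,
        number INTEGER,
        title TEXT,
        milestone INTEGER
    )
    """
)
rows = [
    (15556956, 113, "Most of Python 3 support", 664063),
    (15798892, 126, "Return numpy.datetime64 arrays for non-standard calendars", 664063),
    (16037950, 134, "Fix concatenating Variables with dtype=datetime64", 664063),
    (99999999, 999, "A PR in some other milestone", 123456),  # hypothetical row
]
conn.executemany("INSERT INTO pull_requests VALUES (?, ?, ?, ?)", rows)

# The query behind the "where milestone = 664063" view.
matched = conn.execute(
    "SELECT number, title FROM pull_requests WHERE milestone = ? ORDER BY id",
    (664063,),
).fetchall()
for number, title in matched:
    print(f"#{number}: {title}")
```

On the full database this returns the five PRs listed above; here it returns the three sample rows and skips the one with a different milestone id.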


CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [auto_merge] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_pull_requests_merged_by]
    ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo]
    ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone]
    ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee]
    ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user]
    ON [pull_requests] ([user]);
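The indexes above cover the foreign-key columns, so lookups such as the milestone filter don't have to scan the whole table. A small sketch with Python's sqlite3 (using a reduced version of the schema, just the milestone column and its index) that asks SQLite for its query plan and confirms the index is used:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Reduced schema: only the column and index relevant to the milestone filter.
conn.execute("CREATE TABLE pull_requests (id INTEGER PRIMARY KEY, milestone INTEGER)")
conn.execute("CREATE INDEX idx_pull_requests_milestone ON pull_requests (milestone)")

# EXPLAIN QUERY PLAN reports how SQLite would execute the faceted query;
# the detail text is the last column of each plan row.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM pull_requests WHERE milestone = ?",
    (664063,),
).fetchall()
detail = plan[0][-1]
print(detail)  # expect a SEARCH using idx_pull_requests_milestone (wording varies by SQLite version)
```

The same check applies to the other four indexes; each turns a full-table SCAN into an index SEARCH for filters on its column.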
Powered by Datasette · About: xarray-datasette