pull_requests

9 rows where assignee = 6213168 (crusaderky)

All 9 pull requests are closed and merged, not locked, not draft, opened by and assigned to crusaderky (user id 6213168), in repo xarray (id 13221727), with author_association MEMBER; milestone, auto_merge and merged_by are empty for every row.

#3202 · chunk sparse arrays
  id: 306273567 · node_id: MDExOlB1bGxSZXF1ZXN0MzA2MjczNTY3
  created: 2019-08-11T11:19:16Z · updated: 2019-08-12T21:02:31Z · merged: 2019-08-12T21:02:24Z · closed: 2019-08-12T21:02:25Z
  merge_commit_sha: c782637ec2e4758de1ed63bbd36610cba1a57db8
  head: 38efffc3b152c2e9a4a34c4cf49bf60b5519b813 · base: fc44bae87856a357e669a519915486f7e46733c1
  url: https://github.com/pydata/xarray/pull/3202
  body: Closes #3191 @shoyer I completely disabled wrapping in ImplicitToExplicitIndexingAdapter for sparse arrays, cupy arrays, etc. I'm not sure if it's desirable; the chief problem is that I don't think I understand the purpose of ImplicitToExplicitIndexingAdapter to begin with... some enlightenment would be appreciated.

#3358 · Rolling minimum dependency versions policy
  id: 323020965 · node_id: MDExOlB1bGxSZXF1ZXN0MzIzMDIwOTY1
  created: 2019-09-30T23:50:39Z · updated: 2019-10-09T02:02:29Z · merged: 2019-10-08T21:23:47Z · closed: 2019-10-08T21:23:47Z
  merge_commit_sha: 6fb272c0fde4bfaca9b6322b18ac2cf962e26ee3
  head: b20349c302cb81ad58df37f68e2aa11763c95f8b · base: 4254b4af33843f711459e5242018cd1d678ad3a0
  url: https://github.com/pydata/xarray/pull/3358
  body: Closes #3222 Closes #3293
    - Drop support for Python 3.5
    - Upgrade numpy to 1.14 (24 months old)
    - Upgrade pandas to 0.24 (12 months old)
    - Downgrade scipy to 1.0 (policy allows for 1.2, but it breaks numpy=1.14)
    - Downgrade dask to 1.2 (6 months old)
    - Other upgrades/downgrades to comply with the policy
    - CI tool to verify that the minimum dependencies requirements in CI are compliant with the policy
    - Overhaul CI environment for readthedocs
    Out of scope:
    - Purge away all OrderedDict's

#3375 · Speed up isel and __getitem__
  id: 325076349 · node_id: MDExOlB1bGxSZXF1ZXN0MzI1MDc2MzQ5
  created: 2019-10-06T21:27:42Z · updated: 2019-10-10T09:21:56Z · merged: 2019-10-09T18:01:30Z · closed: 2019-10-09T18:01:30Z
  merge_commit_sha: 3f0049ffc51e4c709256cf174c435f741370148d
  head: 1e99f83e4664982a976be400cd4a4f2f95bb22c2 · base: 132733a917171fcb1f269406eb9e6668cbb7e376
  url: https://github.com/pydata/xarray/pull/3375
  body: First iterative improvement for #2799. Speed up Dataset.isel up to 33% and DataArray.isel up to 25% (when there are no indices and the numpy array is small). 15% speedup when there are indices. Benchmarks can be found in #2799.

#3515 · Recursive tokenization
  id: 340145984 · node_id: MDExOlB1bGxSZXF1ZXN0MzQwMTQ1OTg0
  created: 2019-11-12T22:35:13Z · updated: 2019-11-13T00:54:32Z · merged: 2019-11-13T00:53:27Z · closed: 2019-11-13T00:53:27Z
  merge_commit_sha: e70138b61033081e3bfab3aaaec5997716cd7109
  head: 36ad4f7d4d2f238cccb20d48c83d604ad431c49d · base: b74f80ca2df4920f711f9fe5762458c53ce3c2c6
  url: https://github.com/pydata/xarray/pull/3515
  body: After misreading the dask documentation <https://docs.dask.org/en/latest/custom-collections.html#deterministic-hashing>, I was under the impression that the output of ``__dask_tokenize__`` would be recursively parsed, like it happens for ``__getstate__`` or ``__reduce__``. That's not the case - the output of ``__dask_tokenize__`` is just fed into a str() function so it has to be made explicitly recursive!

#4296 · Increase support window of all dependencies
  id: 461335401 · node_id: MDExOlB1bGxSZXF1ZXN0NDYxMzM1NDAx
  created: 2020-08-01T18:55:54Z · updated: 2020-08-14T09:52:46Z · merged: 2020-08-14T09:52:42Z · closed: 2020-08-14T09:52:42Z
  merge_commit_sha: 8fab5a2449d8368251f96fc2b9d1eaa3040894e6
  head: b4f4644ca774ce49555304cb241f88be03fd3fec · base: 1791c3b6f9852edca977c68c0bf52ed4406ef7b0
  url: https://github.com/pydata/xarray/pull/4296
  body: Closes #4295 Increase width of the sliding window for minimum supported version:
    - setuptools from 6 months sliding window to hardcoded >= 38.4, and to 42 months sliding window starting from July 2021
    - dask and distributed from 6 months sliding window to hardcoded >= 2.9, and to 12 months sliding window starting from January 2021
    - all other libraries from 6 months to 12 months sliding window

#4297 · Lazily load resource files
  id: 461438023 · node_id: MDExOlB1bGxSZXF1ZXN0NDYxNDM4MDIz
  created: 2020-08-01T21:31:36Z · updated: 2020-09-22T05:32:38Z · merged: 2020-08-02T07:05:15Z · closed: 2020-08-02T07:05:15Z
  merge_commit_sha: f99c6cca2df959df3db3c57592db97287fd28f15
  head: 90c1563f4e37c9d289c149066fdc05aa08874baa · base: 9058114f70d07ef04654d1d60718442d0555b84b
  url: https://github.com/pydata/xarray/pull/4297
  body:
    - Marginal speed-up and RAM footprint reduction when not running in Jupyter Notebook
    - Closes #4294

#8606 · Clean up Dims type annotation
  id: 1676653484 · node_id: PR_kwDOAMm_X85j77Os
  created: 2024-01-12T15:05:40Z · updated: 2024-01-18T18:14:15Z · merged: 2024-01-16T10:26:08Z · closed: 2024-01-16T10:26:08Z
  merge_commit_sha: 1580c2c47cca425d47e3a4c2777a625dadba0a8f
  head: cdb693e30856667947f32efb760c51940e26f250 · base: 357a44474df6d02555502d600776e27a86a12f3f
  url: https://github.com/pydata/xarray/pull/8606
  body: (empty)

#8618 · Re-enable mypy checks for parse_dims unit tests
  id: 1684677511 · node_id: PR_kwDOAMm_X85kaiOH
  created: 2024-01-18T11:32:28Z · updated: 2024-01-19T15:49:33Z · merged: 2024-01-18T15:34:23Z · closed: 2024-01-18T15:34:23Z
  merge_commit_sha: f4d2609979e8d0fbeee9821ce140898cf3d54e80
  head: bc7ad7425265128ad08d5844dd8963cf8f4fb81c · base: 24ad846b89faa4e549a0cc6382b4735c872c371d
  url: https://github.com/pydata/xarray/pull/8618
  body: As per https://github.com/pydata/xarray/pull/8606#discussion_r1452680454

#8797 · tokenize() should ignore difference between None and {} attrs
  id: 1749654587 · node_id: PR_kwDOAMm_X85oSZw7
  created: 2024-02-29T12:22:24Z · updated: 2024-03-01T11:15:30Z · merged: 2024-03-01T03:29:51Z · closed: 2024-03-01T03:29:51Z
  merge_commit_sha: 604bb6d08b942f774a3ba2a2900061959d2e091d
  head: 4eb05f0f73c535455f457e650036c86cdfaf4aa2 · base: a241845c0dfcb8a5a0396f5ef7602e9dae6155c0
  url: https://github.com/pydata/xarray/pull/8797
  body: Closes #8788
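The filtered view above can be retrieved programmatically: Datasette exports any table as JSON by appending ".json" to the table URL and passing column filters as query-string parameters ("_shape=array" returns a plain list of row objects). A minimal sketch of building that URL, assuming a hypothetical instance base URL:

```python
# Sketch: building the Datasette JSON-export URL for this filtered view.
# BASE is a hypothetical instance URL; substitute the real deployment.
from urllib.parse import urlencode

BASE = "https://github-to-sqlite.dogsheep.net/github"  # hypothetical

def table_json_url(table: str, **filters: str) -> str:
    """Return the JSON-export URL for a table, with exact-match column filters."""
    # Datasette treats "?column=value" as an exact-match filter;
    # "_shape=array" asks for a bare JSON array of row objects.
    qs = urlencode({**filters, "_shape": "array"})
    return f"{BASE}/{table}.json?{qs}"

url = table_json_url("pull_requests", assignee="6213168")
```

Fetching `url` with any HTTP client would then return the same 9 rows as JSON.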

Table schema:

CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [auto_merge] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_pull_requests_merged_by]
    ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo]
    ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone]
    ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee]
    ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user]
    ON [pull_requests] ([user]);
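The DDL above can be exercised directly against SQLite. A minimal sketch, using a trimmed copy of the schema (foreign-key columns and most others omitted for brevity) to reproduce the page's "rows where assignee = 6213168" filter, which the idx_pull_requests_assignee index serves:

```python
# Sketch: loading a trimmed version of the schema above into an in-memory
# SQLite database and running the page's assignee filter.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [number] INTEGER,
   [state] TEXT,
   [title] TEXT,
   [assignee] INTEGER
);
CREATE INDEX [idx_pull_requests_assignee]
    ON [pull_requests] ([assignee]);
""")

# Insert one of the listed pull requests (#3202).
conn.execute(
    "INSERT INTO pull_requests (id, number, state, title, assignee) "
    "VALUES (?, ?, ?, ?, ?)",
    (306273567, 3202, "closed", "chunk sparse arrays", 6213168),
)

# The page's filter as SQL; SQLite can satisfy the WHERE clause via
# idx_pull_requests_assignee rather than a full table scan.
rows = conn.execute(
    "SELECT number, title FROM pull_requests WHERE assignee = ?", (6213168,)
).fetchall()
```

The indexes on user, assignee, milestone, repo, and merged_by all follow the same pattern: each backs exact-match filtering on a foreign-key column.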
Powered by Datasette · About: xarray-datasette