issues

9 rows where assignee = 6213168 and type = "pull" sorted by updated_at descending


Facets: type = pull (9) · state = closed (9) · repo = xarray (9)
Columns: id · node_id · number · title · user · state · locked · assignee · milestone · comments · created_at · updated_at ▲ · closed_at · author_association · active_lock_reason · draft · pull_request · body · reactions · performed_via_github_app · state_reason · repo · type
#8797 · tokenize() should ignore difference between None and {} attrs
id: 2161133346 · node_id: PR_kwDOAMm_X85oSZw7 · user: crusaderky (6213168) · assignee: crusaderky (6213168) · state: closed · locked: 0 · comments: 1
created_at: 2024-02-29T12:22:24Z · updated_at: 2024-03-01T11:15:30Z · closed_at: 2024-03-01T03:29:51Z · author_association: MEMBER · draft: 0 · pull_request: pydata/xarray/pulls/8797
  • Closes #8788
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8797/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    repo: xarray (13221727) · type: pull
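The change in #8797 amounts to normalizing attrs before hashing, so that `None` and `{}` produce the same token. A minimal, hypothetical sketch of that idea (illustrative names, not xarray's actual code):

```python
import hashlib


def normalize_attrs(attrs):
    # Treat attrs=None and attrs={} as the same value: both
    # normalize to an empty dict before tokenizing.
    return dict(attrs or {})


def token(attrs):
    # Stand-in for a deterministic tokenizer such as dask's tokenize():
    # hash a canonical representation of the normalized attrs.
    canonical = repr(sorted(normalize_attrs(attrs).items()))
    return hashlib.md5(canonical.encode()).hexdigest()


assert token(None) == token({})          # the fix: no spurious difference
assert token({"units": "m"}) != token(None)
```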
#8618 · Re-enable mypy checks for parse_dims unit tests
id: 2088095900 · node_id: PR_kwDOAMm_X85kaiOH · user: crusaderky (6213168) · assignee: crusaderky (6213168) · state: closed · locked: 0 · comments: 1
created_at: 2024-01-18T11:32:28Z · updated_at: 2024-01-19T15:49:33Z · closed_at: 2024-01-18T15:34:23Z · author_association: MEMBER · draft: 0 · pull_request: pydata/xarray/pulls/8618

As per https://github.com/pydata/xarray/pull/8606#discussion_r1452680454

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8618/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    repo: xarray (13221727) · type: pull
#8606 · Clean up Dims type annotation
id: 2079054085 · node_id: PR_kwDOAMm_X85j77Os · user: crusaderky (6213168) · assignee: crusaderky (6213168) · state: closed · locked: 0 · comments: 1
created_at: 2024-01-12T15:05:40Z · updated_at: 2024-01-18T18:14:15Z · closed_at: 2024-01-16T10:26:08Z · author_association: MEMBER · draft: 0 · pull_request: pydata/xarray/pulls/8606
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8606/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    repo: xarray (13221727) · type: pull
#4297 · Lazily load resource files
id: 671216158 · node_id: MDExOlB1bGxSZXF1ZXN0NDYxNDM4MDIz · user: crusaderky (6213168) · assignee: crusaderky (6213168) · state: closed · locked: 0 · comments: 4
created_at: 2020-08-01T21:31:36Z · updated_at: 2020-09-22T05:32:38Z · closed_at: 2020-08-02T07:05:15Z · author_association: MEMBER · draft: 0 · pull_request: pydata/xarray/pulls/4297
  • Marginal speed-up and RAM footprint reduction when not running in Jupyter Notebook
  • Closes #4294
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4297/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    repo: xarray (13221727) · type: pull
#4296 · Increase support window of all dependencies
id: 671108068 · node_id: MDExOlB1bGxSZXF1ZXN0NDYxMzM1NDAx · user: crusaderky (6213168) · assignee: crusaderky (6213168) · state: closed · locked: 0 · comments: 7
created_at: 2020-08-01T18:55:54Z · updated_at: 2020-08-14T09:52:46Z · closed_at: 2020-08-14T09:52:42Z · author_association: MEMBER · draft: 0 · pull_request: pydata/xarray/pulls/4296

Closes #4295

Increase the width of the sliding window for minimum supported versions:

  • setuptools: from a 6-month sliding window to hardcoded >= 38.4, then a 42-month sliding window starting from July 2021
  • dask and distributed: from a 6-month sliding window to hardcoded >= 2.9, then a 12-month sliding window starting from January 2021
  • all other libraries: from a 6-month to a 12-month sliding window

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4296/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    repo: xarray (13221727) · type: pull
#3515 · Recursive tokenization
id: 521842949 · node_id: MDExOlB1bGxSZXF1ZXN0MzQwMTQ1OTg0 · user: crusaderky (6213168) · assignee: crusaderky (6213168) · state: closed · locked: 0 · comments: 1
created_at: 2019-11-12T22:35:13Z · updated_at: 2019-11-13T00:54:32Z · closed_at: 2019-11-13T00:53:27Z · author_association: MEMBER · draft: 0 · pull_request: pydata/xarray/pulls/3515

After misreading the dask documentation (https://docs.dask.org/en/latest/custom-collections.html#deterministic-hashing), I was under the impression that the output of __dask_tokenize__ would be recursively parsed, as happens for __getstate__ or __reduce__. That's not the case: the output of __dask_tokenize__ is simply fed through str(), so it has to be made explicitly recursive!

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3515/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    repo: xarray (13221727) · type: pull
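The pitfall described in #3515 can be sketched in plain Python (hypothetical names, not dask's implementation): str() on a container embeds the default repr of nested objects, which includes a memory address, so equal objects hash differently unless the token method recurses explicitly.

```python
import hashlib


class Leaf:
    def __init__(self, payload):
        self.payload = payload

    def _token(self):
        return ("Leaf", self.payload)


class Tree:
    def __init__(self, child):
        self.child = child

    def _token_wrong(self):
        # str() of this tuple embeds the child's default repr,
        # e.g. "<__main__.Leaf object at 0x7f...>", so two equal
        # trees produce different tokens.
        return ("Tree", self.child)

    def _token(self):
        # Explicit recursion: include the child's token, not the child.
        return ("Tree", self.child._token())


def tokenize(parts):
    # Stand-in for the consumer feeding the token through str()
    # and hashing the result.
    return hashlib.md5(str(parts).encode()).hexdigest()


a = tokenize(Tree(Leaf(42))._token())
b = tokenize(Tree(Leaf(42))._token())
assert a == b  # explicit recursion makes equal objects hash equal
```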
#3375 · Speed up isel and __getitem__
id: 503163130 · node_id: MDExOlB1bGxSZXF1ZXN0MzI1MDc2MzQ5 · user: crusaderky (6213168) · assignee: crusaderky (6213168) · state: closed · locked: 0 · comments: 5
created_at: 2019-10-06T21:27:42Z · updated_at: 2019-10-10T09:21:56Z · closed_at: 2019-10-09T18:01:30Z · author_association: MEMBER · draft: 0 · pull_request: pydata/xarray/pulls/3375

First iterative improvement for #2799.

Speeds up Dataset.isel by up to 33% and DataArray.isel by up to 25% (when there are no indices and the numpy array is small); 15% speedup when there are indices.

Benchmarks can be found in #2799.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3375/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    repo: xarray (13221727) · type: pull
#3358 · Rolling minimum dependency versions policy
id: 500582648 · node_id: MDExOlB1bGxSZXF1ZXN0MzIzMDIwOTY1 · user: crusaderky (6213168) · assignee: crusaderky (6213168) · state: closed · locked: 0 · comments: 24
created_at: 2019-09-30T23:50:39Z · updated_at: 2019-10-09T02:02:29Z · closed_at: 2019-10-08T21:23:47Z · author_association: MEMBER · draft: 0 · pull_request: pydata/xarray/pulls/3358

Closes #3222 Closes #3293

  • Drop support for Python 3.5
  • Upgrade numpy to 1.14 (24 months old)
  • Upgrade pandas to 0.24 (12 months old)
  • Downgrade scipy to 1.0 (policy allows for 1.2, but it breaks numpy=1.14)
  • Downgrade dask to 1.2 (6 months old)
  • Other upgrades/downgrades to comply with the policy
  • CI tool to verify that the minimum dependencies requirements in CI are compliant with the policy
  • Overhaul CI environment for readthedocs

Out of scope:

  • Purging all OrderedDicts

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3358/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    repo: xarray (13221727) · type: pull
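The CI tool mentioned in #3358 checks, roughly, that each pinned minimum version is the newest release older than the policy's sliding window. A hypothetical sketch of that check (the function name, data shape, and release dates are illustrative, not xarray's actual tooling):

```python
from datetime import date, timedelta

# Policy windows in months, per the PR description.
POLICY_MONTHS = {"numpy": 24, "pandas": 12, "dask": 6}


def minimum_allowed(releases, months, today):
    """Return the newest version released at least `months` months ago.

    `releases` maps version string -> release date (illustrative shape).
    """
    cutoff = today - timedelta(days=months * 30)
    eligible = {v: d for v, d in releases.items() if d <= cutoff}
    # Newest eligible release = the policy-compliant minimum pin.
    return max(eligible, key=eligible.get) if eligible else None


numpy_releases = {  # illustrative dates, not the real release history
    "1.13": date(2017, 6, 7),
    "1.14": date(2018, 1, 7),
    "1.15": date(2018, 7, 23),
}
min_numpy = minimum_allowed(numpy_releases, POLICY_MONTHS["numpy"], date(2019, 10, 1))
```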
#3202 · chunk sparse arrays
id: 479359010 · node_id: MDExOlB1bGxSZXF1ZXN0MzA2MjczNTY3 · user: crusaderky (6213168) · assignee: crusaderky (6213168) · state: closed · locked: 0 · comments: 4
created_at: 2019-08-11T11:19:16Z · updated_at: 2019-08-12T21:02:31Z · closed_at: 2019-08-12T21:02:25Z · author_association: MEMBER · draft: 0 · pull_request: pydata/xarray/pulls/3202

Closes #3191

@shoyer I completely disabled wrapping in ImplicitToExplicitIndexingAdapter for sparse arrays, cupy arrays, etc. I'm not sure whether it's desirable; the chief problem is that I don't think I understand the purpose of ImplicitToExplicitIndexingAdapter to begin with. Some enlightenment would be appreciated.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3202/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    repo: xarray (13221727) · type: pull

Table schema:

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
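The filter on this page ("assignee = 6213168 and type = 'pull' sorted by updated_at descending") is an ordinary SQL query against this schema. A minimal, self-contained sketch using a trimmed version of the table (only the columns the filter touches) and two of the rows above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Trimmed version of the issues schema: just the columns the filter uses.
conn.executescript("""
CREATE TABLE issues (
   id INTEGER PRIMARY KEY,
   number INTEGER,
   title TEXT,
   assignee INTEGER,
   type TEXT,
   updated_at TEXT
);
CREATE INDEX idx_issues_assignee ON issues (assignee);
""")
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?)",
    [
        (2161133346, 8797,
         "tokenize() should ignore difference between None and {} attrs",
         6213168, "pull", "2024-03-01T11:15:30Z"),
        (2088095900, 8618,
         "Re-enable mypy checks for parse_dims unit tests",
         6213168, "pull", "2024-01-19T15:49:33Z"),
    ],
)
# ISO-8601 timestamps sort lexically in chronological order,
# so ORDER BY on the TEXT column works as expected.
rows = conn.execute(
    "SELECT number, title FROM issues"
    " WHERE assignee = 6213168 AND type = 'pull'"
    " ORDER BY updated_at DESC"
).fetchall()
```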
Powered by Datasette · About: xarray-datasette