issues
6 rows where repo = 13221727 and user = 923438 sorted by updated_at descending
| id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 482543307 | MDU6SXNzdWU0ODI1NDMzMDc= | 3232 | Use pytorch as backend for xarrays | fjanoos 923438 | open | 0 | | | 49 | 2019-08-19T21:45:15Z | 2022-07-20T18:01:56Z | | NONE | | | | I would be interested in using pytorch as a backend for xarrays because: a) pytorch is very similar to numpy, so the conceptual overhead is small; b) [most helpful] it enables having a GPU as the underlying hardware for compute, which would provide non-trivial speed-ups; c) it would allow seamless integration with deep-learning algorithms and techniques. Any thoughts on what the interest for such a feature might be? I would be open to implementing parts of it, so any suggestions on where I could start? Thanks | {"url": "https://api.github.com/repos/pydata/xarray/issues/3232/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | reopened | xarray 13221727 | issue |
| 495382528 | MDU6SXNzdWU0OTUzODI1Mjg= | 3320 | Error saving xr.Dataset with timezone aware time index to netcdf format. | fjanoos 923438 | open | 0 | | | 1 | 2019-09-18T18:20:42Z | 2022-01-17T21:23:02Z | | NONE | | | | When I try to save an xr.Dataset that was created from a pandas dataframe with a tz-aware time index (see #3291), xarray converts the time index into int64 nanoseconds. When I then try to save this dataset I get an error; dropping into pdb when the error is hit, it looks like the problem is with the time index. After converting the time index into a regular int index, saving works. Any ideas on what I can do about this? Thanks! -firdaus | {"url": "https://api.github.com/repos/pydata/xarray/issues/3320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | issue |
| 480786385 | MDU6SXNzdWU0ODA3ODYzODU= | 3218 | merge_asof functionality | fjanoos 923438 | closed | 0 | | | 6 | 2019-08-14T16:57:22Z | 2021-07-21T18:18:20Z | 2021-07-21T18:18:20Z | NONE | | | | Would it be possible to add some functionality to xarray merge that mimics pandas merge_asof? This would be very useful when aligning timeseries dataarrays where the two arrays are misaligned. Thanks. | {"url": "https://api.github.com/repos/pydata/xarray/issues/3218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | completed | xarray 13221727 | issue |
| 496688781 | MDU6SXNzdWU0OTY2ODg3ODE= | 3330 | Feature requests for DataArray.rolling | fjanoos 923438 | closed | 0 | | | 1 | 2019-09-21T18:58:21Z | 2021-07-08T16:29:18Z | 2021-07-08T16:29:18Z | NONE | | | | In | {"url": "https://api.github.com/repos/pydata/xarray/issues/3330/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | completed | xarray 13221727 | issue |
| 490618213 | MDU6SXNzdWU0OTA2MTgyMTM= | 3291 | xr.DataSet.from_dataframe / xr.DataArray.from_series does not preserve DateTimeIndex with timezone | fjanoos 923438 | open | 0 | | | 4 | 2019-09-07T10:10:40Z | 2021-04-21T21:00:41Z | | NONE | | | | Problem Description: When using DataSet.from_dataframe (DataArray.from_series) to convert a pandas dataframe with a DateTimeIndex having a timezone, xarray converts the datetime into a nanosecond index rather than keeping it as a datetime-index type. Expected Output: After removing the tz localization from the DateTimeIndex of the dataframe, the conversion to a DataSet preserves the time-index (without converting it to nanoseconds). | {"url": "https://api.github.com/repos/pydata/xarray/issues/3291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | issue |
| 496809167 | MDU6SXNzdWU0OTY4MDkxNjc= | 3332 | Memory usage of `da.rolling().construct` | fjanoos 923438 | closed | 0 | | | 5 | 2019-09-22T17:35:06Z | 2021-02-16T15:00:37Z | 2021-02-16T15:00:37Z | NONE | | | | If I were to do | {"url": "https://api.github.com/repos/pydata/xarray/issues/3332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | completed | xarray 13221727 | issue |
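Two of the issues above (#3291 and #3320) hit the same underlying behavior: a tz-aware pandas DatetimeIndex gets coerced to int64 nanoseconds on the way into xarray / netCDF. A minimal workaround sketch, assuming only pandas, is to normalize the index to UTC and strip the timezone before the conversion; the column name `value` and the dates here are illustrative only, not taken from the issues.

```python
# Workaround sketch for the tz-aware index issues (#3291 / #3320):
# strip the timezone from the DatetimeIndex before handing the frame
# to xarray, so it stays datetime64[ns] instead of int64 nanoseconds.
import pandas as pd

df = pd.DataFrame(
    {"value": [1.0, 2.0, 3.0]},
    index=pd.date_range("2019-09-01", periods=3, freq="D", tz="US/Eastern"),
)

# Normalize to UTC, then drop the tz info; the instants are preserved,
# only the timezone annotation is lost.
df.index = df.index.tz_convert("UTC").tz_localize(None)

print(df.index.dtype)  # datetime64[ns]
```

The trade-off is that the timezone has to be tracked out of band (e.g. as an attribute), since netCDF's CF time conventions have no native tz-aware type.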
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
```
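The page's listing ("rows where repo = 13221727 and user = 923438 sorted by updated_at descending") maps directly onto this schema. A minimal sketch using Python's stdlib sqlite3 module, with a simplified subset of the columns and two sample rows taken from the table above:

```python
# Sketch of the query behind this page, against a reduced issues table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE issues (
        id INTEGER PRIMARY KEY, number INTEGER, title TEXT,
        user INTEGER, state TEXT, updated_at TEXT, repo INTEGER
    )"""
)
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?)",
    [
        (482543307, 3232, "Use pytorch as backend for xarrays",
         923438, "open", "2022-07-20T18:01:56Z", 13221727),
        (495382528, 3320, "Error saving xr.Dataset with timezone aware "
         "time index to netcdf format.",
         923438, "open", "2022-01-17T21:23:02Z", 13221727),
    ],
)

# ISO-8601 timestamps stored as TEXT sort correctly with plain ORDER BY.
rows = conn.execute(
    "SELECT number, title FROM issues "
    "WHERE repo = 13221727 AND user = 923438 "
    "ORDER BY updated_at DESC"
).fetchall()
print(rows[0][0])  # 3232
```

Because the timestamps are ISO-8601 TEXT, lexicographic ordering equals chronological ordering, which is why the schema can get away without a dedicated datetime type.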

