
issues


2 rows where user = 4605410 sorted by updated_at descending


Facets: type = issue (2) · state = open (2) · repo = xarray (2)
Issue #3445: Merge fails when sparse Dataset has overlapping dimension values
id: 512205079 · node_id: MDU6SXNzdWU1MTIyMDUwNzk= · user: k-a-mendoza (4605410) · state: open · comments: 3 · created_at: 2019-10-24T22:08:12Z · updated_at: 2021-07-08T17:43:57Z · author_association: NONE

Sparse arrays used in a merge operation seem to fail under certain coordinate settings. For example, this dense version works perfectly:

```python
import xarray as xr
import numpy as np

# Dense numpy data
data = np.random.uniform(-1, 1, (1, 1, 100))
time = np.linspace(0, 1, num=100)

data_array1 = xr.DataArray(data, name='default',
                           dims=['source', 'receiver', 'time'],
                           coords={'source': ['X.1'], 'receiver': ['X.2'], 'time': time}).to_dataset()
data_array2 = xr.DataArray(data, name='default',
                           dims=['source', 'receiver', 'time'],
                           coords={'source': ['X.2'], 'receiver': ['X.1'], 'time': time}).to_dataset()

dataset1 = xr.merge([data_array1, data_array2])
```

But this raises `IndexError: Only indices with at most one iterable index are supported` from the sparse package:

```python
import xarray as xr
import numpy as np
import sparse

data = sparse.COO.from_numpy(np.random.uniform(-1, 1, (1, 1, 100)))
time = np.linspace(0, 1, num=100)

data_array1 = xr.DataArray(data, name='default',
                           dims=['source', 'receiver', 'time'],
                           coords={'source': ['X.1'], 'receiver': ['X.2'], 'time': time}).to_dataset()
data_array2 = xr.DataArray(data, name='default',
                           dims=['source', 'receiver', 'time'],
                           coords={'source': ['X.2'], 'receiver': ['X.1'], 'time': time}).to_dataset()

dataset1 = xr.merge([data_array1, data_array2])
```

I have noticed this occurs when the merge would need to fill newly created dimension combinations with NaN values.
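The NaN-fill behavior can be seen directly in the dense variant, where the merge succeeds: the union of coordinates yields the full 2×2 source/receiver grid, and combinations that were never written (e.g. source='X.1', receiver='X.1') come back entirely NaN. A minimal sketch (dense arrays only; the `make_ds` helper is an invented convenience mirroring the snippets above):

```python
import numpy as np
import xarray as xr

time = np.linspace(0, 1, num=100)

def make_ds(src, rcv):
    """Build a 1x1x100 Dataset for a single source/receiver pair."""
    data = np.random.uniform(-1, 1, (1, 1, 100))
    return xr.DataArray(
        data, name='default',
        dims=['source', 'receiver', 'time'],
        coords={'source': [src], 'receiver': [rcv], 'time': time},
    ).to_dataset()

merged = xr.merge([make_ds('X.1', 'X.2'), make_ds('X.2', 'X.1')])

# The union of coords gives a 2x2 source/receiver grid; cells that had
# no data in either input are NaN-filled by the outer join.
all_nan = bool(np.isnan(merged['default'].sel(source='X.1', receiver='X.1')).all())
```

It is exactly this NaN fill step that a sparse-backed array would have to perform during the merge.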

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3445/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: issue
Issue #3035: Feature Request: Additional parallel IO format support: ASDF, ph5, or similar
id: 458236359 · node_id: MDU6SXNzdWU0NTgyMzYzNTk= · user: k-a-mendoza (4605410) · state: open · comments: 1 · created_at: 2019-06-19T21:32:04Z · updated_at: 2020-04-07T06:42:15Z · author_association: NONE

Problem description

Currently, xarray supports reading and writing a variety of data formats, and geoscience data in particular is a stated use-case domain. However, in the documentation, "geoscience" mostly seems to refer to hydrologic and atmospheric data.

It would be very useful to more domains of geoscience if xarray also supported reads and writes of formats regularly encountered in geophysics, such as ph5, ASDF, or the like. Projects like obsplus already deliver some xarray -> seismic format -> xarray functionality, but have yet to venture into the parallel read/write operations that make xarray so attractive.

I am not sure what the overhead would be in adapting xarray to use these existing packages, or in creating common interfaces for use with these packages.
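For context (not part of the original request), xarray's backend plugin API is one possible integration path for such formats. A minimal sketch, assuming xarray's `BackendEntrypoint` interface (available in xarray >= 0.18); the `SeismicBackendEntrypoint` class and the `.ph5` handling here are entirely hypothetical, and a real backend would parse the file rather than fabricate data:

```python
import numpy as np
import xarray as xr
from xarray.backends import BackendEntrypoint

class SeismicBackendEntrypoint(BackendEntrypoint):
    """Hypothetical backend returning a (source, receiver, time) cube."""

    def open_dataset(self, filename_or_obj, *, drop_variables=None):
        # A real implementation would read an ASDF/ph5 file here;
        # this sketch builds a tiny in-memory Dataset instead.
        time = np.linspace(0, 1, num=100)
        data = np.zeros((1, 1, 100))
        return xr.Dataset(
            {'default': (['source', 'receiver', 'time'], data)},
            coords={'source': ['X.1'], 'receiver': ['X.2'], 'time': time},
        )

    def guess_can_open(self, filename_or_obj):
        # Claim files with a geophysics-style extension (hypothetical).
        return str(filename_or_obj).endswith('.ph5')

ds = SeismicBackendEntrypoint().open_dataset('example.ph5')
```

Registering such a class as an `xarray.backends` entry point would let users call `xr.open_dataset(path, engine=...)` on the new format.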

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3035/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: issue


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
Powered by Datasette · Queries took 25.569ms · About: xarray-datasette