issues

5 rows where user = 2941720 sorted by updated_at descending


id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
247697176 MDU6SXNzdWUyNDc2OTcxNzY= 1499 Reusing coordinate doesn't show in the dimensions lewisacidic 2941720 closed 0     10 2017-08-03T12:55:35Z 2023-12-02T02:50:25Z 2023-12-02T02:50:25Z CONTRIBUTOR      

For a DataArray, when reusing a coordinate for multiple dimensions (is this expected usage?), the dimension only shows once in the repr:

```python
x = xr.IndexVariable(data=range(5), dims='x')
da = xr.DataArray(data=np.random.randn(5, 5), coords={'x': x}, dims=('x', 'x'))
da
<xarray.DataArray (x: 5)>
array([[ 0.704139,  0.135638, -0.84717 , -0.580167,  0.95755 ],
       [ 0.966196, -0.126107,  0.547461,  1.075547, -0.477495],
       [-0.507956, -0.671571,  1.271085,  0.007741, -0.37878 ],
       [-0.969021, -0.440854,  0.062914, -0.3337  , -0.775898],
       [ 0.86893 ,  0.227861,  1.831021,  0.702769,  0.868767]])
Coordinates:
  * x        (x) int64 0 1 2 3 4
```

I think it should be

```python
<xarray.DataArray (x: 5, x: 5)>
array([[ ... ]])
Coordinates:
  * x        (x) int64 0 1 2 3 4
```

Otherwise, everything appears to work exactly as I would expect.

This isn't an issue for Datasets:

```python
xr.Dataset({'da': da})
<xarray.Dataset>
Dimensions:  (x: 5)
Coordinates:
  * x        (x) int64 0 1 2 3 4
Data variables:
    da       (x, x) float64 0.08976 0.1049 -1.291 -0.4605 -0.005165 -0.3259 ...
```

Thanks!

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1499/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
230566456 MDExOlB1bGxSZXF1ZXN0MTIxODk5MjA2 1421 Adding arbitrary object serialization lewisacidic 2941720 open 0     8 2017-05-23T01:59:37Z 2022-06-09T14:50:17Z   CONTRIBUTOR   0 pydata/xarray/pulls/1421

This adds support for object serialization using the netCDF4-python backend.

Minimal working example (at least it appears to work); no tests yet.

I added an allow_object kwarg (rather than allow_pickle; there's no reason to firmly attach pickle to the API, since other backends could use something else).

The allow_object kwarg is now accepted by:

  • to_netcdf
  • AbstractDataStore (a True value raises NotImplementedError for everything but NetCDF4DataStore)
  • cf_encoder, which when True alters its behaviour to allow dtype('O') through.

NetCDF4DataStore handles this independently of the cf_encoder/decoder. The dtype support made it hard to decouple, and I think object serialization is a backend-dependent issue anyway.
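The backend dispatch described above might be sketched roughly like this (hypothetical class and method shapes for illustration, not the PR's actual code):

```python
class AbstractDataStore:
    """Sketch of the proposed behaviour: allow_object=True raises
    NotImplementedError for every backend except NetCDF4DataStore."""

    def __init__(self, allow_object=False):
        if allow_object:
            raise NotImplementedError(
                "object serialization is only supported by NetCDF4DataStore")
        self.allow_object = False


class NetCDF4DataStore(AbstractDataStore):
    def __init__(self, allow_object=False):
        # The netCDF4 backend is the one store that accepts the flag.
        self.allow_object = bool(allow_object)
```

This keeps the opt-in explicit per backend: new stores inherit the safe refusal and only override it once they actually implement object serialization.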

There's a lot of potential for refactoring; I just pushed this to get opinions on whether it's a reasonable approach. I'm relatively new to open source, so I would appreciate any constructive feedback/criticism!

  • [ ] Closes #xxxx
  • [ ] Tests added / passed
  • [ ] Passes git diff upstream/master | flake8 --diff
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API

^ these will come later!

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1421/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
247703455 MDU6SXNzdWUyNDc3MDM0NTU= 1500 Support for attributes with different dtypes when serialising to netcdf4 lewisacidic 2941720 open 0     4 2017-08-03T13:18:12Z 2020-03-17T14:18:39Z   CONTRIBUTOR      

At the moment, bools and dates aren't supported as attributes when serializing to netCDF4:

```python
da = xr.DataArray(range(5), attrs={'test': True})
da
<xarray.DataArray (dim_0: 5)>
array([0, 1, 2, 3, 4])
Dimensions without coordinates: dim_0
Attributes:
    test: True

da.to_netcdf('test_bool.nc')
...
TypeError: illegal data type for attribute, must be one of dict_keys(['S1', 'i1', 'u1', 'i2', 'u2', 'i4', 'u4', 'i8', 'u8', 'f4', 'f8']), got b1

da = xr.DataArray(range(5), attrs={'test': pd.to_datetime('now')})
da
<xarray.DataArray (dim_0: 5)>
array([0, 1, 2, 3, 4])
Dimensions without coordinates: dim_0
Attributes:
    test: 2017-08-03 13:02:29

da.to_netcdf('test_dt.nc')
...
TypeError: Invalid value for attr: 2017-08-03 13:02:29 must be a number, string, ndarray or a list/tuple of numbers/strings for serialization to netCDF files
```

I assume bool attributes aren't supported by netcdf4-python, and dates are difficult (they could always just be written as strings), but this would be really nice to have if possible.

As an aside, using h5netcdf works for bools, but coerces them to int64.
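Until this is supported, one workaround along the lines suggested above is to coerce the offending attribute values before calling to_netcdf: bools to ints, dates to ISO 8601 strings. A minimal sketch (sanitize_attrs is a hypothetical helper, not part of xarray):

```python
import datetime


def sanitize_attrs(attrs):
    """Coerce attribute values netCDF4 rejects into supported types:
    bools become ints, dates/datetimes become ISO 8601 strings."""
    out = {}
    for key, value in attrs.items():
        # Check bool before anything numeric: bool is a subclass of int.
        if isinstance(value, bool):
            out[key] = int(value)
        elif isinstance(value, (datetime.date, datetime.datetime)):
            out[key] = value.isoformat()
        else:
            out[key] = value
    return out
```

Applied as e.g. `da.attrs = sanitize_attrs(da.attrs)` just before `da.to_netcdf(...)`. The coercion is lossy (the bool/date types are not recorded), which is exactly why native support would be nicer.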

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1500/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
230158616 MDU6SXNzdWUyMzAxNTg2MTY= 1415 Save arbitrary Python objects to netCDF lewisacidic 2941720 open 0     5 2017-05-20T14:58:42Z 2019-04-21T05:08:03Z   CONTRIBUTOR      

I am looking to transition from pandas to xarray, and the only feature I am really missing is the ability to seamlessly save arrays of Python objects to HDF5 (or netCDF). This might be an issue for the backend netCDF libraries instead, but I thought I would post it here first to see what the opinions were about this functionality.

For context, pandas allows this by using PyTables' ObjectAtom to serialize each object with pickle, then saving it as a variable-length bytes data type. It is already possible to do this with netCDF4 by applying np.fromstring(pickle.dumps(obj), dtype=np.uint8) to each object in the array and saving the results using a uint8 VLType. Retrieval is then simply pickle.loads(arr.tostring()) for each array.

I know pickle can be a security problem, that it can cause a problem if you try to save a numerical array that accidentally has dtype=object (pandas gives a warning), and that this is probably quite slow (I think pandas pickles a list containing all the objects for speed), but it would be incredibly convenient.
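The round trip described above can be sketched with plain numpy and pickle (the VLType write itself is omitted, and np.frombuffer/arr.tobytes() stand in for the now-deprecated np.fromstring/arr.tostring()):

```python
import pickle

import numpy as np


def object_to_uint8(obj):
    # Pickle an arbitrary Python object and view the resulting bytes as a
    # uint8 array, suitable for storing in a netCDF4 uint8 VLType variable.
    return np.frombuffer(pickle.dumps(obj), dtype=np.uint8)


def uint8_to_object(arr):
    # Invert the transform: uint8 array back to bytes, then unpickle.
    return pickle.loads(arr.tobytes())


original = {'a': [1, 2, 3], 'b': 'text'}
restored = uint8_to_object(object_to_uint8(original))
```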

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1415/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
230168414 MDExOlB1bGxSZXF1ZXN0MTIxNjMxODUz 1416 Moved register_dataset_accessor examples docs to appropriate docstring lewisacidic 2941720 closed 0     1 2017-05-20T17:43:25Z 2017-05-21T20:18:10Z 2017-05-21T20:18:10Z CONTRIBUTOR   0 pydata/xarray/pulls/1416

Just noticed this when reading through the code - VERY minor.

The Examples section of the register_dataarray_accessor docstring referred to register_dataset_accessor instead, so I moved the example to register_dataset_accessor.

  • [ ] Closes #xxxx
  • [ ] Tests added / passed
  • [ ] Passes git diff upstream/master | flake8 --diff
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API

^ hopefully don't need to fill these in

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1416/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
Powered by Datasette · About: xarray-datasette