issues


22 rows where user = 8699967 sorted by updated_at descending


type 2

  • issue 15
  • pull 7

state 2

  • closed 19
  • open 3

repo 1

  • xarray 22
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
894788930 MDU6SXNzdWU4OTQ3ODg5MzA= 5336 ENH: Add keep_encoding to global options snowman2 8699967 closed 0     5 2021-05-18T21:12:50Z 2023-10-14T03:48:59Z 2023-10-14T03:48:58Z CONTRIBUTOR      

Is your feature request related to a problem? Please describe.

https://corteva.github.io/rioxarray/stable/getting_started/manage_information_loss.html

Original data:

```python
rds.green.attrs, rds.green.encoding
```

({'nodata': 0, 'units': ('DN', 'DN')}, {'dtype': 'float64', 'grid_mapping': 'spatial_ref', 'scale_factor': 1.0, 'add_offset': 0.0, '_FillValue': nan, 'source': 'netcdf:../../test/test_data/input/PLANET_SCOPE_3D.nc:green'})

```python
with xarray.set_options(keep_attrs=True):
    new_ds = rds.green + rds.green
new_ds.attrs, new_ds.encoding
```

({'nodata': 0, 'units': ('DN', 'DN')}, {})

Describe the solution you'd like

```python
with xarray.set_options(keep_attrs=True, keep_encoding=True):
    new_ds = rds.green + rds.green
new_ds.attrs, new_ds.encoding
```

({'nodata': 0, 'units': ('DN', 'DN')}, {'dtype': 'float64', 'grid_mapping': 'spatial_ref', 'scale_factor': 1.0, 'add_offset': 0.0, '_FillValue': nan, 'source': 'netcdf:../../test/test_data/input/PLANET_SCOPE_3D.nc:green'})
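Until something like keep_encoding exists, a minimal workaround sketch (my own illustration, not part of the proposal) is to copy the encoding across by hand after the operation:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.zeros(3), dims="x", attrs={"units": "DN"})
da.encoding = {"dtype": "float64", "_FillValue": np.nan}

with xr.set_options(keep_attrs=True):
    result = da + da

# arithmetic keeps attrs (with keep_attrs=True) but drops encoding,
# so carry the encoding over manually
result.encoding.update(da.encoding)
print(result.attrs, result.encoding)
```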

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5336/reactions",
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1630746106 I_kwDOAMm_X85hMzX6 7645 encode_cf_variable triggers AttributeError: 'DataArray' object has no attribute '_data' snowman2 8699967 closed 0     3 2023-03-19T02:23:44Z 2023-03-20T14:21:07Z 2023-03-20T14:21:06Z CONTRIBUTOR      

What happened?

AttributeError: 'DataArray' object has no attribute '_data'

What did you expect to happen?

No error.

Minimal Complete Verifiable Example

```Python

https://github.com/corteva/rioxarray/blob/21284f67db536d9c104aa872ab0bbc261259e59e/test/integration/test_integration_rioxarray.py#L1818-L1844

import numpy
import xarray
import rioxarray
from affine import Affine  # required for Affine.from_gdal below

test_da = xarray.DataArray(
    numpy.zeros((5, 5)),
    dims=("y", "x"),
    coords={"y": numpy.arange(1, 6), "x": numpy.arange(2, 7)},
)
test_da.values[1, 1] = -1.1
test_nd = test_da.rio.write_nodata(-1.1)
test_nd.rio.write_transform(
    Affine.from_gdal(425047, 3.0, 0.0, 4615780, 0.0, -3.0), inplace=True
)
test_nd.rio.write_crs("EPSG:4326", inplace=True)
test_nd.rio.to_raster("dtype.tif", dtype=numpy.uint8)
```
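The traceback below fails on var._data, which xarray.Variable defines but DataArray does not, so the error appears to come from a DataArray being passed where a Variable is expected. A minimal sketch of that reading (my own reconstruction, not a confirmed fix):

```python
import numpy as np
import xarray as xr
from xarray.conventions import encode_cf_variable

da = xr.DataArray(np.zeros((2, 2)), dims=("y", "x"))

# encode_cf_variable(da)                   # raises AttributeError on affected versions
encoded = encode_cf_variable(da.variable)  # Variable has ._data, so this path works
print(encoded)
```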

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [X] New issue — a search of GitHub Issues suggests this is not a duplicate.

Relevant log output

```Python
rioxarray/raster_writer.py:288: in to_raster
    data = encode_cf_variable(out_data).values.astype(numpy_dtype)
/usr/share/miniconda/envs/test/lib/python3.9/site-packages/xarray/conventions.py:296: in encode_cf_variable
    var = coder.encode(var, name=name)
/usr/share/miniconda/envs/test/lib/python3.9/site-packages/xarray/coding/times.py:690: in encode
    ) or contains_cftime_datetimes(variable):
/usr/share/miniconda/envs/test/lib/python3.9/site-packages/xarray/core/common.py:1818: in contains_cftime_datetimes
    return _contains_cftime_datetimes(var._data)

self = <xarray.DataArray (y: 5, x: 5)>
array([[ 0. ,  0. ,  0. ,  0. ,  0. ],
       [ 0. , -1.1,  0. ,  0. ,  0. ],
       [...
    (y) int64 1 2 3 4 5
  * x            (x) int64 2 3 4 5 6
    spatial_ref  int64 0
Attributes:
    _FillValue:  -1.1
name = '_data'

def __getattr__(self, name: str) -> Any:
    if name not in {"__dict__", "__setstate__"}:
        # this avoids an infinite loop when pickle looks for the
        # __setstate__ attribute before the xarray object is initialized
        for source in self._attr_sources:
            with suppress(KeyError):
                return source[name]
  raise AttributeError(
        f"{type(self).__name__!r} object has no attribute {name!r}"
    )

E AttributeError: 'DataArray' object has no attribute '_data'

/usr/share/miniconda/envs/test/lib/python3.9/site-packages/xarray/core/common.py:276: AttributeError
```

Anything else we need to know?

No response

Environment

This uses the latest (pre-release) versions of xarray and numpy.

https://github.com/corteva/rioxarray/actions/runs/4458288974/jobs/7829970587

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7645/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1227144046 I_kwDOAMm_X85JJLtu 6577 ENH: list_engines function snowman2 8699967 open 0     4 2022-05-05T20:27:14Z 2023-01-13T17:30:14Z   CONTRIBUTOR      

Is your feature request related to a problem?

It can be difficult to know what engines are available and where they come from. This could help with that.

Describe the solution you'd like

```python
import xarray

xarray.list_engines()
```

Output:

```
Name     | Description                     | Documentation
---------|---------------------------------|------------------------------------
rasterio | Open geospatial files (GeoTiff) | https://corteva.github.io/rioxarray
```
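For context, a rough sketch of how the registered backends can already be discovered (my own illustration, assuming Python 3.10+ and the "xarray.backends" entry-point group that backend packages register under; the proposed xarray.list_engines() would presumably wrap something like this):

```python
from importlib.metadata import entry_points

# each xarray backend plugin registers an entry point in this group
for ep in entry_points(group="xarray.backends"):
    print(f"{ep.name:12s} -> {ep.value}")
```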

Describe alternatives you've considered

No response

Additional context

No response

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6577/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
894780870 MDExOlB1bGxSZXF1ZXN0NjQ3MDk0NjA5 5335 ENH: Preserve attrs in to_dataframe() snowman2 8699967 open 0     1 2021-05-18T21:00:20Z 2022-06-09T14:50:16Z   CONTRIBUTOR   0 pydata/xarray/pulls/5335
  • [x] Closes #5327
  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5335/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1062681040 PR_kwDOAMm_X84u-muV 6024 REF: Make mypy manual stage with pre-commit snowman2 8699967 closed 0     11 2021-11-24T17:15:41Z 2022-02-09T15:07:07Z 2022-02-09T04:28:55Z CONTRIBUTOR   0 pydata/xarray/pulls/6024

With this setup, the mypy hook is only added to the run if you explicitly request it:

pre-commit run --all-files --hook-stage manual

cc: @dcherian

https://pre-commit.com/#confining-hooks-to-run-at-certain-stages

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6024/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1093737479 I_kwDOAMm_X85BMRwH 6138 DEP: Drop Python 3.7 Support snowman2 8699967 closed 0     1 2022-01-04T20:42:21Z 2022-01-11T21:22:46Z 2022-01-11T21:22:46Z CONTRIBUTOR      

Community references:

  • NEP-29 says numpy dropped support for Python 3.7 in December 2021.
  • pandas (https://github.com/pandas-dev/pandas/issues/41678)
  • opendatacube is Python 3.8+ already (ref)
  • django (ref)
  • pyproj (https://github.com/pyproj4/pyproj/issues/930)
  • rioxarray (https://github.com/corteva/rioxarray/issues/451)

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6138/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1003587347 PR_kwDOAMm_X84sGYQv 5808 DEP: Deprecate rasterio backend snowman2 8699967 closed 0     8 2021-09-22T00:59:15Z 2021-10-04T07:36:22Z 2021-10-02T20:38:36Z CONTRIBUTOR   0 pydata/xarray/pulls/5808
  • [x] Closes #4697
  • [x] Passes pre-commit run --all-files
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5808/reactions",
    "total_count": 3,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1003294855 PR_kwDOAMm_X84sFXBe 5805 DOC: Use pyproj to generate 2D latlon & fix cartopy UTM CRS snowman2 8699967 closed 0     2 2021-09-21T21:14:12Z 2021-09-23T19:41:05Z 2021-09-23T18:30:55Z CONTRIBUTOR   0 pydata/xarray/pulls/5805
  • [x] Passes pre-commit run --all-files

Using pyproj should be more efficient & simpler to use.

With the latest cartopy, I get this error with zone 18N:

CRSError: Invalid projection: +proj=utm +ellps=WGS84 +units=m +zone=18N +no_defs +type=crs: (Internal Proj Error: proj_create: Error 1027 (Invalid value for an argument): utm: Invalid value for zone)

Removing the N fixes the issue.
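A minimal sketch of the pyproj approach (my own illustration with made-up UTM zone 18N grid coordinates): Transformer.from_crs converts a projected x/y grid into 2D lon/lat arrays in one call.

```python
import numpy as np
from pyproj import Transformer

# made-up UTM zone 18N (EPSG:32618) grid coordinates
x = np.linspace(500_000, 510_000, 5)
y = np.linspace(4_600_000, 4_610_000, 5)
xx, yy = np.meshgrid(x, y)

transformer = Transformer.from_crs("EPSG:32618", "EPSG:4326", always_xy=True)
lon, lat = transformer.transform(xx, yy)  # 2D arrays, same shape as the grid
print(lon.shape, lat.shape)
```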

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5805/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
893714363 MDU6SXNzdWU4OTM3MTQzNjM= 5327 ENH: Preserve attrs when converting to pandas dataframe snowman2 8699967 open 0     0 2021-05-17T21:06:26Z 2021-05-17T21:06:26Z   CONTRIBUTOR      

Is your feature request related to a problem? Please describe.

```python
import xarray

xds = xarray.DataArray([1], name="a", dims="a", attrs={"long_name": "Description about data"})
xds.attrs
# Output: {'long_name': 'Description about data'}
xds.to_dataframe().a.attrs
# Output: {}
```

Describe the solution you'd like

It would be nice if the attributes of the DataArray were preserved in each pandas.Series and the attributes of each Dataset were preserved on the pandas.DataFrame.

Additional context

One thing to be wary of is that the pandas documentation says attrs is experimental.
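In the meantime, a hand-rolled sketch of the requested behaviour (my own illustration), copying the attributes onto the experimental pandas .attrs after to_dataframe():

```python
import xarray

xds = xarray.DataArray([1], name="a", dims="a", attrs={"long_name": "Description about data"})

df = xds.to_dataframe()
series = df["a"]
series.attrs = dict(xds.attrs)  # pandas Series/DataFrame .attrs is experimental
print(series.attrs)
```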

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5327/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
561132832 MDExOlB1bGxSZXF1ZXN0MzcxOTk5MTQx 3757 DOC: Add rioxarray and other external examples snowman2 8699967 closed 0     4 2020-02-06T16:39:43Z 2020-03-05T13:36:25Z 2020-03-05T12:56:12Z CONTRIBUTOR   0 pydata/xarray/pulls/3757
  • [x] Addresses https://github.com/pydata/xarray/issues/2723#issuecomment-582961469
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3757/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
404088563 MDU6SXNzdWU0MDQwODg1NjM= 2723 Implementation of CRS storage in rasterio with PROJ.4 & WKT snowman2 8699967 closed 0     10 2019-01-29T02:01:20Z 2020-02-06T16:20:55Z 2020-02-06T14:58:07Z CONTRIBUTOR      

Continuation of the discussion from #2722 to move onto implementation details.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2723/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
512584839 MDU6SXNzdWU1MTI1ODQ4Mzk= 3449 open_rasterio: Need to move Affine object to left side snowman2 8699967 closed 0     2 2019-10-25T15:37:45Z 2019-10-25T15:47:34Z 2019-10-25T15:40:09Z CONTRIBUTOR      

MCVE Code Sample

```python
xarray.open_rasterio(...)
```

Problem Description

Need to move the transform to the left side.

Warning:

DeprecationWarning: Right multiplication will be prohibited in version 3.0
    x, _ = (np.arange(nx) + 0.5, np.zeros(nx) + 0.5) * transform

Should be:

x, _ = transform * (np.arange(nx) + 0.5, np.zeros(nx) + 0.5)
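A small self-contained sketch of the corrected form (my own illustration with made-up grid sizes and GeoTIFF-style transform values):

```python
import numpy as np
from affine import Affine

nx, ny = 4, 3  # made-up grid size
transform = Affine.from_gdal(425047, 3.0, 0.0, 4615780, 0.0, -3.0)

# transform on the left: maps pixel-centre offsets to projected coordinates
x, _ = transform * (np.arange(nx) + 0.5, np.zeros(nx) + 0.5)
_, y = transform * (np.zeros(ny) + 0.5, np.arange(ny) + 0.5)
print(x, y)
```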

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3449/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
504043555 MDExOlB1bGxSZXF1ZXN0MzI1NzcxMzcw 3383 added geocube and rioxarray to related projects snowman2 8699967 closed 0     1 2019-10-08T13:26:52Z 2019-10-08T14:36:53Z 2019-10-08T14:36:53Z CONTRIBUTOR   0 pydata/xarray/pulls/3383
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3383/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
403971686 MDU6SXNzdWU0MDM5NzE2ODY= 2722 [discussion] Use WKT or PROJ.4 string for CRS representation? snowman2 8699967 closed 0     4 2019-01-28T19:31:01Z 2019-01-28T22:45:52Z 2019-01-28T22:45:52Z CONTRIBUTOR      

Background

PROJ.4 is a popular format for storing projection strings. It has a nice and simple interface that is easy to use. However, converting to the PROJ.4 format from other formats can cause loss of useful projection information (PROJ.4 contains a warning about it here and other users have had issues with losing information in the conversion, an example is here). A lossless, though more complex, alternative is the WKT string. The WKT string is also going through improvements and will include the WKT2 (2015 and 2018) versions with useful information.

The rasterio project has already made the switch to use the WKT string.

Discussion

The issue is meant to be a place to discuss whether the xarray project should use the WKT or the PROJ.4 format moving forward for storing CRS strings.

@fmaussion @sgillies @cratcliff @djhoese @rouault @kbevers

Feel free to include others that you think would provide valuable information to this discussion.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2722/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
276131376 MDU6SXNzdWUyNzYxMzEzNzY= 1736 Rasterio missing _FillValue in DataArray snowman2 8699967 closed 0     5 2017-11-22T16:31:47Z 2018-12-17T16:19:45Z 2018-01-19T08:54:58Z CONTRIBUTOR      

Problem description

When xarray opens a dataset, it stores the _FillValue in the encoding and replaces those values with NaN. However, when using open_rasterio, this behavior does not occur and the _FillValue is missing. The DataArray only has the transform, crs, is_tiled, and res attributes, and the encoding is empty.

Expected Output

It would be nice to have the _FillValue as an attribute or in the encoding.
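A rough workaround sketch (my own illustration, assuming the nodata value is known, here -9999): mask it to NaN and record it in the encoding, mimicking what the netCDF backend does with _FillValue.

```python
import numpy as np
import xarray as xr

nodata = -9999.0
da = xr.DataArray(np.array([[1.0, nodata], [2.0, 3.0]]), dims=("y", "x"))

da = da.where(da != nodata)          # replace nodata values with NaN
da.encoding["_FillValue"] = nodata   # keep the fill value for round-tripping
print(da.values, da.encoding)
```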

Output of xr.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.6.3.final.0
python-bits: 64
OS: Linux
OS-release: 4.10.0-40-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
xarray: 0.10.0rc2
pandas: 0.21.0
numpy: 1.13.2
scipy: 1.0.0
netCDF4: 1.3.1
h5netcdf: 0.5.0
Nio: dev_20170921-05806a2
bottleneck: 1.2.1
cyordereddict: None
dask: 0.15.4
matplotlib: 2.1.0
cartopy: None
seaborn: None
setuptools: 36.6.0
pip: 9.0.1
conda: None
pytest: 3.2.5
IPython: 6.2.1
sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1736/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
344058811 MDU6SXNzdWUzNDQwNTg4MTE= 2308 Proposal: Update rasterio backend to store CRS/nodata information in standard locations. snowman2 8699967 closed 0     10 2018-07-24T14:18:58Z 2018-10-09T12:46:54Z 2018-10-09T12:46:54Z CONTRIBUTOR      

Problem description

Currently, when using xarray.open_rasterio, the crs and nodata information is stored in the DataArray attributes. It would be nice to have them stored in standard locations so that other tools (rasterio, QGIS, GDAL) can find the information properly after dumping to a file with to_netcdf().

Proposed solutions

The nodata should be loaded into _FillValue

I propose that the CRS information be stored using the CF spatial_ref convention, as it is supported by the main open source GIS tools. To do so, you add a crs coordinate to the dataset/dataarray. Then you add a spatial_ref attribute to that crs coordinate, containing the CRS as a WKT string. Finally, you add a grid_mapping attribute, whose value is the coordinate name crs, to all associated variables.

Here is an example of how it would look on a dataset:

```
<xarray.Dataset>
Dimensions:  (x: 65, y: 31)
Coordinates:
  * x        (x) float64 ...
  * y        (y) float64 ...
    time     datetime64[ns] ...
    crs      int64 ...
Data variables:
    ndvi     (y, x) float64 ...
Attributes:
```

Here is how the crs or spatial_ref coordinate variable would look:

```
<xarray.DataArray 'crs' ()>
array(0)
Coordinates:
    time     datetime64[ns] ...
    crs      int64 0
Attributes:
    spatial_ref: PROJCS["UTM Zone 15, Northern Hemisphere",GEOGCS["WGS 84",D...
```

And here is how it would look on the variables:

```
<xarray.DataArray 'ndvi' (y: 31, x: 65)>
array([[ ...]])
Coordinates:
  * x        (x) float64 ...
  * y        (y) float64 ...
    time     datetime64[ns] ...
    crs      int64 0
Attributes:
    grid_mapping: crs
```

More information about this is in https://github.com/pydata/xarray/issues/2288.
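A minimal sketch of how the proposed layout could be built with plain xarray (my own illustration; the WKT string is a truncated stand-in, not a real CRS definition):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"ndvi": (("y", "x"), np.zeros((3, 4)))},
    coords={"y": np.arange(3), "x": np.arange(4)},
)

# scalar 'crs' coordinate carrying the CRS as WKT in the 'spatial_ref' attribute
ds = ds.assign_coords(crs=0)
ds["crs"].attrs["spatial_ref"] = 'PROJCS["UTM Zone 15, Northern Hemisphere", ...]'

# point each data variable at the grid mapping and store nodata in _FillValue
ds["ndvi"].attrs["grid_mapping"] = "crs"
ds["ndvi"].attrs["_FillValue"] = -9999.0
print(ds)
```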

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2308/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
276246800 MDExOlB1bGxSZXF1ZXN0MTU0Mjg2MDkx 1740 rasterio backend: added nodatavals attribute snowman2 8699967 closed 0     3 2017-11-23T01:36:07Z 2018-01-19T08:54:58Z 2018-01-19T08:54:58Z CONTRIBUTOR   0 pydata/xarray/pulls/1740

Connected with issue #1736

  • [x] Closes #1736
  • [x] Tests added / passed
  • [x] Passes git diff upstream/master **/*py | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1740/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
241714748 MDU6SXNzdWUyNDE3MTQ3NDg= 1474 Selecting time with different variable and dimensions names snowman2 8699967 closed 0     10 2017-07-10T13:37:31Z 2017-07-10T16:53:25Z 2017-07-10T16:25:20Z CONTRIBUTOR      

I am having trouble selecting time from this DataArray (notice that the dimension is 'Time' and the variable/coordinate is 'Times'):

```
<xarray.DataArray 'RAINC' (Time: 16, south_north: 5, west_east: 5)>
dask.array<getitem, shape=(16, 5, 5), dtype=float32, chunksize=(1, 5, 5)>
Coordinates:
    XLAT     (south_north, west_east) float32 40.3474 40.3502 40.3529 ...
    XLONG    (south_north, west_east) float32 -111.749 -111.679 -111.608 ...
    Times    (Time) datetime64[ns] 2016-08-23T22:00:00 2016-08-23T23:00:00 ...
Dimensions without coordinates: Time, south_north, west_east
Attributes:
    FieldType:    104
    MemoryOrder:  XY
    description:  ACCUMULATED TOTAL CUMULUS PRECIPITATION
    units:        mm
    stagger:
    coordinates:  XLONG XLAT XTIME
```

I have tried several different methods:

```python
data = data[{self.lsm_time_dim: [pd.to_datetime(time_step)]}]
data = data[{self.lsm_time_dim: pd.to_datetime(time_step)}]
data = data[{self.lsm_time_dim: str(time_step)}]
```

And they all end with a similar error:

```
../gsshapy/grid/grid_to_gssha.py:634: in _load_lsm_data
    data = data[{self.lsm_time_dim: [pd.to_datetime(time_step)]}]
../../../tethys/miniconda/envs/gssha/lib/python3.6/site-packages/xarray/core/dataarray.py:472: in getitem
    return self.isel(self._item_key_to_dict(key))
../../../tethys/miniconda/envs/gssha/lib/python3.6/site-packages/xarray/core/dataarray.py:679: in isel
    ds = self._to_temp_dataset().isel(drop=drop, indexers)
../../../tethys/miniconda/envs/gssha/lib/python3.6/site-packages/xarray/core/dataset.py:1143: in isel
    new_var = var.isel(**var_indexers)
../../../tethys/miniconda/envs/gssha/lib/python3.6/site-packages/xarray/core/variable.py:570: in isel
    return self[tuple(key)]
../../../tethys/miniconda/envs/gssha/lib/python3.6/site-packages/xarray/core/variable.py:400: in getitem
    values = self._indexable_data[key]
../../../tethys/miniconda/envs/gssha/lib/python3.6/site-packages/xarray/core/indexing.py:545: in getitem
    result = self.array[key]

self = DatetimeIndex(['2016-08-23 22:00:00', '2016-08-23 23:00:00',
               '2016-08-24 00:00:00', '2016-08-24 01:00:0...:00:00',
               '2016-08-24 12:00:00', '2016-08-24 13:00:00'],
              dtype='datetime64[ns]', freq=None)
key = array([Timestamp('2016-08-23 22:00:00')], dtype=object)

def __getitem__(self, key):
    """
        This getitem defers to the underlying array, which by-definition can
        only handle list-likes, slices, and integer scalars
        """

    is_int = is_integer(key)
    if is_scalar(key) and not is_int:
        raise ValueError

    getitem = self._data.__getitem__
    if is_int:
        val = getitem(key)
        return self._box_func(val)
    else:
        if com.is_bool_indexer(key):
            key = np.asarray(key)
            if key.all():
                key = slice(0, None, None)
            else:
                key = lib.maybe_booleans_to_slice(key.view(np.uint8))

        attribs = self._get_attributes_dict()

        is_period = isinstance(self, ABCPeriodIndex)
        if is_period:
            freq = self.freq
        else:
            freq = None
            if isinstance(key, slice):
                if self.freq is not None and key.step is not None:
                    freq = key.step * self.freq
                else:
                    freq = self.freq

        attribs['freq'] = freq
      result = getitem(key)

E IndexError: arrays used as indices must be of integer (or boolean) type
```
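One way around the dimension/coordinate name mismatch (my own sketch with a smaller made-up array, not necessarily the resolution recorded in this issue) is to make 'Times' the indexing dimension with swap_dims and then use label-based .sel:

```python
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range("2016-08-23T22:00", periods=4, freq="h")
da = xr.DataArray(
    np.zeros((4, 2, 2)),
    dims=("Time", "south_north", "west_east"),
    coords={"Times": ("Time", times)},
    name="RAINC",
)

# make the 'Times' coordinate the dimension, then select by label
selected = da.swap_dims({"Time": "Times"}).sel(Times="2016-08-24T00:00")
print(selected)
```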

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1474/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
233744905 MDU6SXNzdWUyMzM3NDQ5MDU= 1442 Pangaea snowman2 8699967 closed 0     2 2017-06-06T00:09:10Z 2017-06-08T09:51:04Z 2017-06-08T09:51:04Z CONTRIBUTOR      

Just wanted to share an extension for a subset of land surface and weather models: https://github.com/snowman2/pangaea

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1442/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
225757620 MDU6SXNzdWUyMjU3NTc2MjA= 1395 Time conversion overflow with minutes snowman2 8699967 closed 0     2 2017-05-02T16:59:34Z 2017-05-02T19:59:48Z 2017-05-02T19:59:48Z CONTRIBUTOR      

File "/var/lib/miniconda/envs/gssha/lib/python2.7/site-packages/xarray/backends/api.py", line 515, in open_mfdataset **kwargs) for p in paths] File "/var/lib/miniconda/envs/gssha/lib/python2.7/site-packages/xarray/backends/api.py", line 310, in open_dataset return maybe_decode_store(store, lock) File "/var/lib/miniconda/envs/gssha/lib/python2.7/site-packages/xarray/backends/api.py", line 226, in maybe_decode_store drop_variables=drop_variables) File "/var/lib/miniconda/envs/gssha/lib/python2.7/site-packages/xarray/conventions.py", line 951, in decode_cf decode_coords, drop_variables=drop_variables) File "/var/lib/miniconda/envs/gssha/lib/python2.7/site-packages/xarray/conventions.py", line 884, in decode_cf_variables decode_times=decode_times) File "/var/lib/miniconda/envs/gssha/lib/python2.7/site-packages/xarray/conventions.py", line 821, in decode_cf_variable data = DecodedCFDatetimeArray(data, units, calendar) File "/var/lib/miniconda/envs/gssha/lib/python2.7/site-packages/xarray/conventions.py", line 398, in __init__ raise ValueError(msg) ValueError: unable to decode time units u'minutes since 2011-03-05 03:00:00' with the default calendar. Try opening your dataset with decode_times=False. Full traceback: Traceback (most recent call last): File "/var/lib/miniconda/envs/gssha/lib/python2.7/site-packages/xarray/conventions.py", line 389, in __init__ result = decode_cf_datetime(example_value, units, calendar) File "/var/lib/miniconda/envs/gssha/lib/python2.7/site-packages/xarray/conventions.py", line 157, in decode_cf_datetime dates = _decode_datetime_with_netcdf4(flat_num_dates, units, calendar) File "/var/lib/miniconda/envs/gssha/lib/python2.7/site-packages/xarray/conventions.py", line 99, in _decode_datetime_with_netcdf4 dates = np.asarray(nc4.num2date(num_dates, units, calendar)) File "netCDF4/_netCDF4.pyx", line 5358, in netCDF4._netCDF4.num2date (netCDF4/_netCDF4.c:66601) OverflowError: Python int too large to convert to C long

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1395/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
223891484 MDU6SXNzdWUyMjM4OTE0ODQ= 1383 open_mfdataset not finding coordinates snowman2 8699967 closed 0     3 2017-04-24T17:17:02Z 2017-04-24T19:58:12Z 2017-04-24T19:58:12Z CONTRIBUTOR      

Is there a good way to define the coordinates in the dataset if you know the variable names beforehand?

When loading in a folder of datasets with lat, lon variables using open_mfdataset, the lat, lon variables are not recognized as coordinates.

NCDUMP:

```
dimensions:
    east_west = 1201 ;
    north_south = 1001 ;
    time = 1 ;
variables:
    float lat(north_south, east_west) ;
        lat:units = "degree_north" ;
        lat:standard_name = "latitude" ;
        lat:long_name = "latitude" ;
        lat:scale_factor = 1.f ;
        lat:add_offset = 0.f ;
        lat:missing_value = -9999.f ;
        lat:_FillValue = -9999.f ;
        lat:vmin = 0.f ;
        lat:vmax = 0.f ;
    float lon(north_south, east_west) ;
        lon:units = "degree_east" ;
        lon:standard_name = "longitude" ;
        lon:long_name = "longitude" ;
        lon:scale_factor = 1.f ;
        lon:add_offset = 0.f ;
        lon:missing_value = -9999.f ;
        lon:_FillValue = -9999.f ;
        lon:vmin = 0.f ;
        lon:vmax = 0.f ;
    ...
```

XARRAY:

```
<xarray.Dataset>
Dimensions:  (RelSMC_profiles: 4, SmLiqFrac_profiles: 4, SoilMoist_profiles: 4, SoilTemp_profiles: 4, east_west: 1201, north_south: 1001, time: 240)
Coordinates:
  * time     (time) datetime64[ns] 2011-01-30 2011-01-30T03:00:00 ...
Dimensions without coordinates: RelSMC_profiles, SmLiqFrac_profiles, SoilMoist_profiles, SoilTemp_profiles, east_west, north_south
Data variables:
    lat      (time, north_south, east_west) float64 26.0 26.0 ...
    lon      (time, north_south, east_west) float64 58.0 58.01 ...
```

What is the best way to set them as coordinates in the dataset?
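A small sketch of one way to promote the variables after loading, assuming (as in the question) that the names are known beforehand; the glob path is a placeholder:

```python
import xarray as xr

ds = xr.open_mfdataset("folder/*.nc")   # placeholder path
ds = ds.set_coords(["lat", "lon"])      # promote data variables to coordinates
print(ds.coords)
```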

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1383/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
223440405 MDU6SXNzdWUyMjM0NDA0MDU= 1380 open_mfdataset and add time dimension snowman2 8699967 closed 0     3 2017-04-21T16:56:35Z 2017-04-21T19:44:00Z 2017-04-21T18:49:57Z CONTRIBUTOR      

I am working with the Grib2 format and the time is buried in the attributes of the variables:

```python
import pandas as pd
import xarray as xr

path_to_file = 'hrrr.t01z.wrfsfcf00.grib2'
with xr.open_dataset(path_to_file, engine='pynio') as xd:
    print(pd.to_datetime(xd['TMP_P0_L1_GLC0'].attrs['initial_time'], format="%m/%d/%Y (%H:%M)"))
```

I would like to take advantage of the concatenation methods described here: http://xarray.pydata.org/en/stable/io.html#id6

However, there is currently no time dimension. Is there a way to concatenate the files together while adding a new time dimension and variable? Will this use dask arrays?
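One possible approach (my own sketch, assuming every file carries its timestamp in the 'initial_time' attribute as above; the glob pattern is a placeholder): a preprocess callback that promotes the timestamp to a time dimension so open_mfdataset can concatenate along it, with the result typically dask-backed.

```python
import pandas as pd
import xarray as xr

def add_time_dim(ds):
    # read the timestamp from the variable attributes and add it as a dimension
    t = pd.to_datetime(
        ds["TMP_P0_L1_GLC0"].attrs["initial_time"], format="%m/%d/%Y (%H:%M)"
    )
    return ds.expand_dims(time=[t])

combined = xr.open_mfdataset(
    "hrrr.t*.grib2",          # placeholder glob over the forecast files
    engine="pynio",
    preprocess=add_time_dim,
    combine="nested",
    concat_dim="time",
)
```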

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1380/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);