issue_comments


21 rows where author_association = "CONTRIBUTOR" and user = 8833517 sorted by updated_at descending


issue 16

  • Documentation of DataArray does not warn that inferring dimension names is deprecated 3
  • local build of docs fails with OSError: [UT_PARSE] Failed to open UDUNITS-2 XML unit database 3
  • Removed skipna argument from count, any, all [GH755] 2
  • xarray.Dataset.var - xarray.DataArray.var - does it have ddof=1 parameter? 1
  • Support from reading unformatted Fortran files 1
  • xr.DataArray.to_series returns a (mutable) view 1
  • NaN values for variables when converting from a pandas dataframe to xarray.DataSet 1
  • Clarify documentation of argmin/argmax 1
  • Make 0d-DataArray compatible for indexing. 1
  • "write to read-only" Error in xarray.open_mfdataset() when trying to write to a netcdf file 1
  • Fix DataArray.copy documentation: remove confusing mention of 'dataset' (Gh3606) 1
  • Reimplement GroupBy.argmax 1
  • Record processing steps into history attribute with context manager 1
  • Missing name in index of series created from DataArray.to_series() after reindex 1
  • Add xr.open_dataset("file.tif", engine="rasterio") to docs 1
  • rasterio backend not supporting complex_int16 type 1

user 1

  • sjvrijn · 21

author_association 1

  • CONTRIBUTOR · 21
Columns: id · html_url · issue_url · node_id · user · created_at · updated_at ▲ · author_association · body · reactions · performed_via_github_app · issue
1447776213 https://github.com/pydata/xarray/issues/5491#issuecomment-1447776213 https://api.github.com/repos/pydata/xarray/issues/5491 IC_kwDOAMm_X85WS0_V sjvrijn 8833517 2023-02-28T08:36:07Z 2023-02-28T08:36:07Z CONTRIBUTOR

Both #4697 and https://github.com/corteva/rioxarray/pull/353 have been merged, so this issue can be closed

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  rasterio backend not supporting complex_int16 type 924722006
1435670314 https://github.com/pydata/xarray/issues/5018#issuecomment-1435670314 https://api.github.com/repos/pydata/xarray/issues/5018 IC_kwDOAMm_X85Vkpcq sjvrijn 8833517 2023-02-18T13:31:40Z 2023-02-18T13:31:40Z CONTRIBUTOR

The example tests by @yhlam currently pass in xarray: 2023.2.1.dev7+g21d86450. Using git bisect, it seems like this issue was fixed as part of the explicit indexes PR #5692. I guess that means this issue can be closed?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Missing name in index of series created from DataArray.to_series() after reindex 826990294
1134360245 https://github.com/pydata/xarray/issues/4914#issuecomment-1134360245 https://api.github.com/repos/pydata/xarray/issues/4914 IC_kwDOAMm_X85DnPa1 sjvrijn 8833517 2022-05-23T08:35:38Z 2022-05-23T08:35:38Z CONTRIBUTOR

Now #4896 has been merged, can this issue be closed?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Record processing steps into history attribute with context manager 809332917
1010927962 https://github.com/pydata/xarray/issues/5386#issuecomment-1010927962 https://api.github.com/repos/pydata/xarray/issues/5386 IC_kwDOAMm_X848QYla sjvrijn 8833517 2022-01-12T11:11:16Z 2022-01-12T11:11:16Z CONTRIBUTOR

The link to corteva has been added to the rasterio user guide in https://github.com/pydata/xarray/pull/5808, so I think this issue can be closed too.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add xr.open_dataset("file.tif", engine="rasterio") to docs 903922477
1003782259 https://github.com/pydata/xarray/issues/4476#issuecomment-1003782259 https://api.github.com/repos/pydata/xarray/issues/4476 IC_kwDOAMm_X8471IBz sjvrijn 8833517 2022-01-02T22:03:22Z 2022-01-02T22:03:22Z CONTRIBUTOR

Using git bisect and @zxdawn's example, I've narrowed it down to commit bdcfab524e.

I'm guessing the exact culprit is the removal of argmin and argmax from NAN_REDUCE_METHODS in xarray/core/ops.py, because dedicated implementations were added for Variable/Dataset/DataArray.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Reimplement GroupBy.argmax 712217045
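
As context for the comment above, a minimal sketch (editor's illustration, not part of the original thread) of the dedicated DataArray.argmax the comment refers to; GroupBy.argmax itself is what the issue asks to reimplement:

```python
import numpy as np
import xarray as xr

# Small 2-D DataArray; argmax along one dimension returns the index of the
# maximum for each position along the remaining dimension.
da = xr.DataArray(np.array([[1, 5, 2], [4, 0, 3]]), dims=("x", "y"))
print(da.argmax(dim="y"))  # indices 1 and 0 along y
```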
1003778566 https://github.com/pydata/xarray/pull/3566#issuecomment-1003778566 https://api.github.com/repos/pydata/xarray/issues/3566 IC_kwDOAMm_X8471HIG sjvrijn 8833517 2022-01-02T21:31:52Z 2022-01-02T21:31:52Z CONTRIBUTOR

Apart from a trivial conflict in xarray/core/indexing.py, this PR still looks fine to merge.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Make 0d-DataArray compatible for indexing. 527553050
753304428 https://github.com/pydata/xarray/issues/2911#issuecomment-753304428 https://api.github.com/repos/pydata/xarray/issues/2911 MDEyOklzc3VlQ29tbWVudDc1MzMwNDQyOA== sjvrijn 8833517 2021-01-01T11:25:53Z 2021-01-01T11:25:53Z CONTRIBUTOR

@tomchor For small snippets including it directly into the docs seems best to me, but an explicit link to the commit/file seems fine too. I've seen links to github issues and blog posts in the docs, so linking to a commit for a larger piece of code doesn't seem out of the ordinary to me.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support from reading unformatted Fortran files 435532136
728353649 https://github.com/pydata/xarray/issues/2949#issuecomment-728353649 https://api.github.com/repos/pydata/xarray/issues/2949 MDEyOklzc3VlQ29tbWVudDcyODM1MzY0OQ== sjvrijn 8833517 2020-11-16T21:58:09Z 2020-11-16T21:58:09Z CONTRIBUTOR

Since PR #3126 seems to be closed due to performance issues, is this underlying issue still considered an issue, or should it be closed?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xr.DataArray.to_series returns a (mutable) view 442063346
686403682 https://github.com/pydata/xarray/issues/4169#issuecomment-686403682 https://api.github.com/repos/pydata/xarray/issues/4169 MDEyOklzc3VlQ29tbWVudDY4NjQwMzY4Mg== sjvrijn 8833517 2020-09-03T10:39:09Z 2020-09-03T10:39:09Z CONTRIBUTOR

@EliT1626 Can you provide a smaller, faster example including imports etc. that produces the same error? And what OS are you using? Windows 10?

I've tried to reproduce it on Linux since you mention it as a possible Windows problem, but it took a very long time to run. It finished without error after changing end_date to dt.date(2010, 1, 31), but as I have no idea what your code does, I can't be sure the date range isn't part of the problem.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  "write to read-only" Error in xarray.open_mfdataset() when trying to write to a netcdf file 643035732
670971584 https://github.com/pydata/xarray/issues/1050#issuecomment-670971584 https://api.github.com/repos/pydata/xarray/issues/1050 MDEyOklzc3VlQ29tbWVudDY3MDk3MTU4NA== sjvrijn 8833517 2020-08-08T20:39:11Z 2020-08-08T20:39:11Z CONTRIBUTOR

In core/nanops.py there are some explicit defaults of ddof=0 within xarray, but I'm not sure if those are always used or if there are also cases where var (or std) are directly passed on to numpy/bottleneck/dask.

I'm considering two different options to clarify this:

  1. Add a docstring section on the ddof parameter specifying it uses ddof=0 as default for the reduction methods that use it, i.e. var and std. Possibly just copied from numpy's var page.
  2. Refer to numpy's documentation page in the docstring of all reduction methods for further reference.

Both would require some logic in core/ops.py: either to check which reduce methods need a ddof paragraph, or to create the proper URL (which has to map min and max to np.amin and np.amax, respectively).

Is there any clear preference from anyone about this?

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray.Dataset.var - xarray.DataArray.var - does it have ddof=1 parameter? 183713222
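
For readers following the ddof discussion above, a minimal numpy sketch (editor's illustration, not from the thread) of what the ddof=0 default means for var:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
print(np.var(x, ddof=0))  # 1.25   -- population variance, the ddof=0 default discussed above
print(np.var(x, ddof=1))  # ~1.667 -- sample variance, i.e. what ddof=1 would give
```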
670967112 https://github.com/pydata/xarray/issues/3264#issuecomment-670967112 https://api.github.com/repos/pydata/xarray/issues/3264 MDEyOklzc3VlQ29tbWVudDY3MDk2NzExMg== sjvrijn 8833517 2020-08-08T19:49:25Z 2020-08-08T19:49:36Z CONTRIBUTOR

@rdrussotto Since v0.16.0 (specifically https://github.com/pydata/xarray/commit/bdcfab524ef1c852abe6dabcfabc7292f058fddc), argmin/argmax have been explicitly defined with dedicated docstrings. Is the issue you raised still a problem?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Clarify documentation of argmin/argmax 485390288
664460211 https://github.com/pydata/xarray/issues/4257#issuecomment-664460211 https://api.github.com/repos/pydata/xarray/issues/4257 MDEyOklzc3VlQ29tbWVudDY2NDQ2MDIxMQ== sjvrijn 8833517 2020-07-27T15:19:07Z 2020-07-27T15:19:07Z CONTRIBUTOR

@ocefpaf Thanks for trying anyway

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  local build of docs fails with OSError: [UT_PARSE] Failed to open UDUNITS-2 XML unit database 664458864
664456140 https://github.com/pydata/xarray/issues/4257#issuecomment-664456140 https://api.github.com/repos/pydata/xarray/issues/4257 MDEyOklzc3VlQ29tbWVudDY2NDQ1NjE0MA== sjvrijn 8833517 2020-07-27T15:12:05Z 2020-07-27T15:12:05Z CONTRIBUTOR

@ocefpaf :

```
(xarray-docs) rijnsjvan@turbine:~$ echo $0
-bash
(xarray-docs) rijnsjvan@turbine:~$ echo $UDUNITS2_XML_PATH
/home/rijnsjvan/miniconda3/envs/xarray-docs/share/udunits/udunits2.xml
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  local build of docs fails with OSError: [UT_PARSE] Failed to open UDUNITS-2 XML unit database 664458864
664434116 https://github.com/pydata/xarray/issues/4257#issuecomment-664434116 https://api.github.com/repos/pydata/xarray/issues/4257 MDEyOklzc3VlQ29tbWVudDY2NDQzNDExNg== sjvrijn 8833517 2020-07-27T14:34:53Z 2020-07-27T14:34:53Z CONTRIBUTOR

@ocefpaf @dcherian I'm guessing it's some bad package install too, but that surprises me since it's a fresh miniconda install to begin with... Haven't tested miniconda vs anaconda yet.

Just pulled from upstream and recreated the environment; the docs make error still persists.

Here is my .condarc:

```
(xarray-docs) rijnsjvan@turbine:~$ cat .condarc
channels:
  - intel
  - defaults
auto_activate_base: false
```

and conda list:

```
(xarray-docs) rijnsjvan@turbine:~$ conda list
# packages in environment at /home/rijnsjvan/miniconda3/envs/xarray-docs:
#
# Name                    Version                   Build  Channel
_libgcc_mutex 0.1 conda_forge conda-forge _openmp_mutex 4.5 0_gnu conda-forge affine 2.3.0 py_0 conda-forge alabaster 0.7.12 py_0 conda-forge antlr-python-runtime 4.7.2 py38_1001 conda-forge asciitree 0.3.3 py_2 conda-forge attrs 19.3.0 py_0 conda-forge babel 2.8.0 py_0 conda-forge backcall 0.2.0 pyh9f0ad1d_0 conda-forge bleach 3.1.5 pyh9f0ad1d_0 conda-forge bokeh 2.1.1 py38h32f6830_0 conda-forge boost-cpp 1.72.0 h8e57a91_0 conda-forge bottleneck 1.3.2 py38h8790de6_1 conda-forge brotlipy 0.7.0 py38h1e0a361_1000 conda-forge bzip2 1.0.8 h516909a_2 conda-forge ca-certificates 2020.6.20 hecda079_0 conda-forge cairo 1.16.0 hcf35c78_1003 conda-forge cartopy 0.18.0 py38h172510d_0 conda-forge certifi 2020.6.20 py38h32f6830_0 conda-forge cf-units 2.1.4 py38h8790de6_0 conda-forge cffi 1.14.1 py38h5bae8af_0 conda-forge cfgrib 0.9.8.3 py_0 conda-forge cfitsio 3.470 hce51eda_6 conda-forge cftime 1.2.1 py38h8790de6_0 conda-forge chardet 3.0.4 py38h32f6830_1006 conda-forge click 7.1.2 pyh9f0ad1d_0 conda-forge click-plugins 1.1.1 py_0 conda-forge cligj 0.5.0 py_0 conda-forge cloudpickle 1.5.0 py_0 conda-forge cryptography 3.0 py38h766eaa4_0 conda-forge curl 7.71.1 he644dc0_3 conda-forge cycler 0.10.0 py_2 conda-forge cytoolz 0.10.1 py38h516909a_0 conda-forge dask 2.21.0 py_0 conda-forge dask-core 2.21.0 py_0 conda-forge decorator 4.4.2 py_0 conda-forge defusedxml 0.6.0 py_0 conda-forge distributed 2.21.0 py38h32f6830_0 conda-forge docutils 0.16 py38h32f6830_1 conda-forge eccodes 2.18.0 hf05d9b7_0 conda-forge entrypoints 0.3 py38h32f6830_1001 conda-forge expat 2.2.9 he1b5a44_2 conda-forge fasteners 0.14.1 py_3 conda-forge fontconfig 2.13.1 h86ecdb6_1001 conda-forge freetype 2.10.2 he06d7ca_0 conda-forge freexl 1.0.5 h516909a_1002 conda-forge fsspec 0.7.4 py_0 conda-forge gdal 3.0.4 py38h172510d_10 conda-forge geos 3.8.1 he1b5a44_0 conda-forge geotiff 1.6.0 h05acad5_0 conda-forge gettext 0.19.8.1 hc5be6a0_1002 conda-forge giflib 5.2.1 h516909a_2 conda-forge glib 2.65.0 h6f030ca_0 conda-forge h5netcdf 0.8.1 py_0 conda-forge h5py 2.10.0 nompi_py38hfb01d0b_104 conda-forge hdf4 4.2.13 hf30be14_1003 conda-forge hdf5 1.10.6 nompi_h3c11f04_100 conda-forge heapdict 1.0.1 py_0 conda-forge icu 64.2 he1b5a44_1 conda-forge idna 2.10 pyh9f0ad1d_0 conda-forge imagesize 1.2.0 py_0 conda-forge importlib-metadata 1.7.0 py38h32f6830_0 conda-forge importlib_metadata 1.7.0 0 conda-forge ipykernel 5.3.4 py38h23f93f0_0 conda-forge ipython 7.16.1 py38h23f93f0_0 conda-forge ipython_genutils 0.2.0 py_1 conda-forge iris 2.4.0 py38_0 conda-forge jasper 1.900.1 h07fcdf6_1006 conda-forge jedi 0.17.2 py38h32f6830_0 conda-forge jinja2 2.11.2 pyh9f0ad1d_0 conda-forge jpeg 9d h516909a_0 conda-forge json-c 0.13.1 hbfbb72e_1002 conda-forge jsonschema 3.2.0 py38h32f6830_1 conda-forge jupyter_client 6.1.6 py_0 conda-forge jupyter_core 4.6.3 py38h32f6830_1 conda-forge kealib 1.4.13 h33137a7_1 conda-forge kiwisolver 1.2.0 py38hbf85e49_0 conda-forge krb5 1.17.1 hfafb76e_1 conda-forge lcms2 2.11 hbd6801e_0 conda-forge ld_impl_linux-64 2.34 hc38a660_9 conda-forge libaec 1.0.4 he1b5a44_1 conda-forge libblas 3.8.0 17_openblas conda-forge libcblas 3.8.0 17_openblas conda-forge libcurl 7.71.1 hcdd3856_3 conda-forge libdap4 3.20.6 h1d1bd15_1 conda-forge libedit 3.1.20191231 h46ee950_1 conda-forge libffi 3.2.1 he1b5a44_1007 conda-forge libgcc-ng 9.2.0 h24d8f2e_2 conda-forge libgdal 3.0.4 he6a97d6_10 conda-forge libgfortran-ng 7.5.0 hdf63c60_10 conda-forge libgomp 9.2.0 h24d8f2e_2 conda-forge libiconv 1.15 h516909a_1006 conda-forge libkml 1.3.0 
hb574062_1011 conda-forge liblapack 3.8.0 17_openblas conda-forge libllvm9 9.0.1 he513fc3_1 conda-forge libnetcdf 4.7.4 nompi_h84807e1_105 conda-forge libopenblas 0.3.10 pthreads_hb3c22a3_4 conda-forge libpng 1.6.37 hed695b0_1 conda-forge libpq 12.3 h5513abc_0 conda-forge libsodium 1.0.17 h516909a_0 conda-forge libspatialite 4.3.0a h2482549_1038 conda-forge libssh2 1.9.0 hab1572f_4 conda-forge libstdcxx-ng 9.2.0 hdf63c60_2 conda-forge libtiff 4.1.0 hc7e4089_6 conda-forge libuuid 2.32.1 h14c3975_1000 conda-forge libwebp-base 1.1.0 h516909a_3 conda-forge libxcb 1.13 h14c3975_1002 conda-forge libxml2 2.9.10 hee79883_0 conda-forge llvmlite 0.33.0 py38h4f45e52_1 conda-forge locket 0.2.0 py_2 conda-forge lz4-c 1.9.2 he1b5a44_1 conda-forge markupsafe 1.1.1 py38h1e0a361_1 conda-forge matplotlib-base 3.3.0 py38h91b0d89_1 conda-forge mistune 0.8.4 py38h1e0a361_1001 conda-forge monotonic 1.5 py_0 conda-forge msgpack-python 1.0.0 py38hbf85e49_1 conda-forge nbconvert 5.6.1 py38h32f6830_1 conda-forge nbformat 5.0.7 py_0 conda-forge nbsphinx 0.7.1 pyh9f0ad1d_0 conda-forge ncurses 6.2 he1b5a44_1 conda-forge netcdf4 1.5.4 nompi_py38hfd55d45_100 conda-forge numba 0.50.1 py38hcb8c335_1 conda-forge numcodecs 0.6.4 py38he1b5a44_0 conda-forge numpy 1.19.1 py38h8854b6b_0 conda-forge olefile 0.46 py_0 conda-forge openjpeg 2.3.1 h981e76c_3 conda-forge openssl 1.1.1g h516909a_0 conda-forge owslib 0.20.0 py_0 conda-forge packaging 20.4 pyh9f0ad1d_0 conda-forge pandas 1.0.5 py38hcb8c335_0 conda-forge pandoc 2.10.1 h516909a_0 conda-forge pandocfilters 1.4.2 py_1 conda-forge parso 0.7.1 pyh9f0ad1d_0 conda-forge partd 1.1.0 py_0 conda-forge patsy 0.5.1 py_0 conda-forge pcre 8.44 he1b5a44_0 conda-forge pexpect 4.8.0 py38h32f6830_1 conda-forge pickleshare 0.7.5 py38h32f6830_1001 conda-forge pillow 7.2.0 py38h9776b28_1 conda-forge pip 20.1.1 py_1 conda-forge pixman 0.38.0 h516909a_1003 conda-forge poppler 0.87.0 h4190859_1 conda-forge poppler-data 0.4.9 1 conda-forge postgresql 12.3 h8573dbc_0 conda-forge proj 7.0.0 h966b41f_5 conda-forge prompt-toolkit 3.0.5 py_1 conda-forge psutil 5.7.2 py38h1e0a361_0 conda-forge pthread-stubs 0.4 h14c3975_1001 conda-forge ptyprocess 0.6.0 py_1001 conda-forge pycparser 2.20 pyh9f0ad1d_2 conda-forge pyepsg 0.4.0 py_0 conda-forge pygments 2.6.1 py_0 conda-forge pyke 1.1.1 py38h32f6830_1002 conda-forge pyopenssl 19.1.0 py_1 conda-forge pyparsing 2.4.7 pyh9f0ad1d_0 conda-forge pyproj 2.6.1.post1 py38h7521cb9_0 conda-forge pyrsistent 0.16.0 py38h1e0a361_0 conda-forge pyshp 2.1.0 py_0 conda-forge pysocks 1.7.1 py38h32f6830_1 conda-forge python 3.8.5 h425cb1d_1_cpython conda-forge python-dateutil 2.8.1 py_0 conda-forge python_abi 3.8 1_cp38 conda-forge pytz 2020.1 pyh9f0ad1d_0 conda-forge pyyaml 5.3.1 py38h1e0a361_0 conda-forge pyzmq 19.0.1 py38ha71036d_0 conda-forge rasterio 1.1.5 py38h033e0f6_0 conda-forge readline 8.0 he28a2e2_2 conda-forge requests 2.24.0 pyh9f0ad1d_0 conda-forge scipy 1.5.2 py38h8c5af15_0 conda-forge seaborn 0.10.1 1 conda-forge seaborn-base 0.10.1 py_1 conda-forge setuptools 49.2.0 py38h32f6830_0 conda-forge shapely 1.7.0 py38hd168ffb_3 conda-forge six 1.15.0 pyh9f0ad1d_0 conda-forge snowballstemmer 2.0.0 py_0 conda-forge snuggs 1.4.7 py_0 conda-forge sortedcontainers 2.2.2 pyh9f0ad1d_0 conda-forge sphinx 3.1.2 py_0 conda-forge sphinx_rtd_theme 0.5.0 pyh9f0ad1d_0 conda-forge sphinxcontrib-applehelp 1.0.2 py_0 conda-forge sphinxcontrib-devhelp 1.0.2 py_0 conda-forge sphinxcontrib-htmlhelp 1.0.3 py_0 conda-forge sphinxcontrib-jsmath 1.0.1 py_0 conda-forge 
sphinxcontrib-qthelp 1.0.3 py_0 conda-forge sphinxcontrib-serializinghtml 1.1.4 py_0 conda-forge sqlite 3.32.3 hcee41ef_1 conda-forge statsmodels 0.11.1 py38h1e0a361_2 conda-forge tbb 2020.1 hc9558a2_0 conda-forge tblib 1.6.0 py_0 conda-forge testpath 0.4.4 py_0 conda-forge tiledb 1.7.7 h8efa9f0_3 conda-forge tk 8.6.10 hed695b0_0 conda-forge toolz 0.10.0 py_0 conda-forge tornado 6.0.4 py38h1e0a361_1 conda-forge traitlets 4.3.3 py38h32f6830_1 conda-forge typing_extensions 3.7.4.2 py_0 conda-forge tzcode 2020a h516909a_0 conda-forge udunits2 2.2.27.6 h4e0c4b3_1001 conda-forge urllib3 1.25.10 py_0 conda-forge wcwidth 0.2.5 pyh9f0ad1d_0 conda-forge webencodings 0.5.1 py_1 conda-forge wheel 0.34.2 py_1 conda-forge xarray 0.16.0 py_0 conda-forge xerces-c 3.2.2 h8412b87_1004 conda-forge xorg-kbproto 1.0.7 h14c3975_1002 conda-forge xorg-libice 1.0.10 h516909a_0 conda-forge xorg-libsm 1.2.3 h84519dc_1000 conda-forge xorg-libx11 1.6.9 h516909a_0 conda-forge xorg-libxau 1.0.9 h14c3975_0 conda-forge xorg-libxdmcp 1.1.3 h516909a_0 conda-forge xorg-libxext 1.3.4 h516909a_0 conda-forge xorg-libxrender 0.9.10 h516909a_1002 conda-forge xorg-renderproto 0.11.1 h14c3975_1002 conda-forge xorg-xextproto 7.3.0 h14c3975_1002 conda-forge xorg-xproto 7.0.31 h14c3975_1007 conda-forge xz 5.2.5 h516909a_1 conda-forge yaml 0.2.5 h516909a_0 conda-forge zarr 2.4.0 py_0 conda-forge zeromq 4.3.2 he1b5a44_2 conda-forge zict 2.0.0 py_0 conda-forge zipp 3.1.0 py_0 conda-forge zlib 1.2.11 h516909a_1006 conda-forge zstd 1.4.5 h6597ccf_2 conda-forge ```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  local build of docs fails with OSError: [UT_PARSE] Failed to open UDUNITS-2 XML unit database 664458864
662322092 https://github.com/pydata/xarray/pull/4245#issuecomment-662322092 https://api.github.com/repos/pydata/xarray/issues/4245 MDEyOklzc3VlQ29tbWVudDY2MjMyMjA5Mg== sjvrijn 8833517 2020-07-22T08:34:09Z 2020-07-24T10:01:14Z CONTRIBUTOR

@max-sixty Whoops 😅 Guess I overlooked the latest version section as it was still completely empty; thanks for catching that.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix DataArray.copy documentation: remove confusing mention of 'dataset' (Gh3606) 663355647
663074248 https://github.com/pydata/xarray/pull/4258#issuecomment-663074248 https://api.github.com/repos/pydata/xarray/issues/4258 MDEyOklzc3VlQ29tbWVudDY2MzA3NDI0OA== sjvrijn 8833517 2020-07-23T15:31:18Z 2020-07-23T15:31:18Z CONTRIBUTOR

@dcherian Docs seem to be correct: skipna is removed from DataArray.count, DataArray.any, DataArray.all while still there in e.g. DataArray.sum

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Removed skipna argument from count, any, all [GH755] 664497733
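
A small editor's sketch of how one might spot-check what the comment above verified, i.e. which reduction docstrings still mention skipna; the exact output depends on the installed xarray version:

```python
import xarray as xr

# Inspect the docstrings of a few DataArray reductions for the skipna argument.
for name in ("count", "any", "all", "sum"):
    doc = getattr(xr.DataArray, name).__doc__ or ""
    print(f"{name}: mentions skipna = {'skipna' in doc}")
```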
663025279 https://github.com/pydata/xarray/pull/4258#issuecomment-663025279 https://api.github.com/repos/pydata/xarray/issues/4258 MDEyOklzc3VlQ29tbWVudDY2MzAyNTI3OQ== sjvrijn 8833517 2020-07-23T14:02:36Z 2020-07-23T14:02:36Z CONTRIBUTOR

Btw, is there a reason why the functional kwarg keep_attrs has an explanation listed, but isn't listed in the signature? I was unable to find how to update that in the docs when working on this PR.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Removed skipna argument from count, any, all [GH755] 664497733
602508864 https://github.com/pydata/xarray/issues/3007#issuecomment-602508864 https://api.github.com/repos/pydata/xarray/issues/3007 MDEyOklzc3VlQ29tbWVudDYwMjUwODg2NA== sjvrijn 8833517 2020-03-23T10:27:27Z 2020-03-23T10:27:27Z CONTRIBUTOR

I recently had a similar issue and found out the cause: When transforming from a dataframe to an xarray, the xarray allocates memory for all possible combinations of the coordinates. In this particular case, you have 5 unique values for latitude and longitude in your five rows, which means there are 5*5=25 possible combinations of lat/long values. All missing values are then filled in as NaN.

Let me illustrate by recreating just your data on latitude, longitude, wind_surface and hurs:

```python
In [3]: data = [
   ...:     [34.511383, 16.467664, 29.658546, 70.481293],
   ...:     [34.515558, 16.723973, 30.896049, 71.356644],
   ...:     [34.517359, 16.852138, 31.514799, 71.708603],
   ...:     [34.518970, 16.980310, 32.105423, 72.023773],
   ...:     [34.520391, 17.108487, 32.724174, 72.106110],
   ...: ]

In [4]: df = pd.DataFrame(data=data, columns=['lat', 'long', 'wind_surface', 'hurs']).set_index(['lat', 'long'])

In [5]: df
Out[5]:
                     wind_surface       hurs
lat       long
34.511383 16.467664     29.658546  70.481293
34.515558 16.723973     30.896049  71.356644
34.517359 16.852138     31.514799  71.708603
34.518970 16.980310     32.105423  72.023773
34.520391 17.108487     32.724174  72.106110
```

But for the xarray, this means it will end up creating a 5x5 array, of which only 5 values are given along the diagonal. This is very clearly visible when showing just the DataArray for a single column:

```python
In [6]: df.to_xarray()['wind_surface']
Out[6]:
<xarray.DataArray 'wind_surface' (lat: 5, long: 5)>
array([[29.658546,       nan,       nan,       nan,       nan],
       [      nan, 30.896049,       nan,       nan,       nan],
       [      nan,       nan, 31.514799,       nan,       nan],
       [      nan,       nan,       nan, 32.105423,       nan],
       [      nan,       nan,       nan,       nan, 32.724174]])
Coordinates:
  * lat      (lat) float64 34.51 34.52 34.52 34.52 34.52
  * long     (long) float64 16.47 16.72 16.85 16.98 17.11
```

However, as to_xarray() outputs a Dataset, each DataArray (i.e. each column from the dataframe) is summarized on a single line, which makes it seem like a lot of data is just 'missing':

```python
In [7]: df.to_xarray()
Out[7]:
<xarray.Dataset>
Dimensions:       (lat: 5, long: 5)
Coordinates:
  * lat           (lat) float64 34.51 34.52 34.52 34.52 34.52
  * long          (long) float64 16.47 16.72 16.85 16.98 17.11
Data variables:
    wind_surface  (lat, long) float64 29.66 nan nan nan ... nan nan nan 32.72
    hurs          (lat, long) float64 70.48 nan nan nan ... nan nan nan 72.11
```

So it works as intended, but can throw you for a loop if you don't realize it's creating an array the size of all possible index combinations.

@shoyer can you close this issue?

{
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  NaN values for variables when converting from a pandas dataframe to xarray.DataSet 454073421
594014351 https://github.com/pydata/xarray/issues/3820#issuecomment-594014351 https://api.github.com/repos/pydata/xarray/issues/3820 MDEyOklzc3VlQ29tbWVudDU5NDAxNDM1MQ== sjvrijn 8833517 2020-03-03T15:38:05Z 2020-03-03T15:38:05Z CONTRIBUTOR

Waiting a few more months until it will definitely not be a problem for anyone seems fair to me :+1:

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Documentation of DataArray does not warn that inferring dimension names is deprecated 574097799
593874263 https://github.com/pydata/xarray/issues/3820#issuecomment-593874263 https://api.github.com/repos/pydata/xarray/issues/3820 MDEyOklzc3VlQ29tbWVudDU5Mzg3NDI2Mw== sjvrijn 8833517 2020-03-03T10:20:08Z 2020-03-03T10:20:08Z CONTRIBUTOR

I think that inferring dimension-names from the coords-dict is the most intuitive way to define a DataArray.

Passing a dictionary for coords is in my opinion the clearest way to indicate which coordinates belong to which dimension, so then why do I have to specify the same dimension names again?

An example of how I create them from my current project:

```python
values = xr.DataArray(
    values,
    coords={
        'n_high': n_highs,
        'n_low': n_lows,
        'rep': repetitions,
        'model': models,
        'idx': range(n_test_samples),
    },
    dims=['n_high', 'n_low', 'rep', 'model', 'idx'],  # <-- repeated dim names
    attrs=attributes,
)
```

If you expect almost everyone to use CPython or 3.7+ anyway, then I don't actually see any drawbacks, while it would regularly make code shorter and less repetitive.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Documentation of DataArray does not warn that inferring dimension names is deprecated 574097799
593673608 https://github.com/pydata/xarray/issues/3820#issuecomment-593673608 https://api.github.com/repos/pydata/xarray/issues/3820 MDEyOklzc3VlQ29tbWVudDU5MzY3MzYwOA== sjvrijn 8833517 2020-03-02T23:18:47Z 2020-03-02T23:18:47Z CONTRIBUTOR

@max-sixty Thanks for the tip. In the end it meant just changing the last line on dims. The paragraph on the coords argument is still valid after all.

On a related note: according to #727 (PR #993), this was deprecated because key order in dictionaries was arbitrary at the time of that issue. However, their order has been fixed since Python 3.7, as noted in the documentation:

> Changed in version 3.7: Dictionary order is guaranteed to be insertion order. This behavior was an implementation detail of CPython from 3.6.

I guess it's still too soon to 'un-deprecate' this behavior again? 👼

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Documentation of DataArray does not warn that inferring dimension names is deprecated 574097799
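
A tiny editor's sketch of the dict-ordering guarantee quoted above, which is what the dims-from-coords suggestion relies on:

```python
# Since Python 3.7, dicts preserve insertion order, so the key order of a
# coords mapping like this one is well defined.
coords = {"n_high": [1, 2], "n_low": [3, 4], "rep": [0]}
assert list(coords) == ["n_high", "n_low", "rep"]
```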

Table schema
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
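
A sketch of how the filtered view at the top of this page could be reproduced against a local copy of the database; the filename github.db is an assumption, not something stated on the page:

```python
import sqlite3

conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    select id, html_url, created_at, updated_at, body
    from issue_comments
    where author_association = ? and user = ?
    order by updated_at desc
    """,
    ("CONTRIBUTOR", 8833517),
).fetchall()
print(len(rows))  # 21 rows, per the page header
conn.close()
```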