
issue_comments

123 rows where user = 221526 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at author_association body reactions performed_via_github_app issue
1467015318 https://github.com/pydata/xarray/issues/7385#issuecomment-1467015318 https://api.github.com/repos/pydata/xarray/issues/7385 IC_kwDOAMm_X85XcOCW dopplershift 221526 2023-03-13T21:51:19Z 2023-03-13T21:51:19Z CONTRIBUTOR

@dcherian Is this behavior (filling with fill_value -> inserting NaNs) because they share common dimensionality in terms of name, but have different coordinate values? My expectation was something that operated more like numpy broadcasting (repeating values, not filling with anything else).

I can understand how xarray's data model yields this behavior, but in that case it might be good to improve the docs for xarray.broadcast, because they say nothing about the behavior that (it seems to me) mimics xarray.align.
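
A minimal sketch of the behavior being discussed (hypothetical arrays; assumes a recent xarray, where broadcast aligns with an outer join):

```python
import numpy as np
import xarray as xr

# Same dimension name, different coordinate values
a = xr.DataArray(np.array([1.0, 2.0]), dims="x", coords={"x": [0, 1]})
b = xr.DataArray(np.array([3.0, 4.0]), dims="x", coords={"x": [1, 2]})

a2, b2 = xr.broadcast(a, b)
# Both results end up on x = [0, 1, 2]; missing positions are filled with
# NaN (align-like behavior), not repeated as plain numpy broadcasting would.
print(a2.values)  # [ 1.  2. nan]
print(b2.values)  # [nan  3.  4.]
```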

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Unexpected NaNs in broadcast 1499473190
1448430261 https://github.com/pydata/xarray/issues/7525#issuecomment-1448430261 https://api.github.com/repos/pydata/xarray/issues/7525 IC_kwDOAMm_X85WVUq1 dopplershift 221526 2023-02-28T15:57:16Z 2023-02-28T15:57:16Z CONTRIBUTOR

@ethanrd @haileyajohnson @tdrwenski any thoughts on the above question?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  contiguous time axis  1583216781
1338290177 https://github.com/pydata/xarray/issues/7350#issuecomment-1338290177 https://api.github.com/repos/pydata/xarray/issues/7350 IC_kwDOAMm_X85PxLAB dopplershift 221526 2022-12-05T22:56:30Z 2022-12-05T22:56:30Z CONTRIBUTOR

So I'll say that taking my dataset and running nc.squeeze() was illustrative of the problem here. I see now that if a Dataset has scalar coordinates, they apply to every DataArray/Variable inside. So I see what prompts the current behavior.

I think the mismatch in my mental model is due to me coming from netCDF CF land, where coordinates for a variable are based on:

1. Other variables that match relevant shared dimension names
2. Those explicitly listed in the coordinates attribute

I see now that xarray does NOT implement that model.

This was provoked by challenges creating a new DataArray, but I see those can be solved by passing in both dims and coords (when coords contains some scalar coords), as sketched below.
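
A minimal sketch of that construction (the names here are hypothetical):

```python
import numpy as np
import xarray as xr

# Passing both dims and coords lets you attach a scalar coordinate
# ('height' below) alongside the dimension coordinates.
da = xr.DataArray(
    np.zeros((2, 3)),
    dims=("y", "x"),
    coords={"y": [0, 1], "x": [10, 20, 30], "height": 2.0},
)
print(da.coords["height"])  # 0-d (scalar) coordinate
```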

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Coordinate variable gains coordinate on subset 1473329967
1337859447 https://github.com/pydata/xarray/issues/7350#issuecomment-1337859447 https://api.github.com/repos/pydata/xarray/issues/7350 IC_kwDOAMm_X85Pvh13 dopplershift 221526 2022-12-05T17:57:34Z 2022-12-05T17:57:34Z CONTRIBUTOR

IMO, it's not correctly implementing the rule as you phrased it. You said "still present", which isn't the case here since the coordinate wasn't present before.

The behavior I'd advocate for is that a subsetting/selection operation should never add new coordinates that weren't previously present. That by itself would be less surprising. It would also help make things more sensible given that the coordinate is only added currently in the scalar case--if you ask for more data, the coordinate isn't added, which is also unexpected given the scalar case.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Coordinate variable gains coordinate on subset 1473329967
1285866801 https://github.com/pydata/xarray/issues/7191#issuecomment-1285866801 https://api.github.com/repos/pydata/xarray/issues/7191 IC_kwDOAMm_X85MpMUx dopplershift 221526 2022-10-20T16:51:21Z 2022-10-20T16:51:21Z CONTRIBUTOR

I'm also not sure why _FillValue and missing_value should be required to have the same value.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Cannot Save NetCDF: Conflicting _FillValue and Missing_Value  1416709246
1270591340 https://github.com/pydata/xarray/pull/6981#issuecomment-1270591340 https://api.github.com/repos/pydata/xarray/issues/6981 IC_kwDOAMm_X85Lu69s dopplershift 221526 2022-10-06T19:38:24Z 2022-10-06T19:38:24Z CONTRIBUTOR

We elected not to start rebuilding things with netCDF 4.9.0 since 4.9.1 should be out realSoonNow™️ , so I don't think there's a netcdf4 package in conda-forge that has it yet.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support the new compression argument in netCDF4 > 1.6.0 1359914824
1246106470 https://github.com/pydata/xarray/issues/7034#issuecomment-1246106470 https://api.github.com/repos/pydata/xarray/issues/7034 IC_kwDOAMm_X85KRhNm dopplershift 221526 2022-09-14T01:05:44Z 2022-09-14T01:05:44Z CONTRIBUTOR

That just worked fine for me. What version of libnetcdf and netcdf4 do you have installed?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray fails to locate files via OpeNDAP 1372146714
1180471644 https://github.com/pydata/xarray/issues/6766#issuecomment-1180471644 https://api.github.com/repos/pydata/xarray/issues/6766 IC_kwDOAMm_X85GXJFc dopplershift 221526 2022-07-11T14:20:06Z 2022-07-11T14:20:06Z CONTRIBUTOR

@DanCodigaMWRA Well, given that it's failing with ncdump, you can skip netcdf4 and go straight to netcdf-c.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xr.open_dataset(url) gives NetCDF4 (lru_cache.py) error "oc_open: Could not read url" 1299316581
1179403944 https://github.com/pydata/xarray/issues/6766#issuecomment-1179403944 https://api.github.com/repos/pydata/xarray/issues/6766 IC_kwDOAMm_X85GTEao dopplershift 221526 2022-07-08T22:22:15Z 2022-07-08T22:22:15Z CONTRIBUTOR

I just created a new Python 3.7 environment on my Mac and that worked fine. What do these show?

```
❯ conda list curl
# packages in environment at /Users/rmay/miniconda3/envs/py37:
#
# Name      Version    Build        Channel
curl        7.83.1     h23f1065_0   conda-forge
libcurl     7.83.1     h23f1065_0   conda-forge
❯ conda list certifi
# packages in environment at /Users/rmay/miniconda3/envs/py37:
#
# Name            Version     Build            Channel
ca-certificates   2022.6.15   h033912b_0       conda-forge
certifi           2022.6.15   py37hf985489_0   conda-forge
```

Though I agree this is not an xarray problem at this point. Does it work if you do (with the env active):

```
ncdump -h http://psl.noaa.gov/thredds/dodsC/Datasets/NARR/monolevel/uwnd.10m.2000.nc
```
?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xr.open_dataset(url) gives NetCDF4 (lru_cache.py) error "oc_open: Could not read url" 1299316581
1071517967 https://github.com/pydata/xarray/issues/6374#issuecomment-1071517967 https://api.github.com/repos/pydata/xarray/issues/6374 IC_kwDOAMm_X84_3hEP dopplershift 221526 2022-03-17T21:30:22Z 2022-03-17T21:30:22Z CONTRIBUTOR

Cc @WardF @DennisHeimbigner @haileyajohnson

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Should the zarr backend support NCZarr conventions? 1172229856
1007746645 https://github.com/pydata/xarray/issues/6124#issuecomment-1007746645 https://api.github.com/repos/pydata/xarray/issues/6124 IC_kwDOAMm_X848EP5V dopplershift 221526 2022-01-07T21:16:20Z 2022-01-07T21:16:20Z CONTRIBUTOR

$0.02 from the peanut gallery is that my mental model of Dataset is that it's a dictionary on steroids, so following the Mapping protocol (including __bool__) makes sense to me. I put "I don't know" for @shoyer's poll, but if the question was "should" then I would have said "number of variables".

While I'm not going to sit here and argue that "if ds:" is a common operation or an important feature (I don't often open files that result in empty datasets), I'd personally want a truly compelling argument for breaking from the Mapping protocol. Right now it's "Dataset works like dict plus some stuff". It would change to be "Dataset works like dict for all things except __bool__, plus some stuff". The latter requires more mental effort from me since I'm no longer strictly supplementing past experience.

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bool(ds) should raise a "the truth value of a Dataset is ambiguous" error 1090229430
958120426 https://github.com/pydata/xarray/issues/5927#issuecomment-958120426 https://api.github.com/repos/pydata/xarray/issues/5927 IC_kwDOAMm_X845G8Hq dopplershift 221526 2021-11-02T19:56:21Z 2021-11-02T19:56:21Z CONTRIBUTOR

GitHub's new automated release notes may be of interest to some of this discussion. Essentially they allow you to provide a template to format the list of merged PRs on the branch since the last release.

{
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Release frequency 1042652334
954948424 https://github.com/pydata/xarray/issues/5913#issuecomment-954948424 https://api.github.com/repos/pydata/xarray/issues/5913 IC_kwDOAMm_X84461tI dopplershift 221526 2021-10-29T18:13:34Z 2021-10-29T18:13:34Z CONTRIBUTOR

Is this with the netcdf or with the pydap engine? If you're not sure, can you post the full error traceback?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Invalid characters in OpenDAP URL 1039113959
954028015 https://github.com/pydata/xarray/issues/5882#issuecomment-954028015 https://api.github.com/repos/pydata/xarray/issues/5882 IC_kwDOAMm_X8443U_v dopplershift 221526 2021-10-28T16:55:26Z 2021-10-28T16:55:26Z CONTRIBUTOR

Can you post the full traceback you get?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Opening remote file with OpenDAP protocol returns "_FillValue type mismatch" error 1032694511
953921596 https://github.com/pydata/xarray/issues/5882#issuecomment-953921596 https://api.github.com/repos/pydata/xarray/issues/5882 IC_kwDOAMm_X84427A8 dopplershift 221526 2021-10-28T14:49:26Z 2021-10-28T14:49:26Z CONTRIBUTOR

@saveriogzz I'm confused why you posted results for 3.6 and 3.8, given that the original issue looks like it was posted for 3.7. 🤨 At any rate, in your original issue the output from show_versions() lists libnetcdf=4.7.4. That version should be fixed with regards to the _FillValue type mismatch error.

Your Python 3.8 environment does have an old version of libnetcdf. Can you try doing conda install -n py38 -c conda-forge "libnetcdf>=4.7.4" and see if that fixes your problem?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Opening remote file with OpenDAP protocol returns "_FillValue type mismatch" error 1032694511
952246143 https://github.com/pydata/xarray/issues/5882#issuecomment-952246143 https://api.github.com/repos/pydata/xarray/issues/5882 IC_kwDOAMm_X844wh9_ dopplershift 221526 2021-10-26T19:29:52Z 2021-10-26T19:29:52Z CONTRIBUTOR

@saveriogzz what is the output of conda list libnetcdf?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Opening remote file with OpenDAP protocol returns "_FillValue type mismatch" error 1032694511
951404731 https://github.com/pydata/xarray/pull/5845#issuecomment-951404731 https://api.github.com/repos/pydata/xarray/issues/5845 IC_kwDOAMm_X844tUi7 dopplershift 221526 2021-10-25T23:10:25Z 2021-10-25T23:10:25Z CONTRIBUTOR

You cannot use selectors with noarch conda packages. Full stop.

For conda-forge, it's perfectly fine to just unconditionally depend on the importlib_metadata backport, especially with the implementation done like here where you use the std lib version and fall back.
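
A minimal sketch of that stdlib-first fallback pattern:

```python
try:
    # Standard library version (Python >= 3.8)
    from importlib.metadata import version
except ImportError:
    # Fall back to the importlib_metadata backport on older Pythons
    from importlib_metadata import version

print(version("xarray"))
```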

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  remove requirement for setuptools.pkg_resources 1020407925
900601202 https://github.com/pydata/xarray/issues/5711#issuecomment-900601202 https://api.github.com/repos/pydata/xarray/issues/5711 IC_kwDOAMm_X841rhVy dopplershift 221526 2021-08-17T20:16:57Z 2021-08-17T20:16:57Z CONTRIBUTOR

The metadata for that package has been patched in conda-forge/conda-forge-repodata-patches-feedstock#161 to depend on python >=3.7, so this shouldn't be an issue any more. @kim-barker How are you installing xarray/setting up your environment? What conda commands are you running? What OS/platform architecture?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Broken conda-forge release xarray-0.19.0-pyhd8ed1ab_0.tar.bz2  972878124
841524018 https://github.com/pydata/xarray/issues/5291#issuecomment-841524018 https://api.github.com/repos/pydata/xarray/issues/5291 MDEyOklzc3VlQ29tbWVudDg0MTUyNDAxOA== dopplershift 221526 2021-05-14T22:00:52Z 2021-05-14T22:00:52Z CONTRIBUTOR

On conda it's usually only done to avoid problematic/heavy weight dependencies (i.e. avoiding pyqt dependency with matplotlib-base). I'm not sure it's worth doing for pooch.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ds = xr.tutorial.load_dataset("air_temperature") with 0.18 needs engine argument 889162918
831449400 https://github.com/pydata/xarray/pull/5244#issuecomment-831449400 https://api.github.com/repos/pydata/xarray/issues/5244 MDEyOklzc3VlQ29tbWVudDgzMTQ0OTQwMA== dopplershift 221526 2021-05-03T18:34:12Z 2021-05-03T18:34:12Z CONTRIBUTOR

@andersy005 I'm curious, why do you go with multiple jobs within the workflow, and using artifacts to transfer state between them, rather than multiple steps in a single job?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add GitHub action for publishing artifacts to PyPI 873842812
830254596 https://github.com/pydata/xarray/issues/5232#issuecomment-830254596 https://api.github.com/repos/pydata/xarray/issues/5232 MDEyOklzc3VlQ29tbWVudDgzMDI1NDU5Ng== dopplershift 221526 2021-04-30T17:42:09Z 2021-04-30T17:42:09Z CONTRIBUTOR

We can avoid this by using the pypi github action thing to automatically build and upload when tagging a release on github. It uses a repo-level secret. Just a thought.

$0.02 from an outsider is that this has served us exceedingly well on MetPy. Our release process has become:

1. Close milestone
2. Adjust the auto-generated draft GitHub release (summary notes)
3. Click "publish release" -> packages uploaded to PyPI
4. Merge conda-forge update from their bots

It's almost more secure this way because the token from PyPI only has upload permissions--no need to store someone's password.

{
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  release v0.18.0 870292042
824227019 https://github.com/pydata/xarray/issues/5189#issuecomment-824227019 https://api.github.com/repos/pydata/xarray/issues/5189 MDEyOklzc3VlQ29tbWVudDgyNDIyNzAxOQ== dopplershift 221526 2021-04-21T17:19:21Z 2021-04-21T17:19:21Z CONTRIBUTOR

KeyError: 'tmp2m%2Etmp2m'

This looks like an issue with the encoding of the URL and what the server expects.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  KeyError pulling from Nasa server with Pydap 861684673
812643265 https://github.com/pydata/xarray/issues/4925#issuecomment-812643265 https://api.github.com/repos/pydata/xarray/issues/4925 MDEyOklzc3VlQ29tbWVudDgxMjY0MzI2NQ== dopplershift 221526 2021-04-02T18:01:21Z 2021-04-02T18:01:21Z CONTRIBUTOR

The message:

"The identifier `tmax.tmax%5b0%5d%5b0:3:620%5d%5b0:3:1404%5d' is not in the dataset."

makes me wonder if there's some issue with how the webserver is handling the escaping of certain characters (like e.g. [)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  OpenDAP Documentation Example failing with RunTimeError 811409317
696963505 https://github.com/pydata/xarray/pull/4431#issuecomment-696963505 https://api.github.com/repos/pydata/xarray/issues/4431 MDEyOklzc3VlQ29tbWVudDY5Njk2MzUwNQ== dopplershift 221526 2020-09-22T20:32:28Z 2020-09-22T20:32:28Z CONTRIBUTOR

@alexamici Force-pushing doesn't normally close it, so this is weird...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Refactor of the big if-chain to a dictionary in the form {backend_name: backend_open}. 703550109
696954231 https://github.com/pydata/xarray/issues/4422#issuecomment-696954231 https://api.github.com/repos/pydata/xarray/issues/4422 MDEyOklzc3VlQ29tbWVudDY5Njk1NDIzMQ== dopplershift 221526 2020-09-22T20:13:50Z 2020-09-22T20:13:50Z CONTRIBUTOR

I'd say in the case of use_ctime=True that it's a bug that it ever uses pandas for date parsing.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Problem decoding times in data from OpenDAP server 701062999
696508013 https://github.com/pydata/xarray/issues/4422#issuecomment-696508013 https://api.github.com/repos/pydata/xarray/issues/4422 MDEyOklzc3VlQ29tbWVudDY5NjUwODAxMw== dopplershift 221526 2020-09-22T04:56:17Z 2020-09-22T04:56:17Z CONTRIBUTOR

Probably shouldn't raise an error for 1-1-1, since that's valid according to the Climate and Forecast (CF) netCDF conventions (see examples 4.5 and 4.6). In fact, it works perfectly when using use_cftime=True both for the original data and reading in the data from disk.
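
For reference, a minimal sketch of the use_cftime option mentioned above ("data.nc" is a hypothetical file whose time units are, e.g., "days since 1-1-1"):

```python
import xarray as xr

# Decode times as cftime objects instead of numpy datetime64, which
# handles reference dates like 1-1-1 that fall outside the
# datetime64[ns]-representable range.
ds = xr.open_dataset("data.nc", use_cftime=True)
```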

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Problem decoding times in data from OpenDAP server 701062999
684068427 https://github.com/pydata/xarray/issues/4394#issuecomment-684068427 https://api.github.com/repos/pydata/xarray/issues/4394 MDEyOklzc3VlQ29tbWVudDY4NDA2ODQyNw== dopplershift 221526 2020-08-31T22:08:56Z 2020-08-31T22:08:56Z CONTRIBUTOR

Duplicate of #1672?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Is it possible to append_dim to netcdf stores 689390592
678819159 https://github.com/pydata/xarray/issues/4370#issuecomment-678819159 https://api.github.com/repos/pydata/xarray/issues/4370 MDEyOklzc3VlQ29tbWVudDY3ODgxOTE1OQ== dopplershift 221526 2020-08-23T20:12:14Z 2020-08-23T20:12:14Z CONTRIBUTOR

Likely duplicate of #4283 .

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Not able to slice dataset using its own coordinate value, after upgrade to pandas 1.1.0 684248425
671691540 https://github.com/pydata/xarray/pull/4332#issuecomment-671691540 https://api.github.com/repos/pydata/xarray/issues/4332 MDEyOklzc3VlQ29tbWVudDY3MTY5MTU0MA== dopplershift 221526 2020-08-11T02:42:41Z 2020-08-11T02:42:41Z CONTRIBUTOR

Packaging for sphinx-autosummary-accessors looks good to me. I think the build failure is for sphinx 3.2 warnings and not the extension.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  install sphinx-autosummary-accessors from conda-forge 676497373
669666848 https://github.com/pydata/xarray/issues/4313#issuecomment-669666848 https://api.github.com/repos/pydata/xarray/issues/4313 MDEyOklzc3VlQ29tbWVudDY2OTY2Njg0OA== dopplershift 221526 2020-08-06T03:49:34Z 2020-08-06T03:49:34Z CONTRIBUTOR

So to be clear, conda can use the --file argument to read those files and treat them as if they were dependency requirements passed on the command line. Not quite the same as an environment.yml, but I haven't had any problems so far. It's been really nice since dependabot doesn't understand the environment files.

So you can point dependabot to a directory, where for pypi it looks for files ending in .txt. You can make a subset by using a directory, or on MetPy we've removed the .txt from files we don't want dependabot to update. It feels like a bit of a hack, but it works.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Using Dependabot to manage doc build and CI versions 673682661
669502001 https://github.com/pydata/xarray/issues/4313#issuecomment-669502001 https://api.github.com/repos/pydata/xarray/issues/4313 MDEyOklzc3VlQ29tbWVudDY2OTUwMjAwMQ== dopplershift 221526 2020-08-05T20:52:33Z 2020-08-05T20:52:33Z CONTRIBUTOR

So on MetPy we moved to treating our CI system as an application and pinning every direct dependency in a requirements.txt (which can be used by conda as well). We then let dependabot handle the updates. This lets us manage the updates on a package-by-package basis, where we have a single PR that lets us see what the ramifications are with regards to tests, CI, even linting.

We've been running for a limited time, but so far it has done a good job of insulating general development (coming in on PRs) from changes in the environment, which now shouldn't change on CI from run to run (yeah, yeah 2nd-order dependencies, just pin problematic ones too). For instance, for the pandas 1.1.0 breakage, we just haven't merged the PR that moves the pin there, and that has kept our doc builds green on MetPy.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Using Dependabot to manage doc build and CI versions 673682661
668891516 https://github.com/pydata/xarray/issues/4295#issuecomment-668891516 https://api.github.com/repos/pydata/xarray/issues/4295 MDEyOklzc3VlQ29tbWVudDY2ODg5MTUxNg== dopplershift 221526 2020-08-05T00:07:38Z 2020-08-05T00:07:38Z CONTRIBUTOR

*cough* Solving the "setuptools won't work in setup_requires because it's too late" problem was basically the entire driving force of pyproject.toml.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  We shouldn't require a recent version of setuptools to install xarray 671019427
668405710 https://github.com/pydata/xarray/issues/4295#issuecomment-668405710 https://api.github.com/repos/pydata/xarray/issues/4295 MDEyOklzc3VlQ29tbWVudDY2ODQwNTcxMA== dopplershift 221526 2020-08-04T06:27:20Z 2020-08-04T06:27:20Z CONTRIBUTOR

Wow...two years for pip to gain support...I would not have expected that.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  We shouldn't require a recent version of setuptools to install xarray 671019427
668395780 https://github.com/pydata/xarray/issues/4295#issuecomment-668395780 https://api.github.com/repos/pydata/xarray/issues/4295 MDEyOklzc3VlQ29tbWVudDY2ODM5NTc4MA== dopplershift 221526 2020-08-04T05:56:44Z 2020-08-04T05:56:44Z CONTRIBUTOR

I'm not here to argue, but pyproject.toml was introduced in PEP-518, which was accepted over 4 years ago. I know packaging moves slowly but I'm curious how long something has to be around before becoming "established" and ceases to be "novel". 😉

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  We shouldn't require a recent version of setuptools to install xarray 671019427
667581142 https://github.com/pydata/xarray/issues/4295#issuecomment-667581142 https://api.github.com/repos/pydata/xarray/issues/4295 MDEyOklzc3VlQ29tbWVudDY2NzU4MTE0Mg== dopplershift 221526 2020-08-01T20:10:55Z 2020-08-01T20:10:55Z CONTRIBUTOR

Rolling window seems fine to me. I will say that I don't generally bother bumping that on other projects until we run into an issue/new feature that necessitates it, though.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  We shouldn't require a recent version of setuptools to install xarray 671019427
666714053 https://github.com/pydata/xarray/issues/4287#issuecomment-666714053 https://api.github.com/repos/pydata/xarray/issues/4287 MDEyOklzc3VlQ29tbWVudDY2NjcxNDA1Mw== dopplershift 221526 2020-07-30T21:27:14Z 2020-07-30T21:27:14Z CONTRIBUTOR

The exception looks like the identical problem as #4283 .

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  failing docs CI 668166816
665435320 https://github.com/pydata/xarray/issues/4283#issuecomment-665435320 https://api.github.com/repos/pydata/xarray/issues/4283 MDEyOklzc3VlQ29tbWVudDY2NTQzNTMyMA== dopplershift 221526 2020-07-29T05:13:27Z 2020-07-29T05:13:27Z CONTRIBUTOR

Looks like (to my eye anyway) it stems from:

```python
import numpy as np
import pandas as pd

t = np.array(['2017-09-05T12:00:00.000000000', '2017-09-05T15:00:00.000000000'],
             dtype='datetime64[ns]')
index = pd.DatetimeIndex(t)

index.get_loc(t[0].item())  # Fails with KeyError
index.get_loc(t[0])         # Works
```

Fails on 1.1.0. What I have no idea about is whether the .item() call is supposed to work.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Selection with datetime64[ns] fails with Pandas 1.1.0 667550022
657136785 https://github.com/pydata/xarray/issues/4043#issuecomment-657136785 https://api.github.com/repos/pydata/xarray/issues/4043 MDEyOklzc3VlQ29tbWVudDY1NzEzNjc4NQ== dopplershift 221526 2020-07-11T22:01:55Z 2020-07-11T22:01:55Z CONTRIBUTOR

Probably worth raising upstream with the THREDDS team. I do wonder if there's some issues with the chunking/compression of the native .nc files that's at play here.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Opendap access failure error 614144170
657135521 https://github.com/pydata/xarray/issues/4208#issuecomment-657135521 https://api.github.com/repos/pydata/xarray/issues/4208 MDEyOklzc3VlQ29tbWVudDY1NzEzNTUyMQ== dopplershift 221526 2020-07-11T21:49:36Z 2020-07-11T21:49:54Z CONTRIBUTOR

Does/should any of this also consider #4212 (CuPy)?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support for duck Dask Arrays 653430454
657135369 https://github.com/pydata/xarray/issues/4212#issuecomment-657135369 https://api.github.com/repos/pydata/xarray/issues/4212 MDEyOklzc3VlQ29tbWVudDY1NzEzNTM2OQ== dopplershift 221526 2020-07-11T21:48:17Z 2020-07-11T21:48:17Z CONTRIBUTOR

@jacobtomlinson Any idea how this would play with the work that's been going on for units here? I'm specifically wondering if xarray(pint(cupy)) would/could work.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add cupy support 654135405
618815341 https://github.com/pydata/xarray/pull/3998#issuecomment-618815341 https://api.github.com/repos/pydata/xarray/issues/3998 MDEyOklzc3VlQ29tbWVudDYxODgxNTM0MQ== dopplershift 221526 2020-04-24T05:50:33Z 2020-04-24T05:50:33Z CONTRIBUTOR

Ah, didn't realize bug fixes went in there too. And I thought I could get away without black, but I missed the quote style. All done.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix handling of abbreviated units like msec 605920781
618709181 https://github.com/pydata/xarray/pull/3998#issuecomment-618709181 https://api.github.com/repos/pydata/xarray/issues/3998 MDEyOklzc3VlQ29tbWVudDYxODcwOTE4MQ== dopplershift 221526 2020-04-23T22:44:21Z 2020-04-23T22:44:21Z CONTRIBUTOR

cc @dcamron

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix handling of abbreviated units like msec 605920781
597883721 https://github.com/pydata/xarray/pull/2844#issuecomment-597883721 https://api.github.com/repos/pydata/xarray/issues/2844 MDEyOklzc3VlQ29tbWVudDU5Nzg4MzcyMQ== dopplershift 221526 2020-03-11T21:19:31Z 2020-03-11T21:19:31Z CONTRIBUTOR

Thanks for the info. Based on that, I lean towards attrs.

I think a better rationale, though, would be to formalize the role of encoding in xarray.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Read grid mapping and bounds as coords 424265093
597374274 https://github.com/pydata/xarray/pull/2844#issuecomment-597374274 https://api.github.com/repos/pydata/xarray/issues/2844 MDEyOklzc3VlQ29tbWVudDU5NzM3NDI3NA== dopplershift 221526 2020-03-10T23:47:43Z 2020-03-10T23:47:43Z CONTRIBUTOR

As a downstream user, I just want to be told what to do (assuming encoding is part of the public API for xarray). I'd love not to have to modify our code, but that's not necessarily essential.

So to clarify: is this about whether they should be in one spot or the other? Or is it about having grid_mapping and bounds in both?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Read grid mapping and bounds as coords 424265093
570089112 https://github.com/pydata/xarray/issues/3653#issuecomment-570089112 https://api.github.com/repos/pydata/xarray/issues/3653 MDEyOklzc3VlQ29tbWVudDU3MDA4OTExMg== dopplershift 221526 2020-01-01T22:34:46Z 2020-01-01T22:34:46Z CONTRIBUTOR

This isn't accessing netCDF using opendap, it's directly accessing using HTTP. Did xarray or netCDF gain support for this and I failed to notice?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  "[Errno -90] NetCDF: file not found: b" when opening netCDF from server 543197350
565183733 https://github.com/pydata/xarray/issues/3256#issuecomment-565183733 https://api.github.com/repos/pydata/xarray/issues/3256 MDEyOklzc3VlQ29tbWVudDU2NTE4MzczMw== dopplershift 221526 2019-12-12T20:57:56Z 2019-12-12T20:57:56Z CONTRIBUTOR

Yeah, it's not much shorter, but it's a much easier concept for new users to grasp, IMO.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  .item() on a DataArray with dtype='datetime64[ns]' returns int 484699415
563511116 https://github.com/pydata/xarray/issues/3256#issuecomment-563511116 https://api.github.com/repos/pydata/xarray/issues/3256 MDEyOklzc3VlQ29tbWVudDU2MzUxMTExNg== dopplershift 221526 2019-12-10T01:00:00Z 2019-12-10T01:00:00Z CONTRIBUTOR

Well it'd be nice to have something, because what we have to do right now is:

```python
times[0].values.astype('datetime64[ms]').astype('O')
```

which is an abomination and terrible for teaching.
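
A self-contained version of that conversion (times here is a hypothetical DataArray of datetime64[ns] values):

```python
import numpy as np
import xarray as xr

times = xr.DataArray(
    np.array(["2019-12-10T01:00"], dtype="datetime64[ns]"), dims="time"
)
# Round-trip through millisecond precision so .astype('O') yields a
# datetime.datetime; ns-precision values convert to plain integers instead.
py_dt = times[0].values.astype("datetime64[ms]").astype("O")
print(type(py_dt))  # <class 'datetime.datetime'>
```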

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  .item() on a DataArray with dtype='datetime64[ns]' returns int 484699415
554457858 https://github.com/pydata/xarray/pull/3537#issuecomment-554457858 https://api.github.com/repos/pydata/xarray/issues/3537 MDEyOklzc3VlQ29tbWVudDU1NDQ1Nzg1OA== dopplershift 221526 2019-11-15T17:41:52Z 2019-11-15T19:02:03Z CONTRIBUTOR

IMO, it's always best to release code that you know will work rather than rely on upstream to get something into the next release in order for you not to be broken. I say that both as a downstream user of xarray and a library maintainer.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Numpy 1.18 support 523438384
547592948 https://github.com/pydata/xarray/pull/3111#issuecomment-547592948 https://api.github.com/repos/pydata/xarray/issues/3111 MDEyOklzc3VlQ29tbWVudDU0NzU5Mjk0OA== dopplershift 221526 2019-10-29T19:33:20Z 2019-10-29T19:33:20Z CONTRIBUTOR

I just got it to render fine--I blame general GitHub flakiness around notebooks lately.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  A new example Notebook that plots Discrete Sampling Geometry Data 467746047
538684049 https://github.com/pydata/xarray/pull/3367#issuecomment-538684049 https://api.github.com/repos/pydata/xarray/issues/3367 MDEyOklzc3VlQ29tbWVudDUzODY4NDA0OQ== dopplershift 221526 2019-10-05T20:05:58Z 2019-10-05T20:05:58Z CONTRIBUTOR

All good, thanks.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Remove setting of universal wheels 501730864
537694464 https://github.com/pydata/xarray/pull/3367#issuecomment-537694464 https://api.github.com/repos/pydata/xarray/issues/3367 MDEyOklzc3VlQ29tbWVudDUzNzY5NDQ2NA== dopplershift 221526 2019-10-02T21:44:03Z 2019-10-02T21:44:03Z CONTRIBUTOR

You don’t run into a problem when doing pip install xarray on Python 2. We have run into problems on our CI builds where we install from our own cache on S3, which has a list of wheels in index.html, which we install from using pip install -f <index_url> xarray. When we have an xarray wheel in there with the incorrect name, python 2.7 tries to install from it.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Remove setting of universal wheels 501730864
525441141 https://github.com/pydata/xarray/issues/3257#issuecomment-525441141 https://api.github.com/repos/pydata/xarray/issues/3257 MDEyOklzc3VlQ29tbWVudDUyNTQ0MTE0MQ== dopplershift 221526 2019-08-27T19:06:42Z 2019-08-27T19:06:42Z CONTRIBUTOR

On Travis I have some deploy hooks:

1. Using Travis' built-in PyPI support, it uploads wheels and sdist, only on tags
2. Execute a custom script to commit built docs to GitHub Pages (not RTD). On master builds, this updates dev docs. On a tag, it adds a new directory for that version of the docs.

Travis Config · Doc Deploy Script

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  0.13.0 release 484711431
525405541 https://github.com/pydata/xarray/issues/3257#issuecomment-525405541 https://api.github.com/repos/pydata/xarray/issues/3257 MDEyOklzc3VlQ29tbWVudDUyNTQwNTU0MQ== dopplershift 221526 2019-08-27T17:34:55Z 2019-08-27T17:34:55Z CONTRIBUTOR

The benefit to automation also makes it easier to distribute the workload to other people, helping with project sustainability. On my projects, I find it very nice that I make a new release on GitHub and packages appear on PyPI and the web docs are automatically updated to the new version.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  0.13.0 release 484711431
525404631 https://github.com/pydata/xarray/issues/3268#issuecomment-525404631 https://api.github.com/repos/pydata/xarray/issues/3268 MDEyOklzc3VlQ29tbWVudDUyNTQwNDYzMQ== dopplershift 221526 2019-08-27T17:32:32Z 2019-08-27T17:32:32Z CONTRIBUTOR

I don't mind needing to update our accessor code. My only request is don't have a version that suddenly breaks it so that we only work on the old version or the new version. 😉

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Stateful user-defined accessors 485708282
524400034 https://github.com/pydata/xarray/pull/3247#issuecomment-524400034 https://api.github.com/repos/pydata/xarray/issues/3247 MDEyOklzc3VlQ29tbWVudDUyNDQwMDAzNA== dopplershift 221526 2019-08-23T17:35:49Z 2019-08-23T17:35:49Z CONTRIBUTOR

cc: @jthielen Might make some things in MetPy easier...

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Update filter_by_attrs to use 'variables' instead of 'data_vars' 484243962
523228118 https://github.com/pydata/xarray/pull/2956#issuecomment-523228118 https://api.github.com/repos/pydata/xarray/issues/2956 MDEyOklzc3VlQ29tbWVudDUyMzIyODExOA== dopplershift 221526 2019-08-20T23:02:38Z 2019-08-20T23:02:38Z CONTRIBUTOR

Yeah, pint doesn't do __array_function__ yet. It'd be great to get this in, if only because it lowers the barrier to making sure xarray works with pint in the future.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Picking up #1118: Do not convert subclasses of `ndarray` unless required 443157666
499921424 https://github.com/pydata/xarray/issues/2871#issuecomment-499921424 https://api.github.com/repos/pydata/xarray/issues/2871 MDEyOklzc3VlQ29tbWVudDQ5OTkyMTQyNA== dopplershift 221526 2019-06-07T15:06:29Z 2019-06-07T15:06:29Z CONTRIBUTOR

Just to correct something here, missing_value is no longer considered deprecated in the current version of the standard: http://cfconventions.org/Data/cf-conventions/cf-conventions-1.7/cf-conventions.html

Near the bottom is a specific note about removing the deprecation.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xr.open_dataset(f1).to_netcdf(file2) is not idempotent 429914958
499699528 https://github.com/pydata/xarray/issues/2419#issuecomment-499699528 https://api.github.com/repos/pydata/xarray/issues/2419 MDEyOklzc3VlQ29tbWVudDQ5OTY5OTUyOA== dopplershift 221526 2019-06-06T23:04:11Z 2019-06-06T23:04:11Z CONTRIBUTOR

So I ran into this working with a climate scientist the other day. The use case we had was given some model output that had data like:

```
<xarray.DataArray 'NBP' (time: 1932, lat: 192, lon: 288)>
[106831872 values with dtype=float32]
Coordinates:
  * lat      (lat) float32 -90.0 -89.057594 -88.11518 ... 89.057594 90.0
  * lon      (lon) float32 0.0 1.25 2.5 3.75 5.0 ... 355.0 356.25 357.5 358.75
  * time     (time) object 1850-02-01 00:00:00 ... 2011-01-01 00:00:00
```

we wanted to reshape to turn time into two dimensions ("year", "month"). This was to facilitate looking at the max and min as a function of "month", take the difference, and then average over "year". Is there a way to do this already that I'm not aware of?
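
One way to do that reshape, as a hedged sketch (da is the DataArray above; assumes xarray's MultiIndex set_index/unstack machinery):

```python
# Attach year/month as auxiliary coordinates derived from 'time', then
# move them into a MultiIndex and unstack it into two dimensions.
da = da.assign_coords(year=da["time"].dt.year, month=da["time"].dt.month)
reshaped = da.set_index(time=["year", "month"]).unstack("time")

# Then e.g. the month-wise max/min difference, averaged over years:
result = (reshaped.max("month") - reshaped.min("month")).mean("year")
```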

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Document ways to reshape a DataArray 361237908
497885383 https://github.com/pydata/xarray/pull/2989#issuecomment-497885383 https://api.github.com/repos/pydata/xarray/issues/2989 MDEyOklzc3VlQ29tbWVudDQ5Nzg4NTM4Mw== dopplershift 221526 2019-05-31T23:04:03Z 2019-05-31T23:04:03Z CONTRIBUTOR

Thanks for bringing this to the 🏁 @abrammer !

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add strftime() to datetime accessor with cftimeindex and dask support 448330247
495770271 https://github.com/pydata/xarray/pull/2144#issuecomment-495770271 https://api.github.com/repos/pydata/xarray/issues/2144 MDEyOklzc3VlQ29tbWVudDQ5NTc3MDI3MQ== dopplershift 221526 2019-05-24T19:56:21Z 2019-05-24T19:56:21Z CONTRIBUTOR

So I finally have time to work on this...but if someone has working code to do this instead, I'm totally fine with that going in instead.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add strftime() to datetime accessor 323823894
484758877 https://github.com/pydata/xarray/issues/2697#issuecomment-484758877 https://api.github.com/repos/pydata/xarray/issues/2697 MDEyOklzc3VlQ29tbWVudDQ4NDc1ODg3Nw== dopplershift 221526 2019-04-19T03:47:37Z 2019-04-19T03:47:37Z CONTRIBUTOR

I haven't had any time to start on this (and I'm a few more weeks out), so feel free to take a cut. I'm not sure what @shoyer or @rabernat have in mind for API.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  read ncml files to create multifile datasets 401874795
483799967 https://github.com/pydata/xarray/issues/525#issuecomment-483799967 https://api.github.com/repos/pydata/xarray/issues/525 MDEyOklzc3VlQ29tbWVudDQ4Mzc5OTk2Nw== dopplershift 221526 2019-04-16T18:54:37Z 2019-04-16T18:54:37Z CONTRIBUTOR

@shoyer I agree with that wrapping order. I think I'd also be in favor of starting with an experiment to disable coercing to arrays.

@nbren12 The non-commutative multiplication is a consequence of operator dispatch in Python, and the reason why we want __array_function__ from numpy. Your first example dispatches to dask.array.__mul__, which doesn't know anything about pint and doesn't know how to compose its operations because there are no hooks--the pint array just gets coerced to a numpy array. The second goes to pint.Quantity.__mul__, which assumes it can wrap the dask.array (because of duck typing) and seems to succeed in doing so.
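
A minimal sketch of that dispatch order (illustrative only; the exact outcome depends on the installed pint/dask versions):

```python
import dask.array as da
import numpy as np
import pint

ureg = pint.UnitRegistry()
d = da.ones(3)
q = ureg.Quantity(np.ones(3), "meter")

# The left operand's __mul__ is tried first:
left = d * q   # dask.array.__mul__ -> the Quantity gets coerced, units lost
right = q * d  # pint.Quantity.__mul__ -> wraps the dask array, units kept
```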

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  support for units 100295585
454179986 https://github.com/pydata/xarray/issues/2656#issuecomment-454179986 https://api.github.com/repos/pydata/xarray/issues/2656 MDEyOklzc3VlQ29tbWVudDQ1NDE3OTk4Ng== dopplershift 221526 2019-01-14T22:07:19Z 2019-01-14T22:07:19Z CONTRIBUTOR

I'm not aware of any standard out there for JSON representation of netCDF, but I know it's been at least (briefly) discussed. @WardF, anything out there you're aware of?

Another spelling of this could be ds.to_dict(header_only=True), which I only suggest to mirror ncdump -h.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  dataset info in .json format 396285440
443908495 https://github.com/pydata/xarray/issues/2583#issuecomment-443908495 https://api.github.com/repos/pydata/xarray/issues/2583 MDEyOklzc3VlQ29tbWVudDQ0MzkwODQ5NQ== dopplershift 221526 2018-12-03T23:20:29Z 2018-12-03T23:20:29Z CONTRIBUTOR

@lesserwhirls @dennisHeimbigner Is there any reason to expect a difference between the downloaded file and the opendap view on TDS?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  decode_cf not scaling and off-setting correctly 386268842
432747410 https://github.com/pydata/xarray/issues/2503#issuecomment-432747410 https://api.github.com/repos/pydata/xarray/issues/2503 MDEyOklzc3VlQ29tbWVudDQzMjc0NzQxMA== dopplershift 221526 2018-10-24T17:13:14Z 2018-10-24T17:13:14Z CONTRIBUTOR

Oh, I didn't even catch that the original was on defaults.

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 1,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Problems with distributed and opendap netCDF endpoint 373121666
432744441 https://github.com/pydata/xarray/issues/2503#issuecomment-432744441 https://api.github.com/repos/pydata/xarray/issues/2503 MDEyOklzc3VlQ29tbWVudDQzMjc0NDQ0MQ== dopplershift 221526 2018-10-24T17:06:01Z 2018-10-24T17:06:01Z CONTRIBUTOR

That version has the fix for the issue.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Problems with distributed and opendap netCDF endpoint 373121666
432739449 https://github.com/pydata/xarray/issues/2503#issuecomment-432739449 https://api.github.com/repos/pydata/xarray/issues/2503 MDEyOklzc3VlQ29tbWVudDQzMjczOTQ0OQ== dopplershift 221526 2018-10-24T16:54:05Z 2018-10-24T16:54:05Z CONTRIBUTOR

The original version of libnetcdf in @rabernat 's environment definitely had the opendap timeout issue. Not sure if that's the root cause of the problem, or not, but it's suspect.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Problems with distributed and opendap netCDF endpoint 373121666
432422763 https://github.com/pydata/xarray/issues/2503#issuecomment-432422763 https://api.github.com/repos/pydata/xarray/issues/2503 MDEyOklzc3VlQ29tbWVudDQzMjQyMjc2Mw== dopplershift 221526 2018-10-23T21:16:05Z 2018-10-23T21:16:16Z CONTRIBUTOR

@lesserwhirls That's an interesting idea. (@rsignell-usgs That's the one.)

@rabernat What version of the conda-forge libnetcdf package is deployed wherever you're running?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Problems with distributed and opendap netCDF endpoint 373121666
432370887 https://github.com/pydata/xarray/issues/2503#issuecomment-432370887 https://api.github.com/repos/pydata/xarray/issues/2503 MDEyOklzc3VlQ29tbWVudDQzMjM3MDg4Nw== dopplershift 221526 2018-10-23T18:43:23Z 2018-10-23T18:43:23Z CONTRIBUTOR

Just so I'm clear on how the workflow looks:

1. Open dataset with NetCDF/OPeNDAP
2. Serialize NetCDFDataStore (pickle? netcdf file?)
3. Ship to Dask workers
4. Reconstitute NetCDFDataStore

Certainly does seem like there's something stale in what the remote workers are getting. Confused why it works for the others, though.

I can prioritize this a bit and dig in to see what I can figure out--though I'm teaching through tomorrow. May be able to dig into this while at ECMWF.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Problems with distributed and opendap netCDF endpoint 373121666
424100432 https://github.com/pydata/xarray/pull/2144#issuecomment-424100432 https://api.github.com/repos/pydata/xarray/issues/2144 MDEyOklzc3VlQ29tbWVudDQyNDEwMDQzMg== dopplershift 221526 2018-09-24T19:44:21Z 2018-09-24T19:44:21Z CONTRIBUTOR

Just haven't had the cycles to bring this to the finish line.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add strftime() to datetime accessor 323823894
420470343 https://github.com/pydata/xarray/pull/2398#issuecomment-420470343 https://api.github.com/repos/pydata/xarray/issues/2398 MDEyOklzc3VlQ29tbWVudDQyMDQ3MDM0Mw== dopplershift 221526 2018-09-12T00:25:54Z 2018-09-12T00:25:54Z CONTRIBUTOR

Why would you sort the array? Aren't you taking differences of values and dividing by differences between the matching coordinates?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  implement Gradient 356698348
419225007 https://github.com/pydata/xarray/issues/2368#issuecomment-419225007 https://api.github.com/repos/pydata/xarray/issues/2368 MDEyOklzc3VlQ29tbWVudDQxOTIyNTAwNw== dopplershift 221526 2018-09-06T20:10:24Z 2018-09-06T20:10:24Z CONTRIBUTOR

That sounds reasonable to me. I don't necessarily expect all of the xarray goodness to work with those files, but I do expect them to open without error.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Let's list all the netCDF files that xarray can't open 350899839
419176628 https://github.com/pydata/xarray/issues/2368#issuecomment-419176628 https://api.github.com/repos/pydata/xarray/issues/2368 MDEyOklzc3VlQ29tbWVudDQxOTE3NjYyOA== dopplershift 221526 2018-09-06T17:28:14Z 2018-09-06T17:28:14Z CONTRIBUTOR

@rabernat While I agree that they're (somewhat) confusing files, I think you're missing two things:

  1. netCDF doesn't enforce naming on dimensions and variables. Full stop. The only naming netCDF will care about is any conflict with an internal reserved name (I'm not sure that those even exist for anything besides attributes.) IMO that's a good thing, but more importantly it's not the netCDF library's job to enforce any of it.

  2. CF is an attribute convention. This also means that the conventions say absolutely nothing about naming of variables and dimensions.

IMO, xarray is being overly pedantic here. XArray states that it adopts the Common Data Model (CDM); netCDF-java and the CDM were the tools used to generate the failing examples above.

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Let's list all the netCDF files that xarray can't open 350899839
413723632 https://github.com/pydata/xarray/issues/988#issuecomment-413723632 https://api.github.com/repos/pydata/xarray/issues/988 MDEyOklzc3VlQ29tbWVudDQxMzcyMzYzMg== dopplershift 221526 2018-08-17T00:33:47Z 2018-08-17T00:33:47Z CONTRIBUTOR

I see your argument, but here's my problem. In this future where things work (assuming that NEP is accepted), and I want distributed computing with dask, units, and xarray, I have: xarray wrapping a pint array wrapping a dask array. I like composition, but that level of wrapping...feels wrong to me for some reason. Is there some elegance I'm missing here? (Other than array-like things playing together.)

And then I still need hooks in xarray so that when pint does a calculation, it can update the metadata in xarray; so it feels like we're back here anyway.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Hooks for custom attribute handling in xarray operations 173612265
413360749 https://github.com/pydata/xarray/issues/988#issuecomment-413360749 https://api.github.com/repos/pydata/xarray/issues/988 MDEyOklzc3VlQ29tbWVudDQxMzM2MDc0OQ== dopplershift 221526 2018-08-15T22:36:21Z 2018-08-15T22:36:21Z CONTRIBUTOR

@shoyer I know elsewhere you said you weren't sure about this idea any more, but personally I'd like to push forward on this idea. Do you have problems with this approach we need to resolve? Any chance you have some preliminary code?

I think this is the right way to solve the unit issue in XArray, since at its core unit handling is mostly a metadata operation.

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Hooks for custom attribute handling in xarray operations 173612265
413281638 https://github.com/pydata/xarray/issues/2368#issuecomment-413281638 https://api.github.com/repos/pydata/xarray/issues/2368 MDEyOklzc3VlQ29tbWVudDQxMzI4MTYzOA== dopplershift 221526 2018-08-15T17:58:12Z 2018-08-15T17:58:12Z CONTRIBUTOR

Here's a sample CDL for a file:

```
netcdf temp {
dimensions:
    profile = 1 ;
    station = 1 ;
    isobaric = 31 ;
    station_name_strlen = 10 ;
    station_description_strlen = 33 ;
variables:
    float isobaric(station, profile, isobaric) ;
        isobaric:standard_name = "isobaric" ;
        isobaric:long_name = "isobaric" ;
        isobaric:units = "Pa" ;
        isobaric:positive = "down" ;
        isobaric:axis = "Z" ;
    float Geopotential_height_isobaric(station, profile, isobaric) ;
        Geopotential_height_isobaric:standard_name = "Geopotential_height_isobaric" ;
        Geopotential_height_isobaric:long_name = "Geopotential_height_isobaric" ;
        Geopotential_height_isobaric:units = "gpm" ;
        Geopotential_height_isobaric:coordinates = "time longitude latitude isobaric" ;
    char station_name(station, station_name_strlen) ;
        station_name:long_name = "station name" ;
        station_name:cf_role = "timeseries_id" ;
    char station_description(station, station_description_strlen) ;
        station_description:long_name = "station description" ;
        station_description:standard_name = "platform_name" ;
    double latitude(station) ;
        latitude:units = "degrees_north" ;
        latitude:long_name = "profile latitude" ;
    double longitude(station) ;
        longitude:units = "degrees_east" ;
        longitude:long_name = "profile longitude" ;
    double time(station, profile) ;
        time:units = "Hour since 2018-08-15T12:00:00Z" ;
        time:calendar = "proleptic_gregorian" ;
        time:standard_name = "time" ;
        time:long_name = "GRIB forecast or observation time" ;

// global attributes:
    :Conventions = "CDM-Extended-CF" ;
    :history = "Written by CFPointWriter" ;
    :title = "Extract Points data from Grid file /data/ldm/pub/native/grid/NCEP/GFS/Global_0p5deg/GFS_Global_0p5deg_20180815_1200.grib2.ncx3#LatLon_361X720-p25S-180p0E" ;
    :featureType = "timeSeriesProfile" ;
    :time_coverage_start = "2018-08-15T18:00:00Z" ;
    :time_coverage_end = "2018-08-15T18:00:00Z" ;
    :geospatial_lat_min = 39.9995 ;
    :geospatial_lat_max = 40.0005 ;
    :geospatial_lon_min = -105.0005 ;
    :geospatial_lon_max = -104.9995 ;
}
```

which gives:

```pytb
---------------------------------------------------------------------------
MissingDimensionsError                    Traceback (most recent call last)
<ipython-input-10-d6f8d8651b9f> in <module>()
      4 query.add_lonlat().accept('netcdf4')
      5 nc = ncss.get_data(query)
----> 6 xr.open_dataset(NetCDF4DataStore(nc))

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/backends/api.py in open_dataset(filename_or_obj, group, decode_cf, mask_and_scale, decode_times, autoclose, concat_characters, decode_coords, engine, chunks, lock, cache, drop_variables, backend_kwargs)
    352             store = backends.ScipyDataStore(filename_or_obj)
    353
--> 354     return maybe_decode_store(store)
    355
    356

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/backends/api.py in maybe_decode_store(store, lock)
    256             store, mask_and_scale=mask_and_scale, decode_times=decode_times,
    257             concat_characters=concat_characters, decode_coords=decode_coords,
--> 258             drop_variables=drop_variables)
    259
    260         _protect_dataset_variables_inplace(ds, cache)

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/conventions.py in decode_cf(obj, concat_characters, mask_and_scale, decode_times, decode_coords, drop_variables)
    428         vars, attrs, concat_characters, mask_and_scale, decode_times,
    429         decode_coords, drop_variables=drop_variables)
--> 430     ds = Dataset(vars, attrs=attrs)
    431     ds = ds.set_coords(coord_names.union(extra_coords).intersection(vars))
    432     ds._file_obj = file_obj

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/dataset.py in __init__(self, data_vars, coords, attrs, compat)
    363             coords = {}
    364         if data_vars is not None or coords is not None:
--> 365             self._set_init_vars_and_dims(data_vars, coords, compat)
    366         if attrs is not None:
    367             self.attrs = attrs

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/dataset.py in _set_init_vars_and_dims(self, data_vars, coords, compat)
    381
    382         variables, coord_names, dims = merge_data_and_coords(
--> 383             data_vars, coords, compat=compat)
    384
    385         self._variables = variables

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/merge.py in merge_data_and_coords(data, coords, compat, join)
    363     indexes = dict(extract_indexes(coords))
    364     return merge_core(objs, compat, join, explicit_coords=explicit_coords,
--> 365                       indexes=indexes)
    366
    367

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/merge.py in merge_core(objs, compat, join, priority_arg, explicit_coords, indexes)
    433     coerced = coerce_pandas_values(objs)
    434     aligned = deep_align(coerced, join=join, copy=False, indexes=indexes)
--> 435     expanded = expand_variable_dicts(aligned)
    436
    437     coord_names, noncoord_names = determine_coords(coerced)

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/merge.py in expand_variable_dicts(list_of_variable_dicts)
    209                 var_dicts.append(coords)
    210
--> 211             var = as_variable(var, name=name)
    212             sanitized_vars[name] = var
    213

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/variable.py in as_variable(obj, name)
    112                 'dimensions %r. xarray disallows such variables because they '
    113                 'conflict with the coordinates used to label '
--> 114                 'dimensions.' % (name, obj.dims))
    115             obj = obj.to_index_variable()
    116

MissingDimensionsError: 'isobaric' has more than 1-dimension and the same name as one of its dimensions ('station', 'profile', 'isobaric'). xarray disallows such variables because they conflict with the coordinates used to label dimensions.
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Let's list all the netCDF files that xarray can't open 350899839
413279893 https://github.com/pydata/xarray/issues/2368#issuecomment-413279893 https://api.github.com/repos/pydata/xarray/issues/2368 MDEyOklzc3VlQ29tbWVudDQxMzI3OTg5Mw== dopplershift 221526 2018-08-15T17:52:36Z 2018-08-15T17:52:36Z CONTRIBUTOR

```python
import xarray as xr
xr.open_dataset('http://thredds.ucar.edu/thredds/dodsC/grib/NCEP/GFS/Global_0p5deg/TwoD')
```

```pytb
---------------------------------------------------------------------------
MissingDimensionsError                    Traceback (most recent call last)
<ipython-input-6-e2a87d803d99> in <module>()
----> 1 xr.open_dataset(gfs_cat.datasets[0].access_urls['OPENDAP'])

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/backends/api.py in open_dataset(filename_or_obj, group, decode_cf, mask_and_scale, decode_times, autoclose, concat_characters, decode_coords, engine, chunks, lock, cache, drop_variables, backend_kwargs)
    344             lock = _default_lock(filename_or_obj, engine)
    345         with close_on_error(store):
--> 346             return maybe_decode_store(store, lock)
    347     else:
    348         if engine is not None and engine != 'scipy':

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/backends/api.py in maybe_decode_store(store, lock)
    256         store, mask_and_scale=mask_and_scale, decode_times=decode_times,
    257         concat_characters=concat_characters, decode_coords=decode_coords,
--> 258         drop_variables=drop_variables)
    259 
    260     _protect_dataset_variables_inplace(ds, cache)

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/conventions.py in decode_cf(obj, concat_characters, mask_and_scale, decode_times, decode_coords, drop_variables)
    428         vars, attrs, concat_characters, mask_and_scale, decode_times,
    429         decode_coords, drop_variables=drop_variables)
--> 430     ds = Dataset(vars, attrs=attrs)
    431     ds = ds.set_coords(coord_names.union(extra_coords).intersection(vars))
    432     ds._file_obj = file_obj

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/dataset.py in __init__(self, data_vars, coords, attrs, compat)
    363             coords = {}
    364         if data_vars is not None or coords is not None:
--> 365             self._set_init_vars_and_dims(data_vars, coords, compat)
    366         if attrs is not None:
    367             self.attrs = attrs

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/dataset.py in _set_init_vars_and_dims(self, data_vars, coords, compat)
    381 
    382         variables, coord_names, dims = merge_data_and_coords(
--> 383             data_vars, coords, compat=compat)
    384 
    385         self._variables = variables

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/merge.py in merge_data_and_coords(data, coords, compat, join)
    363     indexes = dict(extract_indexes(coords))
    364     return merge_core(objs, compat, join, explicit_coords=explicit_coords,
--> 365                       indexes=indexes)
    366 
    367 

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/merge.py in merge_core(objs, compat, join, priority_arg, explicit_coords, indexes)
    433     coerced = coerce_pandas_values(objs)
    434     aligned = deep_align(coerced, join=join, copy=False, indexes=indexes)
--> 435     expanded = expand_variable_dicts(aligned)
    436 
    437     coord_names, noncoord_names = determine_coords(coerced)

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/merge.py in expand_variable_dicts(list_of_variable_dicts)
    209                 var_dicts.append(coords)
    210 
--> 211             var = as_variable(var, name=name)
    212             sanitized_vars[name] = var
    213 

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/variable.py in as_variable(obj, name)
    112                 'dimensions %r. xarray disallows such variables because they '
    113                 'conflict with the coordinates used to label '
--> 114                 'dimensions.' % (name, obj.dims))
    115             obj = obj.to_index_variable()
    116 

MissingDimensionsError: 'time' has more than 1-dimension and the same name as one of its dimensions ('reftime', 'time'). xarray disallows such variables because they conflict with the coordinates used to label dimensions.
```
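One stopgap (a sketch, not a fix): since the failure comes from the 2-D `time` variable clashing with its own dimension name, the rest of the dataset can be opened by dropping that variable, at the cost of losing its values:

```python
import xarray as xr

# Hypothetical workaround: skip the offending 2-D 'time' variable so the
# remaining variables load; the time coordinate values are lost.
ds = xr.open_dataset(
    'http://thredds.ucar.edu/thredds/dodsC/grib/NCEP/GFS/Global_0p5deg/TwoD',
    drop_variables='time',
)
```

That obviously throws away the valid-time coordinate, so it's a way to look at the rest of the file, not a solution.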

{
    "total_count": 5,
    "+1": 5,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Let's list all the netCDF files that xarray can't open 350899839
410782982 https://github.com/pydata/xarray/issues/2304#issuecomment-410782982 https://api.github.com/repos/pydata/xarray/issues/2304 MDEyOklzc3VlQ29tbWVudDQxMDc4Mjk4Mg== dopplershift 221526 2018-08-06T17:17:38Z 2018-08-06T17:17:38Z CONTRIBUTOR

Ah, ok, not scaling per se (i.e. `* 0.01`), but a second round of value conversion.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  float32 instead of float64 when decoding int16 with scale_factor netcdf var using xarray  343659822
410779271 https://github.com/pydata/xarray/issues/2304#issuecomment-410779271 https://api.github.com/repos/pydata/xarray/issues/2304 MDEyOklzc3VlQ29tbWVudDQxMDc3OTI3MQ== dopplershift 221526 2018-08-06T17:06:22Z 2018-08-06T17:06:22Z CONTRIBUTOR

I'm not following why the data are scaled twice.

Your point about the rounding being different is well-taken, though.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  float32 instead of float64 when decoding int16 with scale_factor netcdf var using xarray  343659822
410774955 https://github.com/pydata/xarray/issues/2304#issuecomment-410774955 https://api.github.com/repos/pydata/xarray/issues/2304 MDEyOklzc3VlQ29tbWVudDQxMDc3NDk1NQ== dopplershift 221526 2018-08-06T16:52:42Z 2018-08-06T16:52:53Z CONTRIBUTOR

@shoyer But since it's a downstream calculation issue, and does not impact the actual precision of what's being read from the file, what's wrong with saying "Use `data.astype(np.float64)`"? It's completely identical to doing it internally to xarray.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  float32 instead of float64 when decoding int16 with scale_factor netcdf var using xarray  343659822
410769706 https://github.com/pydata/xarray/issues/2304#issuecomment-410769706 https://api.github.com/repos/pydata/xarray/issues/2304 MDEyOklzc3VlQ29tbWVudDQxMDc2OTcwNg== dopplershift 221526 2018-08-06T16:34:44Z 2018-08-06T16:36:16Z CONTRIBUTOR

A float32 value has 24 bits of precision in the significand, which is more than enough to store the 16 bits in the original data; the exponent (8 bits) will more or less take care of the `* 0.01`:

```python
>>> import numpy as np
>>> np.float32(2194 * 0.01)
21.94
```

What you're seeing is an artifact of printing out the values. I have no idea why something is printing a float32 (which has only 7 significant decimal digits) out to 17 digits; even float64 only has 16 digits (which is overkill for this application).

The difference in subtracting the 32- and 64-bit values above is in the 8th decimal place, which is beyond the actual precision of the data; what you've just demonstrated is the difference in precision between 32-bit and 64-bit values, but it has nothing whatsoever to do with the data.

If you're really worried about precision round-off for things like std. dev, you should probably calculate it using the raw integer values and scale afterwards. (I don't actually think this is necessary, though.)
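To make that concrete, here's a minimal check using the 2194 sample value from above:

```python
import numpy as np

scaled32 = np.float32(2194 * 0.01)   # decode in float32
scaled64 = np.float64(2194 * 0.01)   # decode in float64
print(scaled32)                      # 21.94
# The float32 round-off is ~4e-7, orders of magnitude below the 0.01
# precision of the packed int16 data itself.
print(abs(np.float64(scaled32) - scaled64))
```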

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  float32 instead of float64 when decoding int16 with scale_factor netcdf var using xarray  343659822
391783509 https://github.com/pydata/xarray/issues/2176#issuecomment-391783509 https://api.github.com/repos/pydata/xarray/issues/2176 MDEyOklzc3VlQ29tbWVudDM5MTc4MzUwOQ== dopplershift 221526 2018-05-24T16:47:20Z 2018-05-24T16:47:20Z CONTRIBUTOR

My problem with custom classes (subclasses or composition) is that you will never get those from a Dataset returned by open_dataset(). That’s a problem for me when I’m trying to make things work for users who want to do simple scripted data analysis. I’m much more interested in #1938 (or #1118).

I’m also looking at moving from pint to unyt, which is yt’s unit support, brought into a standalone package. Beyond some performance benefits (I’ve heard), it has the benefit of being an ndarray subclass, which means asanyarray will leave it alone. That would seem to make it easier to cram into a DataArray.
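A minimal illustration of why the subclass matters, assuming unyt is installed:

```python
import numpy as np
from unyt import unyt_array  # assumes unyt is available

q = unyt_array([1.0, 2.0, 3.0], 'm')

# asanyarray passes ndarray subclasses through untouched, so the units
# survive; asarray coerces to a plain ndarray and strips them.
print(type(np.asanyarray(q)))  # <class 'unyt.array.unyt_array'>
print(type(np.asarray(q)))     # <class 'numpy.ndarray'>
```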

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Advice on unit-aware arithmetic 325810810
390345890 https://github.com/pydata/xarray/pull/2163#issuecomment-390345890 https://api.github.com/repos/pydata/xarray/issues/2163 MDEyOklzc3VlQ29tbWVudDM5MDM0NTg5MA== dopplershift 221526 2018-05-18T22:10:36Z 2018-05-18T22:10:36Z CONTRIBUTOR

Versioneer has worked great for us. Cutting a release is triggered just by making a new release on GitHub.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Versioneer 324544072
389730062 https://github.com/pydata/xarray/pull/2144#issuecomment-389730062 https://api.github.com/repos/pydata/xarray/issues/2144 MDEyOklzc3VlQ29tbWVudDM4OTczMDA2Mg== dopplershift 221526 2018-05-17T03:04:56Z 2018-05-17T03:04:56Z CONTRIBUTOR

Since this can wait for 0.10.5, I can go slowly and get this right.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add strftime() to datetime accessor 323823894
389699324 https://github.com/pydata/xarray/pull/2144#issuecomment-389699324 https://api.github.com/repos/pydata/xarray/issues/2144 MDEyOklzc3VlQ29tbWVudDM4OTY5OTMyNA== dopplershift 221526 2018-05-16T23:40:32Z 2018-05-16T23:40:32Z CONTRIBUTOR

Any chance this makes it into 0.10.4?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add strftime() to datetime accessor 323823894
388440871 https://github.com/pydata/xarray/pull/817#issuecomment-388440871 https://api.github.com/repos/pydata/xarray/issues/817 MDEyOklzc3VlQ29tbWVudDM4ODQ0MDg3MQ== dopplershift 221526 2018-05-11T18:04:18Z 2018-05-11T18:04:18Z CONTRIBUTOR

Regarding the testing issue, another option is to use something like vcrpy to record and play back HTTP responses for OPeNDAP requests. I've had good luck with that for Siphon.
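A rough sketch of what that could look like (the cassette path and URL here are made up, and this only intercepts backends that fetch over Python HTTP, e.g. pydap):

```python
import vcr
import xarray as xr

# The first run records the HTTP exchange into the cassette file; later
# runs replay it, so the test no longer needs a live server.
@vcr.use_cassette('tests/fixtures/opendap_dataset.yaml')
def test_open_remote_dataset():
    ds = xr.open_dataset('http://example.com/dodsC/some/dataset',
                         engine='pydap')
    assert 'time' in ds.dims
```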

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  modified: xarray/backends/api.py 146079798
382906786 https://github.com/pydata/xarray/pull/2016#issuecomment-382906786 https://api.github.com/repos/pydata/xarray/issues/2016 MDEyOklzc3VlQ29tbWVudDM4MjkwNjc4Ng== dopplershift 221526 2018-04-19T23:04:25Z 2018-04-19T23:04:25Z CONTRIBUTOR

@shoyer Happy to put in a PR to address...provided you can tell me what that should actually look like. 😁

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Allow _FillValue and missing_value to differ (Fixes #1749) 308768432
377049291 https://github.com/pydata/xarray/pull/2016#issuecomment-377049291 https://api.github.com/repos/pydata/xarray/issues/2016 MDEyOklzc3VlQ29tbWVudDM3NzA0OTI5MQ== dopplershift 221526 2018-03-28T21:50:46Z 2018-03-28T21:50:46Z CONTRIBUTOR

Reworked due to corner case with PyNio on Python 2.7. Final solution looks simpler to me.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Allow _FillValue and missing_value to differ (Fixes #1749) 308768432
377030861 https://github.com/pydata/xarray/pull/2016#issuecomment-377030861 https://api.github.com/repos/pydata/xarray/issues/2016 MDEyOklzc3VlQ29tbWVudDM3NzAzMDg2MQ== dopplershift 221526 2018-03-28T20:47:34Z 2018-03-28T20:47:34Z CONTRIBUTOR

Fixed flake8 (unnecessary import)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Allow _FillValue and missing_value to differ (Fixes #1749) 308768432
376993012 https://github.com/pydata/xarray/pull/2016#issuecomment-376993012 https://api.github.com/repos/pydata/xarray/issues/2016 MDEyOklzc3VlQ29tbWVudDM3Njk5MzAxMg== dopplershift 221526 2018-03-28T18:41:36Z 2018-03-28T18:41:36Z CONTRIBUTOR

Rebased on current master.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Allow _FillValue and missing_value to differ (Fixes #1749) 308768432
376352184 https://github.com/pydata/xarray/pull/2016#issuecomment-376352184 https://api.github.com/repos/pydata/xarray/issues/2016 MDEyOklzc3VlQ29tbWVudDM3NjM1MjE4NA== dopplershift 221526 2018-03-27T00:08:47Z 2018-03-27T00:08:47Z CONTRIBUTOR

Test failures look unrelated--they also happen to me locally. Installing SciPy 1.0.0 locally fixes them, so I'm guessing they're caused by SciPy 1.0.1.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Allow _FillValue and missing_value to differ (Fixes #1749) 308768432
374351614 https://github.com/pydata/xarray/pull/1899#issuecomment-374351614 https://api.github.com/repos/pydata/xarray/issues/1899 MDEyOklzc3VlQ29tbWVudDM3NDM1MTYxNA== dopplershift 221526 2018-03-19T20:01:29Z 2018-03-19T20:01:29Z CONTRIBUTOR

So did this remove/rename LazilyIndexedArray in 0.10.2? Because I'm getting an AttributeError in the custom xarray backend I wrote: https://github.com/Unidata/siphon/blob/master/siphon/cdmr/xarray_support.py

I don't mind updating, but I wanted to make sure this was intentional.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Vectorized lazy indexing 295838143
371888626 https://github.com/pydata/xarray/issues/1976#issuecomment-371888626 https://api.github.com/repos/pydata/xarray/issues/1976 MDEyOklzc3VlQ29tbWVudDM3MTg4ODYyNg== dopplershift 221526 2018-03-09T17:45:35Z 2018-03-09T17:45:35Z CONTRIBUTOR

Somehow managed to forget about an issue I was previously involved with. Lovely. Closing in favor of that issue.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  What's wrong with "conflicting" _FillValue and missing_value? 303727896
371681303 https://github.com/pydata/xarray/pull/1962#issuecomment-371681303 https://api.github.com/repos/pydata/xarray/issues/1962 MDEyOklzc3VlQ29tbWVudDM3MTY4MTMwMw== dopplershift 221526 2018-03-09T01:20:03Z 2018-03-09T01:20:03Z CONTRIBUTOR

Right. But such hooks would be sufficient to properly maintain the units attribute on a DataArray and check whether math made sense. This could use pint under the covers.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support __array_ufunc__ for xarray objects. 302153432
371680160 https://github.com/pydata/xarray/pull/1962#issuecomment-371680160 https://api.github.com/repos/pydata/xarray/issues/1962 MDEyOklzc3VlQ29tbWVudDM3MTY4MDE2MA== dopplershift 221526 2018-03-09T01:13:16Z 2018-03-09T01:13:16Z CONTRIBUTOR

At this point I'd be happy to have hooks that let me intercept/wrap ufunc operations, though I guess that's what #1938 is supporting in a more systematic way.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support __array_ufunc__ for xarray objects. 302153432
371650629 https://github.com/pydata/xarray/pull/1962#issuecomment-371650629 https://api.github.com/repos/pydata/xarray/issues/1962 MDEyOklzc3VlQ29tbWVudDM3MTY1MDYyOQ== dopplershift 221526 2018-03-08T22:43:26Z 2018-03-08T22:43:26Z CONTRIBUTOR

This looks awesome. Thoughts on where you think units fits in here?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support __array_ufunc__ for xarray objects. 302153432
368171446 https://github.com/pydata/xarray/issues/1935#issuecomment-368171446 https://api.github.com/repos/pydata/xarray/issues/1935 MDEyOklzc3VlQ29tbWVudDM2ODE3MTQ0Ng== dopplershift 221526 2018-02-23T23:43:07Z 2018-02-23T23:43:07Z CONTRIBUTOR

@tlechauve One way or another, I'm sure @lesserwhirls would be interested to hear your thoughts on THREDDS.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Not compatible with PyPy and dask.array. 299346082
367778778 https://github.com/pydata/xarray/pull/1924#issuecomment-367778778 https://api.github.com/repos/pydata/xarray/issues/1924 MDEyOklzc3VlQ29tbWVudDM2Nzc3ODc3OA== dopplershift 221526 2018-02-22T18:40:25Z 2018-02-22T18:40:25Z CONTRIBUTOR

I don't have much preference between the two--I picked one and it worked for me. I just fixed up the initial import cleanups by hand and didn't find the need for a tool to do so.

In my experience, having formatting errors hold up a PR hasn't been a problem. IMO, import ordering is a more important style issue than many of the things that flake8 catches.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  isort 298437967
367771955 https://github.com/pydata/xarray/pull/1924#issuecomment-367771955 https://api.github.com/repos/pydata/xarray/issues/1924 MDEyOklzc3VlQ29tbWVudDM2Nzc3MTk1NQ== dopplershift 221526 2018-02-22T18:17:27Z 2018-02-22T18:17:27Z CONTRIBUTOR

I recommend flake8-import-order. If you install that plugin, then flake8 will enforce import ordering and grouping.
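For example, a hypothetical setup.cfg stanza (the style and package name values are illustrative):

```ini
# setup.cfg — enables flake8-import-order checks once the plugin is installed
[flake8]
application-import-names = xarray
import-order-style = google
```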

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  isort 298437967
348634874 https://github.com/pydata/xarray/issues/1749#issuecomment-348634874 https://api.github.com/repos/pydata/xarray/issues/1749 MDEyOklzc3VlQ29tbWVudDM0ODYzNDg3NA== dopplershift 221526 2017-12-01T22:48:58Z 2017-12-01T22:48:58Z CONTRIBUTOR

Given the definitions of these two, as I understand them, I think it makes sense to mask both values.
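A minimal numpy sketch of that behavior (the sentinel values here are hypothetical):

```python
import numpy as np

data = np.array([276.4, -9999.0, 277.1, -32767.0])
# Treat both CF attributes as fill: values matching either
# missing_value (-9999) or _FillValue (-32767) become NaN.
for sentinel in (-9999.0, -32767.0):
    data = np.where(data == sentinel, np.nan, data)
print(data)  # [276.4    nan  277.1    nan]
```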

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  _FillValue and missing_value not allowed for the same variable 278122300
