issue_comments
123 rows where user = 221526 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
1467015318 | https://github.com/pydata/xarray/issues/7385#issuecomment-1467015318 | https://api.github.com/repos/pydata/xarray/issues/7385 | IC_kwDOAMm_X85XcOCW | dopplershift 221526 | 2023-03-13T21:51:19Z | 2023-03-13T21:51:19Z | CONTRIBUTOR | @dcherian Is this behavior (filling with `NaN`) intended? I can understand how xarray's data model yields this behavior, but in that case it might be good to improve the docs for `broadcast`. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Unexpected NaNs in broadcast 1499473190 | |
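The broadcast-fill behavior the comment above asks about can be reproduced in a few lines. A minimal sketch (assuming a recent xarray is installed; the arrays are illustrative, not from the issue): `xr.broadcast` aligns inputs with an outer join, so coordinate labels missing from one input are filled with `NaN`.

```python
import numpy as np
import xarray as xr

# Two arrays sharing the "x" dimension but with only partially
# overlapping coordinate labels.
a = xr.DataArray([1.0, 2.0], dims="x", coords={"x": [0, 1]})
b = xr.DataArray([3.0, 4.0], dims="x", coords={"x": [1, 2]})

# broadcast() aligns with an outer join: both results get x=[0, 1, 2],
# and positions absent from an input are filled with NaN.
a2, b2 = xr.broadcast(a, b)
print(a2.values)  # [ 1.  2. nan]
print(b2.values)  # [nan  3.  4.]
```

This is the behavior the issue found surprising: no arithmetic happened, yet `NaN` values appeared purely from label alignment.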
1448430261 | https://github.com/pydata/xarray/issues/7525#issuecomment-1448430261 | https://api.github.com/repos/pydata/xarray/issues/7525 | IC_kwDOAMm_X85WVUq1 | dopplershift 221526 | 2023-02-28T15:57:16Z | 2023-02-28T15:57:16Z | CONTRIBUTOR | @ethanrd @haileyajohnson @tdrwenski any thoughts on the above question? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
contiguous time axis 1583216781 | |
1338290177 | https://github.com/pydata/xarray/issues/7350#issuecomment-1338290177 | https://api.github.com/repos/pydata/xarray/issues/7350 | IC_kwDOAMm_X85PxLAB | dopplershift 221526 | 2022-12-05T22:56:30Z | 2022-12-05T22:56:30Z | CONTRIBUTOR | So I'll say that taking my dataset and running I think the mismatch in my mental model is due to me coming from netCDF CF land, where coordinates for a variable are based on:
1. Other variables that match relevant shared dimension names
2. Those explicitly listed in the `coordinates` attribute

I see now that xarray does NOT implement that model. This was provoked by challenges creating a new |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Coordinate variable gains coordinate on subset 1473329967 | |
1337859447 | https://github.com/pydata/xarray/issues/7350#issuecomment-1337859447 | https://api.github.com/repos/pydata/xarray/issues/7350 | IC_kwDOAMm_X85Pvh13 | dopplershift 221526 | 2022-12-05T17:57:34Z | 2022-12-05T17:57:34Z | CONTRIBUTOR | IMO, it's not correctly implementing the rule as you phrased it. You said "still present", which isn't the case here since the coordinate wasn't present before. The behavior I'd advocate for is that a subsetting/selection operation should never add new coordinates that weren't previously present. That by itself would be less surprising. It would also help make things more sensible given that the coordinate is only added currently in the scalar case--if you ask for more data, the coordinate isn't added, which is also unexpected given the scalar case. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Coordinate variable gains coordinate on subset 1473329967 | |
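The scalar-vs-range asymmetry described in the comment above is easy to demonstrate. A minimal sketch (assuming a recent xarray; the array is illustrative):

```python
import xarray as xr

da = xr.DataArray([10.0, 20.0, 30.0], dims="t", coords={"t": [0, 1, 2]})

# Selecting a scalar attaches "t" as a zero-dimensional coordinate
# on the result...
scalar = da.sel(t=1)
print(scalar.coords)  # "t" retained as a scalar coordinate

# ...while selecting a list keeps "t" as an ordinary dimension
# coordinate, so the two cases come back structured differently.
subset = da.sel(t=[0, 1])
print(subset.dims)  # ('t',)
```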
1285866801 | https://github.com/pydata/xarray/issues/7191#issuecomment-1285866801 | https://api.github.com/repos/pydata/xarray/issues/7191 | IC_kwDOAMm_X85MpMUx | dopplershift 221526 | 2022-10-20T16:51:21Z | 2022-10-20T16:51:21Z | CONTRIBUTOR | I'm also not sure why |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Cannot Save NetCDF: Conflicting _FillValue and Missing_Value 1416709246 | |
1270591340 | https://github.com/pydata/xarray/pull/6981#issuecomment-1270591340 | https://api.github.com/repos/pydata/xarray/issues/6981 | IC_kwDOAMm_X85Lu69s | dopplershift 221526 | 2022-10-06T19:38:24Z | 2022-10-06T19:38:24Z | CONTRIBUTOR | We elected not to start rebuilding things with netCDF 4.9.0 since 4.9.1 should be out realSoonNow™️ , so I don't think there's a netcdf4 package in conda-forge that has it yet. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support the new compression argument in netCDF4 > 1.6.0 1359914824 | |
1246106470 | https://github.com/pydata/xarray/issues/7034#issuecomment-1246106470 | https://api.github.com/repos/pydata/xarray/issues/7034 | IC_kwDOAMm_X85KRhNm | dopplershift 221526 | 2022-09-14T01:05:44Z | 2022-09-14T01:05:44Z | CONTRIBUTOR | That just worked fine for me. What version of libnetcdf and netcdf4 do you have installed? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray fails to locate files via OpeNDAP 1372146714 | |
1180471644 | https://github.com/pydata/xarray/issues/6766#issuecomment-1180471644 | https://api.github.com/repos/pydata/xarray/issues/6766 | IC_kwDOAMm_X85GXJFc | dopplershift 221526 | 2022-07-11T14:20:06Z | 2022-07-11T14:20:06Z | CONTRIBUTOR | @DanCodigaMWRA Well, given that it's failing with |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.open_dataset(url) gives NetCDF4 (lru_cache.py) error "oc_open: Could not read url" 1299316581 | |
1179403944 | https://github.com/pydata/xarray/issues/6766#issuecomment-1179403944 | https://api.github.com/repos/pydata/xarray/issues/6766 | IC_kwDOAMm_X85GTEao | dopplershift 221526 | 2022-07-08T22:22:15Z | 2022-07-08T22:22:15Z | CONTRIBUTOR | I just created a new Python 3.7 environment on my Mac and that worked fine. What do these show?
```
❯ conda list curl
# packages in environment at /Users/rmay/miniconda3/envs/py37:
# Name                  Version      Build            Channel
curl                    7.83.1       h23f1065_0       conda-forge
libcurl                 7.83.1       h23f1065_0       conda-forge

❯ conda list certifi
# packages in environment at /Users/rmay/miniconda3/envs/py37:
# Name                  Version      Build            Channel
ca-certificates         2022.6.15    h033912b_0       conda-forge
certifi                 2022.6.15    py37hf985489_0   conda-forge
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.open_dataset(url) gives NetCDF4 (lru_cache.py) error "oc_open: Could not read url" 1299316581 | |
1071517967 | https://github.com/pydata/xarray/issues/6374#issuecomment-1071517967 | https://api.github.com/repos/pydata/xarray/issues/6374 | IC_kwDOAMm_X84_3hEP | dopplershift 221526 | 2022-03-17T21:30:22Z | 2022-03-17T21:30:22Z | CONTRIBUTOR | Cc @WardF @DennisHeimbigner @haileyajohnson |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Should the zarr backend support NCZarr conventions? 1172229856 | |
1007746645 | https://github.com/pydata/xarray/issues/6124#issuecomment-1007746645 | https://api.github.com/repos/pydata/xarray/issues/6124 | IC_kwDOAMm_X848EP5V | dopplershift 221526 | 2022-01-07T21:16:20Z | 2022-01-07T21:16:20Z | CONTRIBUTOR | $0.02 from the peanut gallery is that my mental model of While I'm not going to sit here and argue that |
{ "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
bool(ds) should raise a "the truth value of a Dataset is ambiguous" error 1090229430 | |
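The behavior requested in the issue above mirrors the precedent NumPy (and pandas) already set for multi-element containers. A minimal NumPy sketch of that precedent:

```python
import numpy as np

arr = np.array([1, 2])
try:
    bool(arr)  # more than one element: truth value is ambiguous
except ValueError as e:
    print(e)  # "The truth value of an array with more than one element is ambiguous..."

# A one-element array is unambiguous, so bool() succeeds.
print(bool(np.array([1])))  # True
```

The comment's point is that `bool(ds)` checking "any variables present" is surprising relative to this well-established convention.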
958120426 | https://github.com/pydata/xarray/issues/5927#issuecomment-958120426 | https://api.github.com/repos/pydata/xarray/issues/5927 | IC_kwDOAMm_X845G8Hq | dopplershift 221526 | 2021-11-02T19:56:21Z | 2021-11-02T19:56:21Z | CONTRIBUTOR | GitHub's new automated release notes may be of interest to some of this discussion. Essentially they allow you to provide a template to format the list of merged PRs on the branch since the last release. |
{ "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Release frequency 1042652334 | |
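GitHub's automated release notes mentioned above are driven by a template checked into the repository. A hypothetical `.github/release.yml` sketch (the category titles and labels here are illustrative, not taken from the thread):

```yaml
# .github/release.yml -- template for GitHub's auto-generated release notes
changelog:
  exclude:
    labels:
      - skip-changelog
  categories:
    - title: Bug fixes
      labels:
        - bug
    - title: Documentation
      labels:
        - documentation
    - title: Other changes
      labels:
        - "*"
```

With this in place, drafting a release pre-populates the notes with merged PRs grouped by label since the last release.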
954948424 | https://github.com/pydata/xarray/issues/5913#issuecomment-954948424 | https://api.github.com/repos/pydata/xarray/issues/5913 | IC_kwDOAMm_X84461tI | dopplershift 221526 | 2021-10-29T18:13:34Z | 2021-10-29T18:13:34Z | CONTRIBUTOR | Is this with the netcdf or with the pydap engine? If you're not sure, can you post the full error traceback? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Invalid characters in OpenDAP URL 1039113959 | |
954028015 | https://github.com/pydata/xarray/issues/5882#issuecomment-954028015 | https://api.github.com/repos/pydata/xarray/issues/5882 | IC_kwDOAMm_X8443U_v | dopplershift 221526 | 2021-10-28T16:55:26Z | 2021-10-28T16:55:26Z | CONTRIBUTOR | Can you post the full traceback you get? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Opening remote file with OpenDAP protocol returns "_FillValue type mismatch" error 1032694511 | |
953921596 | https://github.com/pydata/xarray/issues/5882#issuecomment-953921596 | https://api.github.com/repos/pydata/xarray/issues/5882 | IC_kwDOAMm_X84427A8 | dopplershift 221526 | 2021-10-28T14:49:26Z | 2021-10-28T14:49:26Z | CONTRIBUTOR | @saveriogzz I'm confused why you posted results for 3.6 and 3.8, given that the original issue looks like it was posted for 3.7. 🤨 At any rate, looks like your original issue, the output from Your Python 3.8 environment does have an old version of libnetcdf. Can you try doing |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Opening remote file with OpenDAP protocol returns "_FillValue type mismatch" error 1032694511 | |
952246143 | https://github.com/pydata/xarray/issues/5882#issuecomment-952246143 | https://api.github.com/repos/pydata/xarray/issues/5882 | IC_kwDOAMm_X844wh9_ | dopplershift 221526 | 2021-10-26T19:29:52Z | 2021-10-26T19:29:52Z | CONTRIBUTOR | @saveriogzz what is the output of |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Opening remote file with OpenDAP protocol returns "_FillValue type mismatch" error 1032694511 | |
951404731 | https://github.com/pydata/xarray/pull/5845#issuecomment-951404731 | https://api.github.com/repos/pydata/xarray/issues/5845 | IC_kwDOAMm_X844tUi7 | dopplershift 221526 | 2021-10-25T23:10:25Z | 2021-10-25T23:10:25Z | CONTRIBUTOR | You cannot use selectors with noarch conda packages. Full stop. For conda-forge, it's perfectly fine to just unconditionally depend on the |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
remove requirement for setuptools.pkg_resources 1020407925 | |
900601202 | https://github.com/pydata/xarray/issues/5711#issuecomment-900601202 | https://api.github.com/repos/pydata/xarray/issues/5711 | IC_kwDOAMm_X841rhVy | dopplershift 221526 | 2021-08-17T20:16:57Z | 2021-08-17T20:16:57Z | CONTRIBUTOR | The metadata for that package has been patched in conda-forge/conda-forge-repodata-patches-feedstock#161 to depend on |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Broken conda-forge release xarray-0.19.0-pyhd8ed1ab_0.tar.bz2 972878124 | |
841524018 | https://github.com/pydata/xarray/issues/5291#issuecomment-841524018 | https://api.github.com/repos/pydata/xarray/issues/5291 | MDEyOklzc3VlQ29tbWVudDg0MTUyNDAxOA== | dopplershift 221526 | 2021-05-14T22:00:52Z | 2021-05-14T22:00:52Z | CONTRIBUTOR | On conda it's usually only done to avoid problematic/heavy weight dependencies (i.e. avoiding pyqt dependency with |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
ds = xr.tutorial.load_dataset("air_temperature") with 0.18 needs engine argument 889162918 | |
831449400 | https://github.com/pydata/xarray/pull/5244#issuecomment-831449400 | https://api.github.com/repos/pydata/xarray/issues/5244 | MDEyOklzc3VlQ29tbWVudDgzMTQ0OTQwMA== | dopplershift 221526 | 2021-05-03T18:34:12Z | 2021-05-03T18:34:12Z | CONTRIBUTOR | @andersy005 I'm curious, why do you go with multiple jobs within the workflow, and using artifacts to transfer state between them, rather than multiple steps in a single job? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add GitHub action for publishing artifacts to PyPI 873842812 | |
830254596 | https://github.com/pydata/xarray/issues/5232#issuecomment-830254596 | https://api.github.com/repos/pydata/xarray/issues/5232 | MDEyOklzc3VlQ29tbWVudDgzMDI1NDU5Ng== | dopplershift 221526 | 2021-04-30T17:42:09Z | 2021-04-30T17:42:09Z | CONTRIBUTOR |
$0.02 from an outsider is that this has served us exceedingly well on MetPy. Our release process has become:
1. Close milestone
2. Adjust the auto-generated draft GitHub release (summary notes)
3. Click "publish release" -> packages uploaded to PyPI
4. Merge conda-forge update from their bots

It's almost more secure this way because the token from PyPI only has upload permissions--no need to store someone's password. |
{ "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
release v0.18.0 870292042 | |
824227019 | https://github.com/pydata/xarray/issues/5189#issuecomment-824227019 | https://api.github.com/repos/pydata/xarray/issues/5189 | MDEyOklzc3VlQ29tbWVudDgyNDIyNzAxOQ== | dopplershift 221526 | 2021-04-21T17:19:21Z | 2021-04-21T17:19:21Z | CONTRIBUTOR |
This looks like an issue with the encoding of the URL and what the server expects. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
KeyError pulling from Nasa server with Pydap 861684673 | |
812643265 | https://github.com/pydata/xarray/issues/4925#issuecomment-812643265 | https://api.github.com/repos/pydata/xarray/issues/4925 | MDEyOklzc3VlQ29tbWVudDgxMjY0MzI2NQ== | dopplershift 221526 | 2021-04-02T18:01:21Z | 2021-04-02T18:01:21Z | CONTRIBUTOR | The message:
makes me wonder if there's some issue with how the webserver is handling the escaping of certain characters (like e.g. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
OpenDAP Documentation Example failing with RunTimeError 811409317 | |
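The escaping issue speculated about above comes down to percent-encoding: OPeNDAP constraint expressions contain characters like `[`, `]`, and `:` that are not safe in a URL path. A minimal sketch using the Python standard library (the constraint string is illustrative):

```python
from urllib.parse import quote

# A typical OPeNDAP-style constraint; brackets and colons get
# percent-encoded when quoted for a URL.
constraint = "time[0:5]"
print(quote(constraint))  # time%5B0%3A5%5D

# A server that decodes these differently from what the client sent
# (or a client that double-encodes) can produce "could not read
# url"-style failures like the one in this issue.
```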
696963505 | https://github.com/pydata/xarray/pull/4431#issuecomment-696963505 | https://api.github.com/repos/pydata/xarray/issues/4431 | MDEyOklzc3VlQ29tbWVudDY5Njk2MzUwNQ== | dopplershift 221526 | 2020-09-22T20:32:28Z | 2020-09-22T20:32:28Z | CONTRIBUTOR | @alexamici Force-pushing doesn't normally close it, so this is weird... |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Refactor of the big if-chain to a dictionary in the form {backend_name: backend_open}. 703550109 | |
696954231 | https://github.com/pydata/xarray/issues/4422#issuecomment-696954231 | https://api.github.com/repos/pydata/xarray/issues/4422 | MDEyOklzc3VlQ29tbWVudDY5Njk1NDIzMQ== | dopplershift 221526 | 2020-09-22T20:13:50Z | 2020-09-22T20:13:50Z | CONTRIBUTOR | I'd say in the case of |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Problem decoding times in data from OpenDAP server 701062999 | |
696508013 | https://github.com/pydata/xarray/issues/4422#issuecomment-696508013 | https://api.github.com/repos/pydata/xarray/issues/4422 | MDEyOklzc3VlQ29tbWVudDY5NjUwODAxMw== | dopplershift 221526 | 2020-09-22T04:56:17Z | 2020-09-22T04:56:17Z | CONTRIBUTOR | Probably shouldn't raise an error for |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Problem decoding times in data from OpenDAP server 701062999 | |
684068427 | https://github.com/pydata/xarray/issues/4394#issuecomment-684068427 | https://api.github.com/repos/pydata/xarray/issues/4394 | MDEyOklzc3VlQ29tbWVudDY4NDA2ODQyNw== | dopplershift 221526 | 2020-08-31T22:08:56Z | 2020-08-31T22:08:56Z | CONTRIBUTOR | Duplicate of #1672? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Is it possible to append_dim to netcdf stores 689390592 | |
678819159 | https://github.com/pydata/xarray/issues/4370#issuecomment-678819159 | https://api.github.com/repos/pydata/xarray/issues/4370 | MDEyOklzc3VlQ29tbWVudDY3ODgxOTE1OQ== | dopplershift 221526 | 2020-08-23T20:12:14Z | 2020-08-23T20:12:14Z | CONTRIBUTOR | Likely duplicate of #4283 . |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Not able to slice dataset using its own coordinate value, after upgrade to pandas 1.1.0 684248425 | |
671691540 | https://github.com/pydata/xarray/pull/4332#issuecomment-671691540 | https://api.github.com/repos/pydata/xarray/issues/4332 | MDEyOklzc3VlQ29tbWVudDY3MTY5MTU0MA== | dopplershift 221526 | 2020-08-11T02:42:41Z | 2020-08-11T02:42:41Z | CONTRIBUTOR | Packaging for |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
install sphinx-autosummary-accessors from conda-forge 676497373 | |
669666848 | https://github.com/pydata/xarray/issues/4313#issuecomment-669666848 | https://api.github.com/repos/pydata/xarray/issues/4313 | MDEyOklzc3VlQ29tbWVudDY2OTY2Njg0OA== | dopplershift 221526 | 2020-08-06T03:49:34Z | 2020-08-06T03:49:34Z | CONTRIBUTOR | So to be clear, So you can point dependabot to a directory, where for pypi it looks for files ending in |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Using Dependabot to manage doc build and CI versions 673682661 | |
669502001 | https://github.com/pydata/xarray/issues/4313#issuecomment-669502001 | https://api.github.com/repos/pydata/xarray/issues/4313 | MDEyOklzc3VlQ29tbWVudDY2OTUwMjAwMQ== | dopplershift 221526 | 2020-08-05T20:52:33Z | 2020-08-05T20:52:33Z | CONTRIBUTOR | So on MetPy we moved to treating our CI system as an application and pinning every direct dependency in a requirements.txt (which can be used by conda as well). We then let dependabot handle the updates. This lets us manage the updates on a package-by-package basis, where we have a single PR that lets us see what the ramifications are with regards to tests, CI, even linting. We've been running for a limited time, but so far it has done a good job of insulating general development (coming in on PRs) from changes in the environment, which now shouldn't change on CI from run to run (yeah, yeah 2nd-order dependencies, just pin problematic ones too). For instance, for the pandas 1.1.0 breakage, we just haven't merged the PR that moves the pin there, and that has kept our doc builds green on MetPy. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Using Dependabot to manage doc build and CI versions 673682661 | |
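The pin-everything-and-let-Dependabot-update workflow described above is configured with a checked-in file. A hypothetical `.github/dependabot.yml` sketch (the `/ci` directory holding the pinned `requirements.txt` is an assumed layout, not taken from the thread):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/ci"   # where the pinned requirements.txt lives (assumed path)
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 5
```

Each dependency bump then arrives as its own PR, so a breaking release (like the pandas 1.1.0 case mentioned) can simply be left unmerged until fixed.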
668891516 | https://github.com/pydata/xarray/issues/4295#issuecomment-668891516 | https://api.github.com/repos/pydata/xarray/issues/4295 | MDEyOklzc3VlQ29tbWVudDY2ODg5MTUxNg== | dopplershift 221526 | 2020-08-05T00:07:38Z | 2020-08-05T00:07:38Z | CONTRIBUTOR | *cough* Solving the "setuptools won't work in |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
We shouldn't require a recent version of setuptools to install xarray 671019427 | |
668405710 | https://github.com/pydata/xarray/issues/4295#issuecomment-668405710 | https://api.github.com/repos/pydata/xarray/issues/4295 | MDEyOklzc3VlQ29tbWVudDY2ODQwNTcxMA== | dopplershift 221526 | 2020-08-04T06:27:20Z | 2020-08-04T06:27:20Z | CONTRIBUTOR | Wow...two years for |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
We shouldn't require a recent version of setuptools to install xarray 671019427 | |
668395780 | https://github.com/pydata/xarray/issues/4295#issuecomment-668395780 | https://api.github.com/repos/pydata/xarray/issues/4295 | MDEyOklzc3VlQ29tbWVudDY2ODM5NTc4MA== | dopplershift 221526 | 2020-08-04T05:56:44Z | 2020-08-04T05:56:44Z | CONTRIBUTOR | I'm not here to argue, but |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
We shouldn't require a recent version of setuptools to install xarray 671019427 | |
667581142 | https://github.com/pydata/xarray/issues/4295#issuecomment-667581142 | https://api.github.com/repos/pydata/xarray/issues/4295 | MDEyOklzc3VlQ29tbWVudDY2NzU4MTE0Mg== | dopplershift 221526 | 2020-08-01T20:10:55Z | 2020-08-01T20:10:55Z | CONTRIBUTOR | Rolling window seems fine to me. I will say that I don't generally bother bumping that on other projects until we run into an issue/new feature that necessitates it, though. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
We shouldn't require a recent version of setuptools to install xarray 671019427 | |
666714053 | https://github.com/pydata/xarray/issues/4287#issuecomment-666714053 | https://api.github.com/repos/pydata/xarray/issues/4287 | MDEyOklzc3VlQ29tbWVudDY2NjcxNDA1Mw== | dopplershift 221526 | 2020-07-30T21:27:14Z | 2020-07-30T21:27:14Z | CONTRIBUTOR | The exception looks like the identical problem as #4283 . |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
failing docs CI 668166816 | |
665435320 | https://github.com/pydata/xarray/issues/4283#issuecomment-665435320 | https://api.github.com/repos/pydata/xarray/issues/4283 | MDEyOklzc3VlQ29tbWVudDY2NTQzNTMyMA== | dopplershift 221526 | 2020-07-29T05:13:27Z | 2020-07-29T05:13:27Z | CONTRIBUTOR | Looks like (to my eye anyway) it stems from:
```python
import numpy as np
import pandas as pd

t = np.array(['2017-09-05T12:00:00.000000000', '2017-09-05T15:00:00.000000000'],
             dtype='datetime64[ns]')
index = pd.DatetimeIndex(t)
index.get_loc(t[0].item())  # Fails with KeyError
index.get_loc(t[0])         # Works
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Selection with datetime64[ns] fails with Pandas 1.1.0 667550022 | |
657136785 | https://github.com/pydata/xarray/issues/4043#issuecomment-657136785 | https://api.github.com/repos/pydata/xarray/issues/4043 | MDEyOklzc3VlQ29tbWVudDY1NzEzNjc4NQ== | dopplershift 221526 | 2020-07-11T22:01:55Z | 2020-07-11T22:01:55Z | CONTRIBUTOR | Probably worth raising upstream with the THREDDS team. I do wonder if there's some issues with the chunking/compression of the native .nc files that's at play here. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Opendap access failure error 614144170 | |
657135521 | https://github.com/pydata/xarray/issues/4208#issuecomment-657135521 | https://api.github.com/repos/pydata/xarray/issues/4208 | MDEyOklzc3VlQ29tbWVudDY1NzEzNTUyMQ== | dopplershift 221526 | 2020-07-11T21:49:36Z | 2020-07-11T21:49:54Z | CONTRIBUTOR | Does/should any of this also consider #4212 (CuPy)? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support for duck Dask Arrays 653430454 | |
657135369 | https://github.com/pydata/xarray/issues/4212#issuecomment-657135369 | https://api.github.com/repos/pydata/xarray/issues/4212 | MDEyOklzc3VlQ29tbWVudDY1NzEzNTM2OQ== | dopplershift 221526 | 2020-07-11T21:48:17Z | 2020-07-11T21:48:17Z | CONTRIBUTOR | @jacobtomlinson Any idea how this would play with the work that's been going on for units here; I'm specifically wondering if xarray ( pint ( cupy )) would/could work. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add cupy support 654135405 | |
618815341 | https://github.com/pydata/xarray/pull/3998#issuecomment-618815341 | https://api.github.com/repos/pydata/xarray/issues/3998 | MDEyOklzc3VlQ29tbWVudDYxODgxNTM0MQ== | dopplershift 221526 | 2020-04-24T05:50:33Z | 2020-04-24T05:50:33Z | CONTRIBUTOR | Ah, didn't realize bug fixes went in there too. And I thought I could get away without black, but I missed the quote style. All done. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Fix handling of abbreviated units like msec 605920781 | |
618709181 | https://github.com/pydata/xarray/pull/3998#issuecomment-618709181 | https://api.github.com/repos/pydata/xarray/issues/3998 | MDEyOklzc3VlQ29tbWVudDYxODcwOTE4MQ== | dopplershift 221526 | 2020-04-23T22:44:21Z | 2020-04-23T22:44:21Z | CONTRIBUTOR | cc @dcamron |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Fix handling of abbreviated units like msec 605920781 | |
597883721 | https://github.com/pydata/xarray/pull/2844#issuecomment-597883721 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDU5Nzg4MzcyMQ== | dopplershift 221526 | 2020-03-11T21:19:31Z | 2020-03-11T21:19:31Z | CONTRIBUTOR | Thanks for the info. Based on that, I lean towards I think a better rationale, though, would be to formalize the role of |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
597374274 | https://github.com/pydata/xarray/pull/2844#issuecomment-597374274 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDU5NzM3NDI3NA== | dopplershift 221526 | 2020-03-10T23:47:43Z | 2020-03-10T23:47:43Z | CONTRIBUTOR | As a downstream user, I just want to be told what to do (assuming So to clarify: is this about whether they should be in one spot or the other? Or is it about having |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
570089112 | https://github.com/pydata/xarray/issues/3653#issuecomment-570089112 | https://api.github.com/repos/pydata/xarray/issues/3653 | MDEyOklzc3VlQ29tbWVudDU3MDA4OTExMg== | dopplershift 221526 | 2020-01-01T22:34:46Z | 2020-01-01T22:34:46Z | CONTRIBUTOR | This isn't accessing netCDF using opendap, it's directly accessing using HTTP. Did xarray or netCDF gain support for this and I failed to notice? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
"[Errno -90] NetCDF: file not found: b" when opening netCDF from server 543197350 | |
565183733 | https://github.com/pydata/xarray/issues/3256#issuecomment-565183733 | https://api.github.com/repos/pydata/xarray/issues/3256 | MDEyOklzc3VlQ29tbWVudDU2NTE4MzczMw== | dopplershift 221526 | 2019-12-12T20:57:56Z | 2019-12-12T20:57:56Z | CONTRIBUTOR | Yeah, it's not much shorter, but it's a much easier concept for new users to grasp, IMO. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
.item() on a DataArray with dtype='datetime64[ns]' returns int 484699415 | |
563511116 | https://github.com/pydata/xarray/issues/3256#issuecomment-563511116 | https://api.github.com/repos/pydata/xarray/issues/3256 | MDEyOklzc3VlQ29tbWVudDU2MzUxMTExNg== | dopplershift 221526 | 2019-12-10T01:00:00Z | 2019-12-10T01:00:00Z | CONTRIBUTOR | Well it'd be nice to have something, because what we have to do right now is:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
.item() on a DataArray with dtype='datetime64[ns]' returns int 484699415 | |
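The behavior behind this issue is NumPy's, not xarray's: `.item()` on a nanosecond-precision `datetime64` falls back to a plain integer (nanoseconds since the Unix epoch) because `datetime.datetime` cannot represent nanoseconds. A minimal sketch:

```python
import datetime
import numpy as np

ns_val = np.datetime64("2019-01-01T00:00:01", "ns")
us_val = np.datetime64("2019-01-01T00:00:01", "us")

# Microsecond precision converts cleanly to datetime.datetime...
print(type(us_val.item()))  # <class 'datetime.datetime'>

# ...but nanosecond precision cannot, so .item() returns an int
# (nanoseconds since the Unix epoch).
print(type(ns_val.item()))  # <class 'int'>
```

Since xarray stores times as `datetime64[ns]`, `DataArray.item()` inherits the integer fallback, which is what the thread calls hard for new users to grasp.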
554457858 | https://github.com/pydata/xarray/pull/3537#issuecomment-554457858 | https://api.github.com/repos/pydata/xarray/issues/3537 | MDEyOklzc3VlQ29tbWVudDU1NDQ1Nzg1OA== | dopplershift 221526 | 2019-11-15T17:41:52Z | 2019-11-15T19:02:03Z | CONTRIBUTOR | IMO, it's always best to release code that you know will work rather than rely on upstream to get something into the next release in order for you not to be broken. I say that both as a downstream user of xarray and a library maintainer. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Numpy 1.18 support 523438384 | |
547592948 | https://github.com/pydata/xarray/pull/3111#issuecomment-547592948 | https://api.github.com/repos/pydata/xarray/issues/3111 | MDEyOklzc3VlQ29tbWVudDU0NzU5Mjk0OA== | dopplershift 221526 | 2019-10-29T19:33:20Z | 2019-10-29T19:33:20Z | CONTRIBUTOR | I just got it to render fine--I blame general GitHub flakiness around notebooks lately. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
A new example Notebook that plots Discrete Sampling Geometry Data 467746047 | |
538684049 | https://github.com/pydata/xarray/pull/3367#issuecomment-538684049 | https://api.github.com/repos/pydata/xarray/issues/3367 | MDEyOklzc3VlQ29tbWVudDUzODY4NDA0OQ== | dopplershift 221526 | 2019-10-05T20:05:58Z | 2019-10-05T20:05:58Z | CONTRIBUTOR | All good, thanks. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Remove setting of universal wheels 501730864 | |
537694464 | https://github.com/pydata/xarray/pull/3367#issuecomment-537694464 | https://api.github.com/repos/pydata/xarray/issues/3367 | MDEyOklzc3VlQ29tbWVudDUzNzY5NDQ2NA== | dopplershift 221526 | 2019-10-02T21:44:03Z | 2019-10-02T21:44:03Z | CONTRIBUTOR | You don’t run into a problem when doing |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Remove setting of universal wheels 501730864 | |
525441141 | https://github.com/pydata/xarray/issues/3257#issuecomment-525441141 | https://api.github.com/repos/pydata/xarray/issues/3257 | MDEyOklzc3VlQ29tbWVudDUyNTQ0MTE0MQ== | dopplershift 221526 | 2019-08-27T19:06:42Z | 2019-08-27T19:06:42Z | CONTRIBUTOR | On Travis I have some deploy hooks:
1. Using Travis' built-in PyPI support, it uploads wheels and sdist, only on tags
2. Execute a custom script to commit built docs to github pages (not RTD). On master builds, this updates dev docs. On a tag, it adds a new directory for that version of the docs. |
{ "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
0.13.0 release 484711431 | |
525405541 | https://github.com/pydata/xarray/issues/3257#issuecomment-525405541 | https://api.github.com/repos/pydata/xarray/issues/3257 | MDEyOklzc3VlQ29tbWVudDUyNTQwNTU0MQ== | dopplershift 221526 | 2019-08-27T17:34:55Z | 2019-08-27T17:34:55Z | CONTRIBUTOR | The benefit to automation also makes it easier to distribute the workload to other people, helping with project sustainability. On my projects, I find it very nice that I make a new release on GitHub and packages appear on PyPI and the web docs are automatically updated to the new version. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
0.13.0 release 484711431 | |
525404631 | https://github.com/pydata/xarray/issues/3268#issuecomment-525404631 | https://api.github.com/repos/pydata/xarray/issues/3268 | MDEyOklzc3VlQ29tbWVudDUyNTQwNDYzMQ== | dopplershift 221526 | 2019-08-27T17:32:32Z | 2019-08-27T17:32:32Z | CONTRIBUTOR | I don't mind needing to update our accessor code. My only request is don't have a version that suddenly breaks it so that we only work on the old version or the new version. 😉 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Stateful user-defined accessors 485708282 | |
524400034 | https://github.com/pydata/xarray/pull/3247#issuecomment-524400034 | https://api.github.com/repos/pydata/xarray/issues/3247 | MDEyOklzc3VlQ29tbWVudDUyNDQwMDAzNA== | dopplershift 221526 | 2019-08-23T17:35:49Z | 2019-08-23T17:35:49Z | CONTRIBUTOR | cc: @jthielen Might make some things in MetPy easier... |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Update filter_by_attrs to use 'variables' instead of 'data_vars' 484243962 | |
523228118 | https://github.com/pydata/xarray/pull/2956#issuecomment-523228118 | https://api.github.com/repos/pydata/xarray/issues/2956 | MDEyOklzc3VlQ29tbWVudDUyMzIyODExOA== | dopplershift 221526 | 2019-08-20T23:02:38Z | 2019-08-20T23:02:38Z | CONTRIBUTOR | Yeah, |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Picking up #1118: Do not convert subclasses of `ndarray` unless required 443157666 | |
499921424 | https://github.com/pydata/xarray/issues/2871#issuecomment-499921424 | https://api.github.com/repos/pydata/xarray/issues/2871 | MDEyOklzc3VlQ29tbWVudDQ5OTkyMTQyNA== | dopplershift 221526 | 2019-06-07T15:06:29Z | 2019-06-07T15:06:29Z | CONTRIBUTOR | Just to correct something here, Near the bottom is a specific note about removing the deprecation. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.open_dataset(f1).to_netcdf(file2) is not idempotent 429914958 | |
499699528 | https://github.com/pydata/xarray/issues/2419#issuecomment-499699528 | https://api.github.com/repos/pydata/xarray/issues/2419 | MDEyOklzc3VlQ29tbWVudDQ5OTY5OTUyOA== | dopplershift 221526 | 2019-06-06T23:04:11Z | 2019-06-06T23:04:11Z | CONTRIBUTOR | So I ran into this working with a climate scientist the other day. The use case we had was given some model output that had data like:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Document ways to reshape a DataArray 361237908 | |
497885383 | https://github.com/pydata/xarray/pull/2989#issuecomment-497885383 | https://api.github.com/repos/pydata/xarray/issues/2989 | MDEyOklzc3VlQ29tbWVudDQ5Nzg4NTM4Mw== | dopplershift 221526 | 2019-05-31T23:04:03Z | 2019-05-31T23:04:03Z | CONTRIBUTOR | Thanks for bringing this to the 🏁 @abrammer ! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add strftime() to datetime accessor with cftimeindex and dask support 448330247 | |
495770271 | https://github.com/pydata/xarray/pull/2144#issuecomment-495770271 | https://api.github.com/repos/pydata/xarray/issues/2144 | MDEyOklzc3VlQ29tbWVudDQ5NTc3MDI3MQ== | dopplershift 221526 | 2019-05-24T19:56:21Z | 2019-05-24T19:56:21Z | CONTRIBUTOR | So I finally have time to work on this...but if someone has working code to do this instead, I'm totally fine with that going in instead. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add strftime() to datetime accessor 323823894 | |
484758877 | https://github.com/pydata/xarray/issues/2697#issuecomment-484758877 | https://api.github.com/repos/pydata/xarray/issues/2697 | MDEyOklzc3VlQ29tbWVudDQ4NDc1ODg3Nw== | dopplershift 221526 | 2019-04-19T03:47:37Z | 2019-04-19T03:47:37Z | CONTRIBUTOR | I haven't had any time to start on this (and I'm a few more weeks out), so feel free to take a cut. I'm not sure what @shoyer or @rabernat have in mind for API. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
read ncml files to create multifile datasets 401874795 | |
483799967 | https://github.com/pydata/xarray/issues/525#issuecomment-483799967 | https://api.github.com/repos/pydata/xarray/issues/525 | MDEyOklzc3VlQ29tbWVudDQ4Mzc5OTk2Nw== | dopplershift 221526 | 2019-04-16T18:54:37Z | 2019-04-16T18:54:37Z | CONTRIBUTOR | @shoyer I agree with that wrapping order. I think I'd also be in favor of starting with an experiment to disable coercing to arrays. @nbren12 The non-commutative multiplication is a consequence of operator dispatch in Python, and the reason why we want |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
support for units 100295585 | |
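The dispatch asymmetry mentioned in the comment above can be demonstrated with two plain stand-in classes (class names are hypothetical; this is only a minimal sketch of Python's binary-operator protocol, not pint's or xarray's actual code):

```python
class Quantity:
    """Stand-in for a pint-like unit-aware array."""

    def __mul__(self, other):
        return "Quantity handled it"

    def __rmul__(self, other):
        return "Quantity handled it (reflected)"


class Labeled:
    """Stand-in for an xarray-like labeled array."""

    def __mul__(self, other):
        return "Labeled handled it"

    def __rmul__(self, other):
        return "Labeled handled it (reflected)"


# Python asks the LEFT operand's __mul__ first (unless the right operand is
# a subclass), so which library ends up in control depends on operand order.
print(Quantity() * Labeled())  # Quantity handled it
print(Labeled() * Quantity())  # Labeled handled it
```

This is why the wrapping order matters: whichever type sits on the outside gets first crack at every operation and can delegate inward deliberately.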
454179986 | https://github.com/pydata/xarray/issues/2656#issuecomment-454179986 | https://api.github.com/repos/pydata/xarray/issues/2656 | MDEyOklzc3VlQ29tbWVudDQ1NDE3OTk4Ng== | dopplershift 221526 | 2019-01-14T22:07:19Z | 2019-01-14T22:07:19Z | CONTRIBUTOR | I'm not aware of any standard out there for JSON representation of netCDF, but I know it's been at least (briefly) discussed. @WardF, anything out there you're aware of? Another spelling of this could be |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
dataset info in .json format 396285440 | |
443908495 | https://github.com/pydata/xarray/issues/2583#issuecomment-443908495 | https://api.github.com/repos/pydata/xarray/issues/2583 | MDEyOklzc3VlQ29tbWVudDQ0MzkwODQ5NQ== | dopplershift 221526 | 2018-12-03T23:20:29Z | 2018-12-03T23:20:29Z | CONTRIBUTOR | @lesserwhirls @dennisHeimbigner Is there any reason to expect a difference between the downloaded file and the opendap view on TDS? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
decode_cf not scaling and off-setting correctly 386268842 | |
432747410 | https://github.com/pydata/xarray/issues/2503#issuecomment-432747410 | https://api.github.com/repos/pydata/xarray/issues/2503 | MDEyOklzc3VlQ29tbWVudDQzMjc0NzQxMA== | dopplershift 221526 | 2018-10-24T17:13:14Z | 2018-10-24T17:13:14Z | CONTRIBUTOR | Oh, I didn't even catch that the original was on defaults. |
{ "total_count": 1, "+1": 0, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Problems with distributed and opendap netCDF endpoint 373121666 | |
432744441 | https://github.com/pydata/xarray/issues/2503#issuecomment-432744441 | https://api.github.com/repos/pydata/xarray/issues/2503 | MDEyOklzc3VlQ29tbWVudDQzMjc0NDQ0MQ== | dopplershift 221526 | 2018-10-24T17:06:01Z | 2018-10-24T17:06:01Z | CONTRIBUTOR | That version has the fix for the issue. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Problems with distributed and opendap netCDF endpoint 373121666 | |
432739449 | https://github.com/pydata/xarray/issues/2503#issuecomment-432739449 | https://api.github.com/repos/pydata/xarray/issues/2503 | MDEyOklzc3VlQ29tbWVudDQzMjczOTQ0OQ== | dopplershift 221526 | 2018-10-24T16:54:05Z | 2018-10-24T16:54:05Z | CONTRIBUTOR | The original version of libnetcdf in @rabernat 's environment definitely had the opendap timeout issue. Not sure if that's the root cause of the problem, or not, but it's suspect. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Problems with distributed and opendap netCDF endpoint 373121666 | |
432422763 | https://github.com/pydata/xarray/issues/2503#issuecomment-432422763 | https://api.github.com/repos/pydata/xarray/issues/2503 | MDEyOklzc3VlQ29tbWVudDQzMjQyMjc2Mw== | dopplershift 221526 | 2018-10-23T21:16:05Z | 2018-10-23T21:16:16Z | CONTRIBUTOR | @lesserwhirls That's an interesting idea. (@rsignell-usgs That's the one.) @rabernat What version of the conda-forge libnetcdf package is deployed wherever you're running? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Problems with distributed and opendap netCDF endpoint 373121666 | |
432370887 | https://github.com/pydata/xarray/issues/2503#issuecomment-432370887 | https://api.github.com/repos/pydata/xarray/issues/2503 | MDEyOklzc3VlQ29tbWVudDQzMjM3MDg4Nw== | dopplershift 221526 | 2018-10-23T18:43:23Z | 2018-10-23T18:43:23Z | CONTRIBUTOR | Just so I'm clear on how the workflow looks:
1. Open dataset with NetCDF/OPeNDAP
2. Serialize NetCDFDataStore (pickle? netcdf file?)
3. Ship to Dask workers
4. Reconstitute NetCDFDataStore

Certainly does seem like there's something stale in what the remote workers are getting. Confused why it works for the others, though. I can prioritize this a bit and dig in to see what I can figure out--though I'm teaching through tomorrow. May be able to dig into this while at ECMWF. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Problems with distributed and opendap netCDF endpoint 373121666 | |
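One concrete wrinkle in step 2 of that workflow: a live file handle itself cannot be pickled, so whatever gets shipped to the workers has to carry a recipe for reopening the source instead. A pure-stdlib illustration (this is not xarray's actual serialization code, just the underlying constraint):

```python
import os
import pickle

# An open OS-level file handle is not picklable.
handle = open(os.devnull)
error = None
try:
    pickle.dumps(handle)
except TypeError as exc:
    error = exc
finally:
    handle.close()

# A datastore shipped to dask workers must therefore serialize enough
# information (path/URL, engine, open flags, locks) to reopen remotely --
# and anything cached alongside that recipe can go stale.
print("pickling an open file failed:", error)
```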
424100432 | https://github.com/pydata/xarray/pull/2144#issuecomment-424100432 | https://api.github.com/repos/pydata/xarray/issues/2144 | MDEyOklzc3VlQ29tbWVudDQyNDEwMDQzMg== | dopplershift 221526 | 2018-09-24T19:44:21Z | 2018-09-24T19:44:21Z | CONTRIBUTOR | Just haven't had the cycles to bring this to the finish line. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add strftime() to datetime accessor 323823894 | |
420470343 | https://github.com/pydata/xarray/pull/2398#issuecomment-420470343 | https://api.github.com/repos/pydata/xarray/issues/2398 | MDEyOklzc3VlQ29tbWVudDQyMDQ3MDM0Mw== | dopplershift 221526 | 2018-09-12T00:25:54Z | 2018-09-12T00:25:54Z | CONTRIBUTOR | Why would you sort the array? Aren't you taking differences of values and dividing by differences between the matching coordinates? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
implement Gradient 356698348 | |
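For reference, this is what `numpy.gradient` does when given an explicit coordinate array: differences of values divided by differences of the matching coordinates, with no sorting involved. A small sketch (sample values are made up):

```python
import numpy as np

# Unevenly spaced coordinate; y is linear in x, so the true derivative is 2.
x = np.array([0.0, 1.0, 3.0, 6.0, 10.0])
y = 2.0 * x

# np.gradient divides value differences by the matching coordinate
# differences (second-order scheme in the interior, one-sided at the edges).
dydx = np.gradient(y, x)
print(dydx)  # [2. 2. 2. 2. 2.]
```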
419225007 | https://github.com/pydata/xarray/issues/2368#issuecomment-419225007 | https://api.github.com/repos/pydata/xarray/issues/2368 | MDEyOklzc3VlQ29tbWVudDQxOTIyNTAwNw== | dopplershift 221526 | 2018-09-06T20:10:24Z | 2018-09-06T20:10:24Z | CONTRIBUTOR | That sounds reasonable to me. I don't necessarily expect all of the xarray goodness to work with those files, but I do expect them to open without error. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Let's list all the netCDF files that xarray can't open 350899839 | |
419176628 | https://github.com/pydata/xarray/issues/2368#issuecomment-419176628 | https://api.github.com/repos/pydata/xarray/issues/2368 | MDEyOklzc3VlQ29tbWVudDQxOTE3NjYyOA== | dopplershift 221526 | 2018-09-06T17:28:14Z | 2018-09-06T17:28:14Z | CONTRIBUTOR | @rabernat While I agree that they're (somewhat) confusing files, I think you're missing two things:
IMO, xarray is being overly pedantic here. XArray states that it adopts the Common Data Model (CDM); netCDF-java and the CDM were the tools used to generate the failing examples above. |
{ "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Let's list all the netCDF files that xarray can't open 350899839 | |
413723632 | https://github.com/pydata/xarray/issues/988#issuecomment-413723632 | https://api.github.com/repos/pydata/xarray/issues/988 | MDEyOklzc3VlQ29tbWVudDQxMzcyMzYzMg== | dopplershift 221526 | 2018-08-17T00:33:47Z | 2018-08-17T00:33:47Z | CONTRIBUTOR | I see your argument, but here's my problem. In this future where things work (assuming that NEP is accepted), and I want distributed computing with dask, units, and xarray, I have: xarray wrapping a pint array wrapping a dask array. I like composition, but that level of wrapping...feels wrong to me for some reason. Is there some elegance I'm missing here? (Other than array-like things playing together.) And then I still need hooks in xarray so that when pint does a calculation, it can update the metadata in xarray; so it feels like we're back here anyway. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Hooks for custom attribute handling in xarray operations 173612265 | |
413360749 | https://github.com/pydata/xarray/issues/988#issuecomment-413360749 | https://api.github.com/repos/pydata/xarray/issues/988 | MDEyOklzc3VlQ29tbWVudDQxMzM2MDc0OQ== | dopplershift 221526 | 2018-08-15T22:36:21Z | 2018-08-15T22:36:21Z | CONTRIBUTOR | @shoyer I know elsewhere you said you weren't sure about this idea any more, but personally I'd like to push forward on it. Do you have problems with this approach we need to resolve? Any chance you have some preliminary code? I think this is the right way to solve the unit issue in xarray, since at its core unit handling is mostly a metadata operation. |
{ "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Hooks for custom attribute handling in xarray operations 173612265 | |
413281638 | https://github.com/pydata/xarray/issues/2368#issuecomment-413281638 | https://api.github.com/repos/pydata/xarray/issues/2368 | MDEyOklzc3VlQ29tbWVudDQxMzI4MTYzOA== | dopplershift 221526 | 2018-08-15T17:58:12Z | 2018-08-15T17:58:12Z | CONTRIBUTOR | Here's a sample CDL for a file:

```
netcdf temp {
dimensions:
	profile = 1 ;
	station = 1 ;
	isobaric = 31 ;
	station_name_strlen = 10 ;
	station_description_strlen = 33 ;
variables:
	float isobaric(station, profile, isobaric) ;
		isobaric:standard_name = "isobaric" ;
		isobaric:long_name = "isobaric" ;
		isobaric:units = "Pa" ;
		isobaric:positive = "down" ;
		isobaric:axis = "Z" ;
	float Geopotential_height_isobaric(station, profile, isobaric) ;
		Geopotential_height_isobaric:standard_name = "Geopotential_height_isobaric" ;
		Geopotential_height_isobaric:long_name = "Geopotential_height_isobaric" ;
		Geopotential_height_isobaric:units = "gpm" ;
		Geopotential_height_isobaric:coordinates = "time longitude latitude isobaric" ;
	char station_name(station, station_name_strlen) ;
		station_name:long_name = "station name" ;
		station_name:cf_role = "timeseries_id" ;
	char station_description(station, station_description_strlen) ;
		station_description:long_name = "station description" ;
		station_description:standard_name = "platform_name" ;
	double latitude(station) ;
		latitude:units = "degrees_north" ;
		latitude:long_name = "profile latitude" ;
	double longitude(station) ;
		longitude:units = "degrees_east" ;
		longitude:long_name = "profile longitude" ;
	double time(station, profile) ;
		time:units = "Hour since 2018-08-15T12:00:00Z" ;
		time:calendar = "proleptic_gregorian" ;
		time:standard_name = "time" ;
		time:long_name = "GRIB forecast or observation time" ;

// global attributes:
		:Conventions = "CDM-Extended-CF" ;
		:history = "Written by CFPointWriter" ;
		:title = "Extract Points data from Grid file /data/ldm/pub/native/grid/NCEP/GFS/Global_0p5deg/GFS_Global_0p5deg_20180815_1200.grib2.ncx3#LatLon_361X720-p25S-180p0E" ;
		:featureType = "timeSeriesProfile" ;
		:time_coverage_start = "2018-08-15T18:00:00Z" ;
		:time_coverage_end = "2018-08-15T18:00:00Z" ;
		:geospatial_lat_min = 39.9995 ;
		:geospatial_lat_max = 40.0005 ;
		:geospatial_lon_min = -105.0005 ;
		:geospatial_lon_max = -104.9995 ;
}
```

```pytb
MissingDimensionsError                    Traceback (most recent call last)
<ipython-input-10-d6f8d8651b9f> in <module>()
      4 query.add_lonlat().accept('netcdf4')
      5 nc = ncss.get_data(query)
----> 6 xr.open_dataset(NetCDF4DataStore(nc))

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/backends/api.py in open_dataset(filename_or_obj, group, decode_cf, mask_and_scale, decode_times, autoclose, concat_characters, decode_coords, engine, chunks, lock, cache, drop_variables, backend_kwargs)
    352             store = backends.ScipyDataStore(filename_or_obj)
    353 
--> 354     return maybe_decode_store(store)
    355 
    356 

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/backends/api.py in maybe_decode_store(store, lock)
    256             store, mask_and_scale=mask_and_scale, decode_times=decode_times,
    257             concat_characters=concat_characters, decode_coords=decode_coords,
--> 258             drop_variables=drop_variables)
    259 
    260         _protect_dataset_variables_inplace(ds, cache)

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/conventions.py in decode_cf(obj, concat_characters, mask_and_scale, decode_times, decode_coords, drop_variables)
    428         vars, attrs, concat_characters, mask_and_scale, decode_times,
    429         decode_coords, drop_variables=drop_variables)
--> 430     ds = Dataset(vars, attrs=attrs)
    431     ds = ds.set_coords(coord_names.union(extra_coords).intersection(vars))
    432     ds._file_obj = file_obj

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/dataset.py in __init__(self, data_vars, coords, attrs, compat)
    363             coords = {}
    364         if data_vars is not None or coords is not None:
--> 365             self._set_init_vars_and_dims(data_vars, coords, compat)
    366         if attrs is not None:
    367             self.attrs = attrs

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/dataset.py in _set_init_vars_and_dims(self, data_vars, coords, compat)
    381 
    382         variables, coord_names, dims = merge_data_and_coords(
--> 383             data_vars, coords, compat=compat)
    384 
    385         self._variables = variables

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/merge.py in merge_data_and_coords(data, coords, compat, join)
    363     indexes = dict(extract_indexes(coords))
    364     return merge_core(objs, compat, join, explicit_coords=explicit_coords,
--> 365                       indexes=indexes)
    366 
    367 

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/merge.py in merge_core(objs, compat, join, priority_arg, explicit_coords, indexes)
    433     coerced = coerce_pandas_values(objs)
    434     aligned = deep_align(coerced, join=join, copy=False, indexes=indexes)
--> 435     expanded = expand_variable_dicts(aligned)
    436 
    437     coord_names, noncoord_names = determine_coords(coerced)

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/merge.py in expand_variable_dicts(list_of_variable_dicts)
    209                 var_dicts.append(coords)
    210 
--> 211             var = as_variable(var, name=name)
    212             sanitized_vars[name] = var
    213 

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/variable.py in as_variable(obj, name)
    112                 'dimensions %r. xarray disallows such variables because they '
    113                 'conflict with the coordinates used to label '
--> 114                 'dimensions.' % (name, obj.dims))
    115             obj = obj.to_index_variable()
    116 

MissingDimensionsError: 'isobaric' has more than 1-dimension and the same name as one of its dimensions ('station', 'profile', 'isobaric'). xarray disallows such variables because they conflict with the coordinates used to label dimensions.
```
 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Let's list all the netCDF files that xarray can't open 350899839 | |
413279893 | https://github.com/pydata/xarray/issues/2368#issuecomment-413279893 | https://api.github.com/repos/pydata/xarray/issues/2368 | MDEyOklzc3VlQ29tbWVudDQxMzI3OTg5Mw== | dopplershift 221526 | 2018-08-15T17:52:36Z | 2018-08-15T17:52:36Z | CONTRIBUTOR |
```pytb
MissingDimensionsError                    Traceback (most recent call last)
<ipython-input-6-e2a87d803d99> in <module>()
----> 1 xr.open_dataset(gfs_cat.datasets[0].access_urls['OPENDAP'])

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/backends/api.py in open_dataset(filename_or_obj, group, decode_cf, mask_and_scale, decode_times, autoclose, concat_characters, decode_coords, engine, chunks, lock, cache, drop_variables, backend_kwargs)
    344         lock = _default_lock(filename_or_obj, engine)
    345         with close_on_error(store):
--> 346             return maybe_decode_store(store, lock)
    347     else:
    348         if engine is not None and engine != 'scipy':

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/backends/api.py in maybe_decode_store(store, lock)
    256             store, mask_and_scale=mask_and_scale, decode_times=decode_times,
    257             concat_characters=concat_characters, decode_coords=decode_coords,
--> 258             drop_variables=drop_variables)
    259 
    260         _protect_dataset_variables_inplace(ds, cache)

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/conventions.py in decode_cf(obj, concat_characters, mask_and_scale, decode_times, decode_coords, drop_variables)
    428         vars, attrs, concat_characters, mask_and_scale, decode_times,
    429         decode_coords, drop_variables=drop_variables)
--> 430     ds = Dataset(vars, attrs=attrs)
    431     ds = ds.set_coords(coord_names.union(extra_coords).intersection(vars))
    432     ds._file_obj = file_obj

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/dataset.py in __init__(self, data_vars, coords, attrs, compat)
    363             coords = {}
    364         if data_vars is not None or coords is not None:
--> 365             self._set_init_vars_and_dims(data_vars, coords, compat)
    366         if attrs is not None:
    367             self.attrs = attrs

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/dataset.py in _set_init_vars_and_dims(self, data_vars, coords, compat)
    381 
    382         variables, coord_names, dims = merge_data_and_coords(
--> 383             data_vars, coords, compat=compat)
    384 
    385         self._variables = variables

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/merge.py in merge_data_and_coords(data, coords, compat, join)
    363     indexes = dict(extract_indexes(coords))
    364     return merge_core(objs, compat, join, explicit_coords=explicit_coords,
--> 365                       indexes=indexes)
    366 
    367 

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/merge.py in merge_core(objs, compat, join, priority_arg, explicit_coords, indexes)
    433     coerced = coerce_pandas_values(objs)
    434     aligned = deep_align(coerced, join=join, copy=False, indexes=indexes)
--> 435     expanded = expand_variable_dicts(aligned)
    436 
    437     coord_names, noncoord_names = determine_coords(coerced)

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/merge.py in expand_variable_dicts(list_of_variable_dicts)
    209                 var_dicts.append(coords)
    210 
--> 211             var = as_variable(var, name=name)
    212             sanitized_vars[name] = var
    213 

~/miniconda3/envs/py36/lib/python3.6/site-packages/xarray/core/variable.py in as_variable(obj, name)
    112                 'dimensions %r. xarray disallows such variables because they '
    113                 'conflict with the coordinates used to label '
--> 114                 'dimensions.' % (name, obj.dims))
    115             obj = obj.to_index_variable()
    116 

MissingDimensionsError: 'time' has more than 1-dimension and the same name as one of its dimensions ('reftime', 'time'). xarray disallows such variables because they conflict with the coordinates used to label dimensions.
```
 |
{ "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Let's list all the netCDF files that xarray can't open 350899839 | |
410782982 | https://github.com/pydata/xarray/issues/2304#issuecomment-410782982 | https://api.github.com/repos/pydata/xarray/issues/2304 | MDEyOklzc3VlQ29tbWVudDQxMDc4Mjk4Mg== | dopplershift 221526 | 2018-08-06T17:17:38Z | 2018-08-06T17:17:38Z | CONTRIBUTOR | Ah, ok, not scaling per-se (i.e. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
float32 instead of float64 when decoding int16 with scale_factor netcdf var using xarray 343659822 | |
410779271 | https://github.com/pydata/xarray/issues/2304#issuecomment-410779271 | https://api.github.com/repos/pydata/xarray/issues/2304 | MDEyOklzc3VlQ29tbWVudDQxMDc3OTI3MQ== | dopplershift 221526 | 2018-08-06T17:06:22Z | 2018-08-06T17:06:22Z | CONTRIBUTOR | I'm not following why the data are scaled twice. Your point about the rounding being different is well-taken, though. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
float32 instead of float64 when decoding int16 with scale_factor netcdf var using xarray 343659822 | |
410774955 | https://github.com/pydata/xarray/issues/2304#issuecomment-410774955 | https://api.github.com/repos/pydata/xarray/issues/2304 | MDEyOklzc3VlQ29tbWVudDQxMDc3NDk1NQ== | dopplershift 221526 | 2018-08-06T16:52:42Z | 2018-08-06T16:52:53Z | CONTRIBUTOR | @shoyer But since it's a downstream calculation issue, and does not impact the actual precision of what's being read from the file, what's wrong with saying "Use |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
float32 instead of float64 when decoding int16 with scale_factor netcdf var using xarray 343659822 | |
410769706 | https://github.com/pydata/xarray/issues/2304#issuecomment-410769706 | https://api.github.com/repos/pydata/xarray/issues/2304 | MDEyOklzc3VlQ29tbWVudDQxMDc2OTcwNg== | dopplershift 221526 | 2018-08-06T16:34:44Z | 2018-08-06T16:36:16Z | CONTRIBUTOR | A float32 value has 24 bits of precision in the significand, which is more than enough to store the 16 bits in the original data; the exponent (8 bits) will more or less take care of the
What you're seeing is an artifact of printing out the values. I have no idea why something is printing out a float (only 7 decimal digits) out to 17 digits. Even float64 only has 16 digits (which is overkill for this application). The difference in subtracting the 32- and 64-bit values above are in the 8th decimal place, which is beyond the actual precision of the data; what you've just demonstrated is the difference in precision between 32-bit and 64-bit values, but it had nothing to do whatsoever with the data. If you're really worried about precision round-off for things like std. dev, you should probably calculate it using the raw integer values and scale afterwards. (I don't actually think this is necessary, though.) |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
float32 instead of float64 when decoding int16 with scale_factor netcdf var using xarray 343659822 | |
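The precision argument in that comment can be checked directly with numpy (the packed values and scale/offset below are hypothetical; this mimics CF-style `scale_factor`/`add_offset` unpacking, not xarray's internal decoding code):

```python
import numpy as np

# Hypothetical packed int16 data with CF-style scale/offset attributes.
packed = np.array([-32000, -5, 0, 15000, 32000], dtype=np.int16)
scale_factor = np.float32(0.01)
add_offset = np.float32(300.0)

# Decode once entirely in float32 and once entirely in float64.
as32 = packed.astype(np.float32) * scale_factor + add_offset
as64 = packed.astype(np.float64) * np.float64(scale_factor) + np.float64(add_offset)

# float32's 24-bit significand holds every 16-bit packed value exactly, so
# the two decodings differ only by float32 rounding -- far smaller than the
# 0.01 quantization already baked into the file.
print(np.max(np.abs(as32.astype(np.float64) - as64)))
```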
391783509 | https://github.com/pydata/xarray/issues/2176#issuecomment-391783509 | https://api.github.com/repos/pydata/xarray/issues/2176 | MDEyOklzc3VlQ29tbWVudDM5MTc4MzUwOQ== | dopplershift 221526 | 2018-05-24T16:47:20Z | 2018-05-24T16:47:20Z | CONTRIBUTOR | My problem with custom classes (subclasses or composition) is that you will never get those from a I’m also looking at moving from pint to unyt, which is Yt’s unit support, brought into a standalone package. Beyond some performance benefits (I’ve heard), it has the benefit of being a ndarray subclass, which means |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Advice on unit-aware arithmetic 325810810 | |
390345890 | https://github.com/pydata/xarray/pull/2163#issuecomment-390345890 | https://api.github.com/repos/pydata/xarray/issues/2163 | MDEyOklzc3VlQ29tbWVudDM5MDM0NTg5MA== | dopplershift 221526 | 2018-05-18T22:10:36Z | 2018-05-18T22:10:36Z | CONTRIBUTOR | Versioneer has worked great for us. Cutting a release is triggered just by making a new release on GitHub. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Versioneer 324544072 | |
389730062 | https://github.com/pydata/xarray/pull/2144#issuecomment-389730062 | https://api.github.com/repos/pydata/xarray/issues/2144 | MDEyOklzc3VlQ29tbWVudDM4OTczMDA2Mg== | dopplershift 221526 | 2018-05-17T03:04:56Z | 2018-05-17T03:04:56Z | CONTRIBUTOR | Since this can wait for 0.10.5, I can go slowly and get this right. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add strftime() to datetime accessor 323823894 | |
389699324 | https://github.com/pydata/xarray/pull/2144#issuecomment-389699324 | https://api.github.com/repos/pydata/xarray/issues/2144 | MDEyOklzc3VlQ29tbWVudDM4OTY5OTMyNA== | dopplershift 221526 | 2018-05-16T23:40:32Z | 2018-05-16T23:40:32Z | CONTRIBUTOR | Any chance this makes it into 0.10.4? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add strftime() to datetime accessor 323823894 | |
388440871 | https://github.com/pydata/xarray/pull/817#issuecomment-388440871 | https://api.github.com/repos/pydata/xarray/issues/817 | MDEyOklzc3VlQ29tbWVudDM4ODQ0MDg3MQ== | dopplershift 221526 | 2018-05-11T18:04:18Z | 2018-05-11T18:04:18Z | CONTRIBUTOR | Regarding the testing issue, another option is to use something like vcrpy to record and playback http responses for opendap requests. I've had good luck with that for Siphon. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
modified: xarray/backends/api.py 146079798 | |
382906786 | https://github.com/pydata/xarray/pull/2016#issuecomment-382906786 | https://api.github.com/repos/pydata/xarray/issues/2016 | MDEyOklzc3VlQ29tbWVudDM4MjkwNjc4Ng== | dopplershift 221526 | 2018-04-19T23:04:25Z | 2018-04-19T23:04:25Z | CONTRIBUTOR | @shoyer Happy to put in a PR to address...provided you can tell me what that should actually look like. 😁 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Allow _FillValue and missing_value to differ (Fixes #1749) 308768432 | |
377049291 | https://github.com/pydata/xarray/pull/2016#issuecomment-377049291 | https://api.github.com/repos/pydata/xarray/issues/2016 | MDEyOklzc3VlQ29tbWVudDM3NzA0OTI5MQ== | dopplershift 221526 | 2018-03-28T21:50:46Z | 2018-03-28T21:50:46Z | CONTRIBUTOR | Reworked due to corner case with PyNio on Python 2.7. Final solution looks simpler to me. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Allow _FillValue and missing_value to differ (Fixes #1749) 308768432 | |
377030861 | https://github.com/pydata/xarray/pull/2016#issuecomment-377030861 | https://api.github.com/repos/pydata/xarray/issues/2016 | MDEyOklzc3VlQ29tbWVudDM3NzAzMDg2MQ== | dopplershift 221526 | 2018-03-28T20:47:34Z | 2018-03-28T20:47:34Z | CONTRIBUTOR | Fixed flake8 (unnecessary import) |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Allow _FillValue and missing_value to differ (Fixes #1749) 308768432 | |
376993012 | https://github.com/pydata/xarray/pull/2016#issuecomment-376993012 | https://api.github.com/repos/pydata/xarray/issues/2016 | MDEyOklzc3VlQ29tbWVudDM3Njk5MzAxMg== | dopplershift 221526 | 2018-03-28T18:41:36Z | 2018-03-28T18:41:36Z | CONTRIBUTOR | Rebased on current master. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Allow _FillValue and missing_value to differ (Fixes #1749) 308768432 | |
376352184 | https://github.com/pydata/xarray/pull/2016#issuecomment-376352184 | https://api.github.com/repos/pydata/xarray/issues/2016 | MDEyOklzc3VlQ29tbWVudDM3NjM1MjE4NA== | dopplershift 221526 | 2018-03-27T00:08:47Z | 2018-03-27T00:08:47Z | CONTRIBUTOR | Test failures look unrelated--they also happen to me locally. Installing SciPy 1.0.0 locally fixes them, so I'm guessing they're caused by SciPy 1.0.1. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Allow _FillValue and missing_value to differ (Fixes #1749) 308768432 | |
374351614 | https://github.com/pydata/xarray/pull/1899#issuecomment-374351614 | https://api.github.com/repos/pydata/xarray/issues/1899 | MDEyOklzc3VlQ29tbWVudDM3NDM1MTYxNA== | dopplershift 221526 | 2018-03-19T20:01:29Z | 2018-03-19T20:01:29Z | CONTRIBUTOR | So did this remove/rename I don't mind updating, but I wanted to make sure this was intentional. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Vectorized lazy indexing 295838143 | |
371888626 | https://github.com/pydata/xarray/issues/1976#issuecomment-371888626 | https://api.github.com/repos/pydata/xarray/issues/1976 | MDEyOklzc3VlQ29tbWVudDM3MTg4ODYyNg== | dopplershift 221526 | 2018-03-09T17:45:35Z | 2018-03-09T17:45:35Z | CONTRIBUTOR | Somehow managed to forget about an issue I was previously involved with. Lovely. Closing in favor of that issue. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
What's wrong with "conflicting" _FillValue and missing_value? 303727896 | |
371681303 | https://github.com/pydata/xarray/pull/1962#issuecomment-371681303 | https://api.github.com/repos/pydata/xarray/issues/1962 | MDEyOklzc3VlQ29tbWVudDM3MTY4MTMwMw== | dopplershift 221526 | 2018-03-09T01:20:03Z | 2018-03-09T01:20:03Z | CONTRIBUTOR | Right. But such hooks would be sufficient to properly maintain the |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support __array_ufunc__ for xarray objects. 302153432 | |
371680160 | https://github.com/pydata/xarray/pull/1962#issuecomment-371680160 | https://api.github.com/repos/pydata/xarray/issues/1962 | MDEyOklzc3VlQ29tbWVudDM3MTY4MDE2MA== | dopplershift 221526 | 2018-03-09T01:13:16Z | 2018-03-09T01:13:16Z | CONTRIBUTOR | At this point I'd be happy to have hooks that let me intercept/wrap ufunc operations, though I guess that's what #1938 is supporting in a more systematic way. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support __array_ufunc__ for xarray objects. 302153432 | |
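The hook mechanism discussed in this thread is NumPy's `__array_ufunc__` protocol (NEP 13). A minimal sketch, assuming only NumPy, of intercepting and wrapping ufunc operations with it; the `Wrapped` class below is purely illustrative and is not xarray's API:

```python
import numpy as np

class Wrapped:
    """Toy container that intercepts every NumPy ufunc applied to it."""

    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        # Unwrap any Wrapped inputs, apply the underlying ufunc,
        # then re-wrap the result (a units library would convert here).
        unwrapped = [x.data if isinstance(x, Wrapped) else x for x in inputs]
        result = getattr(ufunc, method)(*unwrapped, **kwargs)
        return Wrapped(result)

w = Wrapped([1.0, 2.0, 3.0])
out = np.add(w, 1.0)  # NumPy dispatches to Wrapped.__array_ufunc__
```

Because NumPy checks every operand for `__array_ufunc__` before computing, this is exactly the kind of interception point a units layer could use to wrap xarray operations.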
371650629 | https://github.com/pydata/xarray/pull/1962#issuecomment-371650629 | https://api.github.com/repos/pydata/xarray/issues/1962 | MDEyOklzc3VlQ29tbWVudDM3MTY1MDYyOQ== | dopplershift 221526 | 2018-03-08T22:43:26Z | 2018-03-08T22:43:26Z | CONTRIBUTOR | This looks awesome. Thoughts on where you think units fits in here? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support __array_ufunc__ for xarray objects. 302153432 | |
368171446 | https://github.com/pydata/xarray/issues/1935#issuecomment-368171446 | https://api.github.com/repos/pydata/xarray/issues/1935 | MDEyOklzc3VlQ29tbWVudDM2ODE3MTQ0Ng== | dopplershift 221526 | 2018-02-23T23:43:07Z | 2018-02-23T23:43:07Z | CONTRIBUTOR | @tlechauve One way or another, I'm sure @lesserwhirls would be interested to hear your thoughts on THREDDS. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Not compatible with PyPy and dask.array. 299346082 | |
367778778 | https://github.com/pydata/xarray/pull/1924#issuecomment-367778778 | https://api.github.com/repos/pydata/xarray/issues/1924 | MDEyOklzc3VlQ29tbWVudDM2Nzc3ODc3OA== | dopplershift 221526 | 2018-02-22T18:40:25Z | 2018-02-22T18:40:25Z | CONTRIBUTOR | I don't have much preference between the two--I picked one and it worked for me. I just fixed up the initial import cleanups by hand and didn't find the need for a tool to do so. In my experience, having formatting errors hold up a PR hasn't been a problem. IMO, import ordering is a more important style issue than many of the things that flake8 catches. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
isort 298437967 | |
367771955 | https://github.com/pydata/xarray/pull/1924#issuecomment-367771955 | https://api.github.com/repos/pydata/xarray/issues/1924 | MDEyOklzc3VlQ29tbWVudDM2Nzc3MTk1NQ== | dopplershift 221526 | 2018-02-22T18:17:27Z | 2018-02-22T18:17:27Z | CONTRIBUTOR | I recommend flake8-import-order. If you install that plugin, then flake8 will enforce import ordering and grouping. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
isort 298437967 | |
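For reference, a sketch of the configuration this recommendation implies, once `flake8-import-order` is installed (the `application-import-names` value is just an example for this project):

```ini
# setup.cfg (illustrative): with flake8-import-order installed,
# flake8 picks up the plugin and enforces these options.
[flake8]
import-order-style = google
application-import-names = xarray
```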
348634874 | https://github.com/pydata/xarray/issues/1749#issuecomment-348634874 | https://api.github.com/repos/pydata/xarray/issues/1749 | MDEyOklzc3VlQ29tbWVudDM0ODYzNDg3NA== | dopplershift 221526 | 2017-12-01T22:48:58Z | 2017-12-01T22:48:58Z | CONTRIBUTOR | Given the definitions of these two, as I understand them, I think it makes sense to mask both values. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
_FillValue and missing_value not allowed for the same variable 278122300 |
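A plain-NumPy sketch of the behavior argued for above, treating both `_FillValue` and `missing_value` as missing data on decode; the sentinel values and array are made up for illustration:

```python
import numpy as np

# Hypothetical CF-style attributes; both should be masked on decode.
fill_value = -9999.0
missing_value = -8888.0

raw = np.array([1.0, -9999.0, 3.0, -8888.0, 5.0])

# Replace any occurrence of either sentinel with NaN.
decoded = np.where(np.isin(raw, (fill_value, missing_value)), np.nan, raw)
```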
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);