issue_comments
37 rows where user = 3698640 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
1234643971 | https://github.com/pydata/xarray/issues/3731#issuecomment-1234643971 | https://api.github.com/repos/pydata/xarray/issues/3731 | IC_kwDOAMm_X85JlywD | delgadom 3698640 | 2022-09-01T18:36:30Z | 2022-09-01T18:36:30Z | CONTRIBUTOR | FWIW I could definitely see use cases for allowing something like this... I have used cumbersome/ugly workarounds to work with variance-covariance matrices etc. So I'm not weighing in on the "this should raise an error" debate. I got briefly excited when I saw it didn't raise an error, until everything started unraveling 🙃 |
{ "total_count": 2, "+1": 1, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Repeated coordinates leads to unintuitive (broken?) indexing behaviour 557257598 | |
1234639080 | https://github.com/pydata/xarray/issues/3731#issuecomment-1234639080 | https://api.github.com/repos/pydata/xarray/issues/3731 | IC_kwDOAMm_X85Jlxjo | delgadom 3698640 | 2022-09-01T18:31:08Z | 2022-09-01T18:31:08Z | CONTRIBUTOR | ooh this is a fun one! came across this issue when we stumbled across a pedantic case writing tests (H/T @brews). I expected this to "fail loudly in the constructor" but it doesn't. Note that currently, AFAICT, you cannot use positional slicing to achieve an intuitive result - the behavior seems more undefined/unpredictable:
```python
# setup
import xarray as xr, pandas as pd, numpy as np

da = xr.DataArray(
    np.arange(8).reshape(2, 2, 2),
    coords=[[0, 1], [0, 1], ['a', 'b']],
    dims=["ni", "ni", "shh"],
)
# da's repr shows:
# Coordinates:
#   * ni       (ni) int64 0 1
#   * shh      (shh) <U1 'a' 'b'

In [7]: da[:, 0, :]  # positional slicing along second dim slices both dims
Out[7]:
<xarray.DataArray (shh: 2)>
array([0, 1])
Coordinates:
    ni       int64 0
  * shh      (shh) <U1 'a' 'b'
``` |
{ "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Repeated coordinates leads to unintuitive (broken?) indexing behaviour 557257598 | |
1115045538 | https://github.com/pydata/xarray/issues/3476#issuecomment-1115045538 | https://api.github.com/repos/pydata/xarray/issues/3476 | IC_kwDOAMm_X85Cdj6i | delgadom 3698640 | 2022-05-02T15:38:11Z | 2022-08-08T15:32:52Z | CONTRIBUTOR | This has been happening a lot to me lately when writing to zarr. Thanks to @bolliger32 for the tip - this usually works like a charm:
```python
for v in list(ds.coords.keys()):
    if ds.coords[v].dtype == object:
        ds.coords[v] = ds.coords[v].astype("unicode")

for v in list(ds.variables.keys()):
    if ds[v].dtype == object:
        ds[v] = ds[v].astype("unicode")
```
Note the flag raised by @FlorisCalkoen below - don't just throw this at all your writes! There are other object types (e.g. CFTime) which you probably don't want to convert to string. This is just a patch to get around this issue for dataarrays with string coords/variables. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Error when writing string coordinate variables to zarr 516306758 | |
1208280450 | https://github.com/pydata/xarray/issues/3476#issuecomment-1208280450 | https://api.github.com/repos/pydata/xarray/issues/3476 | IC_kwDOAMm_X85IBOWC | delgadom 3698640 | 2022-08-08T15:30:52Z | 2022-08-08T15:31:13Z | CONTRIBUTOR | ha - yeah that's a good flag. I definitely didn't intend for that to be a universally applied patch! so probably should have included a buyer beware. but we did find that clearing the encoding doesn't always do the trick for string arrays. So a comprehensive patch will probably need to be more nuanced. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Error when writing string coordinate variables to zarr 516306758 | |
1170638295 | https://github.com/pydata/xarray/pull/6727#issuecomment-1170638295 | https://api.github.com/repos/pydata/xarray/issues/6727 | IC_kwDOAMm_X85FxoXX | delgadom 3698640 | 2022-06-30T01:00:21Z | 2022-06-30T01:00:21Z | CONTRIBUTOR | D'oh! Thanks so much for the catch! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
resolve the timeouts on RTD 1284543780 | |
1165821172 | https://github.com/pydata/xarray/pull/6542#issuecomment-1165821172 | https://api.github.com/repos/pydata/xarray/issues/6542 | IC_kwDOAMm_X85FfQT0 | delgadom 3698640 | 2022-06-24T18:18:47Z | 2022-06-24T18:18:47Z | CONTRIBUTOR | @andersy005 last I saw, this was still not building on readthedocs! I never figured out how to get around the build timeout. Are you sure this PR was good to go? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
docs on specifying chunks in to_zarr encoding arg 1221393104 | |
1123949197 | https://github.com/pydata/xarray/pull/6542#issuecomment-1123949197 | https://api.github.com/repos/pydata/xarray/issues/6542 | IC_kwDOAMm_X85C_hqN | delgadom 3698640 | 2022-05-11T15:45:24Z | 2022-05-11T15:45:24Z | CONTRIBUTOR |
@jakirkham appreciate that! And just to clarify, my confusion/frustration in the past was simply around the issue that I'm documenting here, which I've finally figured out! So hopefully this will help resolve the problem for future users. I agree with Ryan that there might be some changes in the defaults that could be helpful here, though setting intuitive defaults other than zarr's defaults could get messy for complex datasets with a mix of dask and in-memory arrays. But I think that's all on the xarray side? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
docs on specifying chunks in to_zarr encoding arg 1221393104 | |
1122796867 | https://github.com/pydata/xarray/pull/6542#issuecomment-1122796867 | https://api.github.com/repos/pydata/xarray/issues/6542 | IC_kwDOAMm_X85C7IVD | delgadom 3698640 | 2022-05-10T19:48:34Z | 2022-05-10T19:49:04Z | CONTRIBUTOR | @jakirkham were you thinking a reference to the dask docs for more info on optimal chunk sizing and aligning with storage? or are you suggesting the proposed docs change is too complex? I was trying to address the lack of documentation on specifying chunks within a zarr array for non-dask arrays/coordinates, but also covering the weedsy (but common) case of datasets with a mix of dask & in-memory arrays/coords like in my example. I have been frustrated by zarr stores I've written with a couple dozen array chunks and thousands of coordinate chunks for this reason, but it's definitely a gnarly topic to cover concisely :P |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
docs on specifying chunks in to_zarr encoding arg 1221393104 | |
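For reference, a minimal sketch of the to_zarr encoding usage discussed in this thread; the store path, variable names, and chunk sizes are illustrative, not taken from the PR:
```python
import numpy as np
import xarray as xr

# an in-memory (non-dask) dataset: without explicit encoding, zarr falls
# back to its own default chunking when these arrays are written
ds = xr.Dataset(
    {"tas": (("lat", "lon"), np.zeros((1000, 1000)))},
    coords={"lat": np.arange(1000.0), "lon": np.arange(1000.0)},
)

# "chunks" in the per-variable encoding dict sets the on-disk zarr chunk shape
ds.to_zarr(
    "example.zarr",
    mode="w",
    encoding={"tas": {"chunks": (250, 250)}},
)
```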
1122790681 | https://github.com/pydata/xarray/pull/6542#issuecomment-1122790681 | https://api.github.com/repos/pydata/xarray/issues/6542 | IC_kwDOAMm_X85C7G0Z | delgadom 3698640 | 2022-05-10T19:41:12Z | 2022-05-10T19:41:12Z | CONTRIBUTOR | thank you all for being patient with this PR! Seems the build failed again for the same reason. I think there might be something wrong with my examples, though it beats me what the issue is. As far as I can tell, most builds come in somewhere in the mid-900s (seconds) on readthedocs, but my branch consistently times out at 1900s. I'll see if I can muster a bit of time to run through the exact rtd build workflow and figure out what's going on, but probably won't get to it until this weekend. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
docs on specifying chunks in to_zarr encoding arg 1221393104 | |
1119775222 | https://github.com/pydata/xarray/pull/6542#issuecomment-1119775222 | https://api.github.com/repos/pydata/xarray/issues/6542 | IC_kwDOAMm_X85Cvmn2 | delgadom 3698640 | 2022-05-06T16:07:44Z | 2022-05-06T16:07:44Z | CONTRIBUTOR | now I'm getting reproducible build timeouts 😭 The docs build alone (just Thanks for all the help everyone! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
docs on specifying chunks in to_zarr encoding arg 1221393104 | |
1118786888 | https://github.com/pydata/xarray/pull/6542#issuecomment-1118786888 | https://api.github.com/repos/pydata/xarray/issues/6542 | IC_kwDOAMm_X85Cr1VI | delgadom 3698640 | 2022-05-05T16:36:02Z | 2022-05-05T16:36:02Z | CONTRIBUTOR | @dcherian - yep I was trying to follow that guide closely but was still struggling with building using conda on my laptop's miniconda environment. The sphinx ipython directive kept running in the wrong conda environment, even after deleting sphinx, ipython, and ipykernel on my base env and making sure the env I was running |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
docs on specifying chunks in to_zarr encoding arg 1221393104 | |
1118775756 | https://github.com/pydata/xarray/pull/6542#issuecomment-1118775756 | https://api.github.com/repos/pydata/xarray/issues/6542 | IC_kwDOAMm_X85CrynM | delgadom 3698640 | 2022-05-05T16:31:06Z | 2022-05-05T16:31:06Z | CONTRIBUTOR | !!! @andersy005 thank you so much! yes - I was using a shallow clone inside the docker version of the build. I really appreciate the review and for catching my error. I'll clean this up and push the changes. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
docs on specifying chunks in to_zarr encoding arg 1221393104 | |
1114097888 | https://github.com/pydata/xarray/pull/6542#issuecomment-1114097888 | https://api.github.com/repos/pydata/xarray/issues/6542 | IC_kwDOAMm_X85CZ8jg | delgadom 3698640 | 2022-05-01T01:47:27Z | 2022-05-01T01:47:27Z | CONTRIBUTOR | sorry I know I said I'd fix this but I'm having a very hard time figuring out what is wrong and how to build the docs. I had to set up a docker image to build them because I couldn't get the ipython directive to use the right conda env on my laptop, and now I'm getting a version error when pandas encounters the default build number on my fork. I'm a bit embarrassed that I can't figure this out, but... I think I might need a hand getting this across the finish line 😕
```
....
13 1597.6 reading sources... [ 97%] getting-started-guide/faq
13 1600.8 reading sources... [ 97%] getting-started-guide/index
13 1600.8 reading sources... [ 97%] getting-started-guide/installing
13 1601.0 reading sources... [ 97%] getting-started-guide/quick-overview
13 1609.3 WARNING:
13 1609.3 >>>-------------------------------------------------------------------------
13 1609.3 Exception in /xarray/doc/getting-started-guide/quick-overview.rst at block ending on line 169
13 1609.3 Specify :okexcept: as an option in the ipython:: block to suppress this message
13 1609.3 ---------------------------------------------------------------------------
13 1609.3 ImportError                               Traceback (most recent call last)
13 1609.3 Input In [40], in <cell line: 1>()
13 1609.3 ----> 1 series.to_xarray()
13 1609.3
13 1609.3 File /srv/conda/envs/xarray-doc/lib/python3.9/site-packages/pandas/core/generic.py:3173, in NDFrame.to_xarray(self)
13 1609.3    3096 @final
13 1609.3    3097 def to_xarray(self):
13 1609.3    3098     """
13 1609.3    3099     Return an xarray object from the pandas object.
13 1609.3    3100
13 1609.3    (...)
13 1609.3    3171     speed    (date, animal) int64 350 18 361 15
13 1609.3    3172     """
13 1609.3 -> 3173     xarray = import_optional_dependency("xarray")
13 1609.3    3175     if self.ndim == 1:
13 1609.3    3176         return xarray.DataArray.from_series(self)
13 1609.3
13 1609.3 File /srv/conda/envs/xarray-doc/lib/python3.9/site-packages/pandas/compat/_optional.py:164, in import_optional_dependency(name, extra, errors, min_version)
13 1609.3     162     return None
13 1609.3     163 elif errors == "raise":
13 1609.3 --> 164     raise ImportError(msg)
13 1609.3     166 return module
13 1609.3
13 1609.3 ImportError: Pandas requires version '0.15.1' or newer of 'xarray' (version '0.1.dev1+gfdf7303' currently installed).
13 1609.3
13 1609.3 <<<-------------------------------------------------------------------------
13 1609.3
13 1609.3 Exception occurred:
13 1609.3   File "/srv/conda/envs/xarray-doc/lib/python3.9/site-packages/IPython/sphinxext/ipython_directive.py", line 584, in process_input
13 1609.3     raise RuntimeError('Non Expected exception in
```
 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
docs on specifying chunks in to_zarr encoding arg 1221393104 | |
1113712207 | https://github.com/pydata/xarray/pull/6542#issuecomment-1113712207 | https://api.github.com/repos/pydata/xarray/issues/6542 | IC_kwDOAMm_X85CYeZP | delgadom 3698640 | 2022-04-29T20:44:43Z | 2022-04-29T20:44:43Z | CONTRIBUTOR | hmmm seems I've messed something up in the docs build. Apologies for the churn - will fix. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
docs on specifying chunks in to_zarr encoding arg 1221393104 | |
1098574761 | https://github.com/pydata/xarray/issues/6456#issuecomment-1098574761 | https://api.github.com/repos/pydata/xarray/issues/6456 | IC_kwDOAMm_X85Beuup | delgadom 3698640 | 2022-04-13T23:34:16Z | 2022-04-13T23:34:48Z | CONTRIBUTOR |
when I said "you're overwriting the file every iteration" I meant to put the emphasis on overwriting: with mode="w" you're telling xarray to replace the store's contents on every write. See the docs on ds.to_zarr.
This interpretation of mode is consistent across all of python - see the docs for the python builtin open. So I think changing your writes to |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Writing a a dataset to .zarr in a loop makes all the data NaNs 1197117301 | |
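A minimal sketch of the mode="w" vs. append distinction drawn in the comments above (store paths and the "t" dimension are illustrative):
```python
import numpy as np
import xarray as xr

def make_ds(i):
    # one "timestep" of data per iteration; names here are illustrative
    return xr.Dataset({"x": (("t",), np.array([float(i)]))}, coords={"t": [i]})

# mode="w" overwrites the store each time, so only the last write survives
for i in range(3):
    make_ds(i).to_zarr("overwritten.zarr", mode="w")

# appending along a dimension instead accumulates the writes
make_ds(0).to_zarr("appended.zarr", mode="w")
for i in range(1, 3):
    make_ds(i).to_zarr("appended.zarr", mode="a", append_dim="t")
```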
1096953860 | https://github.com/pydata/xarray/pull/6467#issuecomment-1096953860 | https://api.github.com/repos/pydata/xarray/issues/6467 | IC_kwDOAMm_X85BYjAE | delgadom 3698640 | 2022-04-12T16:39:53Z | 2022-04-12T16:39:53Z | CONTRIBUTOR | oof - debugging CI build process is no fun. good luck, and thanks @max-sixty !! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
allow other and drop arguments in where (gh#6466) 1199127752 | |
1094411214 | https://github.com/pydata/xarray/issues/6456#issuecomment-1094411214 | https://api.github.com/repos/pydata/xarray/issues/6456 | IC_kwDOAMm_X85BO2PO | delgadom 3698640 | 2022-04-10T23:40:49Z | 2022-04-10T23:40:49Z | CONTRIBUTOR | @tbloch1 following up on Max's suggestion - it looks like you might be overwriting the file with every iteration. See the docs on ds.to_zarr. To me, this doesn't seem likely to be a bug, but is more of a usage question. Have you tried asking on stackoverflow with the xarray tag? |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Writing a a dataset to .zarr in a loop makes all the data NaNs 1197117301 | |
1094409424 | https://github.com/pydata/xarray/pull/6467#issuecomment-1094409424 | https://api.github.com/repos/pydata/xarray/issues/6467 | IC_kwDOAMm_X85BO1zQ | delgadom 3698640 | 2022-04-10T23:29:38Z | 2022-04-10T23:29:38Z | CONTRIBUTOR | hmm. readthedocs failed because of concurrency limits (my bad) but seems to have failed to automatically retry. can someone give it a nudge? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
allow other and drop arguments in where (gh#6466) 1199127752 | |
1094342045 | https://github.com/pydata/xarray/pull/6467#issuecomment-1094342045 | https://api.github.com/repos/pydata/xarray/issues/6467 | IC_kwDOAMm_X85BOlWd | delgadom 3698640 | 2022-04-10T18:26:32Z | 2022-04-10T18:26:32Z | CONTRIBUTOR | thanks @max-sixty! I was mostly trying to be overly-cautious because the ValueError was so clearly raised intentionally, and I can't figure out why it was necessary. But maybe it was just left over from a time when providing both arguments really would have caused a problem, and it's no longer the case. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
allow other and drop arguments in where (gh#6466) 1199127752 | |
1005162696 | https://github.com/pydata/xarray/issues/6036#issuecomment-1005162696 | https://api.github.com/repos/pydata/xarray/issues/6036 | IC_kwDOAMm_X8476ZDI | delgadom 3698640 | 2022-01-04T20:53:36Z | 2022-01-04T20:54:13Z | CONTRIBUTOR | This isn't a fix for the overhead required to manage an arbitrarily large graph, but note that creating chunks this small (size 1 in this case) is explicitly not recommended. See the dask docs on Array Best Practices: Select a good chunk size - they recommend chunks no smaller than 100 MB. Your chunks are 8 bytes. This creates 1 billion tasks, which does result in an enormous overhead - there's no way around this. Note that storing this on disk would not help - the problem results from the fact that 1 billion tasks will almost certainly overwhelm any dask scheduler. The general dask best practices guide recommends keeping the number of tasks below 1 million if possible. Also, I don't think that the issue here is in specifying the universe of the tasks that need to be created, but rather in creating and managing the python task objects themselves. So pre-computing or storing them wouldn't help. For me, changing to (1000, 1000, 100) chunks (~750MB for a float64 array) reduces the time to a couple ms:
With this chunking scheme, you could store and work with much, much more data. In fact, scaling the size of your example by 3 orders of magnitude only increases the runtime by ~5x:
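A rough sketch of the kind of graph-construction timing described above (array sizes are illustrative, and nothing is computed - only the dask graph is built):
```python
import time
import dask.array as da
import xarray as xr

# (1000, 1000, 100) chunks of float64 are 1000 * 1000 * 100 * 8 bytes ~ 763 MB,
# so this ~8 TB array needs only 100 * 100 * 1 = 10,000 chunks/tasks
arr = da.zeros((100_000, 100_000, 100), chunks=(1000, 1000, 100))
ds = xr.Dataset({"x": (("dim0", "dim1", "dim2"), arr)})

t0 = time.perf_counter()
lazy_mean = ds.x.mean()  # builds the task graph only; no computation happens
print(f"graph built in {time.perf_counter() - t0:.3f}s "
      f"over {arr.npartitions} chunks")
```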
|
{ "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
`xarray.open_zarr()` takes too long to lazy load when the data arrays contain a large number of Dask chunks. 1068225524 | |
1003460778 | https://github.com/pydata/xarray/issues/5994#issuecomment-1003460778 | https://api.github.com/repos/pydata/xarray/issues/5994 | IC_kwDOAMm_X847z5iq | delgadom 3698640 | 2021-12-31T22:36:04Z | 2021-12-31T22:41:51Z | CONTRIBUTOR | This looks like it could be a good improvement! Just flagging that in Do you see the Your point about using the shortcut for |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
interpolate_na, x and y arrays must have at least 2 entries, warning instead of raise ? 1055867960 | |
1002777049 | https://github.com/pydata/xarray/issues/6124#issuecomment-1002777049 | https://api.github.com/repos/pydata/xarray/issues/6124 | IC_kwDOAMm_X847xSnZ | delgadom 3698640 | 2021-12-29T21:09:12Z | 2021-12-29T21:09:12Z | CONTRIBUTOR | I realize this may be a larger discussion, but the implementation is so easy I went ahead and filed a PR that issues a PendingDeprecationWarning in |
{ "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
bool(ds) should raise a "the truth value of a Dataset is ambiguous" error 1090229430 | |
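A minimal sketch of the kind of change described in the comment above (not the actual PR): a hypothetical __bool__ that warns before delegating to the existing Mapping-style behavior:
```python
import warnings

class Dataset:
    """Stand-in for xarray.Dataset, reduced to the relevant method."""

    def __init__(self, data_vars):
        self._data_vars = data_vars  # hypothetical storage of data variables

    def __bool__(self):
        warnings.warn(
            "the truth value of a Dataset is ambiguous and may raise "
            "an error in a future version",
            PendingDeprecationWarning,
        )
        # current Mapping-style behavior: truthy iff any data variables exist
        return len(self._data_vars) > 0
```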
1002673070 | https://github.com/pydata/xarray/issues/6124#issuecomment-1002673070 | https://api.github.com/repos/pydata/xarray/issues/6124 | IC_kwDOAMm_X847w5Ou | delgadom 3698640 | 2021-12-29T16:22:10Z | 2021-12-29T16:22:10Z | CONTRIBUTOR |
@max-sixty I'm not sure what this would look like. Do you mean a warning, or are you hinting that the bar that would need to be met is a silver bullet that preserves bool(ds) but somehow isn't confusing? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
bool(ds) should raise a "the truth value of a Dataset is ambiguous" error 1090229430 | |
1002671461 | https://github.com/pydata/xarray/issues/6124#issuecomment-1002671461 | https://api.github.com/repos/pydata/xarray/issues/6124 | IC_kwDOAMm_X847w41l | delgadom 3698640 | 2021-12-29T16:18:35Z | 2021-12-29T16:18:35Z | CONTRIBUTOR | Yeah… I do understand how it's currently working and why, and the behavior is certainly intuitive to those who appreciate the mapping inheritance. That said, I feel I have to make a last-stand argument, because this trips people up quite often (on my team and elsewhere). I haven't yet come across an example of anyone using this correctly, but I see users misusing it all the time. The examples and behavior you're showing @Illviljan seem to me more like the natural result of an implementation detail than a critical principle of the dataset design. While it's obvious why
I don't know much about the mapping protocol or how closely it must be followed. Is the idea here that packages building on xarray (or interoperability features in e.g. numpy or dask) depend on a strict adherence to the full spec? |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
bool(ds) should raise a "the truth value of a Dataset is ambiguous" error 1090229430 | |
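For context on the Mapping behavior debated above, a short illustration (variable names are arbitrary):
```python
import xarray as xr

ds = xr.Dataset()
print(bool(ds))   # False: an empty Dataset, like an empty dict, is falsy
print(len(ds))    # 0: the Mapping view is over data variables

ds["x"] = 1
print(bool(ds))   # True: "has at least one data variable"
print(list(ds))   # ['x']: iteration also follows the Mapping protocol
```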
610615488 | https://github.com/pydata/xarray/issues/3951#issuecomment-610615488 | https://api.github.com/repos/pydata/xarray/issues/3951 | MDEyOklzc3VlQ29tbWVudDYxMDYxNTQ4OA== | delgadom 3698640 | 2020-04-07T20:55:11Z | 2020-04-07T20:55:11Z | CONTRIBUTOR | Here's a test script I'm using for this:
```bash
echo '$ conda create -n py37xr14 -c conda-forge --yes python=3.7 xarray=0.14.1'
conda create -n py37xr14 -c conda-forge --yes python=3.7 xarray=0.14.1 > /dev/null

echo '$ conda create -n py37xr15 -c conda-forge --yes python=3.7 xarray=0.15.1'
conda create -n py37xr15 -c conda-forge --yes python=3.7 xarray=0.15.1 > /dev/null

echo '$ conda run -n py37xr14 python test.py'
conda run -n py37xr14 python test.py
echo

echo '$ conda run -n py37xr15 python test.py'
conda run -n py37xr15 python test.py
echo

echo '$ conda list -n py37xr14'
conda list -n py37xr14
echo

echo '$ conda list -n py37xr15'
conda list -n py37xr15

conda env remove -n py37xr14 > /dev/null 2>&1
conda env remove -n py37xr15 > /dev/null 2>&1
``` |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
series.to_xarray() fails when MultiIndex not sorted in xarray 0.15.1 596115014 | |
610612332 | https://github.com/pydata/xarray/issues/3951#issuecomment-610612332 | https://api.github.com/repos/pydata/xarray/issues/3951 | MDEyOklzc3VlQ29tbWVudDYxMDYxMjMzMg== | delgadom 3698640 | 2020-04-07T20:47:51Z | 2020-04-07T20:47:51Z | CONTRIBUTOR | yeah I use this pattern all the time - df.stack().to_xarray() seems to now fail unless your columns were sorted alphabetically. not sure yet where this is happening but it does result in some pernicious bad data errors that can be hard to debug if you catch them at all. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
series.to_xarray() fails when MultiIndex not sorted in xarray 0.15.1 596115014 | |
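A minimal sketch of the df.stack().to_xarray() pattern from the comments above (column labels deliberately unsorted; whether values actually misalign depends on the xarray version, per the issue):
```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    np.arange(4).reshape(2, 2),
    index=pd.Index([0, 1], name="idx"),
    columns=pd.Index(["b", "a"], name="col"),  # not alphabetically sorted
)

# stack to a Series with a (idx, col) MultiIndex, then convert to a DataArray;
# under the regression, values could silently land on the wrong labels
da = df.stack().to_xarray()
print(da.sel(idx=0, col="b").item() == df.loc[0, "b"])
```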
373541528 | https://github.com/pydata/xarray/issues/1075#issuecomment-373541528 | https://api.github.com/repos/pydata/xarray/issues/1075 | MDEyOklzc3VlQ29tbWVudDM3MzU0MTUyOA== | delgadom 3698640 | 2018-03-15T22:21:51Z | 2018-03-16T01:33:24Z | CONTRIBUTOR | xarray==0.10.2, netCDF4==1.3.1. Just tried it again and didn't have any issues:
```python
import xarray as xr
import requests
import netCDF4

patt = (
    'http://nasanex.s3.amazonaws.com/NEX-GDDP/BCSD/{scen}/day/atmos/{var}/'
    'r1i1p1/v1.0/{var}_day_BCSD_{scen}_r1i1p1_{model}_{year}.nc')

def open_url_dataset(url):
    # fetch the remote file into memory and open it without touching disk
    # (same recipe as the requests/netCDF4 session in the comment below)
    res = requests.get(url)
    nc4_ds = netCDF4.Dataset(url.split('/')[-1], memory=res.content)
    return xr.open_dataset(xr.backends.NetCDF4DataStore(nc4_ds))

ds = open_url_dataset(url=patt.format(
    model='GFDL-ESM2G', scen='historical', var='tasmax', year=1988))
ds
``` |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support creating DataSet from streaming object 186895655 | |
357125148 | https://github.com/pydata/xarray/issues/1075#issuecomment-357125148 | https://api.github.com/repos/pydata/xarray/issues/1075 | MDEyOklzc3VlQ29tbWVudDM1NzEyNTE0OA== | delgadom 3698640 | 2018-01-12T02:27:27Z | 2018-01-12T02:27:27Z | CONTRIBUTOR | yes! Thanks @jhamman and @shoyer. I hadn't tried it yet, but just did. Worked great!
```python
In [1]: import xarray as xr
   ...: import requests
   ...: import netCDF4
   ...:
   ...: %matplotlib inline

In [2]: res = requests.get(
   ...:     'http://nasanex.s3.amazonaws.com/NEX-GDDP/BCSD/rcp45/day/atmos/tasmin/' +
   ...:     'r1i1p1/v1.0/tasmin_day_BCSD_rcp45_r1i1p1_CESM1-BGC_2073.nc')

In [3]: res.status_code
Out[3]: 200

In [4]: res.headers['content-type']
Out[4]: 'application/x-netcdf'

In [5]: nc4_ds = netCDF4.Dataset('tasmin_day_BCSD_rcp45_r1i1p1_CESM1-BGC_2073', memory=res.content)

In [6]: store = xr.backends.NetCDF4DataStore(nc4_ds)

In [7]: ds = xr.open_dataset(store)

In [8]: ds.tasmin.isel(time=0).plot()
/global/home/users/mdelgado/git/public/xarray/xarray/plot/utils.py:51: FutureWarning: 'pandas.tseries.converter.register' has been moved and renamed to 'pandas.plotting.register_matplotlib_converters'.
  converter.register()
Out[8]: <matplotlib.collections.QuadMesh at 0x2aede3c922b0>
``` |
{ "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support creating DataSet from streaming object 186895655 | |
354469859 | https://github.com/pydata/xarray/pull/1802#issuecomment-354469859 | https://api.github.com/repos/pydata/xarray/issues/1802 | MDEyOklzc3VlQ29tbWVudDM1NDQ2OTg1OQ== | delgadom 3698640 | 2017-12-29T16:45:37Z | 2017-12-29T16:45:37Z | CONTRIBUTOR | Ok this is good to go if you all do want to enable _FillValue for variable-length unicode strings with a netCDF4 backend. Seems like there's a lot of prior work/thinking in this space though so no worries if you want to wait. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Handle _FillValue in variable-length unicode string variables 285006452 | |
354379246 | https://github.com/pydata/xarray/pull/1802#issuecomment-354379246 | https://api.github.com/repos/pydata/xarray/issues/1802 | MDEyOklzc3VlQ29tbWVudDM1NDM3OTI0Ng== | delgadom 3698640 | 2017-12-29T00:22:57Z | 2017-12-29T00:22:57Z | CONTRIBUTOR | lol. no I'm just walking around in your footsteps @shoyer. I've just enabled the tests you presumably wrote for #1647 & #1648. Curious why variable-length unicode strings with _FillValue using netCDF4 don't currently work in master? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Handle _FillValue in variable-length unicode string variables 285006452 | |
354378158 | https://github.com/pydata/xarray/pull/1802#issuecomment-354378158 | https://api.github.com/repos/pydata/xarray/issues/1802 | MDEyOklzc3VlQ29tbWVudDM1NDM3ODE1OA== | delgadom 3698640 | 2017-12-29T00:11:53Z | 2017-12-29T00:11:53Z | CONTRIBUTOR | hmm. Seems I'm touching on a much larger issue here: Unidata/netcdf4-python#730. The round-trip works for me using a netcdf4 engine once this fix is implemented in conventions.py. There are tests that are ready to demonstrate this in test_backends.py:836-843, but running these tests (by removing the
Should these use cases be split up? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Handle _FillValue in variable-length unicode string variables 285006452 | |
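A minimal sketch of the round-trip in question (file name and values are illustrative; whether it runs cleanly depends on the xarray/netCDF4 versions, which is what the PR addresses):
```python
import numpy as np
import xarray as xr

# variable-length unicode string variable with a _FillValue in its encoding
ds = xr.Dataset({"names": (("x",), np.array(["foo", "bar"], dtype=object))})
ds["names"].encoding["_FillValue"] = ""  # fill value for missing strings

ds.to_netcdf("strings.nc", engine="netcdf4")
back = xr.open_dataset("strings.nc", engine="netcdf4")
print(back["names"].values)
```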
354370636 | https://github.com/pydata/xarray/issues/1781#issuecomment-354370636 | https://api.github.com/repos/pydata/xarray/issues/1781 | MDEyOklzc3VlQ29tbWVudDM1NDM3MDYzNg== | delgadom 3698640 | 2017-12-28T22:55:28Z | 2017-12-28T22:55:28Z | CONTRIBUTOR | I've got a reproducible example of this (sorry for the length):
```python
In [1]: import xarray as xr
   ...: import numpy as np
   ...: import pandas as pd
   ...: import netCDF4
```
This seems to be produced by the fact that |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
UnboundLocalError when opening netCDF file 282061228 | |
347705483 | https://github.com/pydata/xarray/issues/1075#issuecomment-347705483 | https://api.github.com/repos/pydata/xarray/issues/1075 | MDEyOklzc3VlQ29tbWVudDM0NzcwNTQ4Mw== | delgadom 3698640 | 2017-11-28T23:58:41Z | 2017-11-28T23:58:41Z | CONTRIBUTOR | Thanks @shoyer. So you can download the entire object into memory and then create a file image and read that? While not a full fix, it's definitely an improvement over the download-to-disk-then-read workflow! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support creating DataSet from streaming object 186895655 | |
314154076 | https://github.com/pydata/xarray/issues/1472#issuecomment-314154076 | https://api.github.com/repos/pydata/xarray/issues/1472 | MDEyOklzc3VlQ29tbWVudDMxNDE1NDA3Ng== | delgadom 3698640 | 2017-07-10T16:08:30Z | 2017-07-10T16:08:30Z | CONTRIBUTOR | I wasn't misled by the docs, just by my intuition. But now that you've made the distinction between:
```python
In [24]: arr.sel(time='2012')
Out[24]:
<xarray.DataArray (time: 1, age: 4)>
array([[ 0.244146,  0.702819,  0.06614 ,  0.211758]])
Coordinates:
  * age      (age) object 'age0' 'age1' 'age2' 'age3'
  * time     (time) datetime64[ns] 2012-12-31

In [25]: arr.sel(time='2012-12-31')
Out[25]:
<xarray.DataArray (age: 4)>
array([ 0.244146,  0.702819,  0.06614 ,  0.211758])
Coordinates:
  * age      (age) object 'age0' 'age1' 'age2' 'age3'
    time     datetime64[ns] 2012-12-31

In [26]: arr.sel(time=np.datetime64('2012-12-31'))
Out[26]:
<xarray.DataArray (age: 4)>
array([ 0.244146,  0.702819,  0.06614 ,  0.211758])
Coordinates:
  * age      (age) object 'age0' 'age1' 'age2' 'age3'
    time     datetime64[ns] 2012-12-31
``` |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
.sel(drop=True) fails to drop coordinate 241389297 | |
314136206 | https://github.com/pydata/xarray/issues/1472#issuecomment-314136206 | https://api.github.com/repos/pydata/xarray/issues/1472 | MDEyOklzc3VlQ29tbWVudDMxNDEzNjIwNg== | delgadom 3698640 | 2017-07-10T15:10:50Z | 2017-07-10T15:10:50Z | CONTRIBUTOR | thanks for the schooling. learned something new! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
.sel(drop=True) fails to drop coordinate 241389297 | |
258025809 | https://github.com/pydata/xarray/issues/1075#issuecomment-258025809 | https://api.github.com/repos/pydata/xarray/issues/1075 | MDEyOklzc3VlQ29tbWVudDI1ODAyNTgwOQ== | delgadom 3698640 | 2016-11-02T23:03:34Z | 2016-11-02T23:03:34Z | CONTRIBUTOR | Got it. :( Thanks! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support creating DataSet from streaming object 186895655 | |
257062571 | https://github.com/pydata/xarray/issues/798#issuecomment-257062571 | https://api.github.com/repos/pydata/xarray/issues/798 | MDEyOklzc3VlQ29tbWVudDI1NzA2MjU3MQ== | delgadom 3698640 | 2016-10-29T01:26:47Z | 2016-10-29T01:26:47Z | CONTRIBUTOR | Could this extend the OPeNDAP interface? That would solve the metadata problem and would provide quick access to the distributed workers. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Integration with dask/distributed (xarray backend design) 142498006 |
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
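Given that schema, a sketch of reproducing this page's query directly against the underlying SQLite file (the database filename is hypothetical):
```python
import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical path to the Datasette DB
rows = conn.execute(
    'SELECT id, updated_at, body FROM issue_comments '
    'WHERE "user" = ? ORDER BY updated_at DESC',
    (3698640,),
).fetchall()
print(len(rows))  # 37, per the row count at the top of this page
```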