issue_comments
36 rows where user = 22566757 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
1259913775 | https://github.com/pydata/xarray/pull/7080#issuecomment-1259913775 | https://api.github.com/repos/pydata/xarray/issues/7080 | IC_kwDOAMm_X85LGMIv | DWesl 22566757 | 2022-09-27T18:47:00Z | 2022-09-27T18:47:00Z | CONTRIBUTOR | I think the current default for two-dimensional plots is to try to re-use an existing axis if neither |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Fix `utils.get_axis` with kwargs 1385143758 | |
1259894192 | https://github.com/pydata/xarray/issues/7076#issuecomment-1259894192 | https://api.github.com/repos/pydata/xarray/issues/7076 | IC_kwDOAMm_X85LGHWw | DWesl 22566757 | 2022-09-27T18:28:26Z | 2022-09-27T18:28:26Z | CONTRIBUTOR | Fix confirmed, thank you. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Can't unstack concatenated DataArrays 1384465119 | |
1115251379 | https://github.com/pydata/xarray/issues/6439#issuecomment-1115251379 | https://api.github.com/repos/pydata/xarray/issues/6439 | IC_kwDOAMm_X85CeWKz | DWesl 22566757 | 2022-05-02T19:00:47Z | 2022-05-02T19:05:19Z | CONTRIBUTOR | Oh, right, you suggested that a bit ago. When I checkout Still not sure what fixed this, but since it's working, I don't care so much. I will wait for this to show up in a release. Thank you! |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Unstacking the diagonals of a sequence of matrices raises ValueError: IndexVariable objects must be 1-dimensional 1192449540 | |
1115127711 | https://github.com/pydata/xarray/issues/6439#issuecomment-1115127711 | https://api.github.com/repos/pydata/xarray/issues/6439 | IC_kwDOAMm_X85Cd3-f | DWesl 22566757 | 2022-05-02T17:04:13Z | 2022-05-02T17:45:56Z | CONTRIBUTOR |
I am aware that I can extract the diagonal of the arrays by using the same index for each argument of The bit that interests me is unstacking the relevant dimension, because the data in the original case comes to me with, effectively, a stacked dimension, and I would like to turn it back into an unstacked dimension because that is what I am used to using That is to say, skipping the unstacking rather defeats the purpose of what I am trying to do, unless you have suggestions for how to create a two-dimensional plot (one using something like |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Unstacking the diagonals of a sequence of matrices raises ValueError: IndexVariable objects must be 1-dimensional 1192449540 | |
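A minimal sketch of the diagonal-extraction step alluded to in the comment above (the elided call is presumably `DataArray.isel` with vectorized indexing; the array and dimension names here are made up):

```python
import numpy as np
import xarray as xr

# Hypothetical stack of matrices indexed by (case, row, col).
arr = xr.DataArray(
    np.arange(2 * 3 * 3).reshape(2, 3, 3), dims=("case", "row", "col")
)

# Passing the same DataArray index for both matrix dimensions triggers
# pointwise (vectorized) indexing and pulls out the diagonal along a new
# "diag" dimension.
idx = xr.DataArray(np.arange(3), dims="diag")
diagonal = arr.isel(row=idx, col=idx)
print(diagonal.dims)  # ('case', 'diag')
```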
1112552981 | https://github.com/pydata/xarray/issues/2780#issuecomment-1112552981 | https://api.github.com/repos/pydata/xarray/issues/2780 | IC_kwDOAMm_X85CUDYV | DWesl 22566757 | 2022-04-28T18:57:26Z | 2022-04-28T19:01:34Z | CONTRIBUTOR | I found a way to get the sample dataset to save to a smaller netCDF:

```python
import os

import numpy as np
import numpy.testing as np_tst
import pandas as pd
import xarray as xr

# Original example
# Create pandas DataFrame
df = pd.DataFrame(
    np.random.randint(low=0, high=10, size=(100000, 5)),
    columns=["a", "b", "c", "d", "e"],
)

# Make 'e' a column of strings
df["e"] = df["e"].astype(str)

# Make 'f' a column of floats
DIGITS = 1
df["f"] = np.around(10 ** DIGITS * np.random.random(size=df.shape[0]), DIGITS)

# Save to csv
df.to_csv("df.csv")

# Convert to an xarray's Dataset
ds = xr.Dataset.from_dataframe(df)

# Save NetCDF file
ds.to_netcdf("ds.nc")

# Additions
def dtype_for_int_array(arry: "array of integers") -> np.dtype:
    """Find the smallest integer dtype that will encode arry."""
    ...  # (rest of the docstring and body elided in this view)


def dtype_for_str_array(
    arry: "xr.DataArray of strings", for_disk: bool = True
) -> np.dtype:
    """Find a good string dtype for encoding arry."""
    ...  # (rest of the docstring and body elided in this view)


# Set up encoding for saving to netCDF
encoding = {}
for name, var in ds.items():
    encoding[name] = {}
    ...  # (rest of the loop elided in this view)

ds.to_netcdf("ds_encoded.nc", encoding=encoding)

# Display results
stat_csv = os.stat("df.csv")
stat_nc = os.stat("ds.nc")
stat_enc = os.stat("ds_encoded.nc")
sizes = pd.Series(
    index=["CSV", "default netCDF", "encoded netCDF"],
    data=[stats.st_size for stats in [stat_csv, stat_nc, stat_enc]],
    name="File sizes",
)
print("File sizes (kB):", np.right_shift(sizes, 10), sep="\n", end="\n\n")
print("Sizes relative to CSV:", sizes / sizes.iloc[0], sep="\n", end="\n\n")

# Check that I didn't break the floats
from_disk = xr.open_dataset("ds_encoded.nc")
np_tst.assert_allclose(ds["f"], from_disk["f"], rtol=10**-DIGITS, atol=10**-DIGITS)
```

```
Sizes relative to CSV:
CSV                1.000000
default netCDF     5.230366
encoded netCDF     0.708063
Name: File sizes, dtype: float64

 10M ds.nc
1.9M df.csv
1.4M ds_encoded.nc
```

I added a column of floats with one digit before and after the decimal point to the example dataset, because why not.

Does this satisfy your use-case? Should I turn the giant loop into a function to go into xarray somewhere? If so, I should probably tie the float handling in with the new |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Automatic dtype encoding in to_netcdf 412180435 | |
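A minimal sketch of the integer-dtype part of the helper outlined in the comment above, assuming `np.min_scalar_type` is an acceptable stand-in for the elided function body (the variable and file names are just examples, not the comment's own):

```python
import numpy as np
import xarray as xr


def dtype_for_int_array(arry) -> np.dtype:
    """Pick a compact integer dtype that can hold every value in arry."""
    arry = np.asarray(arry)
    # Smallest dtype for the minimum and the maximum, promoted so both fit
    # (handles the signed/unsigned boundary, e.g. min=-1, max=200 -> int16).
    return np.promote_types(
        np.min_scalar_type(arry.min()), np.min_scalar_type(arry.max())
    )


ds = xr.Dataset({"a": ("index", np.random.randint(0, 10, size=100_000))})
encoding = {"a": {"dtype": dtype_for_int_array(ds["a"].values)}}
ds.to_netcdf("ds_encoded.nc", encoding=encoding)
```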
1069092987 | https://github.com/pydata/xarray/issues/6310#issuecomment-1069092987 | https://api.github.com/repos/pydata/xarray/issues/6310 | IC_kwDOAMm_X84_uRB7 | DWesl 22566757 | 2022-03-16T12:50:50Z | 2022-03-16T12:50:50Z | CONTRIBUTOR | That could work. Are you set up to check that? That can be either a full repository checkout or an XArray installation you can edit. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Only auxiliary coordinates are listed in nc variable attribute 1154014066 | |
1069084130 | https://github.com/pydata/xarray/issues/6310#issuecomment-1069084130 | https://api.github.com/repos/pydata/xarray/issues/6310 | IC_kwDOAMm_X84_uO3i | DWesl 22566757 | 2022-03-16T12:40:20Z | 2022-03-16T12:40:20Z | CONTRIBUTOR | Given this:
https://github.com/pydata/xarray/blob/613a8fda4f07181fbc41d6ff2296fec3726fd351/xarray/conventions.py#L782-L783
I think that should be working. This:
https://github.com/pydata/xarray/blob/613a8fda4f07181fbc41d6ff2296fec3726fd351/xarray/conventions.py#L770-L779
explicitly says it should, and is probably the part where things go wrong, but it should be going wrong the same way for I think
https://github.com/pydata/xarray/blob/613a8fda4f07181fbc41d6ff2296fec3726fd351/xarray/conventions.py#L758-L768
may need to be split into two conditionals, one for |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Only auxiliary coordinates are listed in nc variable attribute 1154014066 | |
1069064616 | https://github.com/pydata/xarray/issues/6310#issuecomment-1069064616 | https://api.github.com/repos/pydata/xarray/issues/6310 | IC_kwDOAMm_X84_uKGo | DWesl 22566757 | 2022-03-16T12:17:37Z | 2022-03-16T12:17:37Z | CONTRIBUTOR | I tried to find what the CF conventions say about including dimension coordinates (I'm using the name from scitools-iris rather than "coordinate variable" as used in the CF conventions to keep myself from getting confused) in the From what I remember, XArray is based on the netCDF data model, rather than the CF data model, so initializing
Based on this, I think doing solution one from the previous post on writing a dataset will always be consistent with CF, but assuming that netCDF files XArray reads into datasets will always follow this pattern would be a problem. I suspect there are tests for reading netCDF files with dimension coordinates included in
If you want to try solution three, almost all Discrete Sampling Geometry files must have a global attribute called The references from CF on whether dimension coordinates can be included in the The fifth paragraph of CF section five says:
I think this is saying that if you can represent a coordinate using just one dimension, you shouldn't use two (that is, avoid using
The first paragraph of the section on Discrete sampling geometries:
I think dimension coordinates are explicit enough to count as "unambiguously associated", even without inclusion in the
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Only auxiliary coordinates are listed in nc variable attribute 1154014066 | |
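A small sketch of the Discrete Sampling Geometry check mentioned as "solution three" in the comment above, assuming the elided global attribute name is CF's featureType (the file name is hypothetical):

```python
import xarray as xr

ds = xr.open_dataset("example_dsg.nc")  # hypothetical file

# CF Discrete Sampling Geometry files are required to carry a global
# featureType attribute ("timeSeries", "profile", "trajectory", ...).
feature_type = ds.attrs.get("featureType")
if feature_type is not None:
    print(f"DSG file, featureType={feature_type!r}")
else:
    print("No featureType attribute; treat as a regular gridded file")
```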
866314326 | https://github.com/pydata/xarray/issues/5510#issuecomment-866314326 | https://api.github.com/repos/pydata/xarray/issues/5510 | MDEyOklzc3VlQ29tbWVudDg2NjMxNDMyNg== | DWesl 22566757 | 2021-06-22T20:33:44Z | 2021-06-22T20:37:53Z | CONTRIBUTOR | ~ ```python
Not producing a Making |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Can't remove coordinates attribute from DataArrays 927336712 | |
778717611 | https://github.com/pydata/xarray/pull/2844#issuecomment-778717611 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDc3ODcxNzYxMQ== | DWesl 22566757 | 2021-02-14T03:35:55Z | 2021-02-14T03:35:55Z | CONTRIBUTOR |
It seems you've already figured this out, but for anyone else with this question, the repeat of the call on that file is part of the warning that the file does not have all the variables the attributes refer to. You can fix this by recreating the file with the listed variables added ( |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
778629061 | https://github.com/pydata/xarray/pull/2844#issuecomment-778629061 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDc3ODYyOTA2MQ== | DWesl 22566757 | 2021-02-13T14:46:25Z | 2021-02-13T14:46:25Z | CONTRIBUTOR | I think this looks good. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
761842344 | https://github.com/pydata/xarray/pull/2844#issuecomment-761842344 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDc2MTg0MjM0NA== | DWesl 22566757 | 2021-01-17T16:48:39Z | 2021-01-17T16:48:39Z | CONTRIBUTOR | Looks good to me. I was wondering where those docstrings were. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
670778071 | https://github.com/pydata/xarray/issues/4121#issuecomment-670778071 | https://api.github.com/repos/pydata/xarray/issues/4121 | MDEyOklzc3VlQ29tbWVudDY3MDc3ODA3MQ== | DWesl 22566757 | 2020-08-07T23:07:14Z | 2020-08-17T13:09:27Z | CONTRIBUTOR | #2844 used to move these variables to
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
decode_cf doesn't work for ancillary_variables in attributes 630573329 | |
670996109 | https://github.com/pydata/xarray/pull/2844#issuecomment-670996109 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDY3MDk5NjEwOQ== | DWesl 22566757 | 2020-08-09T02:17:07Z | 2020-08-09T16:36:12Z | CONTRIBUTOR | That's two people with that view so I made the change. Again, I feel that the quality flags are essentially meaningless on their own, useful primarily in the context of their associated variables, like the items currently put in the XArray On a related note, I should probably check whether this breaks conversion to an |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
670816691 | https://github.com/pydata/xarray/pull/2844#issuecomment-670816691 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDY3MDgxNjY5MQ== | DWesl 22566757 | 2020-08-08T03:25:17Z | 2020-08-08T03:53:39Z | CONTRIBUTOR | You are correct; My personal view is that the quality information should stay with the variable it describes unless explicitly dropped; I think your view is that quality information can always be extracted from the original dataset, and that no variable should carry quality information for a different variable. At this point it would be simple to remove I should point out that a similar situation arises for |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
670765806 | https://github.com/pydata/xarray/pull/2844#issuecomment-670765806 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDY3MDc2NTgwNg== | DWesl 22566757 | 2020-08-07T22:29:20Z | 2020-08-07T22:29:20Z | CONTRIBUTOR | The MinimumVersionsPolicy error appears to be a series of internal |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
670730008 | https://github.com/pydata/xarray/pull/2844#issuecomment-670730008 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDY3MDczMDAwOA== | DWesl 22566757 | 2020-08-07T22:02:47Z | 2020-08-07T22:02:47Z | CONTRIBUTOR | pydata/xarray-data#19 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
667744209 | https://github.com/pydata/xarray/pull/2844#issuecomment-667744209 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDY2Nzc0NDIwOQ== | DWesl 22566757 | 2020-08-03T00:14:24Z | 2020-08-03T19:42:02Z | CONTRIBUTOR | The Another option is to just delete the |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
667737160 | https://github.com/pydata/xarray/pull/2844#issuecomment-667737160 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDY2NzczNzE2MA== | DWesl 22566757 | 2020-08-02T23:13:26Z | 2020-08-02T23:13:26Z | CONTRIBUTOR | The example the doc build doesn't like:

```python
ds = xr.tutorial.load_dataset("rasm")
ds.to_zarr("rasm.zarr", mode="w")

import zarr

zgroup = zarr.open("rasm.zarr")
print(zgroup.tree())
dict(zgroup["Tair"].attrs)
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
658329779 | https://github.com/pydata/xarray/issues/4215#issuecomment-658329779 | https://api.github.com/repos/pydata/xarray/issues/4215 | MDEyOklzc3VlQ29tbWVudDY1ODMyOTc3OQ== | DWesl 22566757 | 2020-07-14T18:07:05Z | 2020-07-14T18:07:05Z | CONTRIBUTOR |
Is that "putting the variables in these attributes in
I tend to think of uncertainties and status flags as important for the interpretation of the associated variables that should stay with the data variables unless a decision is explicitly made to drop them. On the other hand, since XArray seems to associate coordinates with dimensions rather than with variables, I can see why this might be less than desirable. This argument would also apply to
Should this be part of #2844 or should preserving |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
setting variables named in CF attributes as coordinate variables 654889988 | |
497948836 | https://github.com/pydata/xarray/pull/2844#issuecomment-497948836 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDQ5Nzk0ODgzNg== | DWesl 22566757 | 2019-06-01T14:20:08Z | 2020-07-14T15:55:17Z | CONTRIBUTOR | On 5/31/2019 11:50 AM, dcherian wrote:
At present, the proper, CF-compliant way to do this is to have both `grid_mapping` and `bounds` variables in `data_vars`, and maintain the attributes yourself, including making sure the variables get copied into the result after relevant `ds[var_name]` and `ds.sel(axis=bounds)` operations. If you decide to move these variables to `coords`, the `bounds` variables will still get dropped on any subsetting operation, including those where the relevant axis was retained, the `grid_mapping` variables will be included in the result of all subsetting operations (including pulling out, for example, a time coordinate), and both will be included in some `coordinates` attribute when written to disk, breaking CF compliance. This PR only really addresses getting these variables in `coords` initially and keeping them out of the global `coordinates` attribute when writing to disk.
You have a point about `grid_mapping`, but applying the MetPy approach of saving the information in another, more directly useful format (`cartopy.Projection` instances) immediately after loading the file would be a way around that. For `bounds`, I think `pd.PeriodIndex` would be the most natural representation for time, and `pd.IntervalIndex` for most other 1-D cases, but that still leaves `bounds` for two-or-more-dimensional coordinates. That's a design choice I'll leave to the maintainers.
At present, `set(ds[var_name].attrs["coordinates"].split())` and `set(ds[var_name].coords) - set(ds[var_name].indexes[dim_name])` would be identical, since the `coordinates` attribute is essentially computed from the second expression on write. Do you have a use case in mind where you need specifically the list of CF auxiliary coordinates, or is that just an example of something that would change under the new proposal? I assume `units` would be moved to `encoding` only for `datetime64[ns]` and `timedelta64[ns]` variables. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
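A sketch of the equivalence claimed in the comment above between the CF coordinates attribute and xarray's non-index coordinates, using a made-up dataset with 2-D lat/lon auxiliary coordinates (all names here are illustrative):

```python
import numpy as np
import xarray as xr

# Hypothetical dataset: a data variable on (time, y, x) with 2-D auxiliary
# lat/lon coordinates, mirroring the situation discussed above.
ds = xr.Dataset(
    {"tair": (("time", "y", "x"), np.zeros((2, 3, 4)))},
    coords={
        "time": ("time", np.arange(2)),
        "lat": (("y", "x"), np.zeros((3, 4))),
        "lon": (("y", "x"), np.zeros((3, 4))),
    },
)

var = ds["tair"]
# CF auxiliary coordinates: coordinates of the variable that are not index
# (dimension) coordinates.
aux_coords = set(var.coords) - set(var.indexes)
print(aux_coords)  # {'lat', 'lon'}

# On write, xarray builds each variable's "coordinates" attribute from
# essentially this set, which is why the two expressions in the comment
# above should agree after a round trip.
```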
644405067 | https://github.com/pydata/xarray/pull/2844#issuecomment-644405067 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDY0NDQwNTA2Nw== | DWesl 22566757 | 2020-06-15T21:40:49Z | 2020-06-15T21:40:49Z | CONTRIBUTOR | This PR currently puts |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
633296515 | https://github.com/pydata/xarray/issues/2780#issuecomment-633296515 | https://api.github.com/repos/pydata/xarray/issues/2780 | MDEyOklzc3VlQ29tbWVudDYzMzI5NjUxNQ== | DWesl 22566757 | 2020-05-24T20:45:43Z | 2020-05-24T20:45:43Z | CONTRIBUTOR | For the example given, this would mean finding
For the character/string variables, the smallest representation varies a bit more: a fixed-width encoding (
Doing this correctly for floating-point types would be difficult, but I think that's outside the scope of this issue. Hopefully this gives you something to work with.

```python
import numpy as np


def dtype_for_int_array(arry: "array of integers") -> np.dtype:
    """Find the smallest integer dtype that will encode arry."""
    ...  # (rest of the docstring and body elided in this view)
```

Looking at
It looks like pandas always uses object dtype for string arrays, so the numbers in that column likely reflect the size of an array of pointers. XArray lets you use a dtype of "S1" or "U1", but I haven't found the equivalent of the |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Automatic dtype encoding in to_netcdf 412180435 | |
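For the fixed-width string option mentioned above, a minimal sketch using the "S1" encoding dtype the comment refers to (the variable and file names are made up):

```python
import numpy as np
import xarray as xr

# A column of short strings, stored in memory as an object array.
ds = xr.Dataset({"e": ("index", np.array(["3", "7", "1"], dtype=object))})

# Asking the netCDF backend for dtype "S1" stores the variable as a
# fixed-width char array (with an extra string-length dimension on disk)
# instead of variable-length strings.
ds.to_netcdf("ds_strings.nc", encoding={"e": {"dtype": "S1"}})
```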
633253434 | https://github.com/pydata/xarray/pull/2844#issuecomment-633253434 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDYzMzI1MzQzNA== | DWesl 22566757 | 2020-05-24T16:09:04Z | 2020-05-24T16:09:04Z | CONTRIBUTOR | Should I change this to put |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
633251217 | https://github.com/pydata/xarray/issues/4068#issuecomment-633251217 | https://api.github.com/repos/pydata/xarray/issues/4068 | MDEyOklzc3VlQ29tbWVudDYzMzI1MTIxNw== | DWesl 22566757 | 2020-05-24T15:53:10Z | 2020-05-24T15:53:10Z | CONTRIBUTOR | For others reading this issue, the h5netcdf workaround was discussed in #3297, with further discussion on supporting complex numbers in netCDF in cf-convention/cf-conventions#204. The short version: There is a longer discussion of why netCDF-C doesn't understand these files at Unidata/netcdf-c#267. That specific issue is for booleans, but complex numbers are likely the same. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
utility function to save complex values as a netCDF file 619347681 | |
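A short sketch of the h5netcdf workaround referenced above, via xarray's invalid_netcdf flag (the file name is arbitrary):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"signal": ("x", np.exp(1j * np.linspace(0, np.pi, 8)))})

# h5netcdf stores complex values as an HDF5 compound type; the result is
# valid HDF5 but not valid netCDF-4, so xarray requires the explicit flag,
# and netCDF-C based tools may refuse to read the file.
ds.to_netcdf("complex.nc", engine="h5netcdf", invalid_netcdf=True)

roundtrip = xr.open_dataset("complex.nc", engine="h5netcdf")
```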
597375929 | https://github.com/pydata/xarray/pull/2844#issuecomment-597375929 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDU5NzM3NTkyOQ== | DWesl 22566757 | 2020-03-10T23:54:41Z | 2020-03-10T23:54:41Z | CONTRIBUTOR | I think the choice is between If it helps lean your decision one way or the other, |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
587466776 | https://github.com/pydata/xarray/issues/3689#issuecomment-587466776 | https://api.github.com/repos/pydata/xarray/issues/3689 | MDEyOklzc3VlQ29tbWVudDU4NzQ2Njc3Ng== | DWesl 22566757 | 2020-02-18T13:44:27Z | 2020-02-18T13:44:27Z | CONTRIBUTOR |
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Decode CF bounds to coords 548607657 | |
587466093 | https://github.com/pydata/xarray/pull/2844#issuecomment-587466093 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDU4NzQ2NjA5Mw== | DWesl 22566757 | 2020-02-18T13:43:12Z | 2020-02-18T13:43:12Z | CONTRIBUTOR | The test failures seem to all be due to recent changes in Is sticking the |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
586273656 | https://github.com/pydata/xarray/pull/2844#issuecomment-586273656 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDU4NjI3MzY1Ng== | DWesl 22566757 | 2020-02-14T12:47:06Z | 2020-02-14T12:47:06Z | CONTRIBUTOR | I just noticed pandas.PeriodIndex would be an alternative to pandas.IntervalIndex for time data if which side the interval is closed on is largely irrelevant for such data. Is there an interest in using these for 1D coordinates with bounds? I think |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
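A quick illustration of the two pandas index types weighed in the comment above for 1-D coordinates with bounds (the values are arbitrary):

```python
import pandas as pd

# PeriodIndex: each label is a whole span, so the cell bounds are implicit
# and the closed side is effectively fixed by the period definition.
months = pd.period_range("2020-01", periods=3, freq="M")
print(months.start_time)
print(months.end_time)

# IntervalIndex: explicit left/right bounds plus a record of which side is
# closed, which is the part PeriodIndex glosses over.
cells = pd.interval_range(start=0.0, end=3.0, periods=3, closed="left")
print(cells.left, cells.right, cells.closed)
```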
586261327 | https://github.com/pydata/xarray/pull/3724#issuecomment-586261327 | https://api.github.com/repos/pydata/xarray/issues/3724 | MDEyOklzc3VlQ29tbWVudDU4NjI2MTMyNw== | DWesl 22566757 | 2020-02-14T12:07:21Z | 2020-02-14T12:07:21Z | CONTRIBUTOR | Not yet, at least: https://github.com/pydata/xarray/network/dependents
GitHub points my projects using XArray at https://github.com/thadncs/https-github.com-pydata-xarray rather than this repository. There seem to be a decent number of repositories there: https://github.com/thadncs/https-github.com-pydata-xarray/network/dependents
I have no idea why GitHub shifted them, nor what to do about it. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
setuptools-scm (3) 555752381 | |
497566742 | https://github.com/pydata/xarray/pull/2844#issuecomment-497566742 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDQ5NzU2Njc0Mg== | DWesl 22566757 | 2019-05-31T04:00:17Z | 2019-05-31T04:00:17Z | CONTRIBUTOR | Switched to use Re: If so, would it be sufficient to change their |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
497558317 | https://github.com/pydata/xarray/pull/2844#issuecomment-497558317 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDQ5NzU1ODMxNw== | DWesl 22566757 | 2019-05-31T03:04:06Z | 2019-05-31T03:13:53Z | CONTRIBUTOR | This is briefly mentioned above, in
https://github.com/pydata/xarray/pull/2844#discussion_r270595609
The rationale was that everywhere else xarray uses CF attributes for something, the original values of those attributes are recorded in If you feel strongly to the contrary, there's an idea at the top of this thread for getting For |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
478290053 | https://github.com/pydata/xarray/pull/2844#issuecomment-478290053 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDQ3ODI5MDA1Mw== | DWesl 22566757 | 2019-03-30T21:17:17Z | 2019-03-30T21:17:17Z | CONTRIBUTOR | I can shift this to use encoding only, but I'm having trouble figuring out where that code would go. Would the preferred path be to create VariableCoder classes for each and add them to encode_cf_variable, then add tests to xarray.tests.test_coding? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
478248763 | https://github.com/pydata/xarray/pull/2843#issuecomment-478248763 | https://api.github.com/repos/pydata/xarray/issues/2843 | MDEyOklzc3VlQ29tbWVudDQ3ODI0ODc2Mw== | DWesl 22566757 | 2019-03-30T14:04:12Z | 2019-03-30T14:04:12Z | CONTRIBUTOR | I just checked and can't find that section of the documentation now, so that seems to be consistent. I suppose that's a vote for "be sure to check current behavior before submitting old packages". I'll change my code to this new method then. Thanks |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Allow passing _FillValue=False in encoding for vlen str variables. 424262546 | |
476586154 | https://github.com/pydata/xarray/pull/2844#issuecomment-476586154 | https://api.github.com/repos/pydata/xarray/issues/2844 | MDEyOklzc3VlQ29tbWVudDQ3NjU4NjE1NA== | DWesl 22566757 | 2019-03-26T11:31:05Z | 2019-03-26T11:31:05Z | CONTRIBUTOR | Related to #1475 and #2288, but this is just keeping the metadata consistent where already present, not extending the data model to include bounds, cells, or projections. I should add a test to ensure saving still works if the bounds are lost when pulling out variables. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Read grid mapping and bounds as coords 424265093 | |
309484883 | https://github.com/pydata/xarray/pull/814#issuecomment-309484883 | https://api.github.com/repos/pydata/xarray/issues/814 | MDEyOklzc3VlQ29tbWVudDMwOTQ4NDg4Mw== | DWesl 22566757 | 2017-06-19T15:58:27Z | 2017-06-19T15:58:27Z | CONTRIBUTOR | If you're still looking for the old tests, it looks like they disappeared in the last merge commit, f48de5. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray to and from iris 145140657 |
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```