

issue_comments


19 rows where user = 23484003 sorted by updated_at descending




id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
1324489293 https://github.com/pydata/xarray/issues/7045#issuecomment-1324489293 https://api.github.com/repos/pydata/xarray/issues/7045 IC_kwDOAMm_X85O8hpN lamorton 23484003 2022-11-23T03:05:50Z 2022-11-23T03:06:57Z NONE

IMO nearly all the complication and confusion emerge from the mixed concept of a dimension coordinate in the Xarray data model.

My take: the main confusion comes from trying to support both a relational-database-like data model (where inner/outer joins make sense because values are discrete/categorical) AND a multi-dimensional array model for the physical sciences (where values are typically floating-point, exact alignment is required, and interpolation is used when alignment is inexact). As a physical-sciences guy, I basically never use the database-like behavior; it only serves to silence alignment errors so that the fallout happens downstream (NaNs from outer joins, empty arrays from inner joins), making it harder to debug. TIL I can just call xarray.set_options(arithmetic_join='exact') and get what I wanted all along.
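As a sketch of the difference (assuming xarray and numpy are installed; the data values here are made up):

```python
import numpy as np
import xarray as xr

# Two arrays whose 'x' coordinates only partially overlap.
a = xr.DataArray([1.0, 2.0, 3.0], coords={'x': [0.0, 0.1, 0.2]}, dims='x')
b = xr.DataArray([4.0, 5.0, 6.0], coords={'x': [0.1, 0.2, 0.3]}, dims='x')

# The default join ('inner') silently drops the non-overlapping points.
print((a + b).sizes)  # only the two shared x values survive

# 'exact' turns the mismatch into an immediate error instead.
with xr.set_options(arithmetic_join='exact'):
    try:
        a + b
    except ValueError as err:
        print('alignment error:', err)
```

The silent size shrinkage in the first sum is exactly the downstream-debugging hazard described above.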

Why can't we use loc/sel with a non-dimension (non-index) coord?

What happens if I have Cartesian x/y dimensions plus r/theta cylindrical coordinates defined on the x / y, and I select some range in r? It's not slicing an array at that point, that's more like a relational database query. The thing you get back isn't an array anymore because not all i,j combinations are valid.
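A minimal sketch of that situation (toy field values, assuming xarray/numpy): a range selection on r is a mask over the rectangular x/y grid, not a slice.

```python
import numpy as np
import xarray as xr

x = np.linspace(-1, 1, 5)
y = np.linspace(-1, 1, 5)
xx, yy = np.meshgrid(x, y, indexing='ij')
ds = xr.Dataset(
    {'field': (('x', 'y'), np.arange(25.0).reshape(5, 5))},
    coords={'x': x, 'y': y,
            'r': (('x', 'y'), np.hypot(xx, yy)),
            'theta': (('x', 'y'), np.arctan2(yy, xx))},
)

# Selecting a range in r keeps the rectangular (x, y) shape; the invalid
# (i, j) combinations are padded with NaN rather than removed.
subset = ds.field.where(ds.r <= 1.0)
print(bool(subset.isnull().any()))  # the corners, with r > 1, become NaN
```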

confusion emerge[s] from the mixed concept of a dimension coordinate

From my perspective, the dimensions are special coordinates on which the arrays happen to be sampled in a rectangular grid. It's not confusing to me, but maybe that's b/c of my physical-sciences background/use cases. I suppose one could in principle have an array with coordinates such that no coordinate aligned with any particular axis, but it seems improbable.

What do you think of making the default FloatIndex use a reasonable (hard to define!) rtol for comparisons?

IMO this is asking for weird bugs. In my work I either expect exact alignment, or I want to interpolate. I never want to ignore a mismatch because it's basically just sweeping an error under the rug. In fact, I'd really just like to test that all the dimension coordinates are the same objects, although Python's semantics don't really work with that.

imagine cases where a coordinate is defined in separate units.

Getting this right would be really powerful.

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Should Xarray stop doing automatic index-based alignment? 1376109308
1212207881 https://github.com/pydata/xarray/issues/6907#issuecomment-1212207881 https://api.github.com/repos/pydata/xarray/issues/6907 IC_kwDOAMm_X85IQNMJ lamorton 23484003 2022-08-11T16:21:30Z 2022-08-11T16:21:30Z NONE

Ahh, thank you! That did the trick.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Subplots w/ xarrays 1335419018
670757610 https://github.com/pydata/xarray/issues/4322#issuecomment-670757610 https://api.github.com/repos/pydata/xarray/issues/4322 MDEyOklzc3VlQ29tbWVudDY3MDc1NzYxMA== lamorton 23484003 2020-08-07T22:23:24Z 2020-08-07T22:23:24Z NONE

@dcherian: OK, thanks, now I understand why it is happening -- there's no unambiguous way to represent the intervals as floats, so one needs to use either the left/right/midpoint & indicate that. For my case, I think I will just replace the array of intervals with the array of midpoints of the intervals.
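The midpoint replacement can be sketched like this (toy data; assumes xarray/numpy, and that `x_bins` holds `pandas.Interval` objects as it does for `groupby_bins`):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({'y': ('x', np.ones(101))},
                coords={'x': np.linspace(0, 1, 101)})
binned = ds.groupby_bins('x', np.linspace(0, 1, 11), right=False).sum(dim='x')

# 'x_bins' holds pandas.Interval objects; swap them for their midpoints.
mids = [interval.mid for interval in binned.x_bins.values]
binned = binned.assign_coords(x_bins=mids)
print(binned.x_bins.values[:3])  # ~0.05, 0.15, 0.25
```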

The "_center" tag still doesn't work with the automatic units labeling though:

import xarray as xr
import numpy as np
data_vars = {'y': ('x', np.ones((101)), {'units': 'kg/m'})}
coords = {'x': ('x', np.linspace(0, 1, 101, endpoint=True), {'units': 'm'})}
ds = xr.Dataset(data_vars, coords)
dsd = ds.groupby_bins('x', np.linspace(0, 1, 11, endpoint=True), right=False).sum(dim='x')
dsd.x_bins.attrs = ds.x.attrs  # carry the units over to the binned coordinate
dsd.y.plot()  # The x-axis label still looks like "x [m]_center"

The "_center" tag should be applied before the "[m]" one.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  "_center" postfix on axis label resulting from groupby_bins persists after renaming variable 675288247
662714444 https://github.com/pydata/xarray/issues/4255#issuecomment-662714444 https://api.github.com/repos/pydata/xarray/issues/4255 MDEyOklzc3VlQ29tbWVudDY2MjcxNDQ0NA== lamorton 23484003 2020-07-22T21:47:37Z 2020-07-22T21:47:37Z NONE

Thanks @dcherian, that's what I'm looking for.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  line labels for 1D plotting 664067837
617402384 https://github.com/pydata/xarray/issues/3991#issuecomment-617402384 https://api.github.com/repos/pydata/xarray/issues/3991 MDEyOklzc3VlQ29tbWVudDYxNzQwMjM4NA== lamorton 23484003 2020-04-21T20:39:56Z 2020-04-21T20:39:56Z NONE

Thanks, I'll close this, since it looks like an issue of bad input. I can't use h5netcdf due to conda env nonsense, but I've worked around it by just dropping the 'name' variable during loading.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Nondeterministic bug with bytestring decoding 604210297
612165732 https://github.com/pydata/xarray/issues/3538#issuecomment-612165732 https://api.github.com/repos/pydata/xarray/issues/3538 MDEyOklzc3VlQ29tbWVudDYxMjE2NTczMg== lamorton 23484003 2020-04-10T18:47:02Z 2020-04-10T18:47:02Z NONE

I hacked a quick solution for exploring HDF5 files that might be of interest:

import h5py

def explore_file(filepath, show="arrays"):
    """View the internal structure of an HDF5 file.

    Returns a dictionary of the entity names & representations of their values.

    Arguments:
        filepath: string
        show: one of ('groups', 'arrays', 'all')
            groups: display the number of direct array-type members of each group/subgroup
            arrays: display the shape & dtype of each array (if not a scalar)
            all: display the shape & dtype of every array
    """
    with h5py.File(filepath, mode='r') as f:
        descriptions = {}
        if show == "groups":
            def visitor(k, v):
                if isinstance(v, h5py.Group):
                    arrays = [name for name in v.keys() if isinstance(v[name], h5py.Dataset)]
                    if len(arrays) > 0:
                        descriptions[k] = len(arrays)
        elif show == "arrays":
            def visitor(k, v):
                if isinstance(v, h5py.Dataset) and len(v.shape) > 0:
                    descriptions[k] = "{},{}".format(v.shape, v.dtype)
        elif show == "all":
            def visitor(k, v):
                if isinstance(v, h5py.Dataset):
                    descriptions[k] = "{},{}".format(v.shape, v.dtype)
        f.visititems(visitor)  # Apply the visitor to every item in the file
    return descriptions

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add support for querying netCDF4 file for groups 523572262
612137754 https://github.com/pydata/xarray/issues/1982#issuecomment-612137754 https://api.github.com/repos/pydata/xarray/issues/1982 MDEyOklzc3VlQ29tbWVudDYxMjEzNzc1NA== lamorton 23484003 2020-04-10T17:38:50Z 2020-04-10T17:38:50Z NONE

I'm currently working around this by loading the root group & the branch group with two separate calls and then merging the resulting datasets. It's ugly b/c I have to manually associate the 'phony_dim_x' dimensions from one group with the other.

Maybe I can find the time during quarantine to make an attempt at resolving #1092, which I think would facilitate resolving this issue as well.

Another option would be to allow the group kwarg to be a tuple of group names, and load_dataset could yield a (flat) Dataset including both the root and the branch variables.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  NetCDF coordinates in parent group is not used when reading sub group 304314787
427195935 https://github.com/pydata/xarray/issues/1626#issuecomment-427195935 https://api.github.com/repos/pydata/xarray/issues/1626 MDEyOklzc3VlQ29tbWVudDQyNzE5NTkzNQ== lamorton 23484003 2018-10-04T22:59:19Z 2018-10-08T15:10:54Z NONE

I just got bit with this as well. I was basically using tuples of indices as coordinates in order to implement a multidimensional sparse array.

My workaround is to use a plain dimension index_dim to index the points in the N-dimensional space that I actually populate, and to have several coordinates (say X, Y) that all have index_dim as their only dimension. It's easy enough to see what the coordinates are once you select a value along index_dim, but I have to go outside xarray to locate a populated point based on its X,Y-coordinates, because I can't slice along those arrays as (A) they aren't aliased to a dimension and (B) they have non-unique values.

I've come up with an ugly method for selecting by tuples of X,Y-coordinates:

pairs = list(zip(x_wanted, y_wanted))
pair2index = {(dataset.x[i].item(), dataset.y[i].item()): i for i in dataset.index_dim.data}
try:
    found_indices = [pair2index[p] for p in pairs]
    found = dataset.isel(index_dim=found_indices)
except KeyError as err:
    print("Coordinate {} not found in dataset.".format(err.args[0]))
    raise
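For what it's worth, a MultiIndex built from the two coordinates gives the same point lookup natively; a sketch with toy data (names and values here are illustrative, assuming a reasonably recent xarray):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {'value': ('index_dim', np.arange(4.0))},
    coords={'x': ('index_dim', [0.0, 0.0, 1.0, 1.0]),
            'y': ('index_dim', [0.0, 1.0, 0.0, 1.0])},
)

# Building a MultiIndex from x and y makes both selectable with .sel().
indexed = ds.set_index(index_dim=['x', 'y'])
print(indexed.sel(x=1.0, y=0.0).value.item())  # → 2.0
```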
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Structured numpy arrays, xarray and netCDF(4) 264582338
359525739 https://github.com/pydata/xarray/issues/1288#issuecomment-359525739 https://api.github.com/repos/pydata/xarray/issues/1288 MDEyOklzc3VlQ29tbWVudDM1OTUyNTczOQ== lamorton 23484003 2018-01-22T18:51:34Z 2018-01-22T19:15:31Z NONE

@gajomi I can find a place to upload what I have. I foresee some difficulty making a general wrapper due to the issue of naming conventions, but I like the idea too.

Edit: Here's what I have so far ... YMMV, it's still kinda rough. https://github.com/lamorton/SciPyXarray

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add trapz to DataArray for mathematical integration 210704949
298253809 https://github.com/pydata/xarray/issues/1388#issuecomment-298253809 https://api.github.com/repos/pydata/xarray/issues/1388 MDEyOklzc3VlQ29tbWVudDI5ODI1MzgwOQ== lamorton 23484003 2017-04-30T20:08:25Z 2017-04-30T20:08:25Z NONE

Well, xarray at least agrees with numpy's implementation of that function, but that's not to say it is 'correct.' It would be nice if numpy.argmin worked intuitively. That aside, it seems to me that applying min() to an xr.DataArray should return a reduced array with length 1 in each dimension; then you could just query this object and find the coordinate/dimension values. Perhaps then argmin() would just return a tuple of axis indices, such that arr[*arr.argmin()] == arr.min() would hold.

The next question is, what happens if you start supplying coordinate/dimension optional arguments to argmin? It doesn't make sense to minimize over a coordinate, so only dimensions should be accepted. This should result in a tuple of lists, the way numpy.where does.

Does that seem reasonable?
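For reference, later xarray versions grew something close to this: passing a list of dimensions to argmin returns a dict of index arrays that recovers the minimum when fed to isel (a sketch, assuming xarray >= 0.16):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.array([[3.0, 1.0], [2.0, 0.5]]), dims=('x', 'y'))

# A list of dims yields {'x': ..., 'y': ...} index arrays, so
# indexing with the result recovers the minimum value.
idx = da.argmin(dim=['x', 'y'])
print(da.isel(idx).item())  # → 0.5
```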

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  argmin / argmax behavior doesn't match documentation 224878728
293979667 https://github.com/pydata/xarray/issues/1288#issuecomment-293979667 https://api.github.com/repos/pydata/xarray/issues/1288 MDEyOklzc3VlQ29tbWVudDI5Mzk3OTY2Nw== lamorton 23484003 2017-04-13T18:14:53Z 2017-04-13T18:14:53Z NONE

If you give a mouse a cookie, he'll ask for a glass of milk. There are a whole slew of Numpy/Scipy functions that would really benefit from using xarray to organize input/output. I've written wrappers for svd, fft, psd, gradient, and specgram, for starts. Perhaps a new package would be in order?

{
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add trapz to DataArray for mathematical integration 210704949
290224441 https://github.com/pydata/xarray/issues/1092#issuecomment-290224441 https://api.github.com/repos/pydata/xarray/issues/1092 MDEyOklzc3VlQ29tbWVudDI5MDIyNDQ0MQ== lamorton 23484003 2017-03-29T21:00:42Z 2017-03-29T21:04:08Z NONE

@shoyer I see your point about the string manipulation. On the other hand, this is exactly how h5py and netCDF4-python implement the group/subgroup access syntax: just like a filepath.

I'm also having thoughts about the attribute access: if ds['flux']['poloidal'] = subset does not work, then neither does ds.flux.poloidal = subset, correct? If so, it is almost pointless to have the attribute access in the first place. I suppose that is the price to pay for merely making it appear as though there is attribute-access.

For my own understanding, I tried to translate between xarray and netCDF4-python:
- nc.Variable <--> xr.Variable
- nc.????? <--> xr.DataArray (netCDF doesn't distinguish vars/coords, so no analog is possible)
- nc.Group <--> xr.NestableDataset
- nc.Dataset <--> xr.NestableDataset

From netCDF4-python

Groups define a hierarchical namespace within a netCDF file. They are analogous to directories in a unix filesystem. Each Group behaves like a Dataset within a Dataset, and can contain it's own variables, dimensions and attributes (and other Groups). Group inherits from Dataset, so all the Dataset class methods and variables are available to a Group instance (except the close method).

It appears that the only things special about an nc.Dataset as compared to an nc.Group are:
1. The file access is tied to the nc.Dataset.
2. The nc.Dataset group has children but no parent.

A big difference between xarray and netCDF4-python datasets is that the children datasets in xarray can go have a life of their own, independent of their parent & the file it represents. It makes sense to me to have just a single xarray type (modified version of xarray.Dataset) to deal with both of these cases.

The nc.Group instances have an attribute groups that lists all the subgroups. So one option I suppose would be to follow that route and actually have Datasets that contain other datasets alongside everything else.

As an aside, it seems that ragged arrays are now supported in netCDF4-python:VLen.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Dataset groups 187859705
290159834 https://github.com/pydata/xarray/issues/1092#issuecomment-290159834 https://api.github.com/repos/pydata/xarray/issues/1092 MDEyOklzc3VlQ29tbWVudDI5MDE1OTgzNA== lamorton 23484003 2017-03-29T17:18:23Z 2017-03-29T17:19:19Z NONE

@darothen: Hmm, are your coordinate grids identical for each simulation (ie, any(ds1.x != ds2.x) evaluates as false)?
- If so, then it really does make sense to do what you described and create new dimensions for the experimental factors, on top of the spatial dimensions of the simulations.
- If not, but the length of all the dimensions is the same, one could still keep all the simulations in the same dataset; one would just need to index the coordinates with the experimental factors as well.
- Finally, if the shape of the coordinate arrays varies with the experimental factor (for instance, doing convergence studies with finer meshes), that violates the xarray data model of having a single set of dimensions, each of which has a fixed length throughout the dataset, in order to enable smart broadcasting by dimension name. If (and only if) the dimensions are changing length, it would be better to keep a collection of datasets in some other type of data structure.
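The first case (identical grids) can be sketched by concatenating along a new experimental-factor dimension (toy data; the 'amplitude' factor name is made up, assuming xarray/numpy):

```python
import numpy as np
import xarray as xr

x = np.linspace(0.0, 1.0, 4)  # identical grid for every simulation
runs = [xr.Dataset({'u': ('x', np.full(4, amp))}, coords={'x': x})
        for amp in (0.1, 0.2)]

# Stack the simulations along a new 'amplitude' factor dimension,
# carrying the factor values along as a coordinate.
factor = xr.DataArray([0.1, 0.2], dims='amplitude', name='amplitude')
combined = xr.concat(runs, dim=factor)
print(dict(combined.sizes))  # amplitude: 2, x: 4
```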

It might work for my case to convert my 'tags' to indexes for new dimensions (ie, ds.sel(quantity='flux',direction='poloidal',variation='perturbed')). However, there are two issues:
1. The background flux is defined to be uniform in some coordinates, so it is lower-dimensionality than the total flux. It doesn't make sense to turn a 1-D variable into a 3-D variable just to match the others so I can put it into an array. This goes especially for scalars and metadata that really should not be turned into arrays, but do belong with the subsets.
2. During my processing sequence, I may want to add something like ds.flux.helical.background. In order to do this, however, I'd be forced to define the 'perturbed' and 'total' helical fluxes at that time. But often I don't want or need to compute these.

There is still a good reason to have a flexible data model for lumping more heterogeneous collections together under some headings, with the potential for recursion. I suppose my question is, what is the most natural data model & corresponding access syntax?
- Attribute-style access is convenient and idiomatic; it implies a tree-like structure. This probably makes the most sense.
- An alternative data model would be sets with subsets, which could be accessed by something similar to ds.sel but accepting set names as *args rather than **kwargs. Then requesting members of some set could return a dataset with those members, and the new dataset would lack the membership flag for variables, much the way slicing reduces dimensionality. In fact, one could even keep a record of the applied set requests much like point axes. A variable's key in data_vars would essentially just be a list/tuple of sets of which it is a member. Assignment would be tricky because it could create new sets, and the membership of existing elements in a new set would probably require user intervention to clarify...

@shoyer: Your approach is quite clever, and 'smells' much better than parsing strings. I do have two quibbles though.
- Accessing via ds['flux','poloidal'] is a bit confusing because ds[] is (I think) a dictionary, but supplying multiple names is suggestive of either array indexing or getting a list with two things inside, flux and poloidal. That is, the syntax doesn't reflect the semantics very well.
- If I am at the console, and I start typing ds.flux and use the tab-completion, does that end up creating a new dataset just so I can see what is inside ds.flux? Is that an expensive operation?

[Edited for formatting]

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Dataset groups 187859705
289916013 https://github.com/pydata/xarray/issues/1092#issuecomment-289916013 https://api.github.com/repos/pydata/xarray/issues/1092 MDEyOklzc3VlQ29tbWVudDI4OTkxNjAxMw== lamorton 23484003 2017-03-28T21:51:30Z 2017-03-28T21:51:30Z NONE

One important reason to keep the tree-like structure within a dataset is that it provides some assurance to the recipient of the dataset that all the variables 'belong' in the same coordinate space. Constructing a tree (from a nested dictionary, say) whose leaves are datasets or dataArrays doesn't guarantee that the coordinates/dimensions in all the leaves are compatible, whereas a tree within the dataset does make a guarantee about the leaves.

As far as motivation for making trees, I find myself with several dozen variable names such as ds.fluxPoloidalPerturbation and ds.fieldToroidalBackground and various permutations, so it would be logical to be able to write ds.flux.poloidal and get a sub-dataset that contains dataArrays named perturbation and background.

As far as implementation, the DataGroup could really just be syntactic sugar around a flat dataset that is hidden from the user, and has keys like 'flux.poloidal.perturbed,' so that dg.flux.poloidal.perturbed would be an alias to dg.__hiddenDataset__['flux.poloidal.perturbed'], and dg.flux.poloidal would be an alias to dg.__hiddenDataset__[['flux.poloidal.perturbed','flux.poloidal.background']]. Seems like it would require mucking with dg.__getattr__, dg.__setattr__, and dg.__dir__ at a minimum to get it off the ground, but by making the tree virtual, one avoids the difficulties with slicing, etc. The return type of dg.__getattr__ should be another DataGroup as long as there are branches in the output, but it should fall back to a Dataset when there are only leaves.
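That sugar can be mocked up without xarray at all; a minimal sketch over a plain dict with dotted keys (the DataGroup class and the keys here are hypothetical, not an existing API):

```python
class DataGroup:
    """Attribute-style tree view over a flat mapping with dotted keys."""

    def __init__(self, flat, prefix=''):
        self._flat = flat
        self._prefix = prefix

    def __getattr__(self, name):
        full = self._prefix + name
        if full in self._flat:               # leaf: return the stored value
            return self._flat[full]
        branch = full + '.'
        if any(k.startswith(branch) for k in self._flat):
            return DataGroup(self._flat, branch)  # branch: a narrower view
        raise AttributeError(name)

    def __dir__(self):                       # tab-completion sees the children
        return sorted({k[len(self._prefix):].split('.')[0]
                       for k in self._flat if k.startswith(self._prefix)})


dg = DataGroup({'flux.poloidal.perturbed': 1,
                'flux.poloidal.background': 2})
print(dg.flux.poloidal.perturbed)  # → 1
print(dir(dg.flux))                # → ['poloidal']
```

Because the tree is virtual (just a prefix into the flat mapping), there is no duplication of data and nothing to keep in sync when variables are added.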

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Dataset groups 187859705
288161493 https://github.com/pydata/xarray/issues/1315#issuecomment-288161493 https://api.github.com/repos/pydata/xarray/issues/1315 MDEyOklzc3VlQ29tbWVudDI4ODE2MTQ5Mw== lamorton 23484003 2017-03-21T17:46:00Z 2017-03-21T17:46:00Z NONE

I discovered that it is a problem with my environment. Sorry for the confusion.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Logarithmic colorbar ticks are jumbled  215821510
287861154 https://github.com/pydata/xarray/pull/1118#issuecomment-287861154 https://api.github.com/repos/pydata/xarray/issues/1118 MDEyOklzc3VlQ29tbWVudDI4Nzg2MTE1NA== lamorton 23484003 2017-03-20T18:51:44Z 2017-03-20T18:51:44Z NONE

Is there anything I can do to help move this forward? I'd really like to have this capability.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Do not convert subclasses of `ndarray` unless required 189095110
283524084 https://github.com/pydata/xarray/issues/988#issuecomment-283524084 https://api.github.com/repos/pydata/xarray/issues/988 MDEyOklzc3VlQ29tbWVudDI4MzUyNDA4NA== lamorton 23484003 2017-03-02T01:09:44Z 2017-03-02T01:09:44Z NONE

@gerritholl In my line of work we often deal with 2+1 or 3+1 dimensional datasets (space + time). I have been bitten when I expected space in meters, but it was in centimeters, or time in seconds but it was in milliseconds. Also, I would like to improve the plotting functionality so that publication-quality plots can be made directly by automatically including units in the axis labels (and while I'm wishing for a pony, there could be pretty-printing versions of coordinate names (ie, LaTeX symbols or something)).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Hooks for custom attribute handling in xarray operations 173612265
283492897 https://github.com/pydata/xarray/issues/988#issuecomment-283492897 https://api.github.com/repos/pydata/xarray/issues/988 MDEyOklzc3VlQ29tbWVudDI4MzQ5Mjg5Nw== lamorton 23484003 2017-03-01T22:32:24Z 2017-03-01T22:32:24Z NONE

@gerritholl Interesting! The difficulty I am seeing with this approach is that the units apply only to the main data array, and not the coordinates. In a scientific application, the coordinates are generally physical quantities with units as well. If we want xarray with units to be really useful for scientific computation, we need to have the coordinate arrays be unitful 'quantities' too, rather than tacking the units on as an attribute of xarray.DataArray. I tinkered with making the 'units' attribute into a dictionary, with units for each coordinate (and for the data) as key-value pairs, but it is very cumbersome and goes against my philosophy (for instance, extracting a coordinate from a DataArray leaves it without units).

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Hooks for custom attribute handling in xarray operations 173612265
275580656 https://github.com/pydata/xarray/issues/1233#issuecomment-275580656 https://api.github.com/repos/pydata/xarray/issues/1233 MDEyOklzc3VlQ29tbWVudDI3NTU4MDY1Ng== lamorton 23484003 2017-01-27T03:21:02Z 2017-01-27T03:21:02Z NONE

Hi Stephan,

Thanks for your help. I see that I was confused about the nature of the data model.

Lucas

On Thu, Jan 26, 2017 at 10:11 PM, Stephan Hoyer notifications@github.com wrote:

Indeed, we should have a better error message here.

The xarray data model actually does not allow coordinates with the same name as a dimension unless they are a 1-dimensional array with the same length as the dimension size. You should make a separate variable for holding the current position, which can vary along both x and t.
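The suggested fix looks roughly like this sketch (names are illustrative, assuming xarray/numpy): the time-varying position gets its own variable name rather than reusing a dimension name.

```python
import numpy as np
import xarray as xr

nt, nx = 3, 4
# A coordinate named after a dimension must be 1-D along that dimension,
# so the position that varies along both t and x gets its own name.
da = xr.DataArray(
    np.zeros((nt, nx)), dims=('t', 'x'),
    coords={'t': np.arange(nt), 'x': np.arange(nx),
            'position': (('t', 'x'), np.linspace(0, 1, nt * nx).reshape(nt, nx))},
)
print(da.position.dims)  # → ('t', 'x')
```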


{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  2D coordinates to DataArray: erroneous error message 203543958

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 16.985ms · About: xarray-datasette