issue_comments


23 rows where user = 167802 sorted by updated_at descending




issue 12

  • Fix h5netcdf saving scalars with filters or chunks 4
  • API for multi-dimensional resampling/regridding 3
  • Attributes are dropped after `clip` even if `keep_attrs` is True 3
  • Assigning data to vector-indexed data doesn't seem to work 2
  • Coordinate attributes as DataArray type doesn't export to netcdf 2
  • Can't create weakrefs on DataArrays since xarray 0.13.0 2
  • dataarray arithmetics restore removed coordinates in xarray 0.15 2
  • DataArray.unstack taking unreasonable amounts of memory 1
  • Rules for propagating attrs and encoding 1
  • Indexing Variable objects with a mask 1
  • Feature Request: Hierarchical storage and processing in xarray 1
  • New deep copy behavior in 2022.9.0 causes maximum recursion error 1

user 1

  • mraspaud · 23

author_association 1

  • CONTRIBUTOR 23
id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
1266619173 https://github.com/pydata/xarray/issues/7111#issuecomment-1266619173 https://api.github.com/repos/pydata/xarray/issues/7111 IC_kwDOAMm_X85LfxMl mraspaud 167802 2022-10-04T08:53:46Z 2022-10-04T08:54:12Z CONTRIBUTOR

Thanks for pinging me. Regarding the ancillary variables, this comes from the CF conventions, which allow "linking" two or more arrays together. For example, we might have a radiance array with quality_flags as an ancillary variable array that characterises the quality of each radiance pixel. Now, in netcdf/CF, the ancillary variables are just references, but the logical way to do this in xarray is to use an ancillary_variables attribute on a DataArray. I'm not sure how we could do it in another way.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  New deep copy behavior in 2022.9.0 causes maximum recursion error 1392878100
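The linking pattern described in the comment above can be sketched concretely; a minimal illustration assuming xarray and numpy are installed (the `ancillary_variables` attribute is a CF naming convention referenced by the commenter, not something xarray special-cases):

```python
import numpy as np
import xarray as xr

# A radiance array whose per-pixel quality is described by a second array.
radiance = xr.DataArray(
    np.ones((2, 2)), dims=("y", "x"), name="radiance"
)
quality_flags = xr.DataArray(
    np.zeros((2, 2), dtype="uint8"), dims=("y", "x"), name="quality_flags"
)

# In netcdf/CF the link is just a reference by name, so the xarray-side
# equivalent is an attribute naming the companion array.
radiance.attrs["ancillary_variables"] = "quality_flags"

ds = xr.Dataset({"radiance": radiance, "quality_flags": quality_flags})
print(ds["radiance"].attrs["ancillary_variables"])
```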
1040778284 https://github.com/pydata/xarray/issues/4118#issuecomment-1040778284 https://api.github.com/repos/pydata/xarray/issues/4118 IC_kwDOAMm_X84-CQQs mraspaud 167802 2022-02-15T20:48:51Z 2022-07-18T13:05:09Z CONTRIBUTOR

Thanks for launching this discussion @TomNicholas ! I'm a core dev of pytroll/satpy, which handles earth-observing satellite data. I got interested in DataTree because we have data from the same instruments available at multiple resolutions, hence not fitting into a single Dataset. For us, Option 1 probably feels better. Even when having data at multiple resolutions, it is still a limited number of resolutions, so splitting them into groups is the natural way to go, I would say. We do not use the features you mention in Zarr or GRIB, as a majority of the satellite data we use is provided in netcdf nowadays. Don't hesitate to ask if you want to know more or if something is unclear; we are really interested in these developments, so if we can help that way...

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature Request: Hierarchical storage and processing in xarray 628719058
581940682 https://github.com/pydata/xarray/issues/3746#issuecomment-581940682 https://api.github.com/repos/pydata/xarray/issues/3746 MDEyOklzc3VlQ29tbWVudDU4MTk0MDY4Mg== mraspaud 167802 2020-02-04T14:40:09Z 2020-02-04T14:40:09Z CONTRIBUTOR

Thanks for the clarification. I can confirm that drop_vars works as expected. As a user, I would vote for having __delitem__ call drop_vars, at least for now, to keep backwards compatibility?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  dataarray arithmetics restore removed coordinates in xarray 0.15 559645981
581925761 https://github.com/pydata/xarray/issues/3746#issuecomment-581925761 https://api.github.com/repos/pydata/xarray/issues/3746 MDEyOklzc3VlQ29tbWVudDU4MTkyNTc2MQ== mraspaud 167802 2020-02-04T14:08:03Z 2020-02-04T14:08:03Z CONTRIBUTOR

@keewis thanks for the quick reply. I wasn't aware the built-in del wasn't supposed to work here; I'll try with drop_vars instead.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  dataarray arithmetics restore removed coordinates in xarray 0.15 559645981
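The workaround discussed in the two comments above can be shown concretely; a small sketch assuming xarray and numpy are installed:

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"a": ("x", np.arange(3))},
    coords={"x": [10, 20, 30], "extra": 42},
)

# `del ds["extra"]` is the built-in syntax under discussion; the supported
# spelling returns a new Dataset without the named variable or coordinate.
trimmed = ds.drop_vars("extra")
print("extra" in trimmed.coords)
```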
545829813 https://github.com/pydata/xarray/issues/3433#issuecomment-545829813 https://api.github.com/repos/pydata/xarray/issues/3433 MDEyOklzc3VlQ29tbWVudDU0NTgyOTgxMw== mraspaud 167802 2019-10-24T09:23:14Z 2019-10-24T09:23:14Z CONTRIBUTOR

Ok, then I probably won't have the time to dig into this for now, sorry. If someone else with better knowledge of the code can work on this it would probably be for the best.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Attributes are dropped after `clip` even if `keep_attrs` is True 510892578
545303756 https://github.com/pydata/xarray/issues/3433#issuecomment-545303756 https://api.github.com/repos/pydata/xarray/issues/3433 MDEyOklzc3VlQ29tbWVudDU0NTMwMzc1Ng== mraspaud 167802 2019-10-23T07:12:22Z 2019-10-23T07:13:22Z CONTRIBUTOR

Ooh, maybe I answered too fast. I hadn't really looked at the code yet... but let's see: if I understand correctly, in the function you cited, the functions get wrapped and included in the current cls. So I would need to fix the wrapper itself (_func_slash_method_wrapper) so that the attrs get copied and applied to the resulting array (if keep_attrs is True, that is). However, do we have a guarantee that the wrapped functions actually return something that can have .attrs?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Attributes are dropped after `clip` even if `keep_attrs` is True 510892578
545295063 https://github.com/pydata/xarray/issues/3433#issuecomment-545295063 https://api.github.com/repos/pydata/xarray/issues/3433 MDEyOklzc3VlQ29tbWVudDU0NTI5NTA2Mw== mraspaud 167802 2019-10-23T06:47:00Z 2019-10-23T06:47:00Z CONTRIBUTOR

Sure! What do you reckon the default should be? False, for backwards compatibility?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Attributes are dropped after `clip` even if `keep_attrs` is True 510892578
532683009 https://github.com/pydata/xarray/issues/3317#issuecomment-532683009 https://api.github.com/repos/pydata/xarray/issues/3317 MDEyOklzc3VlQ29tbWVudDUzMjY4MzAwOQ== mraspaud 167802 2019-09-18T13:25:52Z 2019-09-18T13:28:17Z CONTRIBUTOR

@crusaderky yep, supporting DataArray and Dataset objects is what we need.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Can't create weakrefs on DataArrays since xarray 0.13.0 495198361
532683407 https://github.com/pydata/xarray/issues/3317#issuecomment-532683407 https://api.github.com/repos/pydata/xarray/issues/3317 MDEyOklzc3VlQ29tbWVudDUzMjY4MzQwNw== mraspaud 167802 2019-09-18T13:26:47Z 2019-09-18T13:26:47Z CONTRIBUTOR

and thanks for the workaround and impressively fast reaction :)

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Can't create weakrefs on DataArrays since xarray 0.13.0 495198361
445786351 https://github.com/pydata/xarray/pull/2591#issuecomment-445786351 https://api.github.com/repos/pydata/xarray/issues/2591 MDEyOklzc3VlQ29tbWVudDQ0NTc4NjM1MQ== mraspaud 167802 2018-12-10T11:38:24Z 2018-12-10T11:38:24Z CONTRIBUTOR

Ok good, I'll let this run its course then. How about whats-new.rst, do you want me to fill in something there?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix h5netcdf saving scalars with filters or chunks 387732534
445739994 https://github.com/pydata/xarray/pull/2591#issuecomment-445739994 https://api.github.com/repos/pydata/xarray/issues/2591 MDEyOklzc3VlQ29tbWVudDQ0NTczOTk5NA== mraspaud 167802 2018-12-10T09:01:49Z 2018-12-10T09:01:49Z CONTRIBUTOR

Ok, Travis seems to pass; how about AppVeyor? There are many time-encoding errors, should it be so?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix h5netcdf saving scalars with filters or chunks 387732534
444842759 https://github.com/pydata/xarray/pull/2591#issuecomment-444842759 https://api.github.com/repos/pydata/xarray/issues/2591 MDEyOklzc3VlQ29tbWVudDQ0NDg0Mjc1OQ== mraspaud 167802 2018-12-06T11:36:35Z 2018-12-06T11:36:35Z CONTRIBUTOR

Am I right in thinking the failing Travis and AppVeyor builds aren't my doing?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix h5netcdf saving scalars with filters or chunks 387732534
444469807 https://github.com/pydata/xarray/pull/2591#issuecomment-444469807 https://api.github.com/repos/pydata/xarray/issues/2591 MDEyOklzc3VlQ29tbWVudDQ0NDQ2OTgwNw== mraspaud 167802 2018-12-05T12:33:58Z 2018-12-05T12:33:58Z CONTRIBUTOR

I'm not sure if I need to document anything, as this is just a bugfix?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix h5netcdf saving scalars with filters or chunks 387732534
368436995 https://github.com/pydata/xarray/issues/1906#issuecomment-368436995 https://api.github.com/repos/pydata/xarray/issues/1906 MDEyOklzc3VlQ29tbWVudDM2ODQzNjk5NQ== mraspaud 167802 2018-02-26T09:19:32Z 2018-02-26T09:19:32Z CONTRIBUTOR

I'm satisfied with this answer, thanks for taking the time!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Coordinate attributes as DataArray type doesn't export to netcdf 296673404
365284450 https://github.com/pydata/xarray/issues/1906#issuecomment-365284450 https://api.github.com/repos/pydata/xarray/issues/1906 MDEyOklzc3VlQ29tbWVudDM2NTI4NDQ1MA== mraspaud 167802 2018-02-13T14:35:02Z 2018-02-13T14:35:02Z CONTRIBUTOR

> Also DataArrays can have attributes, so storing them as attributes could lead to quite intricate situations ;)

I totally agree, but when writing to netcdf, I was expecting xarray to take the DataArrays out of the attributes, replace them with their names, and save them as separate arrays in the netcdf file. But maybe that's not xarray's job to deal with?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Coordinate attributes as DataArray type doesn't export to netcdf 296673404
362529551 https://github.com/pydata/xarray/issues/1614#issuecomment-362529551 https://api.github.com/repos/pydata/xarray/issues/1614 MDEyOklzc3VlQ29tbWVudDM2MjUyOTU1MQ== mraspaud 167802 2018-02-02T09:13:38Z 2018-02-02T09:13:38Z CONTRIBUTOR

This issue is very relevant for me too. I would also like to propose that a user could provide a function that knows how to combine the attrs of different DataArrays.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Rules for propagating attrs and encoding 264049503
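The proposal above — letting users supply an attrs-combining function — later landed in xarray as the `combine_attrs` argument, which accepts a callable in recent versions. A sketch, assuming xarray is installed; the `merge_sensors` combiner is purely illustrative:

```python
import xarray as xr

def merge_sensors(attrs_list, context):
    # Illustrative combiner: start from the first attrs dict and collect
    # all "sensor" values seen across the inputs.
    merged = dict(attrs_list[0])
    merged["sensor"] = ",".join(sorted(a.get("sensor", "") for a in attrs_list))
    return merged

a = xr.DataArray([1.0], dims="x", attrs={"sensor": "avhrr"})
b = xr.DataArray([2.0], dims="x", attrs={"sensor": "viirs"})
combined = xr.concat([a, b], dim="x", combine_attrs=merge_sensors)
print(combined.attrs["sensor"])
```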
349583110 https://github.com/pydata/xarray/pull/1751#issuecomment-349583110 https://api.github.com/repos/pydata/xarray/issues/1751 MDEyOklzc3VlQ29tbWVudDM0OTU4MzExMA== mraspaud 167802 2017-12-06T09:28:16Z 2017-12-06T09:28:16Z CONTRIBUTOR

That looks fantastic @shoyer, looking forward to testing it :)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Indexing Variable objects with a mask 278325492
349582067 https://github.com/pydata/xarray/issues/486#issuecomment-349582067 https://api.github.com/repos/pydata/xarray/issues/486 MDEyOklzc3VlQ29tbWVudDM0OTU4MjA2Nw== mraspaud 167802 2017-12-06T09:24:16Z 2017-12-06T09:24:16Z CONTRIBUTOR

@shoyer absolutely, I will look into it soon, I hope

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  API for multi-dimensional resampling/regridding 96211612
348910192 https://github.com/pydata/xarray/issues/486#issuecomment-348910192 https://api.github.com/repos/pydata/xarray/issues/486 MDEyOklzc3VlQ29tbWVudDM0ODkxMDE5Mg== mraspaud 167802 2017-12-04T09:43:02Z 2017-12-04T09:43:02Z CONTRIBUTOR

@jhamman One possibility would be to have a .resample on a DataArray (or an equivalent standalone function) that would also be given a set of new coordinates, and that would return a new DataArray resampled to those coordinates. One step further would be to implement this in sel or isel directly somehow.

{
    "total_count": 3,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 3,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  API for multi-dimensional resampling/regridding 96211612
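Something close to the proposal above now exists: `sel` with `method="nearest"` picks values at an arbitrary new set of coordinates (and `interp` does true interpolation, though that needs scipy for most modes). A minimal sketch, assuming xarray and numpy are installed:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(5.0), dims="x", coords={"x": np.arange(5.0)})

# Nearest-neighbour "resampling" onto a new set of x coordinates -- the
# kind of destination-grid lookup used in satellite resampling.
resampled = da.sel(x=[0.6, 2.2, 3.9], method="nearest")
print(resampled.values)
```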
348165798 https://github.com/pydata/xarray/issues/486#issuecomment-348165798 https://api.github.com/repos/pydata/xarray/issues/486 MDEyOklzc3VlQ29tbWVudDM0ODE2NTc5OA== mraspaud 167802 2017-11-30T11:47:06Z 2017-11-30T11:47:06Z CONTRIBUTOR

thanks @shoyer

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  API for multi-dimensional resampling/regridding 96211612
347594411 https://github.com/pydata/xarray/issues/1743#issuecomment-347594411 https://api.github.com/repos/pydata/xarray/issues/1743 MDEyOklzc3VlQ29tbWVudDM0NzU5NDQxMQ== mraspaud 167802 2017-11-28T17:11:56Z 2017-11-28T17:11:56Z CONTRIBUTOR

@jhamman The same problem arises with a regular numpy array as the source for the DataArray.

The use case I have is spatial resampling: assigning source data pixels to a destination grid. So in the example provided above, l_indices and c_indices would have the shape of the destination grid and tell us which pixels to take from the source data.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Assigning data to vector-indexed data doesn't seem to work 277441150
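The destination-grid use case described above maps directly onto numpy fancy indexing; a sketch with illustrative index arrays (the names `l_indices`/`c_indices` come from the comment, the values here are made up):

```python
import numpy as np

source = np.arange(16.0).reshape(4, 4)

# Index arrays shaped like the destination grid; each entry names the
# source pixel that lands at that destination position.
l_indices = np.array([[0, 0], [3, 3]])
c_indices = np.array([[0, 3], [0, 3]])

destination = source[l_indices, c_indices]
print(destination)
```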
347573516 https://github.com/pydata/xarray/issues/1743#issuecomment-347573516 https://api.github.com/repos/pydata/xarray/issues/1743 MDEyOklzc3VlQ29tbWVudDM0NzU3MzUxNg== mraspaud 167802 2017-11-28T16:09:30Z 2017-11-28T16:09:30Z CONTRIBUTOR

@shoyer this is what I showed you earlier

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Assigning data to vector-indexed data doesn't seem to work 277441150
327891893 https://github.com/pydata/xarray/issues/1560#issuecomment-327891893 https://api.github.com/repos/pydata/xarray/issues/1560 MDEyOklzc3VlQ29tbWVudDMyNzg5MTg5Mw== mraspaud 167802 2017-09-07T18:55:39Z 2017-09-07T18:55:39Z CONTRIBUTOR

Yes, I have the latest version, still takes some time with a 9000x9000 array:

In [4]: %time arr.unstack('flat_dim')
CPU times: user 26.1 s, sys: 7.8 s, total: 33.9 s
Wall time: 35.3 s

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  DataArray.unstack taking unreasonable amounts of memory 255989233

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
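The "23 rows where user = 167802 sorted by updated_at descending" view above corresponds to a simple query against this schema. A self-contained sketch using Python's stdlib sqlite3; the foreign-key REFERENCES clauses are dropped so the snippet runs standalone, and the inserted row is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")
conn.execute(
    "INSERT INTO issue_comments (id, user, updated_at) VALUES (?, ?, ?)",
    (1266619173, 167802, "2022-10-04T08:54:12Z"),
)

# The faceted view's underlying query: filter by user, newest first.
rows = conn.execute(
    "SELECT id FROM issue_comments WHERE user = ? ORDER BY updated_at DESC",
    (167802,),
).fetchall()
print(rows)
```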
Powered by Datasette · Queries took 14.315ms · About: xarray-datasette