issues
6 rows where user = 6883049 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1596511582 | PR_kwDOAMm_X85KloU- | 7551 | Support for the new compression arguments. | markelg 6883049 | closed | 0 | | | 30 | 2023-02-23T09:32:56Z | 2023-12-21T15:24:34Z | 2023-12-21T15:24:16Z | CONTRIBUTOR | | 0 | pydata/xarray/pulls/7551 | Use a dict for the arguments and update it with the encoding, so all variables are passed. | { "url": "https://api.github.com/repos/pydata/xarray/issues/7551/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | | xarray 13221727 | pull
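The approach described in the PR body (collect keyword arguments in a dict, then update it with the variable's encoding so every argument is passed through) can be sketched in isolation. The names and defaults below (`DEFAULT_KWARGS`, `build_create_variable_kwargs`) are illustrative, not xarray's actual internals:

```python
# Sketch of the dict-merge idea: start from default keyword arguments
# and let each variable's encoding override them, so all arguments
# (including new netCDF4 compression keywords) reach createVariable.
DEFAULT_KWARGS = {"compression": None, "complevel": 4, "shuffle": True}

def build_create_variable_kwargs(encoding):
    kwargs = dict(DEFAULT_KWARGS)  # copy so the defaults stay untouched
    kwargs.update(encoding)        # per-variable encoding wins
    return kwargs

kwargs = build_create_variable_kwargs({"compression": "zlib", "complevel": 1})
```

The advantage of this pattern over listing each keyword explicitly is that newly added netCDF4 arguments pass through without further backend changes.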
1359914824 | PR_kwDOAMm_X84-RHbw | 6981 | Support the new compression argument in netCDF4 > 1.6.0 | markelg 6883049 | closed | 0 | | | 5 | 2022-09-02T09:06:42Z | 2023-10-30T16:37:58Z | 2022-12-01T22:41:51Z | CONTRIBUTOR | | 0 | pydata/xarray/pulls/6981 | | { "url": "https://api.github.com/repos/pydata/xarray/issues/6981/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | | xarray 13221727 | pull
1343038233 | I_kwDOAMm_X85QDSMZ | 6929 | Support new netcdf4 1.6.0 compression arguments | markelg 6883049 | closed | 0 | | | 2 | 2022-08-18T12:35:34Z | 2022-12-01T22:41:53Z | 2022-12-01T22:41:53Z | CONTRIBUTOR | | | | Is your feature request related to a problem? When using the netcdf4 engine, I am not able to use the new "compression" argument to choose a compression scheme different from zlib in the encoding. ``` if raise_on_invalid: invalid = [k for k in encoding if k not in valid_encodings] if invalid: ../../../../netCDF4_.py:279: ValueError ``` Furthermore, according to the release notes of 1.6.0, the zlib argument is to be deprecated. I am using the latest versions. Describe the solution you'd like: Update the netcdf4 backend to support these arguments. It should not be too difficult. Describe alternatives you've considered: No response. Additional context: I can try to do this myself, it does not look hard. | { "url": "https://api.github.com/repos/pydata/xarray/issues/6929/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | completed | xarray 13221727 | issue
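The traceback quoted in the issue comes from a whitelist-style validation of encoding keys. A simplified, self-contained sketch of that pattern (the `valid_encodings` set here is illustrative and reflects the pre-1.6.0 keywords, which is exactly why the new `compression` argument was rejected):

```python
# Simplified sketch of backend encoding validation: any key not in the
# whitelist raises, so the netCDF4 1.6.0 "compression" keyword fails
# until the whitelist is extended.
valid_encodings = {"zlib", "complevel", "shuffle", "fletcher32"}  # pre-1.6.0 style

def check_encoding(encoding, raise_on_invalid=True):
    if raise_on_invalid:
        invalid = [k for k in encoding if k not in valid_encodings]
        if invalid:
            raise ValueError(f"unexpected encoding parameters: {invalid}")

check_encoding({"zlib": True, "complevel": 4})  # old-style keys pass
try:
    check_encoding({"compression": "zlib"})     # new-style keyword rejected
    rejected = False
except ValueError:
    rejected = True
```

The fix proposed in the linked pull requests amounts to adding the new keywords to this whitelist and forwarding them to `createVariable`.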
104484316 | MDU6SXNzdWUxMDQ0ODQzMTY= | 557 | CDO-like convenience methods to select times | markelg 6883049 | open | 0 | | | 9 | 2015-09-02T13:42:48Z | 2022-04-18T16:03:35Z | | CONTRIBUTOR | | | | I feel like the time selecting features of xray can be improved. Currently, some common operations are too involved or verbose, like selecting the data in a group of months that do not form a standard season (e.g. the monsoon season in India, JJAS), or in non-consecutive years (e.g. El Niño years). I think it would be great (and easy) to implement some methods inspired by the widely used Climate Data Operators (https://code.zmaw.de/projects/cdo), for example selyear, selmon, selday and selhour. Then we could easily do a composite of JJAS seasons in El Niño years like this: This would make me very happy. The way to go would be to write methods that call groupby, then select the years/months, merge them, and return the corresponding dataset/dataarray, but I am not sure about the most efficient way to do this. | { "url": "https://api.github.com/repos/pydata/xarray/issues/557/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | | xarray 13221727 | issue
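The selection logic behind the requested `selmon`/`selyear` operators can be sketched with plain datetimes, independent of any library (the function names mirror CDO's but the implementation below is purely illustrative):

```python
from datetime import datetime

# Illustrative stand-ins for CDO's selmon/selyear: filter a time axis
# by month and by an explicit list of years (e.g. JJAS in chosen years).
def selmon(times, months):
    return [t for t in times if t.month in months]

def selyear(times, years):
    return [t for t in times if t.year in years]

# Two years of monthly timestamps; composite JJAS restricted to 1997.
times = [datetime(y, m, 15) for y in (1997, 1998) for m in range(1, 13)]
jjas_1997 = selmon(selyear(times, [1997]), [6, 7, 8, 9])
```

In modern xarray the same composite has a concise spelling along the lines of `ds.sel(time=ds.time.dt.month.isin([6, 7, 8, 9]))`, which is part of why this request has stayed open as a convenience rather than a blocker.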
94012395 | MDU6SXNzdWU5NDAxMjM5NQ== | 457 | xray raises error when opening datasets with multi-dimensional coordinate variables | markelg 6883049 | closed | 0 | | | 2 | 2015-07-09T10:32:16Z | 2015-09-03T13:34:42Z | 2015-09-03T13:34:42Z | CONTRIBUTOR | | | | Hello, and thank you for this great package. I have an (opendap) dataset where one coordinate (time24) is attached to a 2-dimensional coordinate variable. The reason is that it contains a set of forecasts that overlap in time, so the value of time24 depends on the run. Unfortunately it is not open, so I cannot share it for tests. The main variable is: And the coordinate variables are: ``` int32 run(run) long_name: Run time for ForecastModelRunCollection standard_name: forecast_reference_time units: hours since 1981-01-01T00:00:00 _CoordinateAxisType: RunTime |S1 member(member, maxStrlen64) standard_name: realization _CoordinateAxisType: Ensemble int32 time24(run, time24) long_name: Forecast time for ForecastModelRunCollection standard_name: time units: hours since 1981-01-01T00:00:00 _CoordinateAxisType: Time float32 lon(lon) units: degrees_east float32 lat(lat) units: degrees_north ``` xray is currently unable to open this dataset: Which is OK, as this looks like something difficult to support, but it would be fine if I could at least exclude the variable time24 from being read by xray, with a flag like "exclude_variable=(var1, var2, ...)". xray would then fill the coordinate with the default int64 values (0, 1, 2, 3, 4...) that it uses when there is no coordinate for a dimension. This would also be very useful to exclude troublesome variables (e.g. corrupt, with weird data types, or inconsistent when concatenating) that are present in many datasets. Another way to go could be to issue a warning instead of an error, and then fill the variable with the default values (0, 1, 2, 3, 4...). I am looking at the code to see if I can implement this myself, but I am not sure how to proceed. | { "url": "https://api.github.com/repos/pydata/xarray/issues/457/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | completed | xarray 13221727 | issue
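The exclusion flag requested here was later implemented as the `drop_variables` argument of `open_dataset` (see the pull request below). Its core filtering step amounts to something like the following sketch, where the dict stands in for the variables read from the file:

```python
# Minimal sketch of the requested behaviour: drop named variables from
# the mapping of variables read from a file before the Dataset is built,
# so a troublesome variable like time24 never has to be decoded.
def drop_variables(variables, names):
    names = set(names)
    return {k: v for k, v in variables.items() if k not in names}

file_vars = {
    "run": [0, 1],
    "time24": [[0, 24], [24, 48]],  # 2-D coordinate variable causing the error
    "lat": [10.0],
    "lon": [20.0],
}
kept = drop_variables(file_vars, ["time24"])
```

Once `time24` is dropped, the dimension it indexed falls back to a default integer coordinate (0, 1, 2, ...), which is exactly the fallback behaviour the issue asks for.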
101050642 | MDExOlB1bGxSZXF1ZXN0NDI0NzQ2MTk= | 532 | Add a --drop-variables flag to xray.open_dataset to exclude certain variables | markelg 6883049 | closed | 0 | | | 7 | 2015-08-14T16:43:23Z | 2015-08-19T18:27:35Z | 2015-08-19T18:27:35Z | CONTRIBUTOR | | 0 | pydata/xarray/pulls/532 | Related to issue #457. I implemented this flag following the instructions given by @shoyer in the issue thread. I have a decent amount of experience with Python, but this is the first pull request I have set up on GitHub, and I am a beginner with git (more used to svn). I was careful, but please check that I did not mess something up ; ) | { "url": "https://api.github.com/repos/pydata/xarray/issues/532/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | | xarray 13221727 | pull
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);