issue_comments
12 rows where author_association = "NONE" and user = 44142765 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
1526051246 | https://github.com/pydata/xarray/issues/7713#issuecomment-1526051246 | https://api.github.com/repos/pydata/xarray/issues/7713 | IC_kwDOAMm_X85a9bGu | zoj613 44142765 | 2023-04-27T17:11:09Z | 2023-04-27T17:11:34Z | NONE | @kmuehlbauer It looks like a bug in the code if indeed tuples are meant to be treated the same as any sequence of data. These lines https://github.com/pydata/xarray/blob/0f4e99d036b0d6d76a3271e6191eacbc9922662f/xarray/core/variable.py#L259-L260 suggest that when a tuple is passed in, it is converted to a 0-dimensional array of type object via https://github.com/pydata/xarray/blob/0f4e99d036b0d6d76a3271e6191eacbc9922662f/xarray/core/utils.py#L344-L348. Maybe removing the tuple type check and relying on this line https://github.com/pydata/xarray/blob/0f4e99d036b0d6d76a3271e6191eacbc9922662f/xarray/core/variable.py#L287-L288 is better? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | `Variable/IndexVariable` do not accept a tuple for data. 1652227927 |
1478331489 | https://github.com/pydata/xarray/issues/7377#issuecomment-1478331489 | https://api.github.com/repos/pydata/xarray/issues/7377 | IC_kwDOAMm_X85YHYxh | zoj613 44142765 | 2023-03-21T17:42:29Z | 2023-03-21T17:42:29Z | NONE | Does this work for an array of quantiles, and does it require the time coordinate to have a single chunk? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Aggregating a dimension using the Quantiles method with `skipna=True` is very slow 1497031605 |
1474097791 | https://github.com/pydata/xarray/issues/6891#issuecomment-1474097791 | https://api.github.com/repos/pydata/xarray/issues/6891 | IC_kwDOAMm_X85X3PJ_ | zoj613 44142765 | 2023-03-17T16:30:58Z | 2023-03-17T16:31:12Z | NONE | @alrho007 I still get this error on xarray `2022.9.0`: calling `da.curvefit(coords=["time"], func=lambda x, params: x, method="trf")` raises `TypeError: curvefit() got an unexpected keyword argument 'method'`. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Passing extra keyword arguments to `curvefit` throws an exception. 1331985070 |
1464128552 | https://github.com/pydata/xarray/issues/3653#issuecomment-1464128552 | https://api.github.com/repos/pydata/xarray/issues/3653 | IC_kwDOAMm_X85XRNQo | zoj613 44142765 | 2023-03-10T17:24:35Z | 2023-03-10T17:26:06Z | NONE | When trying this snippet more than once I get a | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | "[Errno -90] NetCDF: file not found: b" when opening netCDF from server 543197350 |
1453906696 | https://github.com/pydata/xarray/issues/4122#issuecomment-1453906696 | https://api.github.com/repos/pydata/xarray/issues/4122 | IC_kwDOAMm_X85WqNsI | zoj613 44142765 | 2023-03-03T18:08:07Z | 2023-03-03T18:08:07Z | NONE | Based on the docs, it appears the scipy engine is safe if one does not need to be bothered with specifying engines. By the way, what are the limitations of the | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Document writing netcdf from xarray directly to S3 631085856 |
1453897364 | https://github.com/pydata/xarray/issues/4122#issuecomment-1453897364 | https://api.github.com/repos/pydata/xarray/issues/4122 | IC_kwDOAMm_X85WqLaU | zoj613 44142765 | 2023-03-03T18:00:33Z | 2023-03-03T18:00:33Z | NONE | I never needed to specify an engine when writing; you only need it when reading the file. I use the | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Document writing netcdf from xarray directly to S3 631085856 |
1401040677 | https://github.com/pydata/xarray/issues/4122#issuecomment-1401040677 | https://api.github.com/repos/pydata/xarray/issues/4122 | IC_kwDOAMm_X85Tgi8l | zoj613 44142765 | 2023-01-23T21:49:46Z | 2023-01-23T21:52:29Z | NONE | What didn't work: Changing the above to | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Document writing netcdf from xarray directly to S3 631085856 |
1400564474 | https://github.com/pydata/xarray/issues/4122#issuecomment-1400564474 | https://api.github.com/repos/pydata/xarray/issues/4122 | IC_kwDOAMm_X85Teur6 | zoj613 44142765 | 2023-01-23T15:44:20Z | 2023-01-23T15:44:20Z | NONE | Thanks, this actually worked for me. It seems as though initializing an s3 store using | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Document writing netcdf from xarray directly to S3 631085856 |
1400519887 | https://github.com/pydata/xarray/issues/4122#issuecomment-1400519887 | https://api.github.com/repos/pydata/xarray/issues/4122 | IC_kwDOAMm_X85TejzP | zoj613 44142765 | 2023-01-23T15:16:21Z | 2023-01-23T15:16:21Z | NONE | Is there any reliable way to write an xr.Dataset object as a netCDF file in 2023? I tried using the above approach with | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Document writing netcdf from xarray directly to S3 631085856 |
1308665605 | https://github.com/pydata/xarray/issues/508#issuecomment-1308665605 | https://api.github.com/repos/pydata/xarray/issues/508 | IC_kwDOAMm_X85OAKcF | zoj613 44142765 | 2022-11-09T12:16:13Z | 2022-11-09T12:16:13Z | NONE | Any plans to support this? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Ignore missing variables when concatenating datasets? 98587746 |
1253668750 | https://github.com/pydata/xarray/issues/5888#issuecomment-1253668750 | https://api.github.com/repos/pydata/xarray/issues/5888 | IC_kwDOAMm_X85KuXeO | zoj613 44142765 | 2022-09-21T12:53:18Z | 2022-09-21T12:53:18Z | NONE | I experienced something similar, but instead when using a string glob. The | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | open_[mf]dataset ds.encoding['source'] for pathlib.Path? 1033950863 |
1074074385 | https://github.com/pydata/xarray/issues/6395#issuecomment-1074074385 | https://api.github.com/repos/pydata/xarray/issues/6395 | IC_kwDOAMm_X85ABRMR | zoj613 44142765 | 2022-03-21T15:56:27Z | 2022-03-21T15:56:27Z | NONE | Thanks for the quick response. I wasn't aware of this. I had assumed a new Dataset object would have data that is a copy of the underlying | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | two Dataset objects reference the same numpy array memory block upon creation. 1175517164 |
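The first row above (issue #7713) describes how a tuple passed as `data` gets wrapped as a 0-dimensional object array, so the declared dimensions no longer match. A minimal sketch of the behavior being described, against an xarray version from around the time of the comment (newer releases may handle tuples differently):

```python
import xarray as xr

# A list is treated as an ordinary sequence and becomes a 1-D array:
v = xr.Variable(("x",), [1, 2, 3])
print(v.dims, v.values)  # ('x',) [1 2 3]

# A tuple, per the comment, is wrapped as a 0-d object array instead,
# so the declared dimension ('x',) no longer matches and this raises:
try:
    xr.Variable(("x",), (1, 2, 3))
except Exception as exc:
    print(type(exc).__name__, exc)
```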
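Several rows above belong to issue #4122 on writing netCDF from xarray directly to S3. A minimal sketch of the pattern those comments circle around, assuming `s3fs` is installed and AWS credentials are available in the environment; the bucket and key are hypothetical. Calling `to_netcdf()` with no target serializes in memory via the scipy engine (netCDF3), which matches the observation above that no explicit `engine=` is needed when writing:

```python
import s3fs
import xarray as xr

ds = xr.Dataset({"a": ("x", [1, 2, 3])})

# With no path argument, to_netcdf() returns the file contents as bytes,
# serialized by the scipy engine (netCDF3 format).
body = ds.to_netcdf()

fs = s3fs.S3FileSystem()  # credentials picked up from the environment
with fs.open("s3://my-bucket/data.nc", "wb") as f:  # hypothetical bucket/key
    f.write(body)
```

Reading the file back is where an engine choice matters, per the comment above that an engine is only needed when reading.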
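The last row above (issue #6395) touches on the fact that constructing a `Dataset` from an existing array or variable does not copy the underlying numpy buffer. A minimal sketch of the shared-memory behavior under discussion:

```python
import numpy as np
import xarray as xr

arr = np.zeros(3)
ds1 = xr.Dataset({"v": ("x", arr)})
ds2 = xr.Dataset({"v": ds1["v"]})  # no copy: both wrap the same buffer

ds2["v"][0] = 1.0
print(ds1["v"].values)  # [1. 0. 0.] -- ds1 sees the in-place write to ds2
print(arr)              # [1. 0. 0.] -- so does the original numpy array
```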
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);