issue_comments
23 rows where user = 34353851 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
829638401 | https://github.com/pydata/xarray/issues/5208#issuecomment-829638401 | https://api.github.com/repos/pydata/xarray/issues/5208 | MDEyOklzc3VlQ29tbWVudDgyOTYzODQwMQ== | JavierRuano 34353851 | 2021-04-29T22:33:05Z | 2021-04-29T22:33:05Z | NONE |

```python
import numpy as np
import xarray as xr

# Creates DataArrays
nt = 4
time = np.arange(nt) * 86400.0
time = xr.DataArray(time, coords=[time], dims=["time"])
aa = time * 2.0

# Adding attributes to DataArrays
time.attrs['units'] = "second"
aa.attrs['units'] = "whatever"

# Attributes are visible in the DataArrays
print('----------> time DataArray: ')
print(time)
print('----------> aa DataArray : ')
print(aa)
print('----------> aa attributes : ')
print(aa.attrs)

# Creating a Dataset
ds = xr.Dataset(
    {"aa": (["time"], aa)},
    coords={"time": (["time"], time)},
)

# Attributes are not visible in the Dataset
print('----------> DataSet before setting attributes')
print(ds)

# My request #1 : attributes of the DataArrays should be added
# to the DataSet (may be optional)
print('----------> Attributes of aa in DataSet : none')
print(ds['aa'].attrs)
print('----------> Attributes of aa outside DataSet : still here')
print(aa.attrs)

# Attributes are not written to the NetCDF file
print('----------> Attributes are not written to the NetCDF file')
ds.to_netcdf('sample1.nc')

# Adding attributes directly to the Dataset
# Attributes are still not visible in the Dataset
print('----------> DataSet after setting attributes : attributes not shown')
ds = ds.assign_attrs({'Visible': 'NotInvisibleMan'})
ds['time'].attrs['units'] = "second"
ds['aa'].attrs['units'] = "whatever"
```
 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
DataArray attributes not present in DataSet. Coherency problem between DataSet and NetCDF file 865003095 | |
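The attribute-propagation request in the comment above can be checked directly. As a hedged sketch (variable names are illustrative): in recent xarray versions, passing the DataArray object itself into the Dataset constructor, rather than the `(dims, data)` tuple form used in the report, does keep its attributes.

```python
import numpy as np
import xarray as xr

time = xr.DataArray(np.arange(4) * 86400.0, dims=["time"])
aa = time * 2.0
aa.attrs["units"] = "whatever"

# Passing the DataArray itself (not the (dims, data) tuple form)
# carries the DataArray's attrs into the Dataset variable.
ds = xr.Dataset({"aa": aa})
```

With the tuple form `{"aa": (["time"], aa)}`, the data is re-wrapped as a bare array and the attributes are dropped, which matches the behaviour reported above.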
828306616 | https://github.com/pydata/xarray/issues/5225#issuecomment-828306616 | https://api.github.com/repos/pydata/xarray/issues/5225 | MDEyOklzc3VlQ29tbWVudDgyODMwNjYxNg== | JavierRuano 34353851 | 2021-04-28T09:31:22Z | 2021-04-28T09:31:22Z | NONE | I'm sorry I couldn't be more helpful this time |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
python3.9 dask/array/slicing.py in slice_wrap_lists Don't yet support nd fancy indexing 869180122 | |
828256725 | https://github.com/pydata/xarray/issues/5225#issuecomment-828256725 | https://api.github.com/repos/pydata/xarray/issues/5225 | MDEyOklzc3VlQ29tbWVudDgyODI1NjcyNQ== | JavierRuano 34353851 | 2021-04-28T08:23:43Z | 2021-04-28T08:23:43Z | NONE | It works again. I have tried to save a NetCDF file to reproduce the bug, but got nothing. The traceback was from Django's debug mode. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
python3.9 dask/array/slicing.py in slice_wrap_lists Don't yet support nd fancy indexing 869180122 | |
828002269 | https://github.com/pydata/xarray/issues/5225#issuecomment-828002269 | https://api.github.com/repos/pydata/xarray/issues/5225 | MDEyOklzc3VlQ29tbWVudDgyODAwMjI2OQ== | JavierRuano 34353851 | 2021-04-27T23:00:32Z | 2021-04-27T23:00:32Z | NONE |

```
Linux stream Debian 5.10.0-6-amd64 #1 SMP Debian 5.10.28-1 (2021-04-09) x86_64 GNU/Linux

INSTALLED VERSIONS
commit: None
python: 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
python-bits: 64
OS: Linux
OS-release: 5.10.0-6-amd64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
libhdf5: 1.12.0
libnetcdf: 4.7.4
xarray: 0.17.0
pandas: 1.2.4
numpy: 1.19.5
scipy: 1.6.3
netCDF4: 1.5.6
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.4.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2021.04.1
distributed: None
matplotlib: 3.4.1
cartopy: 0.18.0
seaborn: None
numbagg: None
pint: None
setuptools: 52.0.0
pip: 20.3.4
conda: None
pytest: 6.0.2
IPython: 7.20.0
sphinx: None
```
 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
python3.9 dask/array/slicing.py in slice_wrap_lists Don't yet support nd fancy indexing 869180122 | |
822295458 | https://github.com/pydata/xarray/issues/5085#issuecomment-822295458 | https://api.github.com/repos/pydata/xarray/issues/5085 | MDEyOklzc3VlQ29tbWVudDgyMjI5NTQ1OA== | JavierRuano 34353851 | 2021-04-19T08:52:19Z | 2021-04-19T08:52:19Z | NONE | Thanks for your quick response. It is true that you have greatly improved the examples in the documentation; I do not know to what extent that would address any remaining gaps. The np.ufunc examples seemed insufficient a year ago, but you have already solved that. And it is always better to be aware of the improvements you are introducing before writing new documentation. Thanks for your attention. |
{ "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add example in your wiki. 842610988 | |
822066185 | https://github.com/pydata/xarray/issues/5085#issuecomment-822066185 | https://api.github.com/repos/pydata/xarray/issues/5085 | MDEyOklzc3VlQ29tbWVudDgyMjA2NjE4NQ== | JavierRuano 34353851 | 2021-04-18T21:39:38Z | 2021-04-18T21:39:38Z | NONE | Shall I close this issue? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add example in your wiki. 842610988 | |
815276368 | https://github.com/pydata/xarray/issues/5085#issuecomment-815276368 | https://api.github.com/repos/pydata/xarray/issues/5085 | MDEyOklzc3VlQ29tbWVudDgxNTI3NjM2OA== | JavierRuano 34353851 | 2021-04-07T21:27:28Z | 2021-04-07T21:46:39Z | NONE | Ok, thanks @max-sixty and @keewis. I hope it is useful for the proper and efficient use of xarray; you know best what development path it is taking. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add example in your wiki. 842610988 | |
814932746 | https://github.com/pydata/xarray/issues/5085#issuecomment-814932746 | https://api.github.com/repos/pydata/xarray/issues/5085 | MDEyOklzc3VlQ29tbWVudDgxNDkzMjc0Ng== | JavierRuano 34353851 | 2021-04-07T13:52:38Z | 2021-04-07T13:52:38Z | NONE | Hi again @max-sixty, thanks for your advice; sure, the outputs make a lot of sense. I think this new example has the same structure as the other examples. https://github.com/JavierRuano/ASI_Steady/blob/main/Examples/AirStagnationIndex_Wang_Xarray_Example.ipynb Regards, Javier Ruano. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add example in your wiki. 842610988 | |
814462311 | https://github.com/pydata/xarray/issues/5085#issuecomment-814462311 | https://api.github.com/repos/pydata/xarray/issues/5085 | MDEyOklzc3VlQ29tbWVudDgxNDQ2MjMxMQ== | JavierRuano 34353851 | 2021-04-06T21:53:31Z | 2021-04-06T21:53:31Z | NONE | I think the operations over the time axis with numpy reduce, and the ufunc operations, are interesting. From my point of view there are a lot of pandas and dask users who could learn to use xarray with that example, or who are using netCDF4 and numpy directly. I have created the example from the library, and it is no problem to change it; it shows a stagnation calculation, which was the intention. I understand that xarray is climate focused, and you know whether the topic is interesting and the example is really useful for your project. For me xarray has been very useful. If you prefer another type of example, we could refactor it. Regards, Javier Ruano. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add example in your wiki. 842610988 | |
631636660 | https://github.com/pydata/xarray/issues/4085#issuecomment-631636660 | https://api.github.com/repos/pydata/xarray/issues/4085 | MDEyOklzc3VlQ29tbWVudDYzMTYzNjY2MA== | JavierRuano 34353851 | 2020-05-20T18:07:22Z | 2020-05-20T18:07:22Z | NONE | I use http://xarray.pydata.org/en/stable/generated/xarray.apply_ufunc.html because it is faster. On Wed, 20 May 2020 at 20:01, Javier Ruano (javier.ruanno@gmail.com) wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
lazy evaluation of large arrays fails 621968474 | |
631633988 | https://github.com/pydata/xarray/issues/4085#issuecomment-631633988 | https://api.github.com/repos/pydata/xarray/issues/4085 | MDEyOklzc3VlQ29tbWVudDYzMTYzMzk4OA== | JavierRuano 34353851 | 2020-05-20T18:01:42Z | 2020-05-20T18:01:42Z | NONE | If you append .compute() it should not be a lazy operation. But my advice is only as a user. On Wed, 20 May 2020 at 19:51, Rob Hetland (notifications@github.com) wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
lazy evaluation of large arrays fails 621968474 | |
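The apply_ufunc and .compute() advice in the two comments above can be sketched minimally. This is a hedged, eager example assuming only numpy and xarray; with dask-backed (chunked) input the same call would stay lazy until .compute() is called.

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(6.0).reshape(2, 3), dims=["time", "x"])

# apply_ufunc maps a NumPy function over the underlying array while
# preserving dims and coords; on chunked (dask) input the result is
# lazy and .compute() forces evaluation
result = xr.apply_ufunc(np.square, da)
```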
624384220 | https://github.com/pydata/xarray/issues/4016#issuecomment-624384220 | https://api.github.com/repos/pydata/xarray/issues/4016 | MDEyOklzc3VlQ29tbWVudDYyNDM4NDIyMA== | JavierRuano 34353851 | 2020-05-06T00:54:38Z | 2020-05-06T00:54:38Z | NONE | I think another solution: you could add a random microsecond or nanosecond field to the datetime index, so the index values become different. You could test whether there are collisions in the index (meaning the index<->time mapping is not empty, something similar to https://en.wikipedia.org/wiki/Radix_sort). It is like the reverse of this solution. Adding another coordinate could be overloaded, and xarray is very powerful for extracting slices of time: ds.sel(time=slice('2000-06-01', '2000-06-10')) http://xarray.pydata.org/en/stable/time-series.html#datetime-indexing I hope it could be useful. Regards, Javier Ruano. On Wed, 29 Apr 2020 17:34, Javier Ruano javier.ruanno@gmail.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Concatenate DataArrays on one dim when another dim has difference sizes 609108666 | |
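The "random microsecond or nanosecond" de-duplication idea in the comment above can be sketched with plain pandas. A minimal sketch, with illustrative names; a strictly increasing offset is used instead of a random one so uniqueness holds by construction.

```python
import numpy as np
import pandas as pd

# two identical timestamps collide as an index
idx = pd.to_datetime(["2000-06-01", "2000-06-01", "2000-06-02"])

# add a tiny, strictly increasing nanosecond offset so every
# index value becomes distinct
offsets = pd.to_timedelta(np.arange(len(idx)), unit="ns")
unique_idx = idx + offsets
```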
621290319 | https://github.com/pydata/xarray/issues/4016#issuecomment-621290319 | https://api.github.com/repos/pydata/xarray/issues/4016 | MDEyOklzc3VlQ29tbWVudDYyMTI5MDMxOQ== | JavierRuano 34353851 | 2020-04-29T15:35:01Z | 2020-04-29T15:35:01Z | NONE | pandas doesn't have that problem:

```python
import pandas as pd

x1 = pd.DataFrame([['1', '2', '3']])
x2 = pd.DataFrame([['4', '5', '6']])
pd.concat([x1, x2], axis=1)
```

On Wed, 29 Apr 2020 at 17:23, Javier Ruano (javier.ruanno@gmail.com) wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Concatenate DataArrays on one dim when another dim has difference sizes 609108666 | |
621283446 | https://github.com/pydata/xarray/issues/4016#issuecomment-621283446 | https://api.github.com/repos/pydata/xarray/issues/4016 | MDEyOklzc3VlQ29tbWVudDYyMTI4MzQ0Ng== | JavierRuano 34353851 | 2020-04-29T15:23:59Z | 2020-04-29T15:23:59Z | NONE | The time is the same. Have you tried to change the second time index? On Wed, 29 Apr 2020 at 16:36, Xin Zhang (notifications@github.com) wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Concatenate DataArrays on one dim when another dim has difference sizes 609108666 | |
616214519 | https://github.com/pydata/xarray/issues/3984#issuecomment-616214519 | https://api.github.com/repos/pydata/xarray/issues/3984 | MDEyOklzc3VlQ29tbWVudDYxNjIxNDUxOQ== | JavierRuano 34353851 | 2020-04-19T19:49:19Z | 2020-04-19T19:49:19Z | NONE | I haven't tried it, but I know your problem. What if you create Datasets from the DataArrays, df.to_dataset(name='participant_A') and df.to_dataset(name='participant_B'), and then merge them? xr.merge([ds1, ds2], compat='no_conflicts') http://xarray.pydata.org/en/stable/combining.html In the other case you could create NaN values to get the same dimensions, but I have never tried that. I found another solution for my data, but this was my alternative. On Sun, 19 Apr 2020 20:57, (Ray) Jinbiao Yang notifications@github.com wrote:
|
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Support flexible DataArray shapes in Dataset 602793814 | |
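The to_dataset-then-merge suggestion in the comment above, as a minimal runnable sketch (the data and variable names are illustrative):

```python
import xarray as xr

# wrap each DataArray in its own single-variable Dataset
ds1 = xr.DataArray([1.0, 2.0], dims=["t"]).to_dataset(name="participant_A")
ds2 = xr.DataArray([3.0, 4.0], dims=["t"]).to_dataset(name="participant_B")

# merge them; 'no_conflicts' allows overlapping values that agree
merged = xr.merge([ds1, ds2], compat="no_conflicts")
```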
611302006 | https://github.com/pydata/xarray/issues/3957#issuecomment-611302006 | https://api.github.com/repos/pydata/xarray/issues/3957 | MDEyOklzc3VlQ29tbWVudDYxMTMwMjAwNg== | JavierRuano 34353851 | 2020-04-09T03:04:53Z | 2020-04-09T03:04:53Z | NONE | Yes, but with a lot of data, dask is the only option, and it works well with the index. https://github.com/dask/dask/issues/958 On Thu, 9 Apr 2020 at 2:54, Xin Zhang (notifications@github.com) wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Sort DataArray by data values along one dim 596606599 | |
611295039 | https://github.com/pydata/xarray/issues/3957#issuecomment-611295039 | https://api.github.com/repos/pydata/xarray/issues/3957 | MDEyOklzc3VlQ29tbWVudDYxMTI5NTAzOQ== | JavierRuano 34353851 | 2020-04-09T02:36:56Z | 2020-04-09T02:36:56Z | NONE | You could access the data directly as an ndarray, and you could transform the DataArray into a pandas DataFrame; pandas has sort_values. You asked about sorting values according to z, which is shown in the z index. With more DataArrays you could read about the Dataset concept... but I don't develop xarray, I am only a user of the module; perhaps you are looking for another type of answer. http://xarray.pydata.org/en/stable/generated/xarray.Dataset.sortby.html sorts according to values of 1-D DataArrays that share a dimension with the calling object. On Thu, 9 Apr 2020 4:22, Xin Zhang notifications@github.com wrote:
|
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Sort DataArray by data values along one dim 596606599 | |
611047964 | https://github.com/pydata/xarray/issues/3957#issuecomment-611047964 | https://api.github.com/repos/pydata/xarray/issues/3957 | MDEyOklzc3VlQ29tbWVudDYxMTA0Nzk2NA== | JavierRuano 34353851 | 2020-04-08T16:08:00Z | 2020-04-08T16:08:00Z | NONE | cld.reindex(z=cld[:,0,0].sortby(cld[:,0,0]).z) with this solution [0] [1]:

```
<xarray.DataArray (z: 5, y: 2, x: 4)>
array([[[ 0. ,  1. ,  2. ,  3. ],
        [ 4. ,  5. ,  6. ,  7. ]],
       ...
Coordinates:
  * z        (z) int64 0 4 1 2 3
Dimensions without coordinates: y, x
```

[0] https://stackoverflow.com/questions/41077393/how-to-sort-the-index-of-a-xarray-dataset-dataarray [1] https://github.com/pydata/xarray/issues/967 On Wed, 8 Apr 2020 at 14:06, Xin Zhang (notifications@github.com) wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Sort DataArray by data values along one dim 596606599 | |
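The reindex/sortby tricks in the comments above can also be written with numpy's argsort, which sidesteps naming issues with unnamed arrays. A minimal sketch with illustrative data:

```python
import numpy as np
import xarray as xr

da = xr.DataArray([30.0, 10.0, 20.0], dims=["z"], coords={"z": [0, 1, 2]})

order = np.argsort(da.values)   # positions that sort the data values
sorted_da = da.isel(z=order)    # the z coordinate follows the data
```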
610715885 | https://github.com/pydata/xarray/issues/3954#issuecomment-610715885 | https://api.github.com/repos/pydata/xarray/issues/3954 | MDEyOklzc3VlQ29tbWVudDYxMDcxNTg4NQ== | JavierRuano 34353851 | 2020-04-08T02:25:34Z | 2020-04-08T02:25:34Z | NONE |

```python
import xarray as xr
import numpy as np

x = 2
y = 4
z = 3
data = np.arange(x*y*z).reshape(z, x, y)

# 3d array with coords
a = xr.DataArray(data, dims=['z', 'y', 'x'], coords={'z': np.arange(z)})

# 2d array without coords
b = xr.DataArray(np.arange(x*y).reshape(x, y)*1.5, dims=['y', 'x'])

# expand 2d to 3d
b = b.assign_coords({'z': 3})

comb = xr.concat([a, b], dim='z')
```

Perhaps you need another thing. http://xarray.pydata.org/en/stable/generated/xarray.concat.html says the inputs must consist of variables and coordinates with matching shapes; if you compare, your shapes are different (a.shape and b.shape). Regards, Javier Ruano. On Wed, 8 Apr 2020 at 1:36, Xin Zhang (notifications@github.com) wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Concatenate 3D array with 2D array 596249070 | |
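An alternative to the assign_coords step in the comment above is expand_dims, which gives the 2-D array an explicit length-1 z dimension before concatenating. A hedged sketch with its own illustrative shapes:

```python
import numpy as np
import xarray as xr

x, y, z = 2, 4, 3

# 3-D array with a z coordinate
a = xr.DataArray(np.arange(x * y * z).reshape(z, y, x).astype(float),
                 dims=["z", "y", "x"], coords={"z": np.arange(z)})

# 2-D array without a z dimension
b = xr.DataArray(np.arange(x * y).reshape(y, x) * 1.5, dims=["y", "x"])

# add a length-1 z dimension (with coordinate value 3) and concatenate
b3 = b.expand_dims(z=[3])
comb = xr.concat([a, b3], dim="z")
```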
590375940 | https://github.com/pydata/xarray/issues/3795#issuecomment-590375940 | https://api.github.com/repos/pydata/xarray/issues/3795 | MDEyOklzc3VlQ29tbWVudDU5MDM3NTk0MA== | JavierRuano 34353851 | 2020-02-24T15:21:42Z | 2020-02-24T15:26:57Z | NONE | df1 = xarray.open_mfdataset with parallel=True, then df1 = df1.rename({'xarray_dataarray_variable': 'v'}). The chunksize changes to 365, and the Dataset is created from a DataArray chunked at 365, not the global size, which is 14610. (df2 gets the same operation, then xarray.Dataset with 'u': df1 and 'v': df2.) Pseudo-solution for me:

```python
xarray.Dataset({'u': df1.u.chunk(14610), 'v': df2.v.chunk(14610)},
               coords={'time': time_Index, 'latitude': latitude_Index,
                       'longitude': longitude_Index, 'level': level_Index})
```
 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Dataset problem with chunk DataArray. 569806418 | |
590351872 | https://github.com/pydata/xarray/issues/3795#issuecomment-590351872 | https://api.github.com/repos/pydata/xarray/issues/3795 | MDEyOklzc3VlQ29tbWVudDU5MDM1MTg3Mg== | JavierRuano 34353851 | 2020-02-24T14:39:11Z | 2020-02-24T15:06:16Z | NONE | In backends/api.py:

```python
DATAARRAY_NAME = "xarray_dataarray_name"
DATAARRAY_VARIABLE = "xarray_dataarray_variable"
```

The name is assigned automatically when I open the file with xarray.open_dataset(parallel=True). In core/dataarray.py:

```python
def rename(
    self,
    new_name_or_name_dict: Union[Hashable, Mapping[Hashable, Hashable]] = None,
    **names: Hashable,
) -> "DataArray":
```

I think some operation changes the previous chunksize, or something with xarray.open_dataset parallel=True (core/parallel.py), because the chunksize changes to 365, based on the days of the year. Sorry I cannot help more. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Dataset problem with chunk DataArray. 569806418 | |
590297372 | https://github.com/pydata/xarray/issues/3795#issuecomment-590297372 | https://api.github.com/repos/pydata/xarray/issues/3795 | MDEyOklzc3VlQ29tbWVudDU5MDI5NzM3Mg== | JavierRuano 34353851 | 2020-02-24T12:30:34Z | 2020-02-24T12:30:34Z | NONE | After I modify the chunksize back to 365 to avoid the MemoryError:

```
<xarray.Dataset>
Dimensions:  (latitude: 68, level: 47, longitude: 81, time: 14610)
Data variables:
    u  (time, level, latitude, longitude) float32 dask.array<shape=(14610, 47, 68, 81), chunksize=(365, 47, 68, 81)>
    v  (time, level, latitude, longitude) float32 dask.array<shape=(14610, 47, 68, 81), chunksize=(365, 47, 68, 81)>
```
 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Dataset problem with chunk DataArray. 569806418 | |
590294555 | https://github.com/pydata/xarray/issues/3795#issuecomment-590294555 | https://api.github.com/repos/pydata/xarray/issues/3795 | MDEyOklzc3VlQ29tbWVudDU5MDI5NDU1NQ== | JavierRuano 34353851 | 2020-02-24T12:21:39Z | 2020-02-24T12:21:39Z | NONE | The strange thing is that the DataArray chunksize changes after I rename 'xarray_dataarray_variable' to another name for the Dataset:

```
<xarray.Dataset>
Dimensions:  (latitude: 68, level: 47, longitude: 81, time: 14610)
Data variables:
    u  (time, level, latitude, longitude) float32 dask.array<shape=(14610, 47, 68, 81), chunksize=(14610, 47, 68, 81)>
    v  (time, level, latitude, longitude) float32 dask.array<shape=(14610, 47, 68, 81), chunksize=(14610, 47, 68, 81)>
```
 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Dataset problem with chunk DataArray. 569806418 |
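The .chunk(14610) workaround in the comments above amounts to rechunking after opening. A minimal sketch with synthetic data and illustrative sizes; it assumes dask is installed:

```python
import numpy as np
import xarray as xr

# a Dataset whose variable is split into small chunks along time,
# similar to what open_mfdataset produces (roughly one chunk per file)
ds = xr.Dataset({"u": (("time", "x"), np.zeros((10, 4)))}).chunk({"time": 2})

# consolidate into a single chunk along time, mirroring the
# .chunk(14610) pseudo-solution above
ds_big = ds.chunk({"time": 10})
```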
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```