id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,active_lock_reason,draft,pull_request,body,reactions,performed_via_github_app,state_reason,repo,type 406812274,MDU6SXNzdWU0MDY4MTIyNzQ=,2745,reindex doesn't preserve chunks,4711805,open,0,,,1,2019-02-05T14:37:24Z,2023-12-04T20:46:36Z,,CONTRIBUTOR,,,,"The following code creates a small (100x100) chunked `DataArray`, and then re-indexes it into a huge one (100000x100000): ```python import xarray as xr import numpy as np n = 100 x = np.arange(n) y = np.arange(n) da = xr.DataArray(np.zeros(n*n).reshape(n, n), coords=[x, y], dims=['x', 'y']).chunk({'x': n, 'y': n}) n2 = 100000 x2 = np.arange(n2) y2 = np.arange(n2) da2 = da.reindex({'x': x2, 'y': y2}) da2 ``` But the re-indexed `DataArray` has `chunksize=(100000, 100000)` instead of `chunksize=(100, 100)`: ``` <xarray.DataArray (x: 100000, y: 100000)> dask.array<shape=(100000, 100000), dtype=float64, chunksize=(100000, 100000)> Coordinates: * x (x) int64 0 1 2 3 4 5 6 ... 99994 99995 99996 99997 99998 99999 * y (y) int64 0 1 2 3 4 5 6 ... 99994 99995 99996 99997 99998 99999 ``` This immediately leads to a memory error when trying to, e.g., store it to a `zarr` archive: ```python ds2 = da2.to_dataset(name='foo') ds2.to_zarr(store='foo', mode='w') ``` Re-chunking to 100x100 before storing doesn't help either; it just takes a lot longer before crashing with a memory error: ```python da3 = da2.chunk({'x': n, 'y': n}) ds3 = da3.to_dataset(name='foo') ds3.to_zarr(store='foo', mode='w') ```","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2745/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 1320562401,I_kwDOAMm_X85Oti7h,6844,module 'xarray.core' has no attribute 'rolling',4711805,closed,0,,,2,2022-07-28T08:13:41Z,2022-07-28T08:26:13Z,2022-07-28T08:26:12Z,CONTRIBUTOR,,,,"### What happened? There used to be a `xarray.core.rolling`, and the documentation suggests it still [exists](https://docs.xarray.dev/en/v2022.06.0/generated/xarray.core.rolling.DataArrayCoarsen.html#xarray-core-rolling-dataarraycoarsen), but with `xarray-2022.6.0` I get: module `xarray.core` has no attribute `rolling`. It works with `xarray-2022.3.0`. ### What did you expect to happen? Shouldn't we be able to access `xarray.core.rolling`? ### Minimal Complete Verifiable Example ```Python import xarray as xr xr.core.rolling # Traceback (most recent call last): # File ""<stdin>"", line 1, in <module> # AttributeError: module 'xarray.core' has no attribute 'rolling' ``` ### MVCE confirmation - [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray. - [X] Complete example — the example is self-contained, including all data and the text of any traceback. - [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result. - [X] New issue — a search of GitHub Issues suggests this is not a duplicate. ### Relevant log output _No response_ ### Anything else we need to know? _No response_ ### Environment
/home/david/mambaforge/envs/xarray_leaflet/lib/python3.10/site-packages/_distutils_hack/__init__.py:33: UserWarning: Setuptools is replacing distutils. warnings.warn(""Setuptools is replacing distutils."") INSTALLED VERSIONS ------------------ commit: None python: 3.10.5 | packaged by conda-forge | (main, Jun 14 2022, 07:06:46) [GCC 10.3.0] python-bits: 64 OS: Linux OS-release: 5.15.0-41-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: None libnetcdf: None xarray: 2022.6.0 pandas: 1.4.3 numpy: 1.23.1 scipy: None netCDF4: None pydap: None h5netcdf: None h5py: None Nio: None zarr: None cftime: None nc_time_axis: None PseudoNetCDF: None rasterio: 1.3.0 cfgrib: None iris: None bottleneck: None dask: 2022.7.1 distributed: None matplotlib: 3.5.2 cartopy: None seaborn: None numbagg: None fsspec: 2022.7.0 cupy: None pint: None sparse: None flox: None numpy_groupies: None setuptools: 63.2.0 pip: 22.2.1 conda: None pytest: None IPython: 8.4.0 sphinx: None
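A possible workaround, as a rough sketch (it assumes `xarray/core/rolling.py` still ships with `xarray-2022.6.0` and merely stopped being imported eagerly by the `xarray.core` package):

```python
# Importing the submodule explicitly registers it as an attribute of the
# xarray.core package, so the old access pattern works again.
import xarray.core.rolling

import xarray as xr

print(xr.core.rolling.DataArrayRolling)  # accessible again
```

If the module was instead removed entirely, this sketch would not help and the documentation link above would be stale.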
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6844/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 414641120,MDU6SXNzdWU0MTQ2NDExMjA=,2789,Appending to zarr with string dtype,4711805,open,0,,,2,2019-02-26T14:31:42Z,2022-04-09T02:18:05Z,,CONTRIBUTOR,,,,"```python import xarray as xr da = xr.DataArray(['foo']) ds = da.to_dataset(name='da') ds.to_zarr('ds') # no special encoding specified ds = xr.open_zarr('ds') print(ds.da.values) ``` The following code prints `['foo']` (string type). The encoding chosen by zarr is `""dtype"": ""|S3""`, which corresponds to bytes, but it seems to be decoded to a string, which is what we want. ``` $ cat ds/da/.zarray { ""chunks"": [ 1 ], ""compressor"": { ""blocksize"": 0, ""clevel"": 5, ""cname"": ""lz4"", ""id"": ""blosc"", ""shuffle"": 1 }, ""dtype"": ""|S3"", ""fill_value"": null, ""filters"": null, ""order"": ""C"", ""shape"": [ 1 ], ""zarr_format"": 2 } ``` The problem is that if I want to append to the zarr archive, like so: ```python import zarr ds = zarr.open('ds', mode='a') da_new = xr.DataArray(['barbar']) ds.da.append(da_new) ds = xr.open_zarr('ds') print(ds.da.values) ``` It prints `['foo' 'bar']`. Indeed the encoding was kept as `""dtype"": ""|S3""`, which is fine for a string of 3 characters but not for 6. If I want to specify the encoding with the maximum length, e.g: ```python ds.to_zarr('ds', encoding={'da': {'dtype': '|S6'}}) ``` It solves the length problem, but now my strings are kept as bytes: `[b'foo' b'barbar']`. If I specify a Unicode encoding: ```python ds.to_zarr('ds', encoding={'da': {'dtype': 'U6'}}) ``` It is not taken into account. The zarr encoding is `""dtype"": ""|S3""` and I am back to my length problem: `['foo' 'bar']`. The solution with `'dtype': '|S6'` is acceptable, but I need to encode my strings to bytes when indexing, which is annoying.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2789/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 636480145,MDU6SXNzdWU2MzY0ODAxNDU=,4141,xarray.where() drops attributes,4711805,closed,0,,,3,2020-06-10T19:06:32Z,2022-01-19T19:35:41Z,2022-01-19T19:35:41Z,CONTRIBUTOR,,,," #### MCVE Code Sample ```python import xarray as xr da = xr.DataArray(1) da.attrs['foo'] = 'bar' xr.where(da==0, -1, da).attrs # shows: {} ``` #### Expected Output `{'foo': 'bar'}` #### Problem Description I would expect the attributes to remain in the data array. #### Versions
Output of xr.show_versions() INSTALLED VERSIONS ------------------ commit: None python: 3.8.2 | packaged by conda-forge | (default, Apr 24 2020, 08:20:52) [GCC 7.3.0] python-bits: 64 OS: Linux OS-release: 5.4.0-33-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 libhdf5: None libnetcdf: None xarray: 0.15.1 pandas: 1.0.4 numpy: 1.18.4 scipy: 1.4.1 netCDF4: None pydap: None h5netcdf: None h5py: None Nio: None zarr: None cftime: None nc_time_axis: None PseudoNetCDF: None rasterio: 1.1.4 cfgrib: None iris: None bottleneck: None dask: 2.16.0 distributed: None matplotlib: 3.2.1 cartopy: None seaborn: None numbagg: None setuptools: 46.2.0 pip: 20.1 conda: None pytest: None IPython: 7.14.0 sphinx: 3.0.4
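In the meantime, a minimal sketch of a manual workaround, assuming the result is meant to carry `da`'s attributes:

```python
import xarray as xr

da = xr.DataArray(1)
da.attrs['foo'] = 'bar'

result = xr.where(da == 0, -1, da)
result.attrs = dict(da.attrs)  # copy the attributes back by hand
print(result.attrs)  # {'foo': 'bar'}
```

(Newer releases may also have grown a `keep_attrs` option on `xr.where`; I have not checked in which version it landed.)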
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4141/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 777670351,MDU6SXNzdWU3Nzc2NzAzNTE=,4756,feat: reindex multiple DataArrays,4711805,open,0,,,1,2021-01-03T16:23:01Z,2021-01-03T19:05:03Z,,CONTRIBUTOR,,,,"When e.g. creating a `Dataset` from multiple `DataArray`s that are supposed to share the same grid, but are not exactly aligned (as is often the case with floating point coordinates), we usually end up with undesirable `NaN`s inserted in the data set. For instance, consider the following data arrays that are not exactly aligned: ```python import xarray as xr da1 = xr.DataArray([[0, 1, 2], [3, 4, 5], [6, 7, 8]], coords=[[0, 1, 2], [0, 1, 2]], dims=['x', 'y']).rename('da1') da2 = xr.DataArray([[0, 1, 2], [3, 4, 5], [6, 7, 8]], coords=[[1.1, 2.1, 3.1], [1.1, 2.1, 3.1]], dims=['x', 'y']).rename('da2') da1.plot.imshow() da2.plot.imshow() ``` ![image](https://user-images.githubusercontent.com/4711805/103482830-542bbe80-4de3-11eb-814b-bb1f705967c4.png) ![image](https://user-images.githubusercontent.com/4711805/103482836-61e14400-4de3-11eb-804b-f549c2551562.png) They show gaps when combined in a data set: ```python ds = xr.Dataset({'da1': da1, 'da2': da2}) ds['da1'].plot.imshow() ds['da2'].plot.imshow() ``` ![image](https://user-images.githubusercontent.com/4711805/103482959-3f9bf600-4de4-11eb-9513-900319cb485a.png) ![image](https://user-images.githubusercontent.com/4711805/103482966-47f43100-4de4-11eb-853b-2b44f7bc8d7f.png) I think this is a frequent enough situation that we would like a function to re-align all the data arrays together. There is a `reindex_like` method, which accepts a tolerance, but calling it successively on every data array, like so: ```python da1r = da1.reindex_like(da2, method='nearest', tolerance=0.2) da2r = da2.reindex_like(da1r, method='nearest', tolerance=0.2) ``` would result in the intersection of the coordinates, rather than their union. What I would like is a function like the following: ```python import numpy as np from functools import reduce def reindex_all(arrays, dims, tolerance): coords = {} for dim in dims: coord = reduce(np.union1d, [array[dim] for array in arrays[1:]], arrays[0][dim]) diff = coord[:-1] - coord[1:] keep = np.abs(diff) > tolerance coords[dim] = np.append(coord[:-1][keep], coord[-1]) reindexed = [array.reindex(coords, method='nearest', tolerance=tolerance) for array in arrays] return reindexed da1r, da2r = reindex_all([da1, da2], ['x', 'y'], 0.2) dsr = xr.Dataset({'da1': da1r, 'da2': da2r}) dsr['da1'].plot.imshow() dsr['da2'].plot.imshow() ``` ![image](https://user-images.githubusercontent.com/4711805/103483065-00ba7000-4de5-11eb-8581-fb156970a7e8.png) ![image](https://user-images.githubusercontent.com/4711805/103483072-0748e780-4de5-11eb-8b42-6bd9b248ab1e.png) I have not found something equivalent. 
If you think this is worth it, I could try and send a PR to implement such a feature.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4756/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 393742068,MDU6SXNzdWUzOTM3NDIwNjg=,2628,Empty plot with pcolormesh in the notebook,4711805,closed,0,,,3,2018-12-23T11:21:48Z,2020-12-05T05:06:36Z,2020-12-05T05:06:36Z,CONTRIBUTOR,,,,"#### Code Sample ```python import numpy as np import xarray as xr import matplotlib.pyplot as plt %matplotlib inline a = np.ones((4000, 4000), dtype=np.uint8) a[:1000, :1000] = 0 lat = np.array([0-i for i in range(a.shape[0])]) lon = np.array([0+i for i in range(a.shape[1])]) da = xr.DataArray(a, coords=[lat, lon], dims=['lat', 'lon']) da.plot.pcolormesh() ``` #### Problem description The code above shows an empty plot **in a notebook** (but not in the console). #### Expected Output If I replace `da.plot.pcolormesh()` with `da.plot.imshow()` or `plt.pcolormesh(a)`, it works fine. Also, if I decrease the size of the array, it works. #### Output of ``xr.show_versions()``
INSTALLED VERSIONS ------------------ commit: None python: 3.6.6.final.0 python-bits: 64 OS: Linux OS-release: 4.10.0-42-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 xarray: 0.10.7 pandas: 0.23.4 numpy: 1.15.4 scipy: 1.1.0 netCDF4: 1.4.1 h5netcdf: 0.6.2 h5py: 2.8.0 Nio: 1.5.4 zarr: 2.2.0 bottleneck: 1.2.1 cyordereddict: None dask: 0.20.2 distributed: 1.24.2 matplotlib: 3.0.2 cartopy: None seaborn: None setuptools: 40.6.2 pip: 18.1 conda: 4.5.12 pytest: None IPython: 7.1.1 sphinx: None
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2628/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 395384938,MDU6SXNzdWUzOTUzODQ5Mzg=,2644,DataArray concat with tolerance,4711805,closed,0,,,3,2019-01-02T21:23:21Z,2019-08-15T15:15:26Z,2019-08-15T15:15:26Z,CONTRIBUTOR,,,,"I would like to concatenate many `DataArray` whose dimension coordinates are almost aligned, allowing for some offset in the coordinates: ```python import xarray as xr da1 = xr.DataArray([[0, 1], [2, 3]], coords=[[0, 1], [0, 1]], dims=['x', 'y']) da2 = xr.DataArray([[4, 5], [6, 7]], coords=[[1.1, 2.1], [1.1, 2.1]], dims=['x', 'y']) da = xr.concat([da1, da2], 'z', method='nearest', tolerance=0.2) # doesn't exist yet da ``` And I would get: ``` array([[[ 0., 1., nan], [ 2., 3., nan], [nan, nan, nan]], [[nan, nan, nan], [nan, 4., 5.], [nan, 6., 7.]]]) Coordinates: * x (x) float64 0 1 2.1 * y (y) float64 0 1 2.1 Dimensions without coordinates: z ``` @jhamman suggested to use `reindex_like` in this [StackOverflow question](https://stackoverflow.com/questions/54007495/concatenating-dataarray-with-tolerance-in-xarray/54010432#54010432), but it doesn't produce the union of coordinates, so I cannot chain them like this: ```python da2 = da2.reindex_like(da1, method='nearest', tolerance=0.2) ``` Is there another work around? Do you think it would be worth having this feature in `concat`?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2644/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 467338176,MDU6SXNzdWU0NjczMzgxNzY=,3100,Append to zarr redundant with mode='a',4711805,closed,0,,,3,2019-07-12T10:21:04Z,2019-07-29T15:54:45Z,2019-07-29T15:54:45Z,CONTRIBUTOR,,,,"When appending to a zarr store, we need to set `append_dim=dim_name` but also `mode='a'`. Any reason to also specify the writing mode? I think it should automatically be set to `'a'` if `append_dim is not None`.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3100/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 467267153,MDU6SXNzdWU0NjcyNjcxNTM=,3099,Interpolation doesn't apply on time coordinate,4711805,closed,0,,,1,2019-07-12T07:37:18Z,2019-07-13T15:05:39Z,2019-07-13T15:05:39Z,CONTRIBUTOR,,,,"#### MCVE Code Sample ```python In [1]: import numpy as np ...: import pandas as pd ...: import xarray as xr In [2]: da = xr.DataArray([1, 2], [('time', pd.date_range('2000-01-01', '2000-01-02', periods=2))]) ...: da Out[2]: array([1, 2]) Coordinates: * time (time) datetime64[ns] 2000-01-01 2000-01-02 In [3]: da.interp(time=da.time-np.timedelta64(1, 'D')) Out[3]: array([nan, 1.]) Coordinates: * time (time) datetime64[ns] 2000-01-01 2000-01-02 ``` #### Problem Description The data has been interpolated, but not the time coordinate. #### Expected Output When we explicitly get the values of the time coordinate, it works fine: ```python In [4]: da.interp(time=da.time.values-np.timedelta64(1, 'D')) Out[4]: array([nan, 1.]) Coordinates: * time (time) datetime64[ns] 1999-12-31 2000-01-01 ``` #### Output of ``xr.show_versions()``
INSTALLED VERSIONS ------------------ commit: None python: 3.7.3 (default, Mar 27 2019, 22:11:17) [GCC 7.3.0] python-bits: 64 OS: Linux OS-release: 4.10.0-42-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 libhdf5: None libnetcdf: None xarray: 0.12.2 pandas: 0.24.2 numpy: 1.16.4 scipy: 1.2.1 netCDF4: None pydap: None h5netcdf: None h5py: None Nio: None zarr: 2.3.2 cftime: None nc_time_axis: None PseudonetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: 2.0.0 distributed: 2.0.1 matplotlib: 3.1.0 cartopy: None seaborn: None numbagg: None setuptools: 41.0.0 pip: 19.0.3 conda: 4.7.5 pytest: None IPython: 7.7.0.dev sphinx: None
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3099/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 463928027,MDU6SXNzdWU0NjM5MjgwMjc=,3078,Auto-generated API documentation,4711805,closed,0,,,2,2019-07-03T19:57:51Z,2019-07-03T20:02:11Z,2019-07-03T20:00:35Z,CONTRIBUTOR,,,,"For instance in http://xarray.pydata.org/en/stable/generated/xarray.apply_ufunc.html, there is no space between the parameters and their type description, e.g. `funccallable` instead of `func : callable`.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3078/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 415614806,MDU6SXNzdWU0MTU2MTQ4MDY=,2793,Fit bounding box to coarser resolution,4711805,open,0,,,2,2019-02-28T13:07:09Z,2019-04-11T14:37:47Z,,CONTRIBUTOR,,,,"When using [coarsen](http://xarray.pydata.org/en/latest/generated/xarray.DataArray.coarsen.html), we often need to align the original DataArray with the coarser coordinates. For instance: ```python import xarray as xr import numpy as np da = xr.DataArray(np.arange(4*4).reshape(4, 4), coords=[np.arange(4, 0, -1) + 0.5, np.arange(4) + 0.5], dims=['lat', 'lon']) # # array([[ 0, 1, 2, 3], # [ 4, 5, 6, 7], # [ 8, 9, 10, 11], # [12, 13, 14, 15]]) # Coordinates: # * lat (lat) float64 4.5 3.5 2.5 1.5 # * lon (lon) float64 0.5 1.5 2.5 3.5 da.coarsen(lat=2, lon=2).mean() # # array([[ 2.5, 4.5], # [10.5, 12.5]]) # Coordinates: # * lat (lat) float64 4.0 2.0 # * lon (lon) float64 1.0 3.0 ``` But if the coarser coordinates are aligned like: ``` lat: ... 5 3 1 ... lon: ... 1 3 5 ... ``` Then directly applying `coarsen` will not work (here on the `lat` dimension). The following function extends the original DataArray so that it is aligned with the coarser coordinates: ```python def adjust_bbox(da, dims): """"""Adjust the bounding box of a DaskArray to a coarser resolution. Args: da: the DaskArray to adjust. dims: a dictionary where keys are the name of the dimensions on which to adjust, and the values are of the form [unsigned_coarse_resolution, signed_original_resolution] Returns: The DataArray bounding box adjusted to the coarser resolution. """""" coords = {} for k, v in dims.items(): every, step = v offset = step / 2 dim0 = da[k].values[0] - offset dim1 = da[k].values[-1] + offset if step < 0: # decreasing coordinate dim0 = dim0 + (every - dim0 % every) % every dim1 = dim1 - dim1 % every else: # increasing coordinate dim0 = dim0 - dim0 % every dim1 = dim1 + (every - dim1 % every) % every coord0 = np.arange(dim0+offset, da[k].values[0]-offset, step) coord1 = da[k].values coord2 = np.arange(da[k].values[-1]+step, dim1, step) coord = np.hstack((coord0, coord1, coord2)) coords[k] = coord return da.reindex(**coords).fillna(0) da = adjust_bbox(da, {'lat': (2, -1), 'lon': (2, 1)}) # # array([[ 0., 0., 0., 0.], # [ 0., 1., 2., 3.], # [ 4., 5., 6., 7.], # [ 8., 9., 10., 11.], # [12., 13., 14., 15.], # [ 0., 0., 0., 0.]]) # Coordinates: # * lat (lat) float64 5.5 4.5 3.5 2.5 1.5 0.5 # * lon (lon) float64 0.5 1.5 2.5 3.5 da.coarsen(lat=2, lon=2).mean() # # array([[0.25, 1.25], # [6.5 , 8.5 ], # [6.25, 7.25]]) # Coordinates: # * lat (lat) float64 5.0 3.0 1.0 # * lon (lon) float64 1.0 3.0 ``` Now `coarsen` gives the right result. 
But `adjust_bbox` is rather complicated and specific to this use case (evenly spaced coordinate points...). Do you know of a better/more general way of doing it?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2793/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue