html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/4203#issuecomment-654688921,https://api.github.com/repos/pydata/xarray/issues/4203,654688921,MDEyOklzc3VlQ29tbWVudDY1NDY4ODkyMQ==,14808389,2020-07-07T08:30:35Z,2020-07-07T15:38:11Z,MEMBER,"that's only the short repr; the values are not modified:
```python
In [5]: da.lat
Out[5]:
array([37.49944, 37.5004 , 37.50135, ..., 43.1014 , 43.10143, 43.10144])
Coordinates:
* lat (lat) float64 37.5 37.5 37.5 37.5 37.5 ... 43.1 43.1 43.1 43.1 43.1
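# note (added): only the coordinate summary line above is rounded for display;
# the underlying float64 values (e.g. via `da.lat.values`) keep full precision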
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,651101286
https://github.com/pydata/xarray/issues/4203#issuecomment-654210829,https://api.github.com/repos/pydata/xarray/issues/4203,654210829,MDEyOklzc3VlQ29tbWVudDY1NDIxMDgyOQ==,14808389,2020-07-06T12:42:43Z,2020-07-06T12:42:43Z,MEMBER,"thanks, that helps. First of all (unless I did something wrong with the `read_csv` call), there's an `Unnamed: 0` column that has to be removed.
Other than that, your data seems to be quite sparse, so that's an ideal fit for [`sparse`](https://sparse.pydata.org):
```python
In [38]: %%time
...: df = pd.read_csv(""/tmp/data.csv"")
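    ...: # (comment added for clarity) drop the stray 'Unnamed: 0' column left over from the CSV export,
    ...: # then index the remaining columns by (lat, lon)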
...: a = df.drop(""Unnamed: 0"", axis=1).set_index([""lat"", ""lon""])
...: a = a.stack()
...: a.index.names = [""lat"", ""lon"", ""time""]
...: a = a.sort_index()
...: a.name = ""T""
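    ...: # (comment added for clarity) sparse=True backs the DataArray with a sparse.COO array
    ...: # instead of densifying, which keeps memory low for mostly-missing data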
...: xr.DataArray.from_series(a, sparse=True)
...:
...:
CPU times: user 606 ms, sys: 63.9 ms, total: 670 ms
Wall time: 670 ms
Out[38]:
Coordinates:
* lat (lat) float64 37.5 37.5 37.5 37.5 37.5 ... 43.1 43.1 43.1 43.1 43.1
* lon (lon) float64 96.46 96.46 96.46 96.47 ... 102.6 102.6 102.6 102.6
* time (time) object '2011-01-01 00:00:00' ... '2011-01-31 00:00:00'
```","{""total_count"": 3, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 1, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,651101286