html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/3564#issuecomment-565516039,https://api.github.com/repos/pydata/xarray/issues/3564,565516039,MDEyOklzc3VlQ29tbWVudDU2NTUxNjAzOQ==,1197350,2019-12-13T16:50:45Z,2019-12-13T16:50:45Z,MEMBER,"> if we're uploading real data for these, how big can/should the files be? It might affect what dataset I use. This is a good question. We need the tutorials to be able to run and build within a CI environment. That's the main constraint. For larger datasets, rather than storing them in github, a good approach is to create an archive on https://zenodo.org/ from which the data can be pulled.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,527323165