issue_comments
9 rows where author_association = "NONE" and issue = 336458472 sorted by updated_at descending
Each comment below shows its id, user, created/updated timestamps (UTC), and author_association, followed by the comment URL and body. All nine comments are on issue 336458472 ("xarray to zarr") and carry no reactions.
401745899 · NickMortimer 4338975 · created 2018-07-02T10:03:36Z · updated 2018-07-02T10:03:36Z · NONE
https://github.com/pydata/xarray/issues/2256#issuecomment-401745899

As an update: the chunking could still be improved, but I've crunched over 800 floats into the structure, giving 140k profiles. Even with the levels expanded to 3000 (overkill), the space on disk is 1/3 of the original size, and it could be under 1/4 if the chunking were set up to avoid very small files. I can now access any profile by an index. I might be happy!
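(A minimal sketch of the "access any profile by an index" idea above, assuming the combined store at D:\argo\argo.zarr built in the next comment; the variable name TEMP and the index value are illustrative, not taken from the thread.)

```
import xarray as xr

# Open the combined Argo store lazily; data is only read when values are requested.
ds = xr.open_zarr(r'D:\argo\argo.zarr')

# Pull a single profile by its integer position along the N_PROF dimension.
profile = ds.isel(N_PROF=12345)
print(profile['TEMP'].values)  # TEMP is a hypothetical variable name here
```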
401728326 · NickMortimer 4338975 · created 2018-07-02T09:18:08Z · updated 2018-07-02T09:19:18Z · NONE
https://github.com/pydata/xarray/issues/2256#issuecomment-401728326

@rabernat thanks for all the help so far. If pickle is not the way forward, then I need to resize the casts so they all have the same dimensions, so I came up with the following code:

```
import glob

import numpy as np
import xarray as xr
import zarr


def expand_levels(dataset, maxlevel=1500):
    # Pad every per-level variable with NaN columns so all profiles share
    # the same N_LEVELS length.
    newds = xr.Dataset()
    blankstack = np.empty((dataset.N_PROF.size, maxlevel - dataset.N_LEVELS.size))
    blankstack[:] = np.nan
    newds['N_PROF'] = dataset.N_PROF.values
    newds['N_LEVELS'] = np.arange(maxlevel).astype('int64')
    newds['N_PARAM'] = dataset.N_PARAM
    newds['N_CALIB'] = dataset.N_CALIB
    for varname, da in dataset.data_vars.items():
        if 'N_PROF' in da.dims:
            if 'N_LEVELS' in da.dims:
                newds[varname] = xr.DataArray(np.hstack((da.values, blankstack)),
                                              dims=da.dims, name=da.name, attrs=da.attrs)
            elif 'N_HISTORY' not in da.dims:
                newds[varname] = da
    newds.attrs = dataset.attrs
    return newds


def append_to_zarr(dataset, zarrfile):
    # Append each variable of an already-expanded dataset onto the matching zarr array.
    for varname, da in dataset.data_vars.items():
        zarrfile[varname].append(da.values)


files = list(glob.iglob(r'D:\argo\csiro**_prof.nc', recursive=True))
expand_levels(xr.open_dataset(files[0]), 3000).to_zarr(r'D:\argo\argo.zarr', mode='w')
za = zarr.open(r'D:\argo\argo.zarr', mode='a')  # 'a' reopens the existing store read/write (the original used 'w+')
for f in files[1:]:
    print(f)
    append_to_zarr(expand_levels(xr.open_dataset(f), 3000), za)
```

This basically appends NaN to the end of each profile to get them all to the same length, then appends them into the zarr structure. It is very experimental; I just wanted to see how appending everything into big arrays would work. It might be better to save a resized netCDF for each file and then open them all at once and do a single to_zarr?
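(A minimal sketch of the "open them all at once and do a to_zarr" alternative floated at the end of that comment, reusing the expand_levels helper above; the combine/concat_dim keywords follow current xarray and the output path is made up for illustration.)

```
import glob

import xarray as xr

files = sorted(glob.iglob(r'D:\argo\csiro**_prof.nc', recursive=True))

# Pad every file to the same N_LEVELS length on the way in, concatenate lazily
# along N_PROF, then write the whole collection to zarr in a single call.
combined = xr.open_mfdataset(
    files,
    preprocess=lambda ds: expand_levels(ds, 3000),  # helper defined in the comment above
    combine='nested',
    concat_dim='N_PROF',
)
combined.to_zarr(r'D:\argo\argo_mf.zarr', mode='w')
```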
401195638 · NickMortimer 4338975 · created 2018-06-28T22:46:32Z · updated 2018-06-28T22:47:09Z · NONE
https://github.com/pydata/xarray/issues/2256#issuecomment-401195638

Yes, I agree that zarr is best for large arrays etc.; that's kind of why I ended up on the array-of-xray-objects idea. I guess that was sort of creating an object store in zarr. What I'd like to offer is a simple set of analytical tools based on Jupyter, allowing easy processing of float data and getting away from the download-and-process pattern. I'm still trying to find the best way to do this, as Argo data does not fall neatly into any one system because of its lack of homogeneity.
400910725 · NickMortimer 4338975 · created 2018-06-28T04:56:48Z · updated 2018-06-28T04:57:33Z · NONE
https://github.com/pydata/xarray/issues/2256#issuecomment-400910725

@jhamman Ah, thanks for that, it looks interesting. Is there a way of specifying that in the .to_zarr() call?
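(A minimal sketch of one way to pass per-variable chunk sizes directly to .to_zarr() via its encoding argument, assuming the question is about chunk/encoding settings; the file name, variable name, and chunk shape are illustrative, not from the thread.)

```
import xarray as xr

ds = xr.open_dataset('example_prof.nc')  # hypothetical input file

# Chunk sizes can be set per variable through the encoding argument; here each
# chunk holds 500 profiles by 3000 levels (illustrative numbers).
ds.to_zarr(
    'example.zarr',
    mode='w',
    encoding={'TEMP': {'chunks': (500, 3000)}},
)
```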
400909462 · NickMortimer 4338975 · created 2018-06-28T04:46:26Z · updated 2018-06-28T04:46:26Z · NONE
https://github.com/pydata/xarray/issues/2256#issuecomment-400909462

I'd like to have both ;)
400908763 · NickMortimer 4338975 · created 2018-06-28T04:40:29Z · updated 2018-06-28T04:40:29Z · NONE
https://github.com/pydata/xarray/issues/2256#issuecomment-400908763

No worries, at the moment I'm in play mode; pretty much everything is new to me! The aim of this little setup is to be able to do things like compare floats with those nearby, or create a climatology for a local area from Argo profiles; for example, produce a report for every operational Argo float each cycle and feed it to some kind of AI/ML system to detect bad data in near real time. So initially I need a platform with which I can easily data-mine historical floats. With the pickle solution the entire data set can be accessed with a very small footprint.

Why zarr? I seem to remember reading that reading from and writing to HDF5 was limited when compression was turned on. Plus, I like the way zarr does things; it looks a lot more fault tolerant.

Keep asking the questions, they are very valuable. Are you going to the Pangeo meeting?
400906158 · NickMortimer 4338975 · created 2018-06-28T04:20:28Z · updated 2018-06-28T04:20:28Z · NONE
https://github.com/pydata/xarray/issues/2256#issuecomment-400906158

With the pickle solution I end up with 31 files in 3 folders, with a size on disk of 1.2 MB, storing 250 profiles of a single float.

I'm new to GitHub and open source! Thanks for the time and the edit!
400905262 · NickMortimer 4338975 · created 2018-06-28T04:12:47Z · updated 2018-06-28T04:18:07Z · NONE
https://github.com/pydata/xarray/issues/2256#issuecomment-400905262

Yes, I agree with you. I started out with ds.to_zarr for each file; the problem was that each property of the cycle, e.g. lat and long, ended up in its own file. One float with 250 cycles ended up as over 70,000 small files on my file system, and because of the cluster size they occupied over 100 MB of hard disk. As there are over 4,000 floats, lots of small files are not going to be viable.

Yep, this line is funny. CYCLE_NUMBER increments with each cycle and starts at 1. Sometimes a cycle might be delayed and added at a later date, so I did not want to assume that the list of files had been sorted into the order of the float cycles; instead I want to build an array of cycles in order. Also, if a file is replaced by a newer version then I want it to overwrite that profile in the array.

A single float file ends up as 194 small files in 68 directories, total size 30.4 KB (31,223 bytes) but size on disk 776 KB (794,624 bytes). I have tried

but it fails with:
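(A minimal sketch of the CYCLE_NUMBER idea described above, i.e. writing each profile into slot CYCLE_NUMBER - 1 so that late or re-delivered files simply overwrite the right position; the store path, array name, sizes, and variable names are hypothetical, not taken from the thread.)

```
import numpy as np
import xarray as xr
import zarr

# Hypothetical per-float store, preallocated for up to 300 cycles of 3000 levels;
# a single large chunk keeps the number of files on disk small.
root = zarr.open('float_example.zarr', mode='a')
temp = root.require_dataset('TEMP', shape=(300, 3000), chunks=(300, 3000),
                            dtype='f8', fill_value=np.nan)

def write_profile(profile_file):
    # Place (or overwrite) the profile at the slot given by its cycle number.
    ds = xr.open_dataset(profile_file)
    cycle = int(ds['CYCLE_NUMBER'].values[0])   # cycle numbers start at 1
    values = ds['TEMP'].values[0, :]            # single profile from this file
    temp[cycle - 1, :values.size] = values
```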
400901163 · NickMortimer 4338975 · created 2018-06-28T03:41:10Z · updated 2018-06-28T03:41:10Z · NONE
https://github.com/pydata/xarray/issues/2256#issuecomment-400901163

Thanks, yep, my goal is to provide a simple online notebook that can be used to process/QA/QC Argo float data. I'd like to create a system that works intuitively with the current file structure rather than building a database of values on top of it. Here's a first go with some code:

```
def processfloat(floatpath, zarrpath):
    root = zarr.open(zarrpath, mode='a')
    filenames = glob.glob(floatpath)
```
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);