issue_comments
16 rows where issue = 253136694 and user = 703554 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
350375750 | https://github.com/pydata/xarray/pull/1528#issuecomment-350375750 | https://api.github.com/repos/pydata/xarray/issues/1528 | MDEyOklzc3VlQ29tbWVudDM1MDM3NTc1MA== | alimanfoo 703554 | 2017-12-08T21:24:45Z | 2017-12-08T22:27:47Z | CONTRIBUTOR | Just to confirm, if writes are aligned with chunk boundaries in the destination array then no locking is required. Also, if you're going to be moving large datasets into cloud storage and doing distributed computing, it may be worth investigating compressors and compressor options, as a good compression ratio can make a big difference when network bandwidth is the limiting factor. I would suggest using the Blosc compressor with cname='zstd'. I would also suggest using shuffle; the Blosc codec in the latest numcodecs has an AUTOSHUFFLE option, so byte shuffle is used for arrays with >1 byte item size and bit shuffle is used for arrays with 1 byte item size. I would also experiment with the compression level (clevel) to see how speed balances against compression ratio. E.g., Blosc(cname='zstd', clevel=5, shuffle=Blosc.AUTOSHUFFLE) may be a good starting point. The default compressor, Blosc(cname='lz4', ...), is optimised for fast local storage, so speed is very good but the compression ratio is moderate; this may not be best for distributed computing. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
WIP: Zarr backend 253136694 | |
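The compressor suggestion above can be tried directly when creating a zarr array. A minimal sketch, assuming zarr and numcodecs are installed; the store path, shape and chunks here are hypothetical:

```python
import numpy as np
import zarr
from numcodecs import Blosc

# Zstandard via Blosc, moderate compression level, automatic shuffle selection.
compressor = Blosc(cname='zstd', clevel=5, shuffle=Blosc.AUTOSHUFFLE)

# Hypothetical store path and chunking; writes aligned with chunk boundaries
# need no locking, as noted above.
z = zarr.open_array('example.zarr', mode='w', shape=(4000, 4000),
                    chunks=(1000, 1000), dtype='f8', compressor=compressor)
z[:] = np.random.random((4000, 4000))
print(z.info)  # reports the compressor and the achieved compression ratio
```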
350379064 | https://github.com/pydata/xarray/pull/1528#issuecomment-350379064 | https://api.github.com/repos/pydata/xarray/issues/1528 | MDEyOklzc3VlQ29tbWVudDM1MDM3OTA2NA== | alimanfoo 703554 | 2017-12-08T21:40:40Z | 2017-12-08T22:27:35Z | CONTRIBUTOR | Some examples of compressor benchmarking here may be useful: http://alimanfoo.github.io/2016/09/21/genotype-compression-benchmark.html. The specific conclusions probably won't apply to your data, but some of the code and ideas may be useful. Since writing that article I added Zstd and LZ4 compressors to numcodecs, so those may also be worth trying in addition to Blosc with various configurations. (Blosc breaks up each chunk into blocks, which enables multithreaded compression/decompression but can also reduce the compression ratio relative to the same compression library used without Blosc. I.e., Blosc(cname='zstd', clevel=1) will behave differently from Zstd(level=1) even though the same underlying compression library (Zstandard) is being used.) |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
WIP: Zarr backend 253136694 | |
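To make the Blosc-vs-standalone point concrete, a minimal sketch (hypothetical data) comparing a Blosc-wrapped Zstandard codec with the standalone Zstd codec from numcodecs:

```python
import numpy as np
from numcodecs import Blosc, Zstd

# Hypothetical, highly compressible data standing in for a real chunk.
buf = np.arange(1000000, dtype='i8').tobytes()

# Same underlying library (Zstandard), different framing: Blosc splits the
# chunk into blocks (and can shuffle), so sizes and speeds will differ.
blosc_zstd = Blosc(cname='zstd', clevel=1)
plain_zstd = Zstd(level=1)

print(len(blosc_zstd.encode(buf)), len(plain_zstd.encode(buf)))
```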
348839453 | https://github.com/pydata/xarray/pull/1528#issuecomment-348839453 | https://api.github.com/repos/pydata/xarray/issues/1528 | MDEyOklzc3VlQ29tbWVudDM0ODgzOTQ1Mw== | alimanfoo 703554 | 2017-12-04T01:40:57Z | 2017-12-04T01:40:57Z | CONTRIBUTOR | I know you're not including string support in this PR, but for interest, there are a couple of changes coming into zarr via https://github.com/alimanfoo/zarr/pull/212 that may be relevant in future. It should now be impossible to generate a segfault via a badly configured object array. It is also now much harder to badly configure an object array: when creating an object array, an object codec should be provided via the object_codec keyword argument. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
WIP: Zarr backend 253136694 | |
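A minimal sketch of what creating an object array looks like after that change, assuming a zarr version that includes the object_codec keyword from the PR above:

```python
import zarr
from numcodecs import MsgPack

# An explicit object codec makes the array configuration safe; without one,
# a badly configured object array is rejected rather than risking a segfault.
z = zarr.empty(10, dtype=object, object_codec=MsgPack())
z[0] = 'variable-length string'
print(z[0])
```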
347385269 | https://github.com/pydata/xarray/pull/1528#issuecomment-347385269 | https://api.github.com/repos/pydata/xarray/issues/1528 | MDEyOklzc3VlQ29tbWVudDM0NzM4NTI2OQ== | alimanfoo 703554 | 2017-11-28T01:36:29Z | 2017-11-28T01:49:24Z | CONTRIBUTOR | FWIW I think the best option at the moment is to make sure you add either a Pickle or MsgPack filter for any zarr array with an object dtype. BTW I was thinking that zarr should automatically add one of these filters any time someone creates an array with an object dtype, to avoid them hitting the pointer issue. If you have any thoughts on the best solution, drop them here: https://github.com/alimanfoo/zarr/issues/208 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
WIP: Zarr backend 253136694 | |
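A minimal sketch of the filter approach recommended above (hypothetical array size), using the MsgPack codec from numcodecs; Pickle would be used the same way:

```python
import zarr
from numcodecs import MsgPack

# Object-dtype array with a MsgPack filter so chunks can be serialised safely.
z = zarr.empty(10, dtype=object, filters=[MsgPack()])
z[0] = 'hello'
z[1] = 'world'
print(z[:2])
```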
347381734 | https://github.com/pydata/xarray/pull/1528#issuecomment-347381734 | https://api.github.com/repos/pydata/xarray/issues/1528 | MDEyOklzc3VlQ29tbWVudDM0NzM4MTczNA== | alimanfoo 703554 | 2017-11-28T01:16:07Z | 2017-11-28T01:16:07Z | CONTRIBUTOR | When still in the original interpreter session, all the objects still exist in memory, so all the pointers stored in the array are still valid. Restart the session and the objects are gone and the pointers are invalid. On Tue, Nov 28, 2017 at 1:14 AM, Alistair Miles alimanfoo@googlemail.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
WIP: Zarr backend 253136694 | |
347381500 | https://github.com/pydata/xarray/pull/1528#issuecomment-347381500 | https://api.github.com/repos/pydata/xarray/issues/1528 | MDEyOklzc3VlQ29tbWVudDM0NzM4MTUwMA== | alimanfoo 703554 | 2017-11-28T01:14:42Z | 2017-11-28T01:14:42Z | CONTRIBUTOR | Try exiting and restarting the interpreter, then running: zgs = zarr.open_group(store='zarr_directory') zgs.x[:] On Tue, Nov 28, 2017 at 1:10 AM, Ryan Abernathey notifications@github.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
WIP: Zarr backend 253136694 | |
347363503 | https://github.com/pydata/xarray/pull/1528#issuecomment-347363503 | https://api.github.com/repos/pydata/xarray/issues/1528 | MDEyOklzc3VlQ29tbWVudDM0NzM2MzUwMw== | alimanfoo 703554 | 2017-11-27T23:27:41Z | 2017-11-27T23:27:41Z | CONTRIBUTOR | For variable-length strings (or any array with an object dtype), zarr needs a filter that can encode and pack the strings into a single buffer, except in the special case where the data are being stored in-memory (as in your first example). The filter has to be specified manually; some examples here: http://zarr.readthedocs.io/en/master/tutorial.html#string-arrays. There are two codecs currently in numcodecs that can do this: one is Pickle, the other is MsgPack. I haven't done any benchmarking of data size or encoding speed, but MsgPack may be preferable because it's more portable. There was some discussion a while back about creating a codec that handles variable-length strings by encoding via UTF8 then concatenating the encoded bytes and lengths or offsets, IIRC similar to Arrow, and maybe even creating a special "text" dtype that inserts this filter automatically so you don't have to add it manually. But there hasn't been a strong motivation so far. On Mon, Nov 27, 2017 at 10:32 PM, Stephan Hoyer notifications@github.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
WIP: Zarr backend 253136694 | |
345619509 | https://github.com/pydata/xarray/pull/1528#issuecomment-345619509 | https://api.github.com/repos/pydata/xarray/issues/1528 | MDEyOklzc3VlQ29tbWVudDM0NTYxOTUwOQ== | alimanfoo 703554 | 2017-11-20T08:07:44Z | 2017-11-20T08:07:44Z | CONTRIBUTOR | Fantastic! On Monday, November 20, 2017, Matthew Rocklin notifications@github.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
WIP: Zarr backend 253136694 | |
345080945 | https://github.com/pydata/xarray/pull/1528#issuecomment-345080945 | https://api.github.com/repos/pydata/xarray/issues/1528 | MDEyOklzc3VlQ29tbWVudDM0NTA4MDk0NQ== | alimanfoo 703554 | 2017-11-16T22:18:04Z | 2017-11-16T22:18:04Z | CONTRIBUTOR | Re different zarr storage backends, the main options are a plain dict, DirectoryStore and ZipStore, and there's a new DBMStore class just merged which enables storage in any DBM-style database (e.g., Berkeley DB). ZipStore has some constraints because of how zip files work: you can't really replace an entry in a zip file, which means anything that writes the same array chunk more than once will generate warnings. Dask's S3Map should also work; I haven't tried it, and it's obviously not ideal for unit tests, but I'd be interested if you get any experience with it. Re different combinations of zarr and dask chunks, it can be thread safe even if chunks are not aligned; you just need to pass a synchronizer when instantiating the array or group. Zarr has a ThreadSynchronizer class which can be used for thread-based parallelism. If a synchronizer is provided, it is used to lock each chunk individually during write operations. More info here. Re fill values, zarr has a native concept of a fill value for each array, with the fill value stored as part of the array metadata. Array metadata are stored as JSON, and I recently merged a fix so that a bytes fill value can be used (via base64 encoding). I believe the netcdf way is to store the fill value separately as the value of a "_FillValue" attribute? You could do this with zarr, but user attributes are also JSON, so you would need to do your own encoding/decoding. But if possible I'd suggest using the native zarr fill_value support, as it handles bytes fill value encoding and also checks to ensure fill values are valid wrt the array dtype. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
WIP: Zarr backend 253136694 | |
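A minimal sketch of two of the points above, a ThreadSynchronizer for thread-safe writes and a native fill_value; the store path, shapes and fill value are hypothetical:

```python
import zarr
from zarr import ThreadSynchronizer

store = zarr.DirectoryStore('example_sync.zarr')
root = zarr.group(store=store, overwrite=True, synchronizer=ThreadSynchronizer())

# fill_value is stored in the array metadata; unwritten chunks read back as -9999.
temp = root.create_dataset('temperature', shape=(365, 1000), chunks=(30, 250),
                           dtype='f4', fill_value=-9999)
print(temp[0, 0])  # -9999.0 until data are written
```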
339897936 | https://github.com/pydata/xarray/pull/1528#issuecomment-339897936 | https://api.github.com/repos/pydata/xarray/issues/1528 | MDEyOklzc3VlQ29tbWVudDMzOTg5NzkzNg== | alimanfoo 703554 | 2017-10-27T07:42:34Z | 2017-10-27T07:42:34Z | CONTRIBUTOR | Suggest testing against GitHub master; there are a few other issues I'd like to work through before the next release. On Thu, 26 Oct 2017 at 23:07, Ryan Abernathey notifications@github.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
WIP: Zarr backend 253136694 | |
339800443 | https://github.com/pydata/xarray/pull/1528#issuecomment-339800443 | https://api.github.com/repos/pydata/xarray/issues/1528 | MDEyOklzc3VlQ29tbWVudDMzOTgwMDQ0Mw== | alimanfoo 703554 | 2017-10-26T21:04:17Z | 2017-10-26T21:04:17Z | CONTRIBUTOR | Just to say, support for 0d arrays, and for arrays with one or more zero-length dimensions, is in zarr master. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
WIP: Zarr backend 253136694 | |
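For illustration, a minimal sketch assuming a zarr version that includes this change:

```python
import zarr

# A 0d (scalar) array and an array with a zero-length dimension.
scalar = zarr.zeros(shape=(), dtype='f8')
empty_dim = zarr.zeros(shape=(0, 3), dtype='i4')
print(scalar[...], empty_dim.shape)
```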
335186616 | https://github.com/pydata/xarray/pull/1528#issuecomment-335186616 | https://api.github.com/repos/pydata/xarray/issues/1528 | MDEyOklzc3VlQ29tbWVudDMzNTE4NjYxNg== | alimanfoo 703554 | 2017-10-09T15:07:29Z | 2017-10-09T17:23:21Z | CONTRIBUTOR | I'm on paternity leave for the next 2 weeks, then will be catching up for a couple of weeks I expect. May be able to merge straightforward PRs but will have limited bandwidth. |
{ "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 3, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
WIP: Zarr backend 253136694 | |
335030993 | https://github.com/pydata/xarray/pull/1528#issuecomment-335030993 | https://api.github.com/repos/pydata/xarray/issues/1528 | MDEyOklzc3VlQ29tbWVudDMzNTAzMDk5Mw== | alimanfoo 703554 | 2017-10-08T19:17:27Z | 2017-10-08T23:37:47Z | CONTRIBUTOR | FWIW I think some JSON encoders for attributes would ultimately be a useful addition to zarr, but I won't be able to put any effort into zarr in the next month, so workarounds in xarray sounds like a good idea for now. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
WIP: Zarr backend 253136694 | |
325813339 | https://github.com/pydata/xarray/pull/1528#issuecomment-325813339 | https://api.github.com/repos/pydata/xarray/issues/1528 | MDEyOklzc3VlQ29tbWVudDMyNTgxMzMzOQ== | alimanfoo 703554 | 2017-08-29T21:43:48Z | 2017-08-29T21:43:48Z | CONTRIBUTOR | On Tuesday, August 29, 2017, Ryan Abernathey notifications@github.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
WIP: Zarr backend 253136694 | |
325729013 | https://github.com/pydata/xarray/pull/1528#issuecomment-325729013 | https://api.github.com/repos/pydata/xarray/issues/1528 | MDEyOklzc3VlQ29tbWVudDMyNTcyOTAxMw== | alimanfoo 703554 | 2017-08-29T17:02:41Z | 2017-08-29T17:02:41Z | CONTRIBUTOR | FWIW all filter (codec) classes have been migrated from zarr to a separate package called numcodecs and will be imported from there in the next (2.2) zarr release. Here is FixedScaleOffset. The implementation is basic numpy; there is probably some room for optimization. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
WIP: Zarr backend 253136694 | |
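A minimal sketch of importing the codec from numcodecs and using it as a zarr filter; the offset, scale and array shape are hypothetical:

```python
import zarr
from numcodecs import FixedScaleOffset

# Quantise float values near 1000 to 2 decimal places, stored as int16.
codec = FixedScaleOffset(offset=1000, scale=100, dtype='f8', astype='i2')
z = zarr.zeros((100,), chunks=(10,), dtype='f8', filters=[codec])
```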
325727280 | https://github.com/pydata/xarray/pull/1528#issuecomment-325727280 | https://api.github.com/repos/pydata/xarray/issues/1528 | MDEyOklzc3VlQ29tbWVudDMyNTcyNzI4MA== | alimanfoo 703554 | 2017-08-29T16:56:55Z | 2017-08-29T16:56:55Z | CONTRIBUTOR | Following this with interest. Regarding autoclose, just to confirm that zarr doesn't really have any notion of whether something is open or closed. When using the DirectoryStore storage class (the most common use case, I imagine), all files are automatically closed; nothing is kept open. There are some storage classes (e.g., ZipStore) that do require an explicit close call to finalise the file on disk if you have been writing data, but I think you can ignore this in xarray and leave it up to the user to manage themselves. Out of interest, @shoyer do you still think there would be value in writing a wrapper for zarr analogous to h5netcdf? Or does this PR provide all the necessary functionality? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
WIP: Zarr backend 253136694 |
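A minimal sketch of the open/close distinction described above; the file names are hypothetical:

```python
import zarr

# DirectoryStore keeps nothing open, so no close is needed. ZipStore needs an
# explicit close (or a context manager) after writing, to finalise the zip file.
store = zarr.ZipStore('example.zip', mode='w')
root = zarr.group(store=store)
root.zeros('x', shape=(100,), chunks=(10,))
store.close()
```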
CREATE TABLE [issue_comments] ( [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY, [node_id] TEXT, [user] INTEGER REFERENCES [users]([id]), [created_at] TEXT, [updated_at] TEXT, [author_association] TEXT, [body] TEXT, [reactions] TEXT, [performed_via_github_app] TEXT, [issue] INTEGER REFERENCES [issues]([id]) ); CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]); CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);