issue_comments
11 rows where issue = 59467251 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
91445951 | https://github.com/pydata/xarray/issues/349#issuecomment-91445951 | https://api.github.com/repos/pydata/xarray/issues/349 | MDEyOklzc3VlQ29tbWVudDkxNDQ1OTUx | shoyer 1217238 | 2015-04-10T06:15:48Z | 2015-04-10T06:16:02Z | MEMBER | You can view the rendered docs on readthedocs, even for the dev version: http://xray.readthedocs.org/en/latest/io.html#combining-multiple-files open_mfdataset is not quite ready for prime-time -- it needs better documentation and the library we use to power it (dask) has a few annoying bugs that will hopefully be fixed soon. I can't offer any guarantees, but if you want to give it a try (you'll need to install the development version of dask), let me know how it goes. I'll be releasing a new version of xray once those dask fixes go in, probably within the next week or two. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Query about concat 59467251 |
91437792 | https://github.com/pydata/xarray/issues/349#issuecomment-91437792 | https://api.github.com/repos/pydata/xarray/issues/349 | MDEyOklzc3VlQ29tbWVudDkxNDM3Nzky | aidanheerdegen 6063709 | 2015-04-10T05:49:17Z | 2015-04-10T05:49:17Z | CONTRIBUTOR | Great to see open_mfdataset implemented! Awesome. The links on the documentation pages seem borked though: https://github.com/xray/xray/blob/0cd100effc3866ed083c366723da0b502afa5a96/doc/io.rst e.g. ":py:func: | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Query about concat 59467251 |
81424360 | https://github.com/pydata/xarray/issues/349#issuecomment-81424360 | https://api.github.com/repos/pydata/xarray/issues/349 | MDEyOklzc3VlQ29tbWVudDgxNDI0MzYw | shoyer 1217238 | 2015-03-16T05:20:14Z | 2015-03-16T05:20:14Z | MEMBER | I literally merged this into the dev version of the docs a few hours ago :). Lazy loading goodness is next on my to-do list. Hopefully I'll have more to share soon. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Query about concat 59467251 |
81422044 | https://github.com/pydata/xarray/issues/349#issuecomment-81422044 | https://api.github.com/repos/pydata/xarray/issues/349 | MDEyOklzc3VlQ29tbWVudDgxNDIyMDQ0 | aidanheerdegen 6063709 | 2015-03-16T05:16:08Z | 2015-03-16T05:16:08Z | CONTRIBUTOR | Sorry, I thought I had read the docs (which are very good BTW). Thanks. I have some large files and only want to pick out a single variable from each, and was hoping for some lazy-loading goodness. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Query about concat 59467251 |
81401283 | https://github.com/pydata/xarray/issues/349#issuecomment-81401283 | https://api.github.com/repos/pydata/xarray/issues/349 | MDEyOklzc3VlQ29tbWVudDgxNDAxMjgz | shoyer 1217238 | 2015-03-16T04:22:22Z | 2015-03-16T04:22:22Z | MEMBER | @aidanheerdegen Not directly (yet), but there are some straightforward recipes. In fact, this has been a popular question, so I wrote a new doc section on this the other day: http://xray.readthedocs.org/en/latest/io.html#combining-multiple-files For now, this is only in the development version docs, but everything is equally valid for the latest released version. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Query about concat 59467251 |
76891303 | https://github.com/pydata/xarray/issues/349#issuecomment-76891303 | https://api.github.com/repos/pydata/xarray/issues/349 | MDEyOklzc3VlQ29tbWVudDc2ODkxMzAz | aidanheerdegen 6063709 | 2015-03-16T04:11:37Z | 2015-03-16T04:11:37Z | CONTRIBUTOR | Is there support for an MFDataset-like multiple file open in xray? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Query about concat 59467251 |
76891303 | https://github.com/pydata/xarray/issues/349#issuecomment-76891303 | https://api.github.com/repos/pydata/xarray/issues/349 | MDEyOklzc3VlQ29tbWVudDc2ODkxMzAz | shoyer 1217238 | 2015-03-03T05:53:11Z | 2015-03-03T05:53:11Z | MEMBER | Slicing the data you need before concatenating is definitely a good strategy here. Eventually, I'm optimistic that we'll be able to make | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Query about concat 59467251 |
76890735 | https://github.com/pydata/xarray/issues/349#issuecomment-76890735 | https://api.github.com/repos/pydata/xarray/issues/349 | MDEyOklzc3VlQ29tbWVudDc2ODkwNzM1 | JoyMonteiro 7300413 | 2015-03-03T05:45:50Z | 2015-03-03T05:45:50Z | NONE | Thanks. But that really kills my machine, even though I have 12 GB of RAM. What I finally ended up doing is slicing the initial dataset created from one nc file to access the level+variable that I wanted. This gives me a DataArray object which I then xray.concat() with similar objects created from other variables. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Query about concat 59467251 |
76883374 | https://github.com/pydata/xarray/issues/349#issuecomment-76883374 | https://api.github.com/repos/pydata/xarray/issues/349 | MDEyOklzc3VlQ29tbWVudDc2ODgzMzc0 | shoyer 1217238 | 2015-03-03T04:02:25Z | 2015-03-03T04:02:25Z | MEMBER | Oh, OK. In that case, you do want to use xray.concat. Something like this should work: xray doesn't use or set unlimited dimensions. (It's pretty irrelevant for us, given that NumPy arrays can be stored in either row-major or column-major order.) | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Query about concat 59467251 |
76883017 | https://github.com/pydata/xarray/issues/349#issuecomment-76883017 | https://api.github.com/repos/pydata/xarray/issues/349 | MDEyOklzc3VlQ29tbWVudDc2ODgzMDE3 | JoyMonteiro 7300413 | 2015-03-03T03:57:55Z | 2015-03-03T03:57:55Z | NONE | No, not really. Each file contains one year of data for four variables, and I have 35 files (1979-...). I tried Dataset.merge as you suggested, but it says there is a conflicting value for the variable time, which I guess is what you would expect. Can xray modify the nc file to make the time dimension unlimited? Then I could simply use something like MFDataset... TIA, Joy | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Query about concat 59467251 |
76748269 | https://github.com/pydata/xarray/issues/349#issuecomment-76748269 | https://api.github.com/repos/pydata/xarray/issues/349 | MDEyOklzc3VlQ29tbWVudDc2NzQ4MjY5 | shoyer 1217238 | 2015-03-02T16:46:18Z | 2015-03-02T16:46:18Z | MEMBER | To clarify -- you have different files for different variables? For example, one file has temperature, another has dewpoint, etc? I think you want to use the Dataset.merge method for this. On Mon, Mar 2, 2015 at 3:09 AM, JoyMonteiro notifications@github.com wrote: | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Query about concat 59467251 |
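
The comments above walk through three ways of combining files in xray: open_mfdataset, concat, and Dataset.merge. The sketches below illustrate each one in turn. First, open_mfdataset for opening many files as a single dataset. This is a minimal sketch using the current xarray package name (the comments predate the xray-to-xarray rename); the file pattern, variable name, and "level" coordinate are hypothetical, not taken from the thread.

```python
# Minimal sketch of the open_mfdataset approach, assuming a set of yearly
# NetCDF files that share coordinates. "era_*.nc", "temperature", and
# "level" are hypothetical names used only for illustration.
import xarray as xr

# Open all matching files as one dataset; dask backs the arrays lazily,
# so nothing is read into memory at this point.
ds = xr.open_mfdataset("era_*.nc", combine="by_coords")

# Select a single variable at a single level, then load only that slice.
subset = ds["temperature"].sel(level=500).load()
```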
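
Second, the slice-then-concatenate workflow that JoyMonteiro describes (the code sample in shoyer's "Something like this should work" comment did not survive the export, so the loop below is an assumption, again with hypothetical file and coordinate names): select the variable and level you need from each yearly file, then join the pieces along time with concat.

```python
# Sketch of the "slice first, then concatenate" workflow from the thread:
# pull one variable at one level out of each yearly file and join the
# pieces along the time dimension. File names and the "level" coordinate
# are hypothetical.
import xarray as xr

pieces = []
for year in range(1979, 2014):
    with xr.open_dataset(f"era_{year}.nc") as ds:
        # Selecting before loading keeps memory use to one slice per file.
        pieces.append(ds["temperature"].sel(level=500).load())

combined = xr.concat(pieces, dim="time")
```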
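
Finally, the Dataset.merge suggestion applies when different files hold different variables defined on the same coordinates, not different years of the same variables. A hedged sketch with hypothetical file names follows; as reported in the thread, merge complains when a shared coordinate such as time has conflicting values across the inputs.

```python
# Sketch of Dataset.merge for files that contain *different* variables
# defined on the same coordinates. File names are hypothetical.
import xarray as xr

temperature = xr.open_dataset("temperature_1979.nc")
dewpoint = xr.open_dataset("dewpoint_1979.nc")

# merge adds dewpoint's variables to the temperature dataset; it raises
# MergeError when shared coordinates (e.g. time) conflict, which matches
# the "conflicting value for variable time" error reported in the thread.
merged = temperature.merge(dewpoint)
```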
    CREATE TABLE [issue_comments] (
        [html_url] TEXT,
        [issue_url] TEXT,
        [id] INTEGER PRIMARY KEY,
        [node_id] TEXT,
        [user] INTEGER REFERENCES [users]([id]),
        [created_at] TEXT,
        [updated_at] TEXT,
        [author_association] TEXT,
        [body] TEXT,
        [reactions] TEXT,
        [performed_via_github_app] TEXT,
        [issue] INTEGER REFERENCES [issues]([id])
    );
    CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
    CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);