issue_comments


6 rows where author_association = "MEMBER" and issue = 59467251 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
91445951 https://github.com/pydata/xarray/issues/349#issuecomment-91445951 https://api.github.com/repos/pydata/xarray/issues/349 MDEyOklzc3VlQ29tbWVudDkxNDQ1OTUx shoyer 1217238 2015-04-10T06:15:48Z 2015-04-10T06:16:02Z MEMBER

You can view the rendered docs on readthedocs, even for the dev version: http://xray.readthedocs.org/en/latest/io.html#combining-multiple-files

open_mfdataset is not quite ready for prime-time -- it needs better documentation and the library we use to power it (dask) has a few annoying bugs that will hopefully be fixed soon. I can't offer any guarantees, but if you want to give it a try (you'll need to install the development version of dask), let me know how it goes. I'll be releasing a new version of xray once those dask fixes go in, probably within the next week or two.
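For reference, a minimal sketch of the kind of call described above (the file pattern is hypothetical, and keyword arguments have changed across xray/xarray releases, so treat this as illustrative rather than exact):

import xray  # the library later renamed to xarray

# Open every matching NetCDF file as a single Dataset; with dask installed
# the data are loaded lazily, so nothing is read into memory until needed.
ds = xray.open_mfdataset('model_output/*.nc')
print(ds)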

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Query about concat 59467251
81424360 https://github.com/pydata/xarray/issues/349#issuecomment-81424360 https://api.github.com/repos/pydata/xarray/issues/349 MDEyOklzc3VlQ29tbWVudDgxNDI0MzYw shoyer 1217238 2015-03-16T05:20:14Z 2015-03-16T05:20:14Z MEMBER

I literally merged this into the dev version of the docs a few hours ago :).

Lazy loading goodness is next on my to-do list. Hopefully I'll have more to share soon.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Query about concat 59467251
81401283 https://github.com/pydata/xarray/issues/349#issuecomment-81401283 https://api.github.com/repos/pydata/xarray/issues/349 MDEyOklzc3VlQ29tbWVudDgxNDAxMjgz shoyer 1217238 2015-03-16T04:22:22Z 2015-03-16T04:22:22Z MEMBER

@aidanheerdegen Not directly (yet), but there are some straightforward recipes. In fact, this has been a popular question, so I wrote a new doc section on this the other day: http://xray.readthedocs.org/en/latest/io.html#combining-multiple-files

For now, this section is only in the development version of the docs, but everything in it is equally valid for the latest released version.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Query about concat 59467251
76891303 https://github.com/pydata/xarray/issues/349#issuecomment-76891303 https://api.github.com/repos/pydata/xarray/issues/349 MDEyOklzc3VlQ29tbWVudDc2ODkxMzAz shoyer 1217238 2015-03-03T05:53:11Z 2015-03-03T05:53:11Z MEMBER

Slicing the data you need before concatenating is definitely a good strategy here.

Eventually, I'm optimistic that we'll be able to make concat work without loading everything into memory (https://github.com/xray/xray/issues/328).
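A rough sketch of that slice-then-concatenate strategy, keeping one variable at a single vertical level from each file before combining (the file names, the 'temperature' variable, and the 'level' dimension are all hypothetical):

import xray  # the library later renamed to xarray

my_files = ['jan.nc', 'feb.nc', 'mar.nc']  # hypothetical NetCDF paths

# Keep only the variable and vertical level of interest from each file...
pieces = [xray.open_dataset(f)['temperature'].isel(level=0) for f in my_files]

# ...then concatenate the small slices along time, which needs far less
# memory than concatenating the full datasets.
combined = xray.concat(pieces, dim='time')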

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Query about concat 59467251
76883374 https://github.com/pydata/xarray/issues/349#issuecomment-76883374 https://api.github.com/repos/pydata/xarray/issues/349 MDEyOklzc3VlQ29tbWVudDc2ODgzMzc0 shoyer 1217238 2015-03-03T04:02:25Z 2015-03-03T04:02:25Z MEMBER

Oh, OK. In that case, you do want to use concat.

Something like this should work:

ds = xray.concat([xray.open_dataset(f) for f in my_files], dim='time')

xray doesn't use or set unlimited dimensions. (It's pretty irrelevant for us, given that NumPy arrays can be stored in either row-major or column-major order.)

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Query about concat 59467251
76748269 https://github.com/pydata/xarray/issues/349#issuecomment-76748269 https://api.github.com/repos/pydata/xarray/issues/349 MDEyOklzc3VlQ29tbWVudDc2NzQ4MjY5 shoyer 1217238 2015-03-02T16:46:18Z 2015-03-02T16:46:18Z MEMBER

To clarify -- you have different files for different variables? For example, one file has temperature, another has dewpoint, etc? I think you want to use the Dataset.merge method for this.
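A minimal sketch of that merge-based approach, assuming one file per variable defined on shared coordinates (the file names are hypothetical):

import xray  # the library later renamed to xarray

# Each file holds a different variable on the same coordinates.
temperature = xray.open_dataset('temperature.nc')
dewpoint = xray.open_dataset('dewpoint.nc')

# Dataset.merge combines them into a single Dataset, aligning the shared
# coordinates.
combined = temperature.merge(dewpoint)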

On Mon, Mar 2, 2015 at 3:09 AM, JoyMonteiro notifications@github.com wrote:

Hello, I have multiple nc files, and I want to pick one variable from all of them to write to a separate file, and if possible pick one vertical level. The issue is that they have no aggregation dimension, so MFDataset does not work. The idea is to get all the data for one variable at one vertical level into a single file.

When I use the example on the netCDF4-python website, concat merges all variables along all dimensions, making the in-memory size really large. I'm new to xray, and I was hoping something of this sort could be done. In fact, I don't really need to write it to a new file. Even if I can get one "descriptor" (instead of an array of Dataset objects) to access my data, I will be quite happy!

TIA,

Joy

Reply to this email directly or view it on GitHub: https://github.com/xray/xray/issues/349

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Query about concat 59467251


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);