issue_comments


26 rows where author_association = "CONTRIBUTOR" and user = 1191149 sorted by updated_at descending



Columns: id · html_url · issue_url · node_id · user · created_at · updated_at · author_association · body · reactions · performed_via_github_app · issue
999838503 https://github.com/pydata/xarray/issues/6103#issuecomment-999838503 https://api.github.com/repos/pydata/xarray/issues/6103 IC_kwDOAMm_X847mFMn barronh 1191149 2021-12-22T20:17:38Z 2021-12-22T20:17:38Z CONTRIBUTOR

I wish delete was an option... I see now that the artifact starts in pandas.to_xarray.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  reindex multidimensional fill_value skipping 1087160635
945859209 https://github.com/pydata/xarray/pull/5875#issuecomment-945859209 https://api.github.com/repos/pydata/xarray/issues/5875 IC_kwDOAMm_X844YKqJ barronh 1191149 2021-10-18T14:53:45Z 2021-10-18T14:53:45Z CONTRIBUTOR

This makes sense to me. These attributes more fully describe the independent variable. They were added due to a lack of clarity and to allow better unit handling on the independent variable. Thank you for updating the test.

  fix test with pseudonetcdf 3.2 1029142676
578259713 https://github.com/pydata/xarray/issues/3711#issuecomment-578259713 https://api.github.com/repos/pydata/xarray/issues/3711 MDEyOklzc3VlQ29tbWVudDU3ODI1OTcxMw== barronh 1191149 2020-01-24T19:06:34Z 2020-01-24T19:06:34Z CONTRIBUTOR

Let me know if you need my input, but I think the testcase solution is more general than PseudoNetCDF.

  PseudoNetCDF tests failing randomly 552896124
576848923 https://github.com/pydata/xarray/issues/3711#issuecomment-576848923 https://api.github.com/repos/pydata/xarray/issues/3711 MDEyOklzc3VlQ29tbWVudDU3Njg0ODkyMw== barronh 1191149 2020-01-21T19:46:14Z 2020-01-21T19:46:14Z CONTRIBUTOR

I want to make sure I understand the genesis of the error. My guess is that if you added a print(k) statement, you'd see that this is failing on the VGLVLS attribute. Is that right?

If so, I'm guessing the error is that VGLVLS is not a scalar. NetCDF and uamiv files may have attributes with array values. This is the case for VGLVLS in the IOAPI format, which uamiv is made to emulate.

As a result, `compatible` is an array, not a scalar. A simple solution would be to wrap `compatible` in a call to `all()`.

Can you confirm which attribute this is failing on and what the value of compatible is when it fails?
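The scalar-vs-array comparison described above can be sketched in plain numpy (the VGLVLS values here are made up for illustration):

```python
import numpy as np

# A scalar attribute compares to a single bool, but an array-valued
# attribute like VGLVLS compares element-wise, yielding a boolean array.
vglvls = np.array([1.0, 0.9965, 0.993])
compatible = vglvls == np.array([1.0, 0.9965, 0.993])

# `if compatible:` would raise "ValueError: The truth value of an array
# with more than one element is ambiguous". Wrapping it in all() reduces
# the element-wise result to a single scalar:
print(np.all(compatible))  # True
```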

  PseudoNetCDF tests failing randomly 552896124
549407230 https://github.com/pydata/xarray/issues/3434#issuecomment-549407230 https://api.github.com/repos/pydata/xarray/issues/3434 MDEyOklzc3VlQ29tbWVudDU0OTQwNzIzMA== barronh 1191149 2019-11-04T15:30:08Z 2019-11-04T15:30:08Z CONTRIBUTOR

Got it. This calls for an update to the test that is backward and forward compatible. I'll get something checked in.

  v0.14.1 Release 510915725
548458836 https://github.com/pydata/xarray/issues/3434#issuecomment-548458836 https://api.github.com/repos/pydata/xarray/issues/3434 MDEyOklzc3VlQ29tbWVudDU0ODQ1ODgzNg== barronh 1191149 2019-10-31T16:29:32Z 2019-10-31T16:29:32Z CONTRIBUTOR

I believe there are two issues here.

First, the 3.1 release was not Python 2.7 compliant: it mixed named keyword arguments with **kwds. That could be fixed easily.

Second, the camx reader changed subclasses, which affects the xarray test for CF variables. I could update that test, which would fix the xarray test case.
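A minimal sketch of the kind of Python 2.7 incompatibility described above (the signature is hypothetical, not the actual PseudoNetCDF code): named keyword arguments after *args are keyword-only arguments, which are Python 3 syntax and a SyntaxError under 2.7.

```python
# Compiling this source succeeds on Python 3 but raises SyntaxError on
# Python 2.7, because `mode='r'` appears after `*args`.
src = "def reader(path, *args, mode='r', **kwds):\n    return mode\n"

code = compile(src, "<example>", "exec")  # fine on Python 3
ns = {}
exec(code, ns)
print(ns["reader"]("file.nc"))           # 'r' (default keyword-only value)
```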

  v0.14.1 Release 510915725
544316240 https://github.com/pydata/xarray/pull/3420#issuecomment-544316240 https://api.github.com/repos/pydata/xarray/issues/3420 MDEyOklzc3VlQ29tbWVudDU0NDMxNjI0MA== barronh 1191149 2019-10-21T01:32:54Z 2019-10-21T01:32:54Z CONTRIBUTOR

Thank you. I need to update the xarray tests.

  Restore crashing CI tests on pseudonetcdf-3.1 509655174
544305635 https://github.com/pydata/xarray/pull/3420#issuecomment-544305635 https://api.github.com/repos/pydata/xarray/issues/3420 MDEyOklzc3VlQ29tbWVudDU0NDMwNTYzNQ== barronh 1191149 2019-10-20T23:53:52Z 2019-10-20T23:53:52Z CONTRIBUTOR

I see you already moved back to 3.0.2. Can you point me to a log where the problem exists, to get me started?

  Restore crashing CI tests on pseudonetcdf-3.1 509655174
544305036 https://github.com/pydata/xarray/pull/3420#issuecomment-544305036 https://api.github.com/repos/pydata/xarray/issues/3420 MDEyOklzc3VlQ29tbWVudDU0NDMwNTAzNg== barronh 1191149 2019-10-20T23:45:06Z 2019-10-20T23:45:06Z CONTRIBUTOR

As a temporary fix, you can change xarray to require 3.0.2, which did not have this issue. I'll look into it.

  Restore crashing CI tests on pseudonetcdf-3.1 509655174
393854456 https://github.com/pydata/xarray/pull/1905#issuecomment-393854456 https://api.github.com/repos/pydata/xarray/issues/1905 MDEyOklzc3VlQ29tbWVudDM5Mzg1NDQ1Ng== barronh 1191149 2018-06-01T11:35:41Z 2018-06-01T11:35:41Z CONTRIBUTOR

@shoyer - Thanks for all the help and guidance. I learned a lot. In the development version of pnc, it is now flake8 compliant. I now use pytest to organize my unittests. I have TravisCI implemented. I have a conda-forge recipe.

I’m grateful for the interactions.

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Added PNC backend to xarray  296561316
390963574 https://github.com/pydata/xarray/pull/1905#issuecomment-390963574 https://api.github.com/repos/pydata/xarray/issues/1905 MDEyOklzc3VlQ29tbWVudDM5MDk2MzU3NA== barronh 1191149 2018-05-22T11:56:37Z 2018-05-22T11:56:37Z CONTRIBUTOR

The two failures were not pnc:

```
xarray/tests/test_backends.py::TestRasterio::test_serialization FAILED [ 26%]
...
xarray/tests/test_backends.py::TestDataArrayToNetCDF::test_open_dataarray_options FAILED [ 26%]
```

Neither seems related to PNC at all...

I'm pulling the master and remerging to see if that helps...

  Added PNC backend to xarray  296561316
386830815 https://github.com/pydata/xarray/pull/1905#issuecomment-386830815 https://api.github.com/repos/pydata/xarray/issues/1905 MDEyOklzc3VlQ29tbWVudDM4NjgzMDgxNQ== barronh 1191149 2018-05-05T19:56:44Z 2018-05-05T19:56:44Z CONTRIBUTOR

Depends on the format and the expected size of data in that format. Some formats support lazy loading; others load immediately into memory. Still others use memmaps, so virtual memory is used immediately.

  Added PNC backend to xarray  296561316
385182103 https://github.com/pydata/xarray/pull/1905#issuecomment-385182103 https://api.github.com/repos/pydata/xarray/issues/1905 MDEyOklzc3VlQ29tbWVudDM4NTE4MjEwMw== barronh 1191149 2018-04-28T14:58:47Z 2018-04-28T14:58:47Z CONTRIBUTOR

I trust that none of the other formats PNC supports use the _FillValue, add_offset, or scale_factor attributes?

The challenge here is that when a specific format uses similar functionality, it has already been applied, and it may or may not use the CF keywords. So it should be disabled by default.

If it is possible to detect the inferred file format from PNC, then another option (other than requiring the explicit format argument) would be to load the data and raise an error if the detected file format is netCDF.

I was worried this would be hard to implement, but it was actually easier. So that is what I did.
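The default-off behavior described above can be sketched as a simple guard (the function name and format strings are illustrative, not the actual backend code):

```python
# Hypothetical sketch: apply CF mask/scale decoding only when the detected
# format is actually netCDF. For other formats (e.g. uamiv), any equivalent
# scaling has already been applied by the format-specific reader.
def should_decode_cf(detected_format: str) -> bool:
    return detected_format.lower() in {"netcdf", "netcdf3", "netcdf4"}

print(should_decode_cf("uamiv"))   # False: leave uamiv values untouched
print(should_decode_cf("netcdf"))  # True: safe to apply CF decoding
```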

  Added PNC backend to xarray  296561316
383302806 https://github.com/pydata/xarray/pull/1905#issuecomment-383302806 https://api.github.com/repos/pydata/xarray/issues/1905 MDEyOklzc3VlQ29tbWVudDM4MzMwMjgwNg== barronh 1191149 2018-04-21T14:52:10Z 2018-04-21T14:52:10Z CONTRIBUTOR

I tried disabling mask and scale, but many other tests fail. At its root this is because I am implicitly supporting netCDF4 and other formats.

I see two ways to solve this. Right now, it is only important to add non-netCDF support to xarray via PseudoNetCDF. I am currently allowing dynamic identification of the file format, which implicitly supports netCDF. I could disable implicit format support and require the format keyword. In that case, the PseudoNetCDF tests should no longer use CFEncodedDataTest; instead, I can simply test round-tripping with the other formats (uamiv and possibly one or two others).

What do you think?

  Added PNC backend to xarray  296561316
382703923 https://github.com/pydata/xarray/pull/1905#issuecomment-382703923 https://api.github.com/repos/pydata/xarray/issues/1905 MDEyOklzc3VlQ29tbWVudDM4MjcwMzkyMw== barronh 1191149 2018-04-19T11:40:53Z 2018-04-19T11:40:53Z CONTRIBUTOR

Anything else needed?

  Added PNC backend to xarray  296561316
378247961 https://github.com/pydata/xarray/pull/1905#issuecomment-378247961 https://api.github.com/repos/pydata/xarray/issues/1905 MDEyOklzc3VlQ29tbWVudDM3ODI0Nzk2MQ== barronh 1191149 2018-04-03T13:22:23Z 2018-04-03T13:22:23Z CONTRIBUTOR

The conda recipe was approved and merged. Feedstock should be ready soon.

https://github.com/conda-forge/staged-recipes/pull/5449

  Added PNC backend to xarray  296561316
378093157 https://github.com/pydata/xarray/pull/1905#issuecomment-378093157 https://api.github.com/repos/pydata/xarray/issues/1905 MDEyOklzc3VlQ29tbWVudDM3ODA5MzE1Nw== barronh 1191149 2018-04-03T00:51:19Z 2018-04-03T00:51:19Z CONTRIBUTOR

I've added the latest version to pip. Still waiting on the recipe in conda-forge.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Added PNC backend to xarray  296561316
377100408 https://github.com/pydata/xarray/pull/1905#issuecomment-377100408 https://api.github.com/repos/pydata/xarray/issues/1905 MDEyOklzc3VlQ29tbWVudDM3NzEwMDQwOA== barronh 1191149 2018-03-29T02:28:29Z 2018-03-29T02:28:29Z CONTRIBUTOR

@fujiisoup - I have a recipe in conda-forge that is passing all tests. Do you know how long it typically takes to get added or what I need to do to get it added?

https://github.com/conda-forge/staged-recipes/pull/5449

  Added PNC backend to xarray  296561316
376367463 https://github.com/pydata/xarray/pull/1905#issuecomment-376367463 https://api.github.com/repos/pydata/xarray/issues/1905 MDEyOklzc3VlQ29tbWVudDM3NjM2NzQ2Mw== barronh 1191149 2018-03-27T01:41:25Z 2018-03-27T01:41:25Z CONTRIBUTOR

Help me understand, there are now failures in scipy that seem unrelated to my changes. In fact, I had to switch the writer in my tests to netcdf4 to bypass the scipy problem. Is this going to hold up my branch?

  Added PNC backend to xarray  296561316
376364948 https://github.com/pydata/xarray/pull/1905#issuecomment-376364948 https://api.github.com/repos/pydata/xarray/issues/1905 MDEyOklzc3VlQ29tbWVudDM3NjM2NDk0OA== barronh 1191149 2018-03-27T01:25:55Z 2018-03-27T01:25:55Z CONTRIBUTOR

I have added a recipe to conda-forge and it is passing all tests. I don't know when it will be in feedstock.

  Added PNC backend to xarray  296561316
370996905 https://github.com/pydata/xarray/pull/1905#issuecomment-370996905 https://api.github.com/repos/pydata/xarray/issues/1905 MDEyOklzc3VlQ29tbWVudDM3MDk5NjkwNQ== barronh 1191149 2018-03-07T02:04:46Z 2018-03-07T02:04:46Z CONTRIBUTOR

I only meant to comment, not close and comment.

  Added PNC backend to xarray  296561316
370950246 https://github.com/pydata/xarray/pull/1905#issuecomment-370950246 https://api.github.com/repos/pydata/xarray/issues/1905 MDEyOklzc3VlQ29tbWVudDM3MDk1MDI0Ng== barronh 1191149 2018-03-06T22:21:52Z 2018-03-06T22:21:52Z CONTRIBUTOR

@bbakernoaa - It may be a bit. I'm

  Added PNC backend to xarray  296561316
366533315 https://github.com/pydata/xarray/pull/1905#issuecomment-366533315 https://api.github.com/repos/pydata/xarray/issues/1905 MDEyOklzc3VlQ29tbWVudDM2NjUzMzMxNQ== barronh 1191149 2018-02-18T17:46:48Z 2018-02-18T17:46:48Z CONTRIBUTOR

> Indexing support. Do you only support basic indexing like `x[0, :5]`, or is indexing with integer arrays also supported?

My variable objects present a pure numpy array, so they follow numpy indexing precisely, with one exception: if the files are actually netCDF4, they have the same limitations as the netCDF4.Variable object.
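The basic vs. integer-array distinction in the question above can be illustrated with plain numpy:

```python
import numpy as np

x = np.arange(24).reshape(4, 6)

basic = x[0, :5]       # basic indexing: integers and slices, returns a view
fancy = x[[0, 2], :]   # integer-array ("fancy") indexing, returns a copy

print(basic.shape)  # (5,)
print(fancy.shape)  # (2, 6)
```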

> Serialization/thread-safety. Can we simultaneously read a file with another process or thread using dask?

I have not tested separate processes. In many cases, I use numpy memmap. So that will be the limitation.

> API consistency for scalar arrays. Do these require some sort of special API compared to non-scalar arrays?

Same as numpy, but also has support for the netCDF4 style.

> Data types support. Are strings and datetimes converted properly into the format xarray expects?

I use relative dates following netCDF time conventions. Within my software there are special functions for translation, but I have seen xarray handle this separately.
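The relative-date convention mentioned above (e.g. a "hours since ..." units string) can be sketched with the standard library; real code would typically use `cftime`/`netCDF4.num2date`, and this units string is illustrative:

```python
from datetime import datetime, timedelta

# netCDF-style relative time: values are offsets from a base date
# encoded in the units attribute.
units = "hours since 2018-03-01 00:00:00"
base = datetime.strptime(units.split("since")[1].strip(),
                         "%Y-%m-%d %H:%M:%S")
values = [0, 1, 2]  # hypothetical time values from a file

times = [base + timedelta(hours=float(h)) for h in values]
print(times[1])  # 2018-03-01 01:00:00
```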

> Continuous integration testing in PseudoNetCDF, at a minimum on Travis CI, but AppVeyor would be great too.

I added Travis CI, but haven't looked at AppVeyor yet.

> A conda-forge package to facilitate easy installs of PseudoNetCDF

I added a ci/requirements-py36-netcdf4-dev.yml as a part of my TravisCI integration. I am also working on a recipe (like https://github.com/conda-forge/xarray-feedstock).

  Added PNC backend to xarray  296561316
365139584 https://github.com/pydata/xarray/pull/1905#issuecomment-365139584 https://api.github.com/repos/pydata/xarray/issues/1905 MDEyOklzc3VlQ29tbWVudDM2NTEzOTU4NA== barronh 1191149 2018-02-13T03:24:49Z 2018-02-13T03:24:49Z CONTRIBUTOR

First, I too quickly tried to fast forward and clearly some test was failing. I have updated my code to pass all tests with py.test.

Before closing this request and opening another, I want to make sure I am clear on the extent of what I should add. I can create binary data from within Python and then read it, but all those tests are in my software package; duplicating them seems like a bad idea.

I have added a NetCDF3Only testcase to test_backends.py and it passes. That doesn't stress the multi-format capabilities of pnc, but as I've said all the numerical assertions for the other formats are in my system's test cases. Is the NetCDF3Only test sufficient in this case?

Further, below are some simple applications that download my test data for CAMx and GEOS-Chem and plot it.

Thanks for the input.

```
import xarray as xr
from urllib.request import urlretrieve

# CAMx test
urlretrieve('https://github.com/barronh/pseudonetcdf/blob/master/src/PseudoNetCDF/testcase/camxfiles/uamiv/test.uamiv?raw=true', 'test.uamiv')
xf = xr.open_dataset('test.uamiv', engine='pnc')
pm = xf.O3.isel(TSTEP=0, LAY=0).plot()
pm.axes.figure.savefig('test_camx.png')

pm.axes.figure.clf()

# GEOS-Chem test
urlretrieve("https://github.com/barronh/pseudonetcdf/blob/master/src/PseudoNetCDF/testcase/geoschemfiles/test.bpch?raw=true", "test.bpch")
urlretrieve("https://github.com/barronh/pseudonetcdf/blob/master/src/PseudoNetCDF/testcase/geoschemfiles/tracerinfo.dat?raw=true", "tracerinfo.dat")
urlretrieve("https://github.com/barronh/pseudonetcdf/blob/master/src/PseudoNetCDF/testcase/geoschemfiles/diaginfo.dat?raw=true", "diaginfo.dat")
xf = xr.open_dataset('test.bpch', engine='pnc')
xa = getattr(xf, 'IJ-AVG-$_Ox')
xa2d = xa.isel(time=0, layer3=0)
pm = xa2d.plot()
pm.axes.figure.savefig('test_bpch.png')
```

  Added PNC backend to xarray  296561316
365099755 https://github.com/pydata/xarray/pull/1905#issuecomment-365099755 https://api.github.com/repos/pydata/xarray/issues/1905 MDEyOklzc3VlQ29tbWVudDM2NTA5OTc1NQ== barronh 1191149 2018-02-12T23:32:42Z 2018-02-12T23:32:42Z CONTRIBUTOR

p.s., I just noticed my edits to whats-new.rst were not pushed until after my pull request.

  Added PNC backend to xarray  296561316
304091977 https://github.com/pydata/xarray/issues/470#issuecomment-304091977 https://api.github.com/repos/pydata/xarray/issues/470 MDEyOklzc3VlQ29tbWVudDMwNDA5MTk3Nw== barronh 1191149 2017-05-25T18:49:29Z 2017-05-25T18:49:29Z CONTRIBUTOR

+1

Especially useful when using unstructured spatial datasets like observation stations.

```
import numpy as np
import xarray as xr

nstations = 9
data = np.random.random(size=nstations)
longitude = np.random.random(size=nstations) + -90
latitude = np.random.random(size=nstations) + 40
da = xr.DataArray(
    np.random.random(nstations),
    dims=['station'],
    coords=dict(longitude=('station', longitude),
                latitude=('station', latitude)),
)
da.scatter(x='longitude', y='latitude')
```

  add scatter plot method to dataset 94787306

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · About: xarray-datasette