issues
21 rows where user = 12229877 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
233350060 | MDU6SXNzdWUyMzMzNTAwNjA= | 1440 | If a NetCDF file is chunked on disk, open it with compatible dask chunks | Zac-HD 12229877 | closed | 0 | 26 | 2017-06-03T06:24:38Z | 2023-09-12T14:55:37Z | 2023-09-11T23:05:50Z | CONTRIBUTOR | NetCDF4 data can be saved as chunks on disk, which has several benefits, including efficient reads when using a compatible chunk shape. This is particularly important for files with chunk-based compression (i.e. all nc4 files with compression) or on HPC and parallel file systems (e.g. …), where IO is typically dominated by the number of reads and chunks-from-disk are often cached. Caches are also common in network data backends such as Thredds OPeNDAP, in which case using disk-compatible chunks will reduce cache pressure as well as latency. Xarray can use chunks, of course, but as of v0.9 the chunk size has to be specified manually, and the easiest way to discover it is to open the file and look at the … If Dask is available and … (a sketch of the requested behaviour follows this row) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1440/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
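Issue #1440 above asks xarray to pick dask chunks that match the on-disk layout automatically. A minimal sketch of that behaviour done by hand with the netCDF4 library; the file name `example.nc` and variable name `temp` are illustrative, not from the issue:

```python
import netCDF4
import xarray as xr

# Inspect the on-disk NetCDF4 chunking, then open the file with matching
# dask chunks instead of guessing a chunk size manually.
with netCDF4.Dataset("example.nc") as nc:
    var = nc.variables["temp"]
    disk_chunks = var.chunking()  # either "contiguous" or a list of ints
    chunks = (
        None if disk_chunks == "contiguous"
        else dict(zip(var.dimensions, disk_chunks))
    )

ds = xr.open_dataset("example.nc", chunks=chunks)
```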
290244473 | MDU6SXNzdWUyOTAyNDQ0NzM= | 1846 | Add a suite of property-based tests with Hypothesis | Zac-HD 12229877 | open | 0 | 3 | 2018-01-21T03:46:42Z | 2022-08-12T17:47:13Z | CONTRIBUTOR | Hypothesis is a library for writing property-based tests in Python: you describe input data and make assertions that should be true for all examples, then Hypothesis tries to find a counterexample. This came up in #1840, because … We could add an (initially small) suite of property-based tests, to complement the traditional example-based tests Xarray is already using. Keeping them in independent files will ensure that they run in CI while the dependency on Hypothesis remains optional for local development. I have moved jobs and don't have time to do this myself, but I'd be very happy to help anyone who does 😄 (a minimal example of such a test follows this row) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
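For concreteness, here is the shape of a test that #1846 proposes; the round-trip property and strategies are illustrative choices, not from the issue:

```python
import hypothesis.extra.numpy as npst
from hypothesis import given
import xarray as xr

# One property that should hold for every input: wrapping an array in a
# DataArray, converting to a Dataset, and indexing back out is lossless.
@given(npst.arrays(dtype=npst.floating_dtypes(), shape=npst.array_shapes()))
def test_dataarray_roundtrips_through_dataset(data):
    arr = xr.DataArray(data, name="v")
    assert arr.identical(arr.to_dataset()["v"])
```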
411365882 | MDU6SXNzdWU0MTEzNjU4ODI= | 2773 | Feature request: show units in dataset overview | Zac-HD 12229877 | closed | 0 | 5 | 2019-02-18T08:57:44Z | 2021-05-14T21:16:04Z | 2021-05-14T21:16:04Z | CONTRIBUTOR | Here's a hypothetical dataset:
It would be really nice if the units of the coordinates and of the data variables were shown in the … (a hypothetical illustration follows this row)
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2773/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
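A hypothetical illustration of where the units requested in #2773 would come from: by convention they live in each variable's `attrs`, so a units-aware overview only needs to read them. `describe_units` is an invented helper, not xarray's API:

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"rainfall": (("x",), np.zeros(3), {"units": "mm"})},
    coords={"x": ("x", np.arange(3), {"units": "metres"})},
)

# Read the conventional "units" attribute from every variable in the dataset.
def describe_units(ds):
    for name, var in ds.variables.items():
        print(f"{name}, in {var.attrs.get('units', 'unknown')}")

describe_units(ds)  # rainfall, in mm / x, in metres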
256557897 | MDU6SXNzdWUyNTY1NTc4OTc= | 1566 | When reporting errors, note what value was invalid and why | Zac-HD 12229877 | closed | 0 | 3 | 2017-09-11T01:25:44Z | 2019-08-19T06:50:15Z | 2019-08-19T06:50:15Z | CONTRIBUTOR | I've regularly had to debug problems with unusual or slightly broken data (or my misunderstanding of various layers of the software stack), and I can't be the only one. For example:
And of course there are many more examples. This manifesto has some good advice, but in essence:
This is quite an open-ended issue; as well as code changes, it probably requires some process changes to ensure that new errors are equally helpful. Ultimately, the goal is for errors to become a positive aid to learning rather than a frustrating barrier. (a sketch of the suggested style follows this row) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1566/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
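In the spirit of #1566, a sketch of the style of error message the issue advocates: name the invalid value and say what would have been valid. The validator is hypothetical, not from xarray:

```python
# A hypothetical lookup whose failure message carries the bad value and
# the set of acceptable alternatives, rather than a bare KeyError.
def get_dimension(dims, dim):
    if dim not in dims:
        raise ValueError(
            f"{dim!r} is not a valid dimension: expected one of {sorted(dims)!r}"
        )
    return dims[dim]
```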
411734784 | MDU6SXNzdWU0MTE3MzQ3ODQ= | 2775 | Improved inference of names when concatenating arrays | Zac-HD 12229877 | closed | 0 | 1 | 2019-02-19T04:01:03Z | 2019-03-04T05:39:21Z | 2019-03-04T05:39:21Z | CONTRIBUTOR | Problem description: Using the name of the first element to concatenate as the name of the concatenated array is only correct if all names are identical. When names vary, using a clear placeholder name or the name of the new dimension would avoid misleading data users. This came up for me recently when stacking several bands of a satellite image to produce a faceted plot - the resulting colorbar was labelled "blue", even though that was clearly incorrect. A similar process is probably also desirable for aggregation of units across concatenated arrays - use the first if identical, otherwise discard or error depending on the … Code sample, a copy-pastable example:

```python
ds = xr.Dataset({
    k: xr.DataArray(np.random.random((2, 2)), dims="x y".split(), name=k)
    for k in "blue green red".split()
})
# arr.name == "blue", could be "band" or "concat_dim"
arr = xr.concat([ds.blue, ds.green, ds.red], dim="band")
# label of colorbar is "blue", which is meaningless
arr.plot.imshow(col="band")
```

One implementation that would certainly be nice for this use-case (though perhaps not generally) is that concatenating … (a sketch of the inference rule follows this row)
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 158 Stepping 9, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
libhdf5: 1.10.3
libnetcdf: 4.4.1.1
xarray: 0.11.2
pandas: 0.23.1
numpy: 1.14.5
scipy: 1.2.1
netCDF4: 1.4.2
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.0.3.4
PseudonetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
cyordereddict: None
dask: None
distributed: None
matplotlib: 3.0.2
cartopy: None
seaborn: 0.9.0
setuptools: 40.6.2
pip: 10.0.1
conda: None
pytest: 4.2.0
IPython: 6.4.0
sphinx: 1.8.0
I'd be happy to write a PR for this if it would be accepted. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2775/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
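A sketch of the inference rule proposed in #2775: keep the common name only when every input agrees, otherwise fall back to the new dimension's name. `infer_concat_name` is a hypothetical helper, not the merged implementation:

```python
import numpy as np
import xarray as xr

def infer_concat_name(arrays, dim):
    # Keep the common name if every input agrees, else use the new dim's name.
    names = {arr.name for arr in arrays}
    return names.pop() if len(names) == 1 else dim

ds = xr.Dataset({
    k: xr.DataArray(np.random.random((2, 2)), dims=["x", "y"], name=k)
    for k in ["blue", "green", "red"]
})
arr = xr.concat([ds.blue, ds.green, ds.red], dim="band")
arr.name = infer_concat_name([ds.blue, ds.green, ds.red], dim="band")  # "band", not "blue"
```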
411755105 | MDExOlB1bGxSZXF1ZXN0MjU0MTIyNTUw | 2777 | Improved default behavior when concatenating DataArrays | Zac-HD 12229877 | closed | 0 | 14 | 2019-02-19T05:43:44Z | 2019-03-03T22:20:01Z | 2019-03-03T22:20:01Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/2777 |
This is really nice to have when producing faceted plots of satellite observations in various bands, and should be somewhere between useful and harmless in other cases. Example code: …
Before (screenshot): facets have an index, colorbar has a misleading label. After (screenshot): facets have meaningful labels, colorbar has no label. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2777/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
364247513 | MDExOlB1bGxSZXF1ZXN0MjE4NDgxMjkz | 2442 | Use Hypothesis profile mechanism, not no-op mutation | Zac-HD 12229877 | closed | 0 | 2 | 2018-09-26T23:14:33Z | 2018-09-27T00:35:46Z | 2018-09-26T23:47:27Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/2442 | Closes #2441 - Hypothesis 3.72.0 turned a common no-op into an explicit error. Apparently this was such a common misunderstanding that I had done it too :disappointed: Anyway: while it hasn't been using the deadline at all until now, I've still translated it into the correct form rather than deleting it in order to avoid flaky tests if the Travis VM is slow. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2442/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
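For reference, the profile mechanism this pull switches to; this is Hypothesis's real API, but the profile name and deadline value here are illustrative:

```python
from hypothesis import settings

# Since Hypothesis 3.72.0, mutating settings in place is an explicit error;
# the supported mechanism is to register and then load a named profile.
settings.register_profile("ci", deadline=None)
settings.load_profile("ci")
```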
289853579 | MDExOlB1bGxSZXF1ZXN0MTYzODc5NTc3 | 1840 | Read small integers as float32, not float64 | Zac-HD 12229877 | closed | 0 | 4 | 2018-01-19T03:40:51Z | 2018-04-19T02:50:25Z | 2018-01-23T20:15:29Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1840 |
Most satellites produce images with color depth in the range of eight to sixteen bits, which are therefore often stored as unsigned integers (with the quality mask in another variable). If you're lucky, they also have a … This is fantastically convenient, and avoids all the bit-depth bugs from misremembered specifications. However, loading data as float64 when float32 is sufficient doubles memory usage in IO (even on multi-TB datasets...). While immediately downcasting helps, it's no substitute for doing the right thing first. So this patch does some conservative checks, and if we can be sure float32 is safe we use that instead (a sketch of the idea follows this row). |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1840/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
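A rough sketch of the conservative check described in #1840; an illustration of the idea, not the exact code that was merged:

```python
import numpy as np

# Every 8- or 16-bit integer is exactly representable in float32's 24-bit
# significand, so unpacking those to float32 loses nothing; wider types
# conservatively keep float64.
def unpacked_dtype(packed_dtype):
    if np.issubdtype(packed_dtype, np.integer) and packed_dtype.itemsize <= 2:
        return np.float32
    return np.float64
```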
303103716 | MDExOlB1bGxSZXF1ZXN0MTczNDU1NzQz | 1972 | Starter property-based test suite | Zac-HD 12229877 | closed | 0 | 15 | 2018-03-07T13:45:07Z | 2018-03-20T12:51:28Z | 2018-03-20T12:40:12Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1972 |
This is a small property-based test suite, to give two examples of the kinds of tests that we could write for Xarray using Hypothesis.
Things that I would like to know:
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1972/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
302695966 | MDExOlB1bGxSZXF1ZXN0MTczMTU0MTQ5 | 1967 | Fix RGB imshow with X or Y dim of size one | Zac-HD 12229877 | closed | 0 | 7 | 2018-03-06T13:14:04Z | 2018-03-09T01:49:08Z | 2018-03-08T23:51:45Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1967 |
Not much more to say, really. Thanks to @fmaussion for pinging me - definitely faster to track down when you know the code! |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1967/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
295055292 | MDExOlB1bGxSZXF1ZXN0MTY3NjMyMDY1 | 1893 | Use correct dtype for RGB image alpha channel | Zac-HD 12229877 | closed | 0 | 4 | 2018-02-07T09:00:33Z | 2018-02-14T05:42:15Z | 2018-02-12T22:12:13Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1893 |
The cause of the bug in #1880 was that I had forgotten to specify the dtype when creating an alpha channel, and therefore concatenating it cast all the data to float64. I've fixed that, corrected the alpha value for integer arrays, and avoided a pointless copy to save memory. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
282369945 | MDExOlB1bGxSZXF1ZXN0MTU4NTU5OTM4 | 1787 | Include units (if set) in plot labels | Zac-HD 12229877 | closed | 0 | 7 | 2017-12-15T09:40:16Z | 2018-02-05T04:01:16Z | 2018-02-05T04:01:16Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1787 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1787/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
287747803 | MDExOlB1bGxSZXF1ZXN0MTYyMzUzNzQ4 | 1819 | Normalisation for RGB imshow | Zac-HD 12229877 | closed | 0 | 6 | 2018-01-11T11:09:12Z | 2018-01-19T05:01:19Z | 2018-01-19T05:01:07Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1819 | Follow-up to #1796, where normalisation and clipping of RGB[A] values were deferred so that we could match any upstream API. matplotlib/matplotlib#10220 implements clipping to the valid range, but a strong consensus against RGB normalisation in matplotlib has emerged. This pull therefore implements normalisation, and clips values only where our normalisation has pushed them out of range. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1819/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
288322322 | MDExOlB1bGxSZXF1ZXN0MTYyNzc2ODAx | 1824 | Make `flake8 xarray` pass | Zac-HD 12229877 | closed | 0 | 3 | 2018-01-13T11:37:43Z | 2018-01-14T23:10:01Z | 2018-01-14T20:49:20Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1824 | Closes #1741 by @mrocklin (who did most of the work I'm presenting here). I had an evening free, so I rebased the previous pull on master, fixed the conflicts, and then made everything pass with … The single change any non-pedant will notice: Travis now fails if there is a flake8 warning anywhere. My experience in other projects is that this is the only way to actually keep flake8 passing - it's just unrealistic to expect perfect attention to detail from every contributor, but "make the build green before we merge" is widely understood 😄 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1824/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
283566613 | MDExOlB1bGxSZXF1ZXN0MTU5NDE5NjYw | 1796 | Support RGB[A] arrays in plot.imshow() | Zac-HD 12229877 | closed | 0 | 16 | 2017-12-20T13:43:16Z | 2018-01-11T03:20:02Z | 2018-01-11T03:14:36Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1796 |
This patch brings …
I'm going to implement clip-to-range and color normalization upstream in matplotlib, then open a second PR here so that Xarray can use the same interface. And that's the commit log! It's not really a big feature, but each of the parts can be fiddly so I've broken the commits up logically 😄 Finally, a motivating example: visible-light Landsat data before, during (top-right), and after a fire at Sampson's Flat, Australia:
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1796/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
282087995 | MDExOlB1bGxSZXF1ZXN0MTU4MzQ3NTU2 | 1782 | Plot nans | Zac-HD 12229877 | closed | 0 | 3 | 2017-12-14T12:43:01Z | 2017-12-15T21:10:13Z | 2017-12-15T17:31:39Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1782 |
CC @fmaussion for review; @BexDunn for interest |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1782/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
282000017 | MDU6SXNzdWUyODIwMDAwMTc= | 1780 | DataArray.plot raises exception if contents are all NaN | Zac-HD 12229877 | closed | 0 | 7 | 2017-12-14T06:58:38Z | 2017-12-15T17:31:39Z | 2017-12-15T17:31:39Z | CONTRIBUTOR | Code Sample, a copy-pastable example if possible
Problem description: If you try to plot a … (a reconstructed repro follows this row). Expected output: Plot of the array extent, entirely in the missing-value colour, as for partially-missing data. Output of …
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1780/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
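The code sample in #1780 was truncated in this export; a reconstructed minimal repro, assuming the report concerns a fully-NaN 2D array:

```python
import numpy as np
import xarray as xr

# Before the fix in #1782, plotting an all-NaN array raised an exception
# instead of drawing the array extent in the missing-value colour.
arr = xr.DataArray(np.full((4, 4), np.nan), dims=("y", "x"))
arr.plot()
```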
268011986 | MDExOlB1bGxSZXF1ZXN0MTQ4MzgxNzE1 | 1653 | Minor documentation fixes | Zac-HD 12229877 | closed | 0 | 1 | 2017-10-24T12:28:07Z | 2017-10-25T03:47:25Z | 2017-10-25T03:47:18Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1653 | This pull updates the comparison between Xarray and Pandas ND-Panels, fixes the zenodo links, and improves our configuration for the docs build. Closes #1541. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1653/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
237710101 | MDU6SXNzdWUyMzc3MTAxMDE= | 1462 | Dataset.to_dataframe loads dask arrays into memory | Zac-HD 12229877 | closed | 0 | 2 | 2017-06-22T01:46:30Z | 2017-10-13T02:15:47Z | 2017-10-13T02:15:47Z | CONTRIBUTOR |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1462/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
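The body of #1462 was truncated in this export; an illustration of the behaviour its title reports (requires dask; the variable names are illustrative):

```python
import numpy as np
import xarray as xr

# to_dataframe() computes dask-backed variables eagerly, so every chunk is
# materialised into the resulting pandas DataFrame.
ds = xr.Dataset({"a": ("x", np.arange(10))}).chunk({"x": 5})
df = ds.to_dataframe()  # loads the whole array into memory
```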
216611104 | MDExOlB1bGxSZXF1ZXN0MTEyMzY1ODc0 | 1322 | Shorter repr for attributes | Zac-HD 12229877 | closed | 0 | 6 | 2017-03-24T00:26:26Z | 2017-04-03T00:50:28Z | 2017-04-03T00:47:45Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1322 | NetCDF files often have tens of attributes, including multi-paragraph summaries or the full modification history of the file. It's great to have this available in the .attrs, but we can truncate it substantially in the repr! Hopefully this will stop people writing …
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
216329175 | MDU6SXNzdWUyMTYzMjkxNzU= | 1319 | Truncate long lines in repr of Dataset.attrs | Zac-HD 12229877 | closed | 0 | 5 | 2017-03-23T07:21:01Z | 2017-04-03T00:47:45Z | 2017-04-03T00:47:45Z | CONTRIBUTOR | When loading from NetCDF, … Given that these values are already truncated at 500 characters (including the indicative …), … Another solution would be to add appropriate indentation following newlines or wrapping, so that the structure remains clear. However, I think that it is better to print a fairly minimal representation of the metadata by default. (a hypothetical truncation helper follows this row) ```
<xarray.Dataset>
Dimensions:  (time: 246, x: 4000, y: 4000)
Coordinates:
  * y        (y) float64 -3.9e+06 -3.9e+06 -3.9e+06 -3.9e+06 -3.9e+06 ...
  * x        (x) float64 1.5e+06 1.5e+06 1.5e+06 1.5e+06 1.5e+06 1.5e+06 ...
  * time     (time) datetime64[ns] 1999-07-16T23:49:39 1999-07-25T23:43:07 ...
Data variables:
    crs      int32 ...
    blue     (time, y, x) float64 ...
    green    (time, y, x) float64 ...
    red      (time, y, x) float64 ...
    nir      (time, y, x) float64 ...
    swir1    (time, y, x) float64 ...
    swir2    (time, y, x) float64 ...
Attributes:
    date_created: 2017-03-07T11:57:26.511217
    Conventions: CF-1.6, ACDD-1.3
    history: 2017-03-07T11:57:26.511307+11:00 adh547 datacube-ncml (1.2.2+23.gd1f3512.dirty) ls7_nbart_albers.yaml, 1.0.6a, /short/v10/datacube/002/LS7_ETM_NBART/LS7_ETM_NBART_3577_15_-40.ncml, (15, -40) # Created NCML file to aggregate multiple NetCDF files along the time dimension
    geospatial_bounds: POLYGON ((148.49626113888138 -34.828378308133452,148.638689676063308 -35.720318326735864,149.734176111491877 -35.599556747691196,149.582601578289143 -34.708911907843387,148.49626113888138 -34.828378308133452))
    geospatial_bounds_crs: EPSG:4326
    geospatial_lat_min: -35.7203183267
    geospatial_lat_max: -34.7089119078
    geospatial_lat_units: degrees_north
    geospatial_lon_min: 148.496261139
    geospatial_lon_max: 149.734176111
    geospatial_lon_units: degrees_east
    comment: - Ground Control Points (GCP): new GCP chips released by USGS in Dec 2015 are used for re-processing - Geometric QA: each product undergoes geometric assessment and the assessment result will be recorded within v2 AGDC for filtering/masking purposes. - Processing parameter settings: the minimum number of GCPs for Ortho-rectified product generation has been reduced from 30 to 10. - DEM: 1 second SRTM DSM is used for Ortho-rectification. - Updated Calibration Parameter File (CPF): the latest/cu...
    product_suite: Surface Reflectance NBAR+T 25m
    publisher_email: earth.observation@ga.gov.au
    keywords_vocabulary: GCMD
    product_version: 2
    cdm_data_type: Grid
    references: - Berk, A., Anderson, G.P., Acharya, P.K., Hoke, M.L., Chetwynd, J.H., Bernstein, L.S., Shettle, E.P., Matthew, M.W., and Adler-Golden, S.M. (2003) Modtran 4 Version 3 Revision 1 User s manual. Airforce Research Laboratory, Hanscom, MA, USA. - Chander, G., Markham, B.L., and Helder, D.L. (2009) Summary of current radiometric calibration coefficients for Landsat MSS, TM, ETM+, and EO-1 ALI sensors. Remote Sensing of Environment 113, 893-903. - Edberg, R., and Oliver, S. (2013) Projection-Indep...
    platform: LANDSAT-7
    keywords: AU/GA,NASA/GSFC/SED/ESD/LANDSAT,REFLECTANCE,ETM+,TM,OLI,EARTH SCIENCE
    publisher_name: Section Leader, Operations Section, NEMO, Geoscience Australia
    institution: Commonwealth of Australia (Geoscience Australia)
    acknowledgment: Landsat data is provided by the United States Geological Survey (USGS) through direct reception of the data at Geoscience Australias satellite reception facility or download.
    license: CC BY Attribution 4.0 International License
    title: Surface Reflectance NBAR+T 25 v2
    summary: Surface Reflectance (SR) is a suite of Earth Observation (EO) products from GA. The SR product suite provides standardised optical surface reflectance datasets using robust physical models to correct for variations in image radiance values due to atmospheric properties, and sun and sensor geometry. The resulting stack of surface reflectance grids are consistent over space and time which is instrumental in identifying and quantifying environmental change. SR is based on radiance data from the...
    instrument: ETM
    source: LANDSAT 7 ETM+ surface observation
    publisher_url: http://www.ga.gov.au
``` |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1319/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
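A hypothetical helper in the spirit of #1319 (and of the fix in #1322): collapse each attribute to a single short display line. This is an illustration, not xarray's implementation:

```python
# Flatten newlines and clip each attribute value so the repr stays compact.
def summarize_attr(key, value, max_width=80):
    line = f"{key}: {value}".replace("\n", " ")
    return line if len(line) <= max_width else line[: max_width - 3] + "..."
```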
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);