labels_pull_requests

88 rows where labels_id = 4429774983 (the "io" label)
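
The listing below can be reproduced with a plain filter on the join table; a minimal sketch against the schema shown at the bottom of this page:

select [labels_id], [pull_requests_id]
from [labels_pull_requests]
where [labels_id] = 4429774983;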

labels_id | pull_requests_id | pull request title
4429774983 | 569059113 | Cache files for different CachingFileManager objects separately
4429774983 | 906521905 | implement Zarr v3 spec support
4429774983 | 1016457019 | Avoid calling np.asarray on lazy indexing classes
4429774983 | 1038005371 | Expose `memory` argument for "netcdf4" engine
4429774983 | 1038963035 | Fixed type errors in `mypy` GitHub Action
4429774983 | 1043204734 | Ensure encoding["source"] is available for a pathlib.Path object
4429774983 | 1044674288 | Support the new compression argument in netCDF4 > 1.6.0
4429774983 | 1048039191 | list available backends and basic descriptors
4429774983 | 1052339012 | Generalize handling of chunked array types
4429774983 | 1052467823 | Typing of abstract base classes
4429774983 | 1056551947 | Writing dimensionless variables to NetCDF
4429774983 | 1062097375 | More informative error for non-existent zarr store
4429774983 | 1073897725 | Fix typing of backends
4429774983 | 1074865129 | Fix pickling of Datasets created using open_mfdataset
4429774983 | 1082013050 | Update open_dataset backend to ensure compatibility with new explicit index model
4429774983 | 1088285081 | Fix doctest warnings, enable errors in CI
4429774983 | 1088467433 | Lazy import dask.distributed to reduce import time of xarray
4429774983 | 1089738782 | Lazy Imports
4429774983 | 1096645684 | Backends descriptions
4429774983 | 1109232628 | Use partial function in open_mfdataset example
4429774983 | 1128397047 | deprecate pynio backend
4429774983 | 1139443490 | Remove code used to support h5py<2.10.0
4429774983 | 1139511392 | Enable mypy warn unused ignores
4429774983 | 1197041777 | DRAFT: Implement `open_datatree` in BackendEntrypoint for preliminary DataTree support
4429774983 | 1197183167 | Refer to open_zarr in open_dataset docstring
4429774983 | 1210479701 | Lint with ruff
4429774983 | 1210704870 | Add abstractmethods to backend classes
4429774983 | 1210897214 | bump minimum versions, drop py38
4429774983 | 1223601380 | deprecate open_zarr
4429774983 | 1229273711 | Zarr: drop "source" and "original_shape" from encoding
4429774983 | 1230953781 | [pre-commit.ci] pre-commit autoupdate
4429774983 | 1238092838 | allow refreshing of backends
4429774983 | 1244439811 | added 'storage_transformers' to valid_encodings
4429774983 | 1251378494 | Support for the new compression arguments.
4429774983 | 1259281081 | fix nczarr when libnetcdf>4.8.1
4429774983 | 1263183724 | todel
4429774983 | 1263301043 | Fix lazy negative slice rewriting.
4429774983 | 1275970795 | Raise PermissionError when insufficient permissions
4429774983 | 1289353458 | Delete built-in cfgrib backend
4429774983 | 1289425832 | Delete built-in rasterio backend
4429774983 | 1295123787 | Use read1 instead of read to get magic number
4429774983 | 1299340017 | deprecate encoding setters
4429774983 | 1335673210 | Generalize delayed
4429774983 | 1337758866 | Array API fixes for astype
4429774983 | 1342159373 | Preserve nanosecond resolution when encoding/decoding times
4429774983 | 1359566001 | CF encoding should preserve vlen dtype for empty arrays
4429774983 | 1361893722 | preserve vlen string dtypes, allow vlen string fill_values
4429774983 | 1369047286 | don't use `CacheFileManager.__del__` on interpreter shutdown
4429774983 | 1386907083 | Add '.hdf' extension to 'netcdf4' backend
4429774983 | 1395497503 | Fix check for chunk_store in zarr backend
4429774983 | 1411248888 | Implement preferred_chunks for netcdf 4 backends
4429774983 | 1414413344 | ensure no forward slashes in names for HDF5-based backends
4429774983 | 1426325882 | Move absolute path finder from open_mfdataset to own function
4429774983 | 1432111234 | Fix typo in zarr.py
4429774983 | 1447376547 | Zarr : Allow setting `write_empty_chunks`
4429774983 | 1450661702 | (chore) min versions bump
4429774983 | 1465015830 | Allow setting (or skipping) new indexes in open_dataset
4429774983 | 1480188095 | Document drop_variables in open_mfdataset
4429774983 | 1492101488 | fix miscellaneous `numpy=2.0` errors
4429774983 | 1501781892 | [pre-commit.ci] pre-commit autoupdate
4429774983 | 1503215624 | Add support for netCDF4.EnumType
4429774983 | 1507766604 | Update Variable metadata when using 'a' mode in Zarr
4429774983 | 1508705350 | Fix typos
4429774983 | 1518276705 | decode variable with mismatched coordinate attribute
4429774983 | 1533357764 | Migrate VariableArithmetic to NamedArrayArithmetic
4429774983 | 1534897151 | fix zarr datetime64 chunks
4429774983 | 1536677803 | Mandate kwargs on `to_zarr`
4429774983 | 1540306398 | Add extra overload for to_netcdf
4429774983 | 1553550639 | Avoid redundant metadata reads in `ZarrArrayWrapper`
4429774983 | 1558999807 | Move parallelcompat and chunkmanagers to NamedArray
4429774983 | 1561278861 | Fix for Dataset.to_zarr with both `consolidated` and `write_empty_chunks`
4429774983 | 1565295227 | Enable subclassing the netCDF4 backend, changing the dataset class
4429774983 | 1569402756 | Added driver parameter for h5netcdf
4429774983 | 1573797440 | Fix typos found by codespell
4429774983 | 1576028557 | Allow writing to zarr with differently ordered dims
4429774983 | 1577658012 | Use numbagg for `ffill` by default
4429774983 | 1591460526 | Add support for remote string paths to `h5netcdf` engine
4429774983 | 1592808124 | Properly closes zarr groups in zarr store
4429774983 | 1592876533 | Add mode='a-': Do not overwrite coordinates when appending to Zarr with `append_dim`
4429774983 | 1594728535 | Automatic region detection and transpose for `to_zarr()`
4429774983 | 1595867080 | Restore dask arrays rather than editing encoding
4429774983 | 1597624356 | Remove PseudoNetCDF
4429774983 | 1600906396 | Add keep_variables keyword to open_dataset()
4429774983 | 1605017024 | Check for aligned chunks when writing to existing variables
4429774983 | 1605107903 | Add initialize_zarr
4429774983 | 1610027974 | Avoid duplicate Zarr array read
4429774983 | 1616702120 | Fix Zarr region transpose
4429774983 | 1620897547 | Minor to_zarr optimizations

CREATE TABLE [labels_pull_requests] (
   [labels_id] INTEGER REFERENCES [labels]([id]),
   [pull_requests_id] INTEGER REFERENCES [pull_requests]([id]),
   PRIMARY KEY ([labels_id], [pull_requests_id])
);
CREATE INDEX [idx_labels_pull_requests_pull_requests_id]
    ON [labels_pull_requests] ([pull_requests_id]);
CREATE INDEX [idx_labels_pull_requests_labels_id]
    ON [labels_pull_requests] ([labels_id]);
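
The readable listing at the top of the page corresponds to joining through the [pull_requests_id] foreign key; the sketch below assumes [pull_requests] has a [title] column, which is not shown on this page:

select lpr.[labels_id], pr.[id] as pull_requests_id, pr.[title]
from [labels_pull_requests] as lpr
join [pull_requests] as pr on pr.[id] = lpr.[pull_requests_id]
where lpr.[labels_id] = 4429774983
order by pr.[id];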