issues
4 rows where type = "issue" and user = 1797906 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1364911775 | I_kwDOAMm_X85RWuaf | 7005 | Cannot re-index or align objects with conflicting indexes | jamesstidard 1797906 | open | 0 | 2 | 2022-09-07T16:22:46Z | 2022-09-09T16:04:05Z | NONE | **What happened?** I'm looking to rename the values of the indexes of an existing dataset, for both regular indexes and multi-indexes, i.e. you might start with a dataset with an index […]. I appear to be able to rename a couple of them using the method I've written, but renaming a second multi-index in the same dataset fails with a ValueError.
**What did you expect to happen?** I start with the original indexes […] and remap the values […]; I expected the remapping to succeed for the second multi-index just as it does for the first.
**Minimal Complete Verifiable Example**

```python
import numpy as np
import pandas as pd
import xarray as xr


def map_coords(ds, *, name, mapping):
    """
    Takes an xarray dataset's coordinate values and updates them
    with the provided mapping.
    """
    ...  # function body not preserved in this export


midx = pd.MultiIndex.from_product([list("abc"), [0, 1]], names=("x_one", "x_two"))
midy = pd.MultiIndex.from_product([list("abc"), [0, 1]], names=("y_one", "y_two"))
mda = xr.DataArray(np.random.rand(6, 6, 3), [("x", midx), ("y", midy), ("z", range(3))])

map_coords(mda, name="z", mapping={0: "zero", 1: "one", 2: "two"})        # success
map_coords(mda, name="x_one", mapping={"a": "aa", "b": "bb", "c": "cc"})  # success
map_coords(mda, name="y_one", mapping={"a": "aa", "b": "bb", "c": "cc"})  # ValueError
```

MVCE confirmation
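The body of `map_coords` did not survive the export. Purely as a hedged sketch of one way such a remapping can work, and not the reporter's original code: the helper below (the name `remap_index` and its structure are mine; `xr.Coordinates.from_pandas_multiindex` requires xarray ≥ 2023.08, newer than the 2022.6.0 reported below) rebuilds the MultiIndex on the pandas side before reattaching it, which avoids the old/new index conflict.

```python
import pandas as pd
import xarray as xr


def remap_index(da, *, name, mapping):
    """Remap the values of a plain index or of a single MultiIndex level."""
    dim = da[name].dims[0]  # the dimension the coordinate belongs to
    index = da.indexes[dim]
    if not isinstance(index, pd.MultiIndex):
        # Plain index: substitute the values directly.
        return da.assign_coords({name: [mapping[v] for v in da[name].values]})
    # MultiIndex level: remap that level's values on the pandas side...
    new = index.set_levels(
        [mapping[v] for v in index.levels[index.names.index(name)]], level=name
    )
    # ...then detach the old index entirely before attaching the rebuilt one,
    # so the old and new indexes are never asked to align with each other.
    da = da.reset_index(dim, drop=True)
    return da.assign_coords(xr.Coordinates.from_pandas_multiindex(new, dim))
```

Under those assumptions, all three MVCE calls, including the `y_one` remap, should go through.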
Relevant log output
**Anything else we need to know?** I may also not be doing this remapping in the best way; it's just the easiest way I've found, so part of the problem may be my approach. I'm open to alternative methods as well. Thanks.

**Environment**
INSTALLED VERSIONS
------------------
commit: None
python: 3.10.4 (main, Mar 28 2022, 15:33:01) [Clang 13.1.6 (clang-1316.0.21.2)]
python-bits: 64
OS: Darwin
OS-release: 21.6.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.2
libnetcdf: None
xarray: 2022.6.0
pandas: 1.4.4
numpy: 1.23.2
scipy: 1.9.1
netCDF4: None
pydap: None
h5netcdf: 1.0.2
h5py: 3.7.0
Nio: None
zarr: None
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: None
distributed: None
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: None
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 63.4.3
pip: 22.2.2
conda: None
pytest: None
IPython: None
sphinx: None
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
291332965 | MDU6SXNzdWUyOTEzMzI5NjU= | 1854 | Drop coordinates on loading large dataset. | jamesstidard 1797906 | closed | 0 | 22 | 2018-01-24T19:35:46Z | 2020-02-15T14:49:53Z | 2020-02-15T14:49:53Z | NONE | I've been struggling for quite a while to load a large dataset, so I thought it best to ask, as I think I'm missing a trick. I've also looked through the existing issues and, even though there are a fair few questions that seemed promising, none of them got me there. I have a number of netCDF files. The goal is to go through that data and get the full history of a single latitude/longitude coordinate, instead of the data for all latitudes and longitudes over small periods. This is my current few lines of script: […]
However, this blows out the memory on my machine on the load. I was wondering if there's a way to either determine a good chunk size, or maybe tell `open_mfdataset` to drop the coordinates I don't need while loading. I'm using version […]. Would very much appreciate any help. |
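The reporter's script was lost in the export. As a hedged sketch only — the file pattern reuses the `ERA20c/swh_*.nc` naming from the issue below, and the chunk size and `swh` variable name are illustrative assumptions — a low-memory way to pull one point's full history looks like this:

```python
import xarray as xr

# Open lazily with dask chunks so nothing is read eagerly; only the blocks
# touched by the selection below are ever loaded into memory.
ds = xr.open_mfdataset("ERA20c/swh_*.nc", chunks={"time": 1000})

# Select a single latitude/longitude, then materialize just that series.
point = ds.sel(latitude=50.0, longitude=0.0, method="nearest")
series = point["swh"].compute()
```

The `chunks` argument is what bounds memory here: dask reads one block at a time instead of materializing every file up front.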
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
257400162 | MDU6SXNzdWUyNTc0MDAxNjI= | 1572 | Modifying data set resulting in much larger file size | jamesstidard 1797906 | closed | 0 | 7 | 2017-09-13T14:24:06Z | 2017-09-18T08:59:24Z | 2017-09-13T17:12:28Z | NONE | I'm loading a 130MB netCDF file, applying a mask to it, and writing it back out; the masked copy ends up much larger than the original. Here's how I'm applying the mask:

```python
import os

import xarray as xr

fp = 'ERA20c/swh_2010_01_05_05.nc'
ds = xr.open_dataset(fp)
ds = ds.where(ds.latitude > 50)

head, ext = os.path.splitext(fp)
xr.open_dataset(fp).to_netcdf('{}-duplicate{}'.format(head, ext))
ds.to_netcdf('{}-masked{}'.format(head, ext))
```

Is there a way to reduce the file size of the masked dataset? I'd expect it to be roughly the same size, or smaller. Thanks. |
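A hedged note on the likely cause, with a sketch: `where` fills masked cells with NaN, which promotes variables to float64 and defeats any scale/offset packing in the source file, and `to_netcdf` writes without compression by default. Re-enabling per-variable zlib compression on write usually brings the size back down (the `complevel` value below is an arbitrary middle-of-the-road choice, not from the report):

```python
import os

import xarray as xr

fp = 'ERA20c/swh_2010_01_05_05.nc'
ds = xr.open_dataset(fp)
ds = ds.where(ds.latitude > 50)  # masked cells become NaN (float64)

# Ask the netCDF4 backend to zlib-compress every data variable on write.
encoding = {name: {"zlib": True, "complevel": 4} for name in ds.data_vars}

head, ext = os.path.splitext(fp)
ds.to_netcdf('{}-masked{}'.format(head, ext), encoding=encoding)
```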
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1572/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
255997962 | MDU6SXNzdWUyNTU5OTc5NjI= | 1561 | exit code 137 when using xarray.open_mfdataset | jamesstidard 1797906 | closed | 0 | 3 | 2017-09-07T16:31:50Z | 2017-09-13T14:16:07Z | 2017-09-13T14:16:06Z | NONE | While using the `xarray.open_mfdataset` function, my process dies with exit code 137. Does anyone know what might be causing this? Could it be the computer completely running out of memory (RAM + SWAP + HDD)? I'm unsure what's causing it, as I get no stack trace, just the exit code 137. Thanks. |
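For context: exit code 137 is 128 + 9, i.e. the process was killed by SIGKILL, which on Linux is most often the kernel's out-of-memory killer — consistent with the reporter's RAM suspicion. As a hedged sketch (the file pattern and chunk size are invented), opening with explicit dask chunks keeps the footprint bounded and lets you check the dataset's size before loading anything:

```python
import xarray as xr

# Lazy, chunked open: only the blocks actually computed are held in memory.
ds = xr.open_mfdataset("data/*.nc", chunks={"time": 500})

# Report how big the dataset would be if fully loaded, without loading it.
print(f"{ds.nbytes / 1e9:.1f} GB if fully materialized")
```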
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1561/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);