
issues


4 rows where type = "issue" and user = 15239248 sorted by updated_at descending


id: 1510151748 · node_id: I_kwDOAMm_X85aAxZE · number: 7401 · title: Allow passing figure handle to FacetGrid · user: daliagachc (15239248) · state: open · locked: 0 · comments: 6 · created_at: 2022-12-24T17:06:40Z · updated_at: 2023-10-25T17:10:32Z · author_association: NONE

Is your feature request related to a problem?

Sometimes I need to combine xarray facet grids with other axes plots. It would be amazing if I could pass an already created figure to the plot function. Even better, a subfigure, so that the possibilities are infinite!

Describe the solution you'd like

for example:

```python
import matplotlib.pyplot as plt
import xarray as xr

da = xr.tutorial.open_dataset("air_temperature")['air']
f = plt.figure()
(
    da[{'time': [1, 2, 4]}]
    .plot.contourf(col='time', fig=f)  # `fig` is the argument this issue proposes adding
)
```

Describe alternatives you've considered

An alternative is to manually do all the plots in a created figure, but this becomes cumbersome. I quickly checked the source code, and it does not seem very difficult to implement: mostly a modification to the get_axis function so that it accepts an already created figure. I managed to quickly make it work in seaborn (see image below).
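
For reference, a minimal sketch of the manual workaround mentioned above (added here for illustration; it is not part of the original report): create the axes yourself on an existing figure and plot each time slice separately instead of relying on FacetGrid.

```python
import matplotlib.pyplot as plt
import xarray as xr

da = xr.tutorial.open_dataset("air_temperature")['air']

# build the "facets" by hand on a pre-existing figure instead of using FacetGrid
f = plt.figure()
axs = f.subplots(1, 3)
for ax, i in zip(axs, [1, 2, 4]):
    da[{'time': i}].plot.contourf(ax=ax, add_colorbar=False)
    ax.set_title(str(da['time'][i].values))
```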

Additional context

No response

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7401/reactions",
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: issue
id: 1265344088 · node_id: I_kwDOAMm_X85La55Y · number: 6677 · title: inconsistency between resample and plotting · user: daliagachc (15239248) · state: open · locked: 0 · comments: 0 · created_at: 2022-06-08T21:41:03Z · updated_at: 2022-06-08T21:41:03Z · author_association: NONE

What is your issue?

I believe there is an inconsistency between using the resample function (left labeled) and the plotting function (center labeled). Maybe the example below helps illustrate the issue:

```python
import matplotlib.pyplot as plt
import pandas as pd
import xarray as xr

d = [
    [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
]
t = pd.date_range('2000-01-01 00:05', '2000-01-01 00:17', freq='1t', closed='left')
da = xr.DataArray(d, dims=['i', 't'], coords={'t': t})

f, axs = plt.subplots(4, sharex=True, constrained_layout=True)

def plot(da_, ax_):
    return da_.plot(ax=ax_, add_colorbar=False, vmin=0, vmax=1.001, levels=11)

for ax, tt in zip(axs[1:], ['2t', '3t', '4t']):
    da_ = da.resample({'t': tt}).mean()
    gr = plot(da_, ax)
    ax.set_title(f'resample: {tt}')

plot(da, axs[0])
axs[0].set_title('original')

for ax in axs:
    ax.grid()
    ax.set_xticks(t)
    ax.set_xticklabels(t.strftime('%M'))
    ax.set_xlim(pd.to_datetime('2000-01-01 00:00'), pd.to_datetime('2000-01-01 00:20'))

f.colorbar(gr, ax=axs)
```

In the example above, the most relevant problem is the high value at minute 11 in panel 1: after resampling to 4 minutes, it gets shifted and displayed between minutes 6 and 10 in panel 4. I know that I can shift the results from resample with the parameter loffset (e.g. loffset='30s'), but that does not help, since the high value from panel 1 (minute 11) still ends up displayed around minutes 6-10 in panel 4:

```python
import matplotlib.pyplot as plt
import pandas as pd
import xarray as xr

d = [
    [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
]
t = pd.date_range('2000-01-01 00:05', '2000-01-01 00:17', freq='1t', closed='left')
da = xr.DataArray(d, dims=['i', 't'], coords={'t': t})

f, axs = plt.subplots(4, sharex=True, constrained_layout=True)

def plot(da_, ax_):
    return da_.plot(ax=ax_, add_colorbar=False, vmin=0, vmax=1.001, levels=11)

for ax, tt in zip(axs[1:], ['2t', '3t', '4t']):
    da_ = da.resample({'t': tt}, loffset='30s').mean()
    gr = plot(da_, ax)
    ax.set_title(f'resample: {tt}')

plot(da, axs[0])
axs[0].set_title('original')

for ax in axs:
    ax.grid()
    ax.set_xticks(t)
    ax.set_xticklabels(t.strftime('%M'))
    ax.set_xlim(pd.to_datetime('2000-01-01 00:00'), pd.to_datetime('2000-01-01 00:20'))

f.colorbar(gr, ax=axs)
```
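
A reduced illustration of the mismatch (added here for clarity; it is not part of the original report, and the exact bin edges depend on the resample origin): resample labels each bin by its left edge, while the image plot draws each cell centered on its coordinate, so a value can appear to move by up to half a bin width.

```python
import pandas as pd
import xarray as xr

# one high value at minute 11
t = pd.date_range('2000-01-01 00:05', periods=12, freq='1min')
da = xr.DataArray([0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0], dims='t', coords={'t': t})

res = da.resample(t='4min').mean()
print(res['t'].values)  # bins labeled by their left edge (00:04, 00:08, 00:12, 00:16)
print(res.values)       # the minute-11 value falls in the bin labeled 00:08
# when plotted, that cell is drawn centered on 00:08, i.e. roughly over minutes 06-10
```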

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6677/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: issue
id: 1007368839 · node_id: I_kwDOAMm_X848CzqH · number: 5825 · title: virtual · user: daliagachc (15239248) · state: closed · locked: 0 · comments: 0 · created_at: 2021-09-26T11:52:15Z · updated_at: 2021-09-26T11:52:33Z · closed_at: 2021-09-26T11:52:33Z · author_association: NONE

Your issue content here.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5825/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
id: 506205450 · node_id: MDU6SXNzdWU1MDYyMDU0NTA= · number: 3394 · title: Update a small slice of a large netcdf file without overwriting the entire file. · user: daliagachc (15239248) · state: open · locked: 0 · comments: 1 · created_at: 2019-10-12T16:06:18Z · updated_at: 2021-07-04T03:32:00Z · author_association: NONE

MCVE Code Sample

```python
import numpy as np
import xarray as xr

# Your code here

orig = '/tmp/orig.h5'
ii = 100000

data = xr.Dataset(
    {
        'x': ('t', np.random.randn(ii)),
        'y': ('t', np.random.randn(ii)),
    },
    coords={'t': range(ii)},
)

# function to save the large file using chunksizes
def save(ds, path, **kwargs):
    dvars = ds.variables
    chunksize = 100
    var_dic = {}
    for var in dvars:
        var_dic[var] = {'chunksizes': (chunksize,)}
    ds.to_netcdf(path, encoding=var_dic, **kwargs)

save(data, orig)
data.close()

# open the file, using dask
data_1 = xr.open_mfdataset([orig], chunks={'t': 100})

# change variable x
data_1['x'] = data_1['x'] + 20
data_1.close()

# update only variable x. This works!
data_1['x'].to_netcdf(orig, mode='a')

# try the same, but now update only a slice of the x variable

# open the file, using dask
data_1 = xr.open_mfdataset(orig, chunks={'t': 100})

# change variable x
data_1['x'] = data_1['x'] + 20
data_1.close()

# update only variable x. This doesn't work!
data_1['x'][{'t': slice(0, 10)}].to_netcdf(orig, mode='a')
```

Expected Output

Problem Description

Hi, I have a large dataset that does not fit in memory. Let's say I only want to update a small portion of it. Is there any way to update this small portion without having to rewrite the entire file?

I was fiddling around and found a way to update one variable at a time, but I want to be able to update only a subsection of this variable.
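
For reference, a hedged sketch of one possible workaround (added here; it is not from the original report): the netCDF4 library can open the file in append mode and overwrite just a slice of a variable in place, without rewriting the rest of the file.

```python
import netCDF4

# assumes the file and variable created by the MCVE above ('/tmp/orig.h5', variable 'x')
with netCDF4.Dataset('/tmp/orig.h5', mode='a') as nc:
    x = nc.variables['x']
    x[0:10] = x[0:10] + 20  # overwrite only elements 0..9 of 'x' in place
```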

Output of xr.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.6.7 | packaged by conda-forge | (default, Jul 2 2019, 02:07:37) [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]
python-bits: 64
OS: Darwin
OS-release: 18.7.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
libhdf5: 1.10.5
libnetcdf: 4.6.2
xarray: 0.12.3
pandas: 0.25.1
numpy: 1.17.1
scipy: 1.3.1
netCDF4: 1.5.1.2
pydap: None
h5netcdf: 0.7.4
h5py: 2.9.0
Nio: None
zarr: None
cftime: 1.0.3.4
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2.3.0
distributed: None
matplotlib: 3.1.1
cartopy: 0.17.0
seaborn: 0.9.0
numbagg: None
setuptools: 41.2.0
pip: 19.2.3
conda: 4.7.11
pytest: 4.5.0
IPython: 7.8.0
sphinx: 2.2.0
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3394/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: issue


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);