issues
3 rows where state = "open" and user = 15239248 sorted by updated_at descending
| id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1510151748 | I_kwDOAMm_X85aAxZE | 7401 | Allow passing figure handle to FacetGrid | daliagachc 15239248 | open | 0 | 6 | 2022-12-24T17:06:40Z | 2023-10-25T17:10:32Z | NONE | **Is your feature request related to a problem?** Sometimes I need to combine xarray facet grids with other axes plots. It would be amazing if I could pass an already created figure to the plot function. Even better, a subfigure, so that the possibilities are infinite! **Describe the solution you'd like** For example:

```python
import matplotlib.pyplot as plt
import xarray as xr

da = xr.tutorial.open_dataset("air_temperature")["air"]
f = plt.figure()
# `fig` is the proposed new argument
da[{"time": [1, 2, 4]}].plot.contourf(col="time", fig=f)
```

**Describe alternatives you've considered** An alternative is to do all the plots manually in a created figure, but this becomes cumbersome. I quickly checked the source code, and it does not seem very difficult to implement: mostly a modification to the `get_axis` function so that it accepts an already created figure. I managed to quickly make it work in seaborn (see image below).
**Additional context** No response |
{
"url": "https://api.github.com/repos/pydata/xarray/issues/7401/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
xarray 13221727 | issue |
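The `fig=` keyword requested above does not exist in xarray's FacetGrid today. The manual alternative the issue calls cumbersome can be sketched with plain matplotlib on a caller-owned figure; `make_facets` and its signature are hypothetical helpers for illustration, not xarray API:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt


def make_facets(fig, data, n_cols):
    """Draw one contour panel per 2-D slice of `data` onto an existing figure."""
    axs = fig.subplots(1, n_cols)
    for i, ax in enumerate(np.atleast_1d(axs)):
        ax.contourf(data[i])
        ax.set_title(f"slice {i}")
    return axs


rng = np.random.default_rng(0)
data = rng.random((3, 10, 10))

f = plt.figure()               # the caller creates and owns the figure...
axs = make_facets(f, data, 3)  # ...and hands it to the plotting helper
```

A `fig=` argument on `plot.contourf` would essentially move this loop inside xarray's `get_axis`, as the issue suggests.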
| 1265344088 | I_kwDOAMm_X85La55Y | 6677 | inconsistency between resample and plotting | daliagachc 15239248 | open | 0 | 0 | 2022-06-08T21:41:03Z | 2022-06-08T21:41:03Z | NONE | **What is your issue?** I believe there is an inconsistency between the resample function (left labeled) and the plotting function (center labeled). Maybe the example below helps illustrate the issue:

```python
import matplotlib.pyplot as plt
import pandas as pd
import xarray as xr

d = [
    [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
]
t = pd.date_range('2000-01-01 00:05', '2000-01-01 00:17', freq='1t', closed='left')
da = xr.DataArray(d, dims=['i', 't'], coords={'t': t})

f, axs = plt.subplots(4, sharex=True, constrained_layout=True)

def plot(da_, ax_):
    return da_.plot(ax=ax_, add_colorbar=False, vmin=0, vmax=1.001, levels=11)

for ax, tt in zip(axs[1:], ['2t', '3t', '4t']):
    da_res = da.resample({'t': tt}).mean()
    gr = plot(da_res, ax)
    ax.set_title(f'resample: {tt}')

plot(da, axs[0])
axs[0].set_title('original')

for ax in axs:
    ax.grid()
    ax.set_xticks(t)
    ax.set_xticklabels(t.strftime('%M'))
    ax.set_xlim(pd.to_datetime('2000-01-01 00:00'), pd.to_datetime('2000-01-01 00:20'))
f.colorbar(gr, ax=axs)
```

In the example above, the most relevant problem is the high value at minute 11 in panel 1, which after resampling to 4 minutes in panel 4 gets shifted and displayed between minutes 6 and 10. I know that I can shift the results from resample with the parameter `loffset` (e.g. `'30s'`), but that does not help either: rerunning the same code with `da.resample({'t': tt}, loffset='30s').mean()` still shifts the high value in panel 1 (minute 11) to minutes 6-10 in panel 4.
|
{
"url": "https://api.github.com/repos/pydata/xarray/issues/6677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
xarray 13221727 | issue |
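The mismatch described above can be reduced to plain pandas: `resample` labels each bin by its left edge, while a cell-style plot draws the value centered on that label. A minimal sketch of realigning labels to bin centers, offered as an illustration of the labeling convention rather than a change to xarray:

```python
import pandas as pd

# 12 one-minute samples starting at 00:05, with values 0..11
t = pd.date_range('2000-01-01 00:05', periods=12, freq='1min')
s = pd.Series(range(12), index=t)

# resample into 4-minute bins; labels are the LEFT edge of each bin
binned = s.resample('4min').mean()

# shift labels by half a bin width so a centered plot of each value
# sits over the time span it actually summarizes
centered = binned.copy()
centered.index = centered.index + pd.Timedelta('2min')

print(list(binned.index.minute))    # left edges:  [4, 8, 12, 16]
print(list(centered.index.minute))  # bin centers: [6, 10, 14, 18]
```

The half-bin shift is what a plotting routine would need to apply (or be told about) to draw left-labeled resampled data without the apparent displacement seen in the panels above.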
| 506205450 | MDU6SXNzdWU1MDYyMDU0NTA= | 3394 | Update a small slice of a large netcdf file without overwriting the entire file. | daliagachc 15239248 | open | 0 | 1 | 2019-10-12T16:06:18Z | 2021-07-04T03:32:00Z | NONE | **MCVE Code Sample**

```python
import numpy as np
import xarray as xr

orig = '/tmp/orig.h5'
ii = 100000
data = xr.Dataset(
    {
        'x': ('t', np.random.randn(ii)),
        'y': ('t', np.random.randn(ii)),
    },
    coords={'t': range(ii)},
)

# function to save the large file using chunksizes
def save(ds, path, **kwargs):
    chunksize = 100
    var_dic = {}
    for var in ds.variables:
        var_dic[var] = {'chunksizes': (chunksize,)}
    ds.to_netcdf(path, encoding=var_dic, **kwargs)

save(data, orig)
data.close()

# open the file using dask
data_1 = xr.open_mfdataset([orig], chunks={'t': 100})
# change variable x
data_1['x'] = data_1['x'] + 20
data_1.close()
# update only variable x. This works!
data_1['x'].to_netcdf(orig, mode='a')

# try the same, but now update only a slice of the x variable
data_1 = xr.open_mfdataset(orig, chunks={'t': 100})
data_1['x'] = data_1['x'] + 20
data_1.close()
# update only a slice of variable x. This doesn't work!
data_1['x'][{'t': slice(0, 10)}].to_netcdf(orig, mode='a')
```

**Expected Output** **Problem Description** Hi, I have a large dataset that does not fit in memory. Let's say I only want to update a small portion of it. Is there any way to update this small portion without having to rewrite the entire file? I was fiddling around and found a way to update one variable at a time, but I want to be able to update only a subsection of that variable. **Output of**
|
{
"url": "https://api.github.com/repos/pydata/xarray/issues/3394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
xarray 13221727 | issue |
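xarray's `to_netcdf(mode='a')` rewrites whole variables, which is why the slice write in the issue fails. The behavior being asked for — touching only the bytes of a small slice in a large on-disk array — can be illustrated with a plain numpy memmap. This is only an analogue of the netCDF case; the file name and sizes here are made up for the sketch:

```python
import os
import tempfile

import numpy as np

path = os.path.join(tempfile.mkdtemp(), 'orig.dat')
ii = 100_000

# create the large on-disk array once
mm = np.memmap(path, dtype='float64', mode='w+', shape=(ii,))
mm[:] = 0.0
mm.flush()
del mm

# later: reopen read-write and update ONLY the first ten values;
# the remaining 99,990 entries are never rewritten
mm = np.memmap(path, dtype='float64', mode='r+', shape=(ii,))
mm[0:10] += 20.0
mm.flush()
del mm

check = np.memmap(path, dtype='float64', mode='r', shape=(ii,))
```

For an actual netCDF file, the netCDF4 library supports the same pattern: open the file in append mode and assign into a slice of a variable directly, bypassing `to_netcdf` entirely.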
Advanced export
CREATE TABLE [issues] (
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[number] INTEGER,
[title] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[state] TEXT,
[locked] INTEGER,
[assignee] INTEGER REFERENCES [users]([id]),
[milestone] INTEGER REFERENCES [milestones]([id]),
[comments] INTEGER,
[created_at] TEXT,
[updated_at] TEXT,
[closed_at] TEXT,
[author_association] TEXT,
[active_lock_reason] TEXT,
[draft] INTEGER,
[pull_request] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[state_reason] TEXT,
[repo] INTEGER REFERENCES [repos]([id]),
[type] TEXT
);
CREATE INDEX [idx_issues_repo]
ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
ON [issues] ([user]);