pydata/xarray · issue #8473 · Regular (linspace) Coordinates/Index
open · 9 comments · created 2023-11-21 · updated 2024-04-18

### Is your feature request related to a problem?

Most of my dimension coordinates fall into three categories:

- Categorical coordinates
- Pandas multiindexes
- Regular coordinates, that is, of the form `start + np.arange(n) / fs` for some `start` and sampling frequency `fs`

I feel the way the last kind is currently handled in xarray is suboptimal (unless I'm misusing this great library), as it has the following drawbacks:

- Visually: it is not obvious that the coordinate is a linear space, since printing the dataset/array shows only some of the values.
- Computationally: applying scipy functions that require regular sampling (for example [scipy.signal.spectrogram](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.spectrogram.html)) is very annoying, as one has to extract `fs` and check that the coordinate is indeed regularly sampled. I currently use `step = np.diff(a)[0]` together with an assertion that `np.abs(np.diff(a) - step)` stays below a tolerance (a runnable sketch of this check appears at the end).

---

pydata/xarray · issue #8687 (body excerpt)

```python
import concurrent.futures

import numpy as np
import tqdm
import xarray as xr


def stack_dataset(dataset):
    # Common duration = overlap of the [start, end] intervals of
    # signals 1 and 2.
    dataset["common_duration"] = xr.where(
        dataset["start_time_1"] > dataset["start_time_2"],
        xr.where(
            dataset["end_time_1"] > dataset["end_time_2"],
            dataset["end_time_2"] - dataset["start_time_1"],
            dataset["end_time_1"] - dataset["start_time_1"],
        ),
        xr.where(
            dataset["end_time_1"] > dataset["end_time_2"],
            dataset["end_time_2"] - dataset["start_time_2"],
            dataset["end_time_1"] - dataset["start_time_2"],
        ),
    )
    # Keep bua-to-spike pairs from the same session and structure,
    # on different contacts, with both resampled paths present and a
    # common duration above 10.
    dataset["relevant_pair"] = (
        (dataset["Session_1"] == dataset["Session_2"])
        & (dataset["Contact_1"] != dataset["Contact_2"])
        & (dataset["Structure_1"] == dataset["Structure_2"])
        & (dataset["sig_type_1"] == "bua")
        & (dataset["sig_type_2"] == "spike_times")
        & (~dataset["resampled_continuous_path_1"].isnull())
        & (~dataset["resampled_continuous_path_2"].isnull())
        & (dataset["common_duration"] > 10)
    )
    dataset = dataset.stack(
        sig_preprocessing_pair=("sig_preprocessing_1", "sig_preprocessing_2"),
        Contact_pair=("Contact_1", "Contact_2"),
    )
    dataset = dataset.where(
        dataset["relevant_pair"].any("sig_preprocessing_pair"), drop=True
    )
    dataset = dataset.where(dataset["relevant_pair"].any("Contact_pair"), drop=True)
    return dataset


# `signal_pairs` is built earlier in the pipeline; split it into
# 100x100 contact blocks so each block can be stacked in a separate
# process.
stack_size = 100
signal_pairs_split = [
    signal_pairs.isel(
        dict(
            Contact_1=slice(stack_size * i, stack_size * (i + 1)),
            Contact_2=slice(stack_size * j, stack_size * (j + 1)),
        )
    )
    for i in range(int(np.ceil(signal_pairs.sizes["Contact_1"] / stack_size)))
    for j in range(int(np.ceil(signal_pairs.sizes["Contact_2"] / stack_size)))
]

with concurrent.futures.ProcessPoolExecutor(max_workers=30) as executor:
    futures = [executor.submit(stack_dataset, dataset) for dataset in signal_pairs_split]
    signal_pairs_split_stacked = [
        future.result()
        for future in tqdm.tqdm(
            concurrent.futures.as_completed(futures),
            total=len(futures),
            desc="Stacking",
        )
    ]
signal_pairs = xr.merge(signal_pairs_split_stacked)
```
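
To make the regularity check from #8473 concrete, here is a minimal runnable sketch under stated assumptions: the helper name `regular_step`, the tolerance `tol`, and the toy `DataArray` are illustrative choices, not xarray API.

```python
import numpy as np
import scipy.signal
import xarray as xr


def regular_step(coord, tol=1e-9):
    # Hypothetical helper: return the step of a regularly sampled 1-D
    # coordinate, failing if any spacing deviates from the first step
    # by more than `tol`.
    steps = np.diff(coord)
    step = steps[0]
    assert (np.abs(steps - step) < tol).all(), "coordinate is not regularly sampled"
    return step


# Toy signal sampled at fs = 250 Hz.
fs = 250.0
t = np.arange(1000) / fs
da = xr.DataArray(np.sin(2 * np.pi * 10 * t), dims="time", coords={"time": t})

# Recover fs from the coordinate, then call scipy directly.
step = regular_step(da["time"].values)
f, tt, Sxx = scipy.signal.spectrogram(da.values, fs=1.0 / step)
```

With a dedicated regular index of the kind the feature request asks for, the `fs` extraction and the assertion above would be unnecessary, since the index itself would carry `start` and `step`.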