https://github.com/pydata/xarray/issues/735#issuecomment-177039255 (user 15167171, author_association: NONE, created 2016-01-30T01:18:08Z, updated 2016-01-30T01:26:34Z)

Note: I mistyped `vecnorm` instead of `vnorm` previously.

> I would be OK adding a norm method, although I don't think there's a super strong need for it -- usually I've been happy writing expressions like `(x ** 2).sum(['shapes', 'x']) ** 0.5` instead.

When using large dask arrays, that operation tended to fill up RAM. I'm not sure why, but it made the dask objects very large; I don't have a good enough understanding of how those graphs are constructed to track down the problem. I tried out `da.std(['shapes', 'x'])` and it worked like a charm, but it felt a little silly rescaling it to get the norm.

> You would either need to implement this all in xarray, or preferably write da.linalg.norm in dask and use that. Take a look at the scipy source code for this function -- I suspect you could port this almost directly to dask.

`dask` has a `vnorm` (http://dask.pydata.org/en/latest/array-api.html?highlight=norm#dask.array.core.Array.vnorm), which I am currently trying to make use of via the very helpful `ops._dask_or_eager_func('vnorm', n_array_args=1)`. Unfortunately, numpy calls its norm `norm` rather than `vnorm`, so I think I'll need to put in a switch on the array type before hitting `_dask_or_eager_func`.
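To make the "switch on the array type" idea concrete, here is a minimal sketch (not xarray's actual implementation): `_norm` is a hypothetical helper name, the `vnorm` branch assumes the older dask method linked above, and `dask.array.linalg.norm` / `numpy.linalg.norm` are used otherwise.

```python
import numpy as np

try:
    import dask.array as dsa
except ImportError:  # dask is an optional dependency
    dsa = None


def _norm(values, axis=None):
    """Hypothetical helper: route to the dask or numpy norm by array type."""
    if dsa is not None and isinstance(values, dsa.Array):
        # Older dask releases expose a ``vnorm`` method on Array objects;
        # newer ones spell it ``dask.array.linalg.norm``.
        vnorm = getattr(values, "vnorm", None)
        if vnorm is not None:
            return vnorm(axis=axis)
        return dsa.linalg.norm(values, axis=axis)
    # numpy spells the same operation ``norm``.
    return np.linalg.norm(values, axis=axis)
```

A wrapper like this could be applied to a `DataArray`'s underlying `.data` and the result re-wrapped, rather than relying on `_dask_or_eager_func` finding the same function name in both numpy and dask.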