note: my apologies, these notes are rather scattered. I jotted them down before I realized there was a wiki. if I have time later I'll come back and TeX/clean them up. -cm

6/16-
MRA decomposition for R^2:
let + denote direct sum and * denote tensor product; let v_j be the 1D space spanned by the scaling function at level j, and w_j the 1D space spanned by the wavelet function at level j. then we have:

V_(j+1) = v_(j+1) * v_(j+1) = (v_j + w_j) * (v_j + w_j) = (v_j*v_j) + (v_j*w_j) + (w_j*v_j) + (w_j*w_j)

where V_(j+1) is the 2D space spanned by the 2D scaling function at level j+1.
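the four tensor-product subbands are exactly what one step of a separable 2D DWT produces: filter rows then columns with the low/high pass pair. a minimal sketch with the Haar filters (function names are mine):

```python
import numpy as np

def haar_step_1d(x):
    """One level of the 1D Haar DWT along the last axis (even length)."""
    s = (x[..., ::2] + x[..., 1::2]) / np.sqrt(2.0)  # v_j part
    d = (x[..., ::2] - x[..., 1::2]) / np.sqrt(2.0)  # w_j part
    return s, d

def haar_step_2d(img):
    """Separable 2D step: transform rows, then columns, yielding the
    four tensor-product subbands v*v, v*w, w*v, w*w."""
    s, d = haar_step_1d(img)      # along rows (last axis)
    ll, lh = haar_step_1d(s.T)    # then along columns of the low band
    hl, hh = haar_step_1d(d.T)    # and of the high band
    return ll.T, lh.T, hl.T, hh.T
```

since the Haar pair is orthonormal, the step preserves energy: the squared norms of the four subbands sum to the squared norm of the input.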

note that VAPOR uses frequency compression (i.e. level-dependent hard thresholding with threshold coefficient = 0)

6/17-
ex- show that a wavelet basis is orthogonal iff a_n is orthogonal to a_(n+2k) for all k != 0, where a is the vector of scaling coefficients.
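the condition in the exercise is easy to verify numerically; here it is checked for the Daubechies-2 scaling coefficients (a sketch, function name is mine):

```python
import numpy as np

def is_orthogonal_filter(a, tol=1e-12):
    """Check the exercise's condition: a has unit norm and is orthogonal
    to all of its even translates, i.e. <a_n, a_(n+2k)> = delta_0k."""
    a = np.asarray(a, dtype=float)
    if abs(np.dot(a, a) - 1.0) > tol:
        return False
    return all(abs(np.dot(a[:-2 * k], a[2 * k:])) < tol
               for k in range(1, len(a) // 2 + 1))

# Daubechies-2 (db2) scaling coefficients satisfy the condition
s3 = np.sqrt(3.0)
db2 = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))
```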

setting up WaveLab on the analysis cluster: need to resolve path issues; correct the wavepath.m script.

6/20-
ex- let l_i = up_i(l) and h_i = up_i(h), where up_i is upsampling by 2^i and l, h are the low/high pass filter coeffs. show that
h_(i-1) * l_(i-2) * ... * l_1 * l -> psi (i -> infty) ; l_(i-1) * ... * l_1 * l -> phi (i -> infty)
where * is convolution. (this is a time-domain restatement of Mallat's infinite product formula)

6/21-
considering machine learning techniques for vortex extraction in the wavelet domain:
bagging, random forests, boosting, and other weak-classifier ensemble techniques.

bernoulli hyperprior problem.

more wavelet theory:

linear phase in the freq response => constant group delay (the filter delays all frequencies equally)

4 types of linear phase filter (with coeffs in R): even tap/symmetric, even tap/anti-symmetric, odd tap/symmetric, odd tap/anti-symmetric
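a quick numerical check of the odd-tap/symmetric case (example filter is arbitrary): pulling out the delay term e^{-jwc} with c = (N-1)/2 should leave a purely real frequency response.

```python
import numpy as np

# An odd-tap symmetric filter (one of the four types): its frequency
# response factors as e^{-jwc} times a real amplitude, i.e. exactly
# linear phase with delay c = (N-1)/2 samples.
h = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
c = (len(h) - 1) / 2

w = 2 * np.pi * np.arange(256) / 256
H = np.fft.fft(h, 256)

# removing the linear-phase factor leaves a purely real response
assert np.allclose(np.imag(H * np.exp(1j * w * c)), 0.0)
```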

"noble equalities"

the n-channel generalization of the Haar DWT is the n-point DFT.
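the noble identities are easy to sanity-check numerically. the first one says that downsampling by 2 followed by H(z) equals filtering by H(z^2) followed by downsampling by 2 (example filter and signal are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = np.array([0.25, 0.5, 0.25])

# (down by 2) then H(z)
y1 = np.convolve(h, x[::2])

# H(z^2): insert zeros between the taps, then (down by 2)
h_up = np.zeros(2 * len(h) - 1)
h_up[::2] = h
y2 = np.convolve(h_up, x)[::2]

assert np.allclose(y1, y2)
```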

6/24-
week 2 to do:
implement 3D FWT in C++, finish wavelet toolbox tutorial, learn IDL, learn MATLAB binary data import, vnc deb package
optional: check out bearcave's Java code, Donoho/Johnstone's universal threshold, LiftPack

need better domain knowledge... definition of a vortex

6/26-
linear phase + orthogonal filterbank + FIR => Haar

polyphase representations of filterbanks
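the point of the polyphase representation is computational: in a decimating filter bank only the surviving samples need to be formed. a sketch of the 2-channel case (function names are mine; even-length x and h assumed for alignment):

```python
import numpy as np

def decimating_filter(x, h):
    """Filter by h then keep every other sample, computed directly."""
    return np.convolve(h, x)[::2]

def polyphase_decimating_filter(x, h):
    """Same output via the polyphase components H_e, H_o: only samples
    that survive the decimation are ever computed."""
    ye = np.convolve(h[::2], x[::2])    # H_e acting on the even samples
    yo = np.convolve(h[1::2], x[1::2])  # H_o acting on the odd samples
    y = np.zeros(len(ye) + 1)
    y[:-1] += ye
    y[1:] += yo        # the z^{-1} branch enters with one sample of delay
    return y
```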

checking out wavelet lifting:
good tutorials at www.bearcave.com, the MIT OCW course on filter banks, and the "Building Your Own Wavelets at Home" paper by Sweldens

project ideas from week 2:
dbN wavelets are possibly the most general for smooth scalar fields... use Taylor approximations and filter these.

possibly worth using nonseparable 3d wavelets with lifting and/or learning our own filterbanks from data.

results from separable filterbanks should be better for well aligned vorticity data.

checking out: empirical Bayes for wavelet shrinkage (Johnstone and Silverman, 2004)

7/1-
connection between lifting filters and 2-channel filters in the z domain:
H(z) = H_e(z^2) + z^(-1) * H_o(z^2) (2-channel case; in general, one polyphase component per coset of the dilation)
primal and dual vanishing moments, wavelet vanishing moments vs scaling function vanishing moments
Neville filters- n-D generalization of coiflets
ex- show that all FIR filters are Neville of order 2 / tau=H^(1)(1) (obvious)

7/5-
notes on the Sweldens/Kovacevic paper... this is definitely the paper to read if you need to implement a 3D wavelet lifting scheme.

interpolating filters can be written in terms of the cosets of the dilation operator
special case- interpolating scaling functions in 1D: phi(0)=1, phi(n)=0 for n!=0.

P a filter matrix => if p* is the impulse response of P*, then p*(k) = p(k-c)... what is the relation in the z domain?

are Neville filters interpolating in the above sense? no. but the resulting 2-channel high-pass filters are.

you can make a coiflet out of a Neville filter and its adjoint: PP* = C

check out wavelets and one point quadrature formulae

biorthogonality: with H = (D)h and H~ = (D)h~, we need HH~ = I

connections between predict and update, DM (dual moments) and PM (primal moments), via the moment conditions
DM and PM are defined in terms of 2-channel filter banks...
implies that predict filters shift polynomials

relation between lifting and 2 channel.. note similarity to lazy filter reconstruction

7/11-
reading: adaptive wavelet transforms via lifting
predict suppresses low-order polynomials vs update suppresses high-order polynomials... but you can switch these
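the predict/update split is quick to demo in code. a sketch of one level of a linear-predict lifting scheme (CDF(2,2)-style; the function name and the periodic boundary handling via roll are my choices): on a linear ramp the predict step annihilates the details, and the update step keeps the coarse signal's mean equal to the input's.

```python
import numpy as np

def lifting_analysis(x):
    """One level of a linear-predict lifting scheme (periodic boundary).
    Predict: estimate each odd sample as the average of its even
    neighbors.  Update: correct the evens so the coarse signal keeps
    the input's mean."""
    s = x[::2].astype(float)
    d = x[1::2].astype(float)
    d = d - 0.5 * (s + np.roll(s, -1))   # predict kills linear trends
    s = s + 0.25 * (d + np.roll(d, 1))   # update preserves the mean
    return s, d

# on a linear ramp the details vanish (except the wrap-around sample)
x = np.arange(16, dtype=float)
s, d = lifting_analysis(x)
```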

7/14-
thresholding ideas:

Farge threshold: switch out the enstrophy-based sigma for a MAD estimate of sigma; compare performance somehow

physical considerations: power spectrum; can the incoherent portion be modeled as additive white noise (possibly exponentially distributed) in the velocity data?

Farge's threshold seems suspect... it's applied like the universal threshold but without the same assumptions

idea for methodology:
use physical considerations to inform a level-dependent thresholding scheme. then validate physically. THEN compare to common statistical schemes.

SURE picks the threshold minimizing an unbiased estimate of the risk (the "incoherent" portion of the data's enstrophy under hypothetical repeated sampling); this is basically what you want.
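the MAD-based sigma estimate (as a drop-in for the enstrophy-based one) and the Donoho/Johnstone universal threshold can be sketched as follows; function names are mine:

```python
import numpy as np

def mad_sigma(detail):
    """Robust noise-scale estimate from fine-level detail coefficients:
    sigma_hat = median(|d|) / 0.6745."""
    return np.median(np.abs(detail)) / 0.6745

def universal_threshold(detail, n):
    """Donoho/Johnstone universal threshold: sigma_hat * sqrt(2 log n)."""
    return mad_sigma(detail) * np.sqrt(2.0 * np.log(n))

def hard_threshold(coeffs, thr):
    """Hard thresholding: zero everything at or below the threshold."""
    coeffs = np.asarray(coeffs, dtype=float)
    return np.where(np.abs(coeffs) > thr, coeffs, 0.0)
```

on pure unit-variance Gaussian noise, mad_sigma should return roughly 1, and the universal threshold then suppresses nearly all of the noise coefficients.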
