Questions for Summer 2009 (reference Farge, Pellegrino and Schneider 2001 - see "annotatedfarge01" attachment):

A.    Is this true? The fundamental underlying question is whether this interpretation of turbulence is correct.  Can the flow be separated into the superposition of coherent vortices (not in statistical equilibrium) and incoherent random motions (in statistical equilibrium)?  What is meant by statistical equilibrium here, and why not try to measure it directly for the two components?

JC: Farge, Schneider, Kevlahan 1999 (attachment "128.pdf") provides a more detailed discussion of Farge et al.'s conjectures regarding what is being separated by wavelet thresholding. In FSK99 the claim is that the filtering process is essentially a denoising operation, separating Gaussian noise from non-Gaussian structure.

JC: Does later work by Farge et al maintain the same story? Pablo Mininni indicated that they may have weakened their position. Is there other literature supporting or refuting their claims?

B.    Can wavelets do this?  At the next level the question is, if the characterization above is correct, can wavelet filtering separate the two components?

JC: FSK99 discuss alternative methods (e.g. thresholding enstrophy and bandpass filtering) to CVS for separating "coherent" and "incoherent" components.

C.    Are the vorticity components the best variable to use in the filtering process?  How about using the enstrophy?  And if gradients help localize the field, why not go to even higher order operators before filtering?  At what order do the Galilean invariance and H and K theorems break down?

D.    The whole issue of thresholds:  What type of threshold best brings out the underlying physical components?  Can one design the wavelet and the threshold to extract the components you want?  Hard vs. soft thresholds ....

JC: FSK99 has further details on the thresholding process and its rationale.

E.    How do the results from TG compare with those of VM?  Is the underlying flow the issue, or the resolution?  We have two different TG simulations of differing resolution.  What can we learn by comparing those?

F.    How sensitive are the results to wavelet choice, and is that telling us anything?

JC: FSK99 claims that the process is relatively insensitive to wavelet family (though in later papers they seem to advocate 12-tap coiflets).

G.    What do these percentages mean in light of the fact that there are so many more coefficients needed to describe the small scales than the large scales?  Is there a better way to express this, such as the fraction of coefficients at each scale retained, or the amount of information at each scale retained?

JC: Is there any discussion in any of the Farge papers regarding where (what level) the "incoherent" coefficients are coming from? This seems like something we should explore. If the first-level detail coefficients are simply eliminated, how do the results compare? Would this give us an indication of the possible role of grid-level noise?

H.    Is this the only interpretation of Figure 1?  How constraining is the spectrum?  What if one constructed a "flow" from an exactly -5/3 spectrum, but using random phases, then filtered it and reconstructed Figure 1: what would it look like?  Is the physical interpretation provided necessary?

I.    How constraining are the pdfs?  How different from Gaussian are the two components?

J.    Work with Kenny uncovers both minimally and maximally helical vortical structures.  We should apply those techniques to both filtered and unfiltered data.

K.    Kenny and I get very different results.

L.    Again, J.

M.    This might well be true, but can it be demonstrated by wavelet filtering?  More fundamentally, can we clarify the physical picture that accurately accompanies the Kolmogorov spectrum?  Do the phase relationships contain anything universal?

Paper Summaries

On the structure and dynamics of sheared and rotating turbulence: Direct numerical simulation and wavelet-based coherent vortex extraction, Frank G. Jacobitz, Lukas Liechtenstein, Kai Schneider, and Marie Farge, Phys. Fluids 20, 045103 (2008)

I think this is the most recent Farge/Schneider journal article on CVE (though not as first authors). Some of the significant aspects of the article (see JLSF2009 attachment) are:

  1. The flow tested is not isotropic homogeneous turbulence. It is sheared turbulence with varying degrees of rotation.
  2. In addition to the analysis performed in the many other papers, the authors advance the DNS in time using only the coherent components and compare results to the "total" flow. I.e., they restart the simulation at a time t after tossing out the incoherent components, evolve the simulation to a later time, and then compare the solution based on the evolved coherent components of the field with the unfiltered simulation. The two results "cannot be distinguished".
  3. Another new analysis looks at which coefficients make up the incoherent part of the flow. As Chris noted last year, the discarded coefficients are found almost entirely at the smallest scale.
  4. A number of experiments were performed where the "inclination angle of the vortical structures" is varied. This may have relevance with regard to directional bias of separable wavelet filters. However, their claim is that inclination angle is related to rotation rate, and higher rotation rate corresponds to higher coherency of the flow.
  5. The Coiflet 30 wavelet is used, not the usual Coiflet 12. No explanation is given.
  6. There is some additional relevant discussion on threshold selection.

Some notable excerpts from the paper:

  • "The spectral transport terms suggest that the dynamics of coherent and incoherent components are decoupled. The coherent vortices are responsible for the nonlinear dynamics of the flow and determine the future evolution of the flows. The incoherent part is of dissipative nature and can be modeled as turbulent diffusion."
  • "..the flow is split into two parts: a coherent flow, corresponding to the coherent vortices, and an incoherent flow, corresponding to the background noise."
  • "Coherent vortex extraction, based on the orthogonal wavelet decomposition of vorticity, is applied to split the flow into coherent and incoherent parts. It was found that the coherent part preserves the vortical structures using only a few percent of the degrees of freedom. The incoherent part was found to be structureless and of mainly dissipative nature. With increasing rotation rates, the number of wavelet modes representing the coherent vortices decreases, indicating an increased coherency of the flow."

Questions:

  1. Are the physical implications as strong in this paper as in the earlier work?
  2. Significance of incoherent components coming from smallest scales?

Coherent vortex extraction in 3D homogeneous turbulence: comparison between orthogonal and biorthogonal wavelet decompositions, O. Roussel, K. Schneider, M. Farge, Journal of Turbulence, 6 (11) 2005

Compares orthogonal (Coiflet 12) and biorthogonal (Harten-3) wavelets in the context of CVE. Biorthogonal wavelets are suggested by Vasilyev due to superior properties for solving PDEs. However, the lack of orthogonality implies that they are not conservative: Z != Zc + Zi, whereas Z = Zc + Zi holds for orthogonal wavelets.

  • The incoherent field resulting from the orthogonal wavelet is claimed to be Gaussian, structureless, and decorrelated.
  • The incoherent field from the biorthogonal wavelet is neither structureless nor Gaussian (using the same compression rate as the orthogonal case).
  • Both decompositions retain 99% of the energy in 3% of the components. 75% and 69% of the total enstrophy is retained for orthogonal and biorthogonal, respectively.
  • More details are given on Biot-Savart - performed in frequency space, as we have done.
  • The biorthogonal wavelet does not conserve helicity or enstrophy (due to the lack of orthogonality).

Vasilyev Papers; 6/15/09

Regarding the result in Farge et al.'s 2008 paper:

A guess about what's actually happening:

They evolve some numerical PDE approximation for a while, then make a copy where they do the CVE step, then evolve both copies some more. They then show that the difference between the two copies is small.

If their evolution scheme is dissipative (which almost all numerical PDE schemes are to some extent, right?), then the high-frequency components will be damped like exp(-c*t*|k|^2), where c > 0 is a constant given by the scheme and |k|^2 is the squared l2 norm of the wavenumber. So after a while (when they filter) there won't be much left for the top-level wavelets to pick up, hence the 97% compression factor. (A small numerical illustration follows below.)
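A quick numerical illustration of this damping argument (my own sketch in Python; the constants c and t are made up, not taken from their paper):

    import numpy as np

    # Evolve a white-noise field under pure spectral diffusion,
    # u_hat(k, t) = u_hat(k, 0) * exp(-c * t * |k|^2), and check how little
    # energy survives at high wavenumber by the time the filter is applied.
    n, c, t = 1024, 1e-4, 50.0
    k = np.fft.rfftfreq(n) * n
    u0h = np.random.randn(k.size) + 1j * np.random.randn(k.size)
    uth = u0h * np.exp(-c * t * k**2)
    high = k > n / 4  # roughly the finest scales the top-level wavelets see
    print("energy fraction left at high k:",
          np.sum(np.abs(uth[high])**2) / np.sum(np.abs(uth)**2))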

It looks like there's some other research on using wavelets for CVS, from a more numerical perspective. One notable guy is Oleg Vasilyev, who is at CU. I've attached two of his papers on using wavelets for grid adaptation. John found some more on CVS. It looks like he's doing some flavor of multigrid using 2nd-generation wavelets. Perhaps Farge et al. are doing something close to a 2-level multigrid without knowing it?

This approach may also go some way towards answering John's question about what happens if you do the same CVE/CVS on an interpolated grid of twice the side length. -chris


Meeting Notes

6/9/09

Meetings: we will meet regularly once a week. A schedule will be set up once Jesse returns to Boulder.

Paper contents: reproduce the Farge et al. results using the TG data set. Offer alternate explanations for Farge's claims. Possibly "improve" results using other wavelets, thresholding, etc.

Wiki: keep updated to facilitate collaboration

  • Links to relevant papers and summaries of their significant contributions with regard to our work
  • Summaries of our own efforts and findings
  • Meeting notes

Next steps:

  1. Review literature, in particular later Farge papers
    • Are later methods consistent with those used in the FPS2001 paper?
    • Are claims consistent with FPS2001?
    • What questions are answered?
    • New questions?
  2. Identify experiments to run
    • Reproduce Farge results
    • Novel work

Discussion items:

  • Discussed a number of possible approaches for the inverse curl operator. We may be able to exploit the fact that the field is incompressible (divergence-free) and periodic (need to verify); a Fourier-space sketch follows below.
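A minimal sketch of the Fourier-space (Biot-Savart) approach, assuming a periodic, divergence-free field on a cubic grid (the function and variable names here are mine, not from our code):

    import numpy as np

    def inverse_curl(wx, wy, wz):
        # Recover velocity from vorticity for a periodic, divergence-free
        # field: in Fourier space, u_hat = i k x w_hat / |k|^2.
        n = wx.shape[0]
        k1 = np.fft.fftfreq(n) * n
        kx, ky, kz = np.meshgrid(k1, k1, k1, indexing='ij')
        k2 = kx**2 + ky**2 + kz**2
        k2[0, 0, 0] = 1.0  # avoid dividing by zero at the mean mode
        wxh, wyh, wzh = (np.fft.fftn(w) for w in (wx, wy, wz))
        uxh = 1j * (ky * wzh - kz * wyh) / k2
        uyh = 1j * (kz * wxh - kx * wzh) / k2
        uzh = 1j * (kx * wyh - ky * wxh) / k2
        return tuple(np.real(np.fft.ifftn(uh)) for uh in (uxh, uyh, uzh))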

Action items for next week:

  • complete lit review (chris, jesse)
  • complete inverse curl operator (jesse)
  • Verify TG field is periodic (john)

6/19/09

Action items:

  • Run Farge experiments. Need to verify that the coherent and incoherent fields produced last summer used the current Farge threshold strategy (chris)
    • Produce Farge plots with TG data (energy spectrum, velocity pdf, vorticity pdf, helicity pdf) (jesse)
    • Produce plot showing fractional incoherent wavelet coefficients at each scale (see fig 6, JKSF2008) (jesse)
    • Plot showing energy at each scale (see fig 2, GVK2005) (jesse)
  • Explore effect of data translation (chris)

6/26/09

We have only six weeks remaining this summer before Chris departs. If we're going to produce a paper we'll need to wrap up research/experiments in the next two to three weeks. Simply reproducing the Farge results with the TG data is probably not sufficient for publication. There are numerous other publications that have demonstrated the compression capabilities of the Farge approach using various forms of turbulence. The efforts we've made to date to explain what the "CVE" process is separating the field into have been inconclusive. It is our belief, however, that turbulent fields are not being separated into physically different flows as Farge states in earlier work (in later work Farge appears to back off this position but still maintains that the "incoherent" components are "noise"). We're not convinced. We believe the "incoherent" components, which appear mostly at the grid level, are simply small enough in amplitude that their removal has little impact on the information content of the field. How to prove this is not clear.

Chris has been exploring one of the other applications of the Farge method, coherent vortex simulation (CVS): evolving the simulation using only the coherent components. Chris feels that the Farge approach is ill suited to CVS. Initial experiments with translating the periodic field and examining the changes to the coherent/incoherent components support this. The dyadic nature of the wavelets employed by Farge is the root of the problem.

Given the difficulties in addressing the initial questions posed by Mark, and the changes in position by Farge in later papers, we've decided on a slightly different direction for the paper. As planned, we will compare the results of CVE on the TG data set with those obtained by Farge and other authors on other turbulence data sets. Jesse will lead this effort. Chris, however, will explore the CVS issue. He believes he can show why the Farge approach is flawed, and possibly offer an alternate method that is more appropriate for the solution of the NS equations.

JC

7/23/09

Mark arrives in NY on Sunday and begins drive back to Boulder. Should arrive 8/1. Cell phone: 303.618.0844

Need to explore stability of the "incoherent" component.

Action items:

  1. Reconstruct velocity field using "converged" threshold. Update velocity plots (jesse)
  2. Examine mid-range coefficients. Pick a threshold value between the Farge-selected threshold and something smaller. Reconstruct the vorticity field in this range. Volume render the reconstructed field. Alternatively, reconstruct the incoherent field selected by the Farge method without including the coefficients at the smallest scale (jesse)
  3. Apply the Farge threshold method to both the coherent and incoherent fields resulting from (1) above. What are the results? Are new coherent and incoherent fields generated? (jesse)
  4. Explore how well the incoherent field fits a Gaussian (Chris suggests the Q-Q method; see the sketch after this list). Relatedly, look at how well fields reconstructed at each level, from finest to coarsest, match a Gaussian. I.e., reconstruct the field using only the smallest-scale coefficients, then the next-to-smallest, and so on. (jesse)
  5. Filter the velocity data and compare to the reconstructed velocity from (1) above. Use the Farge method to select the threshold. If the compression is very different, use the same compression rate as in (1). (jesse)
  6. Call Keith Julien (john)
  7. Meeting with Oleg (chris)
  8. Fetch additional TG time steps (john)
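A minimal sketch of the Q-Q check from item 4, using scipy's probplot against a normal distribution (the input array here is a random stand-in for the real incoherent vorticity):

    import numpy as np
    from scipy import stats
    import matplotlib.pyplot as plt

    # Gaussian data falls on a straight line in a normal Q-Q plot;
    # heavy or light tails bend away at the ends.
    w_incoh = np.random.randn(100000)  # stand-in for the incoherent vorticity
    stats.probplot(w_incoh, dist="norm", plot=plt)
    plt.savefig("qq_incoherent.png")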

Chris- Status Update 6/21/09

coherent vortex extraction:

After a few weeks of reading Farge papers and trying simple tests (see below), my 2 cents is as follows:

'Is CVE extracting regions of high vorticity?' Definitely yes.

'Is the so-called incoherent field really just physical noise?' The data is synthetic, so who knows? But it certainly seems possible to model it as noise. This is perhaps similar to sub-grid models from large eddy simulation. How sensitive this model is to various flow parameters is unclear. From what I saw last summer it's not terribly sensitive to various wavelet parameters. From various discussions with people here it also seems possible that it includes part of the inertial range.

'Are the structures retained by CVE physically meaningful?' I don't know. I think they could be numerically meaningful for simulations (Farge's CVS), but there are some problems:

1- The CVE/CVS papers all use flow simulations with periodic BCs, so their method should be agnostic to the coordinate system (i.e. produce the same results if you shift the data). This definitely isn't the case with the wavelets they're using. I messed around with this idea a bit in Python (see attached scripts, and the 1D sketch below) and found that the wavelet coefficient tree is altered rather dramatically by shifting the data. The relative energy at each level changes a little (on the order of 1e0 to 1e-2 depending on the level), but the locations/signs of the coefficients change a lot, especially at the lower (coarser) levels. These levels are completely determined by what is removed at the higher levels, and none of the wavelets used are linear-phase filters, so they cause dispersion (though I think Farge et al. try to minimize this effect later by using very long wavelets). This could be trouble for CVS because in a wavelet basis the PDEs you need to solve at each timestep will produce very different looking linear systems if you translate the data. Also, I was just translating by half the period. I'd imagine the results would be even worse if you translated by some non-integer quantity (i.e. by interpolating, upsampling, translating by say 5, and then lowpass filtering/downsampling). Doing this abstractly would result in a function that isn't necessarily in ANY of the approximation spaces V_k for finite k.
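A 1D reconstruction of the shift experiment (sketch; PyWavelets' 'coif2' is the 12-tap coiflet, and the input is random stand-in data, not the actual vorticity):

    import numpy as np
    import pywt

    # Compare the wavelet coefficient tree of a periodic signal before and
    # after a circular shift by half the period: per-level energies move a
    # little, individual coefficient locations/signs move a lot.
    n = 256
    x = np.random.randn(n)
    for shift in (0, n // 2):
        coeffs = pywt.wavedec(np.roll(x, shift), 'coif2', mode='periodization')
        print(shift, [round(float(np.sum(c**2)), 4) for c in coeffs])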

2- I also looked at Mark's k^{-5/3} energy, random-phase idea. One problem I had with this, though, is that the x, y, z components are all out of phase with each other as well, so taking the curl produces tons of high-frequency energy and the result is unphysical. I read somewhere that vorticity should have a k^{1/3} energy spectrum, so I tried that with random phases and the Farge method removed everything (see attached script; a 1D version of the construction is sketched below).
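For reference, a 1D version of the random-phase construction (my sketch; the amplitude exponent is -5/6 so that the energy spectrum |u_hat|^2 goes as k^{-5/3}):

    import numpy as np

    # Impose a k^(-5/3) energy spectrum with uniformly random phases and
    # invert: the result has the Kolmogorov spectrum but no coherent
    # structures, so it can be fed through the CVE filter for comparison.
    n = 4096
    k = np.fft.rfftfreq(n) * n
    amp = np.zeros(k.size)
    amp[1:] = k[1:] ** (-5.0 / 6.0)  # |u_hat|^2 ~ k^(-5/3)
    phase = np.random.uniform(0.0, 2.0 * np.pi, k.size)
    u = np.fft.irfft(amp * np.exp(1j * phase), n)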

coherent vortex simulation:

To me the question of whether CVS is a good numerical idea seems a lot easier to answer. My thoughts on this so far are:

1- Some of the Farge et al. results in this area seem closely related to other established methods, such as subgrid models for LES and Oleg Vasilyev's adaptive grid refinement, though I haven't seen any references to work of this sort in their papers. Maybe figuring out what the connections are exactly would be useful for somebody?

2- I don't think the tools they're using (separable 1st-generation wavelets) are the best for the job. Again, translation is a problem (see the subspace comment above); ditto differentiation, for the same reason. Farge et al. have a (pretty hacky, I think) solution to this wherein they include a 'safety zone' in their wavelet-based Petrov-Galerkin method for updating the vorticity at each timestep. See, for example, the Farge/Schneider book chapter from 2006.

Note (chris 6/26/09): I think this is the biggest area for improvement in their method. They're basically forced to guess where the flow is going to go in wavelet space and then include those vectors in their Galerkin solver. This introduces inaccuracy and inefficiency. If they were using an MRA that was invariant under translation/differentiation (e.g. prolate spheroidal wave functions or Meyer wavelets) then they wouldn't have to guess.

3- There are (at least) two alternative ideas. One: keep their basic CVS idea (toss out small coefficients, solve the vorticity equation in the wavelet domain using some kind of Galerkin scheme) but replace the basis with a better one. One obvious choice would be prolate spheroidal wave functions (PSWFs) and the associated wavelets. The V_k defined by PSWFs have much nicer properties with respect to translation and differentiation, which makes them much better suited to solving PDEs with periodic BCs in general. The multiresolution aspect should make them more efficient on highly compressible fields such as vorticity. This approach is also amenable to a parallel implementation, I think. I'm guessing Farge et al. used the tools they did because a lot of the newer wavelet research (2nd generation, PSWF, etc.) was either very new or not yet around when they started working on this stuff. For numerical PDE the newer tools really make a big difference, though.

Two: adaptively refine the spatial grid and solve with 2nd-generation wavelets. This is the approach taken by Vasilyev. His papers on the subject are more numerically oriented and rigorously motivated than the Farge approach. Advantages: better suited for arbitrary domains/boundary conditions. Disadvantages: apparently very difficult to parallelize.

Filtered Data July 22nd

I used the Donoho and Johnstone universal threshold and the Coiflet 12 wavelet to filter the TG data, using the iterative method described in Jacobitz et al. 2008 (number 240 on the Farge website).  The first threshold was computed from the variance of the original vorticity, producing the first pair of filtered coherent and incoherent vorticity fields.  I then took the variance of the incoherent vorticity field and used this as the next threshold.  Each threshold is determined by the incoherent vorticity from the previous step, but each filtering pass is applied to the original vorticity.  (A sketch of the iteration appears below.)
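A 1D sketch of this iteration as I understand it (assuming an orthogonal transform, so Parseval gives the incoherent field's variance directly from the discarded coefficients; 'coif2' is PyWavelets' 12-tap coiflet):

    import numpy as np
    import pywt

    def iterate_threshold(w, wavelet='coif2', n_iter=4):
        # Donoho-Johnstone universal threshold, iterated: each new threshold
        # comes from the variance of the previous incoherent field, but every
        # split is applied to the coefficients of the ORIGINAL field.
        n = w.size
        flat = np.concatenate(pywt.wavedec(w, wavelet, mode='periodization'))
        var = w.var()  # first threshold uses the total field variance
        for _ in range(n_iter):
            eps = np.sqrt(2.0 * var * np.log(n))
            incoh = flat[np.abs(flat) <= eps]
            var = np.sum(incoh**2) / n  # Parseval: incoherent field variance
        return eps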

The first filtering pass produced a vorticity field that contained 61% of the enstrophy and had a compression ratio of 0.9987.  In the step used for the velocity reconstruction (the fourth iteration), the coherent vorticity contained 81% of the enstrophy and the coherent velocity contained 98.8% of the kinetic energy, with a 0.9952 compression ratio.  I did not have time to reconstruct the incoherent velocity, but the plots for the pdf of vorticity (with a Gaussian best fit of the incoherent vorticity), the pdf of velocity (with a Gaussian best fit of the coherent velocity), and the pdf of helicity are attached.  These agree reasonably well with the Farge et al. 2001 work.

I continued the iterative process to see if it converged while the inverse curl was running (we will probably not use these filtered velocities in the final paper, since it seems to make more sense to use the later iterations).  The largest-scale coefficients are almost completely retained for the coherent vorticity, and none of the smallest-scale coefficients are used.  It looks like the second level above the smallest-scale coefficients has converged, and the second-largest coefficients are also close to converging.  This may be because the smallest and largest coefficients have more extreme values (large or small absolute value), while the middle coefficients have more moderate values that will continue to be included at lower thresholds.  See compression7.dat and compression8.dat; they contain the compression information for each size scale of coefficients and each of the directions of the wavelet.  They are text files, and the end of each file contains the total compression of the iteration; these last two runs have the same total compression to four digits.

Filtered Data July 30th

After filtering the coherent and incoherent vorticity, I found that the filtering process essentially leaves them unchanged.  Filtering the coherent vorticity at first separated two components, but this was simply because the variance of the coherent vorticity (which sets the threshold) was large compared to the threshold used to make the coherent vorticity.  Continuing the iterative process yields an incoherent vorticity whose variance drops quickly.  After the second iteration the variance of the incoherent vorticity was a few orders of magnitude below the threshold that set the original coherent vorticity, and thus the next step in the process yields only the coherent vorticity and no new incoherent component.  When filtering the incoherent vorticity, you only shave off a small number of coefficients, which leaves the incoherent vorticity virtually unchanged.  The only reason that any coefficients change is that the threshold has not completely converged.  If the threshold had converged there would be no difference between the "filtered" and "original" incoherent vorticity (since the incoherent vorticity sets the threshold).

Looking at volume renderings of the coherent velocity, we can detect no significant difference in the histogram or in the image itself when compared to the original velocity.  This is also true when examining the velocity that was simply filtered directly using the Farge method (i.e. choosing a threshold, wavelet transforming the velocity, and separating the wavelet coefficients into coherent and incoherent components).  The directly filtered coherent velocity contains 99.6% of the total kinetic energy, while the inverse-curl coherent velocity contains 98.9%.  This seems to indicate that filtering the vorticity might not be necessary.

Looking at the volume rendering of the incoherent vorticity shows some signs of structure and non-Gaussian behaviour.  I also isolated each level of wavelet coefficients that makes up the incoherent vorticity.  The histograms of each of these levels have extreme values outside the Gaussian fit.  We have not yet determined a method for statistically analyzing these histograms, but it is likely that we will use the Shapiro-Wilk method.

In order to examine the coherent and incoherent fields at a compression ratio similar to that seen in Farge 2001, I have created the vorticity data sets in /ptmp/lordjw/tgdata, under w[x,y,z]_coh_97.raw and w[x,y,z]_incoh_97.raw.  These have 97.0% compression, and might help us determine if a lower threshold would yield an incoherent field that had no structure and was truly Gaussian.  PDFs of vorticity are in the attached files under vort97_pdf.ps.

Conf call - 1/22/10

Should the 2nd part of the paper cover: 1) comparison of the coherent and total fields under coherent structure analysis a la Kenny's tool, or 2) exploration of translation invariance (TI) issues?

Consensus is that 2nd part of paper should address TI.

How do we demonstrate the TI problem with TG data?

Chris: has code to generate new wavelet

Chris: new wavelet is not usable in practice (too big), but can demonstrate the problem

What is the audience for the paper, math or physics? For physics, what is practical importance of work? Phys of fluids paper needs to reference heavy math stuff, either via a published paper or as appendix.

Chris: the Farge CVS method introduces frequency resolution error

Chris: a sawtooth signal with the Haar wavelet should demonstrate the TI problem (+1/-1 periodic sawtooth with a width of 2 samples, shifted by one sample to demonstrate the problem). The L2 norm at each level will not be invariant between shifted and unshifted data when using a coiflet; this demonstrates the problem. (A sketch follows below.)
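A minimal sketch of that sawtooth experiment (assuming PyWavelets; with Haar the per-level L2 norms survive the one-sample shift, with the coiflet they do not):

    import numpy as np
    import pywt

    # +1/-1 periodic sawtooth with a width of 2 samples, shifted by one
    # sample; compare per-level L2 norms for Haar vs. a coiflet.
    n = 64
    saw = np.tile([1.0, -1.0], n // 2)
    for wav in ('haar', 'coif2'):
        for shift in (0, 1):
            coeffs = pywt.wavedec(np.roll(saw, shift), wav, mode='periodization')
            print(wav, shift, [round(float(np.linalg.norm(c)), 3) for c in coeffs])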

Chris: most wavelet code doesn't handle circular convolution correctly with large wavelets (wavelets as wide as the data)

Chris will be in Boulder in late March.

Next steps:

Jesse completes analysis of the coiflet incoherent component

Jesse translates the data by some amount and re-applies the analysis. Should start with a simple 1D signal (sawtooth).

Chris verifies the Python code correctly handles the new wavelet and passes the code on to Jesse

Jesse: re-runs analysis with new wavelet

Statistical Analysis of Gaussian Fit - Jesse - 2/10/10

Statistical Tests of Incoherent Component:

Below are descriptions of two hypothesis tests that I ran on the incoherent vorticity to test how close its distribution is to Gaussian.  I also ran the chi-squared test on the original and coherent data.  Since the chi-squared test is run on binned data, we can examine which vorticity or velocity bin values vary most from a Gaussian pdf.  In the attachments (the file is chi2.tar) I have put plots of the reduced chi-squared value (on the y axis) versus the value in that bin.  We expect the reduced chi-squared value to be very close to 1 for Gaussian data, so the solid line across the bottom is the expected contribution from each bin.  Thus any point with a value larger than that line is a point that deviates from a Gaussian distribution.

I also plotted the points based on whether they were above or below a Gaussian.  The thick line (actually diamond shapes) indicates that the bin value is larger than for a Gaussian, while the thin line indicates that the bin value is below that of a Gaussian.  The bin values are normalized to standard deviations away from the mean of the distribution, so that we can compare properties of the pdfs as a function of the standard deviation instead of the true physical value of the distribution.

The file names indicate which data each plot shows; I included the original, coherent, incoherent, 80% compression, and 20% compression plots of the x vorticity to illustrate how the data deviates from a Gaussian as a function of threshold.

I have also put in that tar file a plot of the reduced chi-squared value versus the compression fraction for all of the incoherent data.  The highest compression fraction is 1.0, i.e. the full data.  Then comes the converged compression, which is 0.9923; the rest of the points are artificial thresholds chosen to look at various levels of compression (0.97, 0.90, 0.80, ..., 0.20, 0.10).  The result that is closest to Gaussian is the 0.80 compression, and I am computing 0.85 and 0.75 now to test if they are closer to Gaussian.  It seems clear from this plot that the Farge method is not extracting Gaussian noise, but it is possible that, using a different threshold, the wavelet filtering might extract Gaussian noise.

Descriptions of Statistical Tests:

We ran two types of hypothesis tests with the null hypothesis that the incoherent vorticity (and velocity) is Gaussian distributed, at the alpha = 0.01 significance level.  First is the chi-squared test, with test statistic chi^2 = sum_i (O_i - E_i)^2 / E_i, where O_i is the number of observed data points in bin i and E_i is the expected number of data points in bin i.  E_i is given by a Gaussian distribution computed from the mean and variance of the data.

The data is binned using the IDL histogram function, with the minimum and maximum bin values given by mu - 6*sig and mu + 6*sig, where mu is the mean and sig is the standard deviation.  A Monte Carlo simulation on random Gaussian-distributed test data volumes found that 37,275 bins in the histogram were the most likely to correctly accept the Gaussian test data as normally distributed.  It was also suggested in:

http://www.itl.nist.gov/div898/handbook/eda/section3/eda35f.htm

that we should only include bins where the expected number of data points is greater than or equal to 5.  We take the number of points from mu +/- 4.5*sig, or E_i >~ 5.53.  This suggestion seems slightly arbitrary (i.e. why not one or ten instead of five), but it only ignores the data in the wings of the distribution, so it makes the test more lenient (i.e. you are more likely to accept non-Gaussian data as Gaussian).  The number of degrees of freedom is the number of bins used (37,275 minus the number of bins where E_i < 5.53), minus two for estimating the mean and variance from the data, minus one (as defined by the NIST page above).  The probability to exceed must be less than 0.01 in order to reject the null hypothesis.
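A condensed Python sketch of this chi-squared procedure (my reimplementation, not the IDL code; the bin count and expected-count cutoff follow the description above):

    import numpy as np
    from scipy.stats import norm, chi2

    def chi2_gaussian(data, nbins=37275, min_expected=5.53):
        mu, sig = data.mean(), data.std()
        edges = np.linspace(mu - 6 * sig, mu + 6 * sig, nbins + 1)
        observed, _ = np.histogram(data, bins=edges)
        expected = data.size * np.diff(norm.cdf(edges, loc=mu, scale=sig))
        keep = expected >= min_expected  # drop sparse wing bins
        stat = np.sum((observed[keep] - expected[keep])**2 / expected[keep])
        dof = int(keep.sum()) - 2 - 1  # minus fitted mean/variance, minus one
        return stat / dof, chi2.sf(stat, dof)  # reduced chi^2 and PTE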

The second test (Anderson-Darling) uses the test statistic A^2 = -N - S, where

S = sum_{i=1}^{N} ((2i - 1)/N) * ( ln F(x_i) + ln(1 - F(x_{N-i+1})) ),

x_i is the data sorted from smallest to largest, N is the total number of data points, and F is the analytic CDF of the distribution being tested.
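The same statistic in a few lines of Python (sketch; note that scipy.stats.anderson implements a corrected variant, while this is the raw A^2 from the formula above with the Gaussian parameters estimated from the data):

    import numpy as np
    from scipy.stats import norm

    def anderson_darling_A2(data):
        # A^2 = -N - S, with F the fitted Gaussian CDF and x sorted ascending.
        x = np.sort(np.asarray(data).ravel())
        n = x.size
        f = np.clip(norm.cdf(x, loc=x.mean(), scale=x.std()), 1e-12, 1 - 1e-12)
        i = np.arange(1, n + 1)
        s = np.sum((2 * i - 1) / n * (np.log(f) + np.log(1 - f[::-1])))
        return -n - s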

Most Gaussian Compression:

Coiflet 12: 82% using the Anderson-Darling Test

Coiflet 30: 83.25% using the Anderson-Darling Test

Conf. Call - 2/12/10

Mark: for this process to work (removal of the Gaussian component of the signal), either the wavelet or the thresholding process has to be tuned to its removal

Chris: there may not be a Gaussian component to the field

Mark: if you can model small-scale turbulence as Gaussian random noise it would be a huge win

Mark: what wavelet is known to be best for denoising Gaussian noise?

Jesse: the best choice of wavelet for denoising is dependent on the signal.

Mark: would the Haar wavelet give a chi2 minimum at 82% like coif12 does?

Chris: wavelet dictionary should be tested because...

Chris: we have filter coefficients for "McKinlay Wavelet", not sure if machinery will work given size of data

Mark: hold off on McKinlay wavelet until Chris and Jesse have more time

Chris: Farge probably moved to coif30 because it suffers least from translation invariance issues due to its width.

Chris: the McKinlay wavelet may not be significantly better than coif30 for translation invariance

Chris will be here at the end of March for a week.

Mark: can we demonstrate TG is all signal?

Chris: an overcomplete dictionary will give better compression, but may still show weird statistical anomalies

Mark: conjecture: in the TG flow everything we have is signal, even the smallest scale structures. Can we show turbulence is a hierarchy of self-similar structures?

Action items

Jesse: rerun the chi-squared analysis using the coif30 and Haar wavelets

Chris: research wavelet packets and consider applicability to TG

CU Meeting - 3/28/10

Ideally the paper should have three parts: 1. reproduce the Farge results, 2. discuss the Gaussian fit issues, 3. new contributions beyond refuting Farge et al.'s work - what has the process taught us about the flow?

Jesse discussed issues with the non-Gaussianity of the incoherent data generated using the coif12 wavelet. Based on the chi^2 test, the data are not statistically Gaussian. Moreover, the Farge thresholding process does not do what is advertised: select the threshold that results in the most Gaussian incoherent data.

Is the optimal Gaussian threshold found by Jesse consistent across all variables and time steps?

Jesse is running another Gaussian fit test (Anderson-Darling)

Action

Compare the Gaussian fit results obtained with the coif12 wavelet against the coif30 and Haar wavelets. Need to test whether the ideal threshold is consistent across variables and time steps. - Jesse

Explore the possibility of using an "overcomplete dictionary" to either filter off Gaussian components or retain structures - Chris

Important Quote, likely the focus of the paper:

"The choice of the threshold is based on theorems proving optimality of the wavelet representation for de-noising signals – optimality in the sense that wavelet-based estimators minimize the maximum L2-error for functions with inhomogeneous regularity in the presence of Gaussian white noise."

The incoherent velocity PDF is Gaussian and the incoherent vorticity PDF decays exponentially.

From Kadoch et al. 2009 (with Farge and Schneider)

One more thought to add to paper:

What type of velocity distribution provides a k^2 kinetic energy power spectrum in Fourier space?

Then use whatever filter will find the k^2 component of the velocity.  Instead of trying to find Gaussian noise in vorticity, try to find the k^2 component in velocity.
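A quick numerical check of the premise (my sketch): Gaussian white noise in 3D has a flat modal spectrum, so the shell-summed energy spectrum picks up the ~4*pi*k^2 shell area and scales as k^2:

    import numpy as np

    n = 64
    u = np.random.randn(n, n, n)  # white-noise stand-in for a velocity component
    uh = np.fft.fftn(u) / n**3
    k1 = np.fft.fftfreq(n) * n
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing='ij')
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    shells = np.arange(1, n // 2)
    E = np.array([np.sum(np.abs(uh[(kmag >= s - 0.5) & (kmag < s + 0.5)])**2)
                  for s in shells])
    print("log-log slope:", np.polyfit(np.log(shells), np.log(E), 1)[0])  # ~ 2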


Comments

  1. A: this is a comment added by John

  2. Here are a few comments/confusions I have regarding the Farge, Pellegrino and Schneider 2001 paper:

    1- What exactly is meant by the terms 'homogeneous' and 'isotropic' as applied to turbulent flows?

    Are such flows only 'isotropic' statistically?  Are these conditions sufficient/necessary for successful application of a separable (i.e. directionally biased) 3D wavelet transform?

    What would happen if you rotated/shifted their data before computing curl / wavelet transforming / thresholding etc.?  If you did this and retained 10% of the coefficients instead of 3%, what would that mean?   I suppose this is along the same line of questioning as Mark's random phase k^-5/3 power idea, which I think we should definitely try.

    2- (Comment I above)  The denoising cutoff coefficient they use assumes that the noise is normally distributed.  However, when they compute the PDF of the incoherent vorticity they find that it is exponentially distributed.  So obviously whatever they separated out is not Gaussian noise.  I think this might invalidate their use of Donoho's threshold.  I also found last summer that for Taylor-Green, Donoho's hard threshold was not the optimal one to use in terms of minimizing (%coeffs retained)*(L_2 norm of residual).

    3- (Comment H above)  What do they mean by 'white noise scales as k^2 in 3D'?  I thought that the power spectrum of white noise was iid chi-square random variables, since the Fourier transform, being unitary, takes iid N(0,sigma) signals to iid N(0,sigma) signals.

  3. I confirmed that the boundaries for the TG data set are periodic in all directions. The 2006 Mininni et al paper is attached.