Geometric Wavelets

Ah! I just learned about geometric wavelets. Most recent work on wavelets has been very uninspiring to me, but this is pretty good. It seems the idea has been around for a number of years (the most recent paper I can find is by Le Pennec and Mallat, 2000).

The idea is simple, but I only saw one talk about it, so don't take my word for it; read the papers.

Take a function f(x) over some convex region S. The function is approximated by a polynomial of fixed degree, call it p(x). Split the region in two by a straight line, so that the two resulting regions S1 and S2 are still convex. Over each subregion, solve by regression for the polynomials (of the same fixed degree) p1(x) and p2(x) approximating f(x). Optimize the splitting line so that the approximation error made by p1 and p2 is minimal. Your two new wavelets are then the polynomials p(x)-p1(x) and p(x)-p2(x) restricted to S1 and S2 respectively. These wavelets are not continuous, but if f(x) is itself a polynomial (of at most the fixed degree), the wavelets vanish. Repeat the splitting for as long as you need to.
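Here is a minimal sketch of one split step in Python/NumPy, assuming f is sampled at scattered points and the splitting line is found by a brute-force grid search over angles and offsets (the papers surely optimize this more cleverly). All the names here (fit_poly, best_split, ...) are mine, not from any reference implementation.

```python
import numpy as np

def poly_basis(xy, degree):
    """Monomial basis x^i * y^j with i + j <= degree, as design-matrix columns."""
    return np.column_stack([xy[:, 0]**i * xy[:, 1]**j
                            for i in range(degree + 1)
                            for j in range(degree + 1 - i)])

def fit_poly(xy, z, degree=1):
    """Least-squares polynomial p(x, y) over the sample points; returns a callable."""
    coef, *_ = np.linalg.lstsq(poly_basis(xy, degree), z, rcond=None)
    return lambda pts: poly_basis(pts, degree) @ coef

def split_error(xy, z, theta, offset, degree=1):
    """Total squared error of fitting one polynomial per half-plane."""
    n_basis = (degree + 1) * (degree + 2) // 2
    side = xy @ np.array([np.cos(theta), np.sin(theta)]) > offset
    err = 0.0
    for mask in (side, ~side):
        if mask.sum() < n_basis:          # not enough samples for a fit
            return np.inf
        p = fit_poly(xy[mask], z[mask], degree)
        err += np.sum((p(xy[mask]) - z[mask])**2)
    return err

def best_split(xy, z, degree=1, n_angles=16, n_offsets=16):
    """Brute-force search over candidate splitting lines (angle, offset)."""
    best_err, best_line = np.inf, None
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        proj = xy @ np.array([np.cos(theta), np.sin(theta)])
        for offset in np.linspace(proj.min(), proj.max(), n_offsets + 2)[1:-1]:
            err = split_error(xy, z, theta, offset, degree)
            if err < best_err:
                best_err, best_line = err, (theta, offset)
    return best_line, best_err

# Toy demo: a step edge is badly fit by one plane, much better by two.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1.0, size=(2000, 2))
z = np.where(xy[:, 0] + xy[:, 1] > 1.0, 1.0, 0.0)

p = fit_poly(xy, z)                         # parent polynomial over all of S
(theta, offset), err = best_split(xy, z)    # the optimized splitting line
side = xy @ np.array([np.cos(theta), np.sin(theta)]) > offset
for mask in (side, ~side):
    p_child = fit_poly(xy[mask], z[mask])
    wavelet = p(xy[mask]) - p_child(xy[mask])   # geometric wavelet on this side
    # Compression discards a wavelet like this when its energy is small,
    # i.e. when splitting barely improves the local fit.
    print("wavelet energy on this side:", np.sum(wavelet**2))
```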

The gist of the result is that you can do state-of-the-art image compression this way. That is surprising, because the wavelets are rather complex and difficult to encode, especially when you consider that encoding their support regions is no joke. The key point is that each wavelet's support can be defined simply by encoding the successive splitting lines, which is a much easier problem than encoding arbitrary polygonal regions from scratch.
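To see why the splitting lines make the geometry cheap, here is a hypothetical decoder-side sketch: the partition is a binary tree with one line per internal node, and a leaf region is identified purely by the sequence of half-plane tests along the path from the root, so its polygonal boundary is never stored explicitly. The Node layout below is my own assumption, not a codec specification.

```python
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    theta: float = 0.0                  # angle of the split line's normal
    offset: float = 0.0                 # signed distance of the line from origin
    neg: Optional["Node"] = None        # child on the negative side of the line
    pos: Optional["Node"] = None        # child on the positive side of the line
    coef: Optional[list] = None         # polynomial coefficients, set at leaves

def leaf_for(node: Node, x: float, y: float) -> Node:
    """Locate the region containing (x, y) by walking the stored split lines."""
    while node.coef is None:
        s = x * math.cos(node.theta) + y * math.sin(node.theta) - node.offset
        node = node.pos if s > 0 else node.neg
    return node

# Geometry cost: two quantized numbers per internal node, regardless of
# how many sides the resulting leaf polygons end up having.
root = Node(theta=math.pi / 4, offset=0.7,
            neg=Node(coef=[0.0]), pos=Node(coef=[1.0]))
print(leaf_for(root, 0.9, 0.9).coef)    # -> [1.0], the region above the line
```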

I strongly suspect these algorithms are not practical, though. They use up too many CPU cycles to save on bandwidth and storage in a world where storage is effectively infinite but CPU cycles are comparatively expensive. But the idea is neat!

3 thoughts on “Geometric Wavelets”

  1. This sounds interesting. Am I understanding this right: by splitting the region (say, an image) along straight lines, it's possible to find wavelets that encode the sub-regions with high fidelity, and the technique can be applied successively to the resulting regions until you get a good enough encoding?

    Sounds a little like simple mixture-of-experts setups with neural nets; a simple linear classification on top of more complex systems allows them to specialize and produce better results.

    I also saw a paper on face compression that used a technique of breaking the image up into rectangular regions and applying more sophisticated techniques to the sub-regions. I think a similar thing is done with some forms of SVMs.

    Seems like a general technique: break up the problem along simple linear boundaries, then solve the sub-problems.

  2. Yes, though "possible to find wavelets" should read "possible to find polynomials". The idea is to divide the image into smaller and smaller regions and to apply polynomial fitting over each subregion. The compression kicks in when you discard some of these small regions, because fitting a polynomial over a subregion doesn't improve the (local) accuracy much.

    Yes, it is a very general paradigm, and one I work with a lot these days, since I do time-series segmentation.

  3. Wavelets are very powerful and give better results than other approximations. Wavelets will be the future: processors are very powerful, so we don't need to worry about the number of cycles, and memory is also not a big issue nowadays.
