Subband decomposition using Daubechies filter
Posted by misha on Stack Overflow, 2011-03-14
Tags: image-processing, wavelet
I have the following two 8-tap filters:
h0 = [-0.010597, 0.032883, 0.030841, -0.187035, -0.027984, 0.630881, 0.714847, 0.230378]
h1 = [-0.230378, 0.714847, -0.630881, -0.027984, 0.187035, 0.030841, -0.032883, -0.010597]
Here they are on a graph:
[image: plot of the h0 and h1 impulse responses]
I'm using them to obtain the approximation (the lowpass subband) of an image. This is a(m,n) in the following diagram:
[image: two-stage analysis filter bank diagram from the book]
I got the coefficients and diagram from the book Digital Image Processing, 3rd Edition, so I trust that they are correct. The star symbol denotes one-dimensional convolution (either over rows or over columns). The down arrow denotes downsampling in one dimension (again, either over rows or over columns).
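For concreteness, here is a minimal sketch of one analysis level in Python (my own code, not the book's; it assumes NumPy/SciPy, periodic "wrap" boundary handling, and a guessed naming convention for the detail bands):

```python
import numpy as np
from scipy.ndimage import convolve1d

# The 8-tap Daubechies analysis filters quoted above.
h0 = np.array([-0.010597, 0.032883, 0.030841, -0.187035,
               -0.027984, 0.630881, 0.714847, 0.230378])
h1 = np.array([-0.230378, 0.714847, -0.630881, -0.027984,
                0.187035, 0.030841, -0.032883, -0.010597])

def analyze(image):
    """One level of separable subband analysis, as in the diagram:
    filter and downsample over rows, then over columns."""
    # Filter along rows (axis=1), keep every second column.
    lo = convolve1d(image, h0, axis=1, mode='wrap')[:, ::2]
    hi = convolve1d(image, h1, axis=1, mode='wrap')[:, ::2]
    # Filter each half along columns (axis=0), keep every second row.
    a   = convolve1d(lo, h0, axis=0, mode='wrap')[::2, :]  # approximation a(m,n)
    d_1 = convolve1d(lo, h1, axis=0, mode='wrap')[::2, :]  # detail bands --
    d_2 = convolve1d(hi, h0, axis=0, mode='wrap')[::2, :]  # their naming
    d_3 = convolve1d(hi, h1, axis=0, mode='wrap')[::2, :]  # varies by text
    return a, d_1, d_2, d_3
```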
My problem is that the coefficients of the lowpass filter h0 sum to more than 1: approximately 1.414, or exactly sqrt(2). Naturally, if I convolve any image with this filter, the image gets brighter. Indeed, here's what I get (expected result on the right):
[image: overly bright approximation subband next to the expected result]
Can somebody suggest what the problem is here? Why should it work if the convolution filter coefficients sum to greater than 1?
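As a quick sanity check on the sums (a NumPy snippet of my own, not from the book):

```python
import numpy as np

h0 = np.array([-0.010597, 0.032883, 0.030841, -0.187035,
               -0.027984, 0.630881, 0.714847, 0.230378])
h1 = np.array([-0.230378, 0.714847, -0.630881, -0.027984,
                0.187035, 0.030841, -0.032883, -0.010597])

print(h0.sum())        # 1.414214 ~= sqrt(2): DC gain of the lowpass filter
print(h1.sum())        # ~0.0: the highpass rejects the DC component
print((h0**2).sum())   # ~1.0: unit energy, as expected for orthonormal filters
```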
I have the source code, but it's quite long so I'm hoping to avoid posting it here. If it's absolutely necessary, I'll put it up later.
EDIT
What I'm doing is (a code sketch follows the list):
1. Decompose into subbands
2. Filter one of the subbands
3. Recompose subbands into original image
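A sketch of that round trip using PyWavelets, whose built-in 'db4' wavelet uses this same 8-tap pair (the subband-filtering step is left as a placeholder):

```python
import numpy as np
import pywt

image = np.random.rand(64, 64)

# 1. Decompose into subbands (one level of the 2-D DWT).
a, (d_h, d_v, d_d) = pywt.dwt2(image, 'db4', mode='periodization')

# 2. Filter one of the subbands here (placeholder: leave them unchanged).

# 3. Recompose. No intensity scaling anywhere, yet the round trip is exact.
rec = pywt.idwt2((a, (d_h, d_v, d_d)), 'db4', mode='periodization')
print(np.allclose(rec, image))  # True: perfect reconstruction
```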
Note that the point isn't just to have a displayable subband-decomposed image -- I also have to be able to perfectly reconstruct the original image from the subbands. So if I scale the filtered subband to compensate for the decomposition filter brightening the image, this is what I will have to do:
1. Decompose into subbands
2. Apply intensity scaling
3. Filter one of the subbands
4. Apply inverse intensity scaling
5. Recompose subbands into original image
Step 2 performs the scaling, which is what @Benjamin is suggesting. The problem is that step 4 then becomes necessary, or the original image will not be reconstructed properly. This longer method will work. However, the textbook explicitly says that no scaling is performed on the approximation subband. Of course, it's possible that the textbook is wrong; but it's more likely that I'm misunderstanding something about the way this all works -- which is why I'm asking this question.
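One way to reconcile this with the textbook (my own reasoning, assuming the usual orthogonal-bank convention that the synthesis lowpass g0 is h0 time-reversed): the sqrt(2) gain of the analysis lowpass is undone by the synthesis stage, because after upsampling by 2 each reconstructed sample only sees one polyphase half of g0, and each half sums to sqrt(2)/2. So the approximation subband is sqrt(2) brighter by design, and no explicit scaling is needed for perfect reconstruction:

```python
import numpy as np

h0 = np.array([-0.010597, 0.032883, 0.030841, -0.187035,
               -0.027984, 0.630881, 0.714847, 0.230378])
g0 = h0[::-1]  # synthesis lowpass: time-reversed h0 (orthogonal bank)

print(h0.sum())                   # ~1.4142: analysis brightens by sqrt(2)
print(g0[0::2].sum())             # ~0.7071: even polyphase half of g0
print(g0[1::2].sum())             # ~0.7071: odd polyphase half of g0
print(h0.sum() * g0[0::2].sum())  # ~1.0: overall DC gain of the round trip
```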