Linear Interpolation: How to implement this algorithm in C? (Python version is given)

Posted by psihodelia on Stack Overflow, 2010-12-16


There is a very efficient linear interpolation method: it requires at most one multiply per output sample. I found its description in the third edition of Understanding Digital Signal Processing by Lyons. The method uses a special hold buffer: given the number of samples to be produced between any two input samples, it generates the output points by linear interpolation. Here is the algorithm rewritten in Python:

temp1, temp2 = 0, 0
iL = 1.0 / L
for i in x:
   hold = [i - temp1] * L    # hold buffer: L copies of the input difference
   temp1 = i                 # remember the current input sample
   for j in hold:
      temp2 += j             # accumulate the held difference
      y.append(temp2 * iL)   # one multiply per output sample

where x contains the input samples, L is the number of output points produced per input sample (so L - 1 new points are inserted between consecutive inputs), and y will contain the output samples.
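
For reference, here is one way this might translate into ANSI C. This is only a sketch under my own assumptions: the function name interp_hold is made up, y is assumed to have room for n*L doubles, and the initial history is zero, as in the Python version:

#include <stddef.h>

/* Hold-buffer linear interpolation: each input sample produces L
   output samples, each advanced by (x[i] - previous input) and
   scaled once by 1/L -- one multiply per output sample. */
void interp_hold(const double *x, size_t n, size_t L, double *y)
{
    double temp1 = 0.0, temp2 = 0.0;
    double iL = 1.0 / (double)L;
    size_t i, j, k = 0;

    for (i = 0; i < n; i++) {
        double diff = x[i] - temp1;   /* the value held L times */
        temp1 = x[i];
        for (j = 0; j < L; j++) {
            temp2 += diff;            /* accumulate the held difference */
            y[k++] = temp2 * iL;
        }
    }
}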

My question is how to implement this algorithm in ANSI C in the most efficient way, e.g. is it possible to avoid the second loop?

NOTE: the Python code above is provided only to show how the algorithm works.
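
One possible way to fold the two loops into a single pass over the output samples (my own restructuring, not from the book) is to fetch a new input difference every L-th output:

#include <stddef.h>

/* Single-loop variant: iterate over the n*L output samples directly
   and pick up a fresh input difference once every L iterations.
   y must have room for n*L doubles. */
void interp_hold_flat(const double *x, size_t n, size_t L, double *y)
{
    double prev = 0.0, acc = 0.0, diff = 0.0;
    double iL = 1.0 / (double)L;
    size_t k, total = n * L;

    for (k = 0; k < total; k++) {
        if (k % L == 0) {             /* crossed into a new input sample */
            diff = x[k / L] - prev;
            prev = x[k / L];
        }
        acc += diff;
        y[k] = acc * iL;              /* still one multiply per output */
    }
}

If the division and modulo are a concern, the k % L test can be replaced by a countdown counter that is reset to L whenever it reaches zero.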

UPDATE: here is an example of how it works in Python:

from math import sin, pi

x = []
y = []
num_points = 20
points_inbetween = 2   # L = 2: one new point between each pair of inputs

temp1, temp2 = 0, 0

# test signal: 20 samples of a sine wave
for i in range(num_points):
   x.append(sin(i * 2.0 * pi * 0.1))

L = points_inbetween
iL = 1.0 / L
for i in x:
   hold = [i - temp1] * L    # L copies of the input difference
   temp1 = i
   for j in hold:
      temp2 += j
      y.append(temp2 * iL)

Let's say x = [... 10, 20, 30 ...]. Then, with L = 2 (two output samples per input sample, i.e. one new point inserted between each pair), it will produce [... 10, 15, 20, 25, 30 ...].
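
A minimal self-contained check of this example (my own test harness, inlining the same loop):

#include <stdio.h>

int main(void)
{
    double x[] = {10.0, 20.0, 30.0};
    double prev = 0.0, acc = 0.0;
    double iL;
    int L = 2, i, j;

    iL = 1.0 / L;
    for (i = 0; i < 3; i++) {
        double diff = x[i] - prev;
        prev = x[i];
        for (j = 0; j < L; j++) {
            acc += diff;
            printf("%g ", acc * iL);
        }
    }
    printf("\n");   /* prints: 5 10 15 20 25 30 */
    return 0;
}

The leading 5 is a start-up transient caused by the zero initial history; after that the sequence settles into the interpolated values 10, 15, 20, 25, 30.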

