I have written the code outline below to sum an array a, where each element a(i) is multiplied by x^i:
y = a(0)
i = 0
{y = sum from i=0 to (n-1) a(i) * x^i AND 0 <= n <= a.length} //Invariant
while (i < (n-1))
{y = sum from i=0 to (n-1) a(i) * x^i AND 0 <= n <= a.length AND i < (n-1)}
y = y + a(i)*x^i
i = i + 1
end while
{y = sum from i=0 to (n-1) a(i) * x^i} //Postcondition
Note that I do not expect the code to compile - it's just a sensible outline of how the code should work. I need to improve the efficiency of the code by using a tracking variable, and thus, a linking invariant to bridge said variable with the rest of the code. This is where I am stuck. What would be useful to track in this case? I have thought about retaining sum values at each iteration, but I'm not sure if that does the trick. If I could figure out what to track, I'm pretty sure it would be trivial to link it to the space. Can anyone see how my algorithm might be improved via a tracking variable?
Your invariant logic has off-by-1 problems. Here is a corrected version that tracks the partial power x^i in a variable as you go.
// Precondition: 1 <= n <= a.length
// Invariant:
{ 0 <= i < n AND xi = x^i AND y = sum(j = 0..i) . a(j) * x^j }
// Establish invariant at i = 0:
// xi = x^0 = 1 AND y = sum(j=0..0) . a(j) * x^j = a(0) * x^0 = a(0)
i = 0;
xi = 1;
y = a(0);
while (i < n - 1) {
i = i + 1; // Break the invariant
xi = xi * x; // Re-establish it
y = y + a(i) * xi; // Use the tracked power, no exponentiation needed
}
// Invariant was last established at i = n-1, so we have the postcondition:
{ y = sum(j = 0..n-1) . a(j) * x^j }
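The corrected pseudocode above translates directly into a runnable sketch. This is an illustrative version (the name poly_sum is my own); the loop maintains the invariant xi == x**i with a single multiplication per iteration:

```python
def poly_sum(a, x):
    """Sum a[i] * x**i for i in 0..len(a)-1, tracking the power in xi."""
    y = a[0]      # establishes y = sum(j=0..0) a[j] * x**j
    xi = 1        # establishes xi = x**0
    i = 0
    while i < len(a) - 1:
        i += 1
        xi *= x           # re-establish xi == x**i
        y += a[i] * xi    # re-establish the sum invariant
    return y

print(poly_sum([2, 3, 4], 5))  # 2 + 3*5 + 4*25 = 117
```

Each iteration costs one multiply to maintain xi plus one multiply-add for the sum, instead of recomputing x**i from scratch.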
The more common and numerically stable way to evaluate polynomials is Horner's rule:
y = 0
for i = n-1 downto 0 do y = y * x + a(i)
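As a minimal sketch of the two-line pseudocode above (the function name horner is my own), iterating from the highest coefficient down:

```python
def horner(a, x):
    """Evaluate sum a[i] * x**i via Horner's rule: one multiply-add per coefficient."""
    y = 0
    for i in range(len(a) - 1, -1, -1):
        y = y * x + a[i]
    return y

print(horner([2, 3, 4], 5))  # same polynomial as before: 117
```

Horner's rule needs no separate power-tracking variable at all: the running value y absorbs a factor of x at each step, so every coefficient costs exactly one multiplication and one addition.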
So it seems like you're trying to end up with this:
(a(0)*x^0) + (a(1)*x^1) + ... + (a(n-1)*x^(n-1))
Is that right?
The only way I can see to improve performance would be if the ^ operation is more costly than the * operation. In that case, you could keep track of x^i as you go, multiplying it by x in each iteration.
In fact, in that case you could probably start at the end of the array and work your way backwards, multiplying by x each time, to produce:
((...(a(n-1)*x + a(n-2))*x + ... + a(1))*x + a(0)
That would be slightly faster than recalculating x^i from scratch each time, but it is not algorithmically faster; it probably wouldn't even be an order-of-magnitude improvement.