One of the subjects I’ve posted about before is the fact that, in practical computing, there is often an advantage to using numerical approximations over trying to compute algebraically exact solutions. And one place where this happens is with integrals. The advantage can range from achieving greater simplicity, to making a solution possible at all. For that reason, undergraduate Calculus 2 courses already place emphasis not just on computing certain integrals algebraically, but also on understanding which numerical approximations will give good results.

One numerical approximation that gives good results is the so-called Simpson’s Sum. What it does is perform a weighted summation of the most recent 3 data-points into an accumulated result. And the version of it which I was taught places a weight of 2/3 on the Midpoint, as well as a weight of 1/3 on the Trap Sum.

*In general, the direction in which the Midpoint is inaccurate will be opposite to the direction in which the Trap Sum is inaccurate. I.e., if the curve is concave-up, then the Midpoint will tend to underestimate the area under it, while the Trap Sum will tend to overestimate it. And in general, with integration, consistent under- or over-estimation is more problematic than random under- or over-estimation would be.*
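As a small sketch of the weighting just described (my own Python illustration, not taken from any of this post’s worksheets), the 2/3 + 1/3 blend of the Midpoint and the Trap Sum reduces algebraically to the textbook one-panel Simpson formula:

```python
import math

def simpson_panel(f, a, b):
    """One Simpson panel over [a, b], written as a weighted blend:
    2/3 of the Midpoint estimate plus 1/3 of the Trap estimate."""
    w = b - a
    m = 0.5 * (a + b)
    midpoint = w * f(m)                 # Midpoint rule
    trap = w * 0.5 * (f(a) + f(b))      # Trapezoid ('Trap') rule
    return (2.0 * midpoint + trap) / 3.0

# The blend equals the classic Simpson formula (w / 6) * (f(a) + 4*f(m) + f(b)):
f = math.sin
a, b = 0.0, math.pi
m = 0.5 * (a + b)
classic = (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))
assert abs(simpson_panel(f, a, b) - classic) < 1e-12
```

Here `simpson_panel` is a hypothetical helper name chosen just for the sketch; the algebra it checks is standard.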

But casual inspection of the link above will reveal that this is not the only way to weight the three data-points. In effect, it’s also possible to place a weight of 1/2 on the Midpoint, plus a weight of 1/2 on the Trap Sum.

And so a logical question to ask would be, ‘Which method of summation is best?’

The answer mainly has to do with whether the stream of data-points is ‘critically sampled’ or not. When I was taking ‘Cal 2’, the Professor drew curves which were *gradual*, and whose frequencies, if the data-points did form a stream, would have been below half the Nyquist Frequency. With those curves, the Midpoint was visibly more accurate than the Trap Sum.
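For a gradual, concave-up curve, the reason the 2/3 + 1/3 weighting works so well can be seen numerically: the Trap Sum’s error is roughly twice the Midpoint’s error, in the opposite direction, so weighting the Midpoint twice as heavily cancels the two errors. A minimal check, using f(x) = x² over [0, 1] as my own example (exact area 1/3):

```python
# Concave-up test curve: f(x) = x**2 over [0, 1]; exact area is 1/3.
f = lambda x: x * x
a, b = 0.0, 1.0
exact = 1.0 / 3.0
w = b - a

mid_est = w * f(0.5 * (a + b))        # Midpoint: 0.25  (underestimates)
trap_est = w * 0.5 * (f(a) + f(b))    # Trap:     0.5   (overestimates)

mid_err = mid_est - exact             # -1/12
trap_err = trap_est - exact           # +1/6
assert mid_err < 0 < trap_err                          # opposite directions
assert abs(abs(trap_err) - 2.0 * abs(mid_err)) < 1e-12 # Trap error twice as big

# So the 2/3 + 1/3 blend cancels the error exactly on this curve:
assert abs((2.0 * mid_est + trap_est) / 3.0 - exact) < 1e-12
```

The exact cancellation holds for any parabola; for other smooth curves the 2:1 error ratio is approximate, but the blend still cancels most of the error.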

But one requirement which Electrical Engineers have more recently been designing circuits to meet, not only states that an analog signal is to be sampled in the time domain, but also that the resulting stream of samples will be critically sampled. This means considerable signal-energy all the way up to the Nyquist Frequency. The effective result is that all 3 data-points will be as if random, with great variance even between those 3.

Under those conditions, there is a slight advantage to computing the average between the Midpoint and the Trap Sum.

(Update 12/08/2018, 20h25 : )

I’ve just prepared a small ‘wxMaxima’ work-sheet, to demonstrate the accuracy of the 2/3 + 1/3 method, applied to a sine-wave at 1/2 Nyquist Frequency:
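(The worksheet itself is not reproduced here. As a rough Python equivalent of the same kind of check, sampling sin(x) every π/2 puts four samples on each cycle, which places the wave at exactly half the Nyquist Frequency of the sample stream; composite Simpson’s rule over half a cycle then stays within a few percent of the exact area:)

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) sub-intervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0

# sin(x) sampled at spacing pi/2: four samples per cycle, i.e. the wave
# sits at half the Nyquist Frequency of the stream.
est = composite_simpson(math.sin, 0.0, math.pi, 2)
exact = 2.0
print(est, abs(est - exact) / exact)   # ~2.094, relative error under 5%
```

Even at this coarse spacing the estimate is 2π/3 ≈ 2.094 against an exact value of 2, a relative error of about 4.7%.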

(Update 1/6/2019, 13h55 : )

I have created a second worksheet, this time to illustrate a more widely-known aspect of how the Simpson’s Sum works:

But some people might prefer that such an idea be proven analytically, rather than visually. And so I have provided some background material for any readers who might just be starting to learn Calculus, which, together with the visual approach shown above, might make the question clearer (JavaScript from ‘mathjax.org’ may need to be enabled to view it correctly):
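(Separately from that material, one widely known property of the Simpson’s Sum, which may or may not be what the second worksheet shows, is that a single panel integrates any cubic polynomial exactly, not just parabolas. A small check, with a cubic whose coefficients I chose arbitrarily for the test:)

```python
def simpson_panel(f, a, b):
    """One Simpson panel over [a, b]."""
    w = b - a
    m = 0.5 * (a + b)
    return w * (f(a) + 4.0 * f(m) + f(b)) / 6.0

# An arbitrary cubic and its antiderivative:
f = lambda x: 3 * x**3 - 5 * x**2 + 2 * x - 7
F = lambda x: 0.75 * x**4 - 5.0 * x**3 / 3.0 + x**2 - 7.0 * x

a, b = -1.0, 2.0
exact = F(b) - F(a)                      # -21.75
assert abs(simpson_panel(f, a, b) - exact) < 1e-9
```

This happens because the cubic term’s error in the Midpoint and Trap estimates cancels under the 2/3 + 1/3 weighting, just as the quadratic term’s does.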

Now, if I wanted to extend my exercise to illustrate what would happen if a series of points represented a critically-sampled stream, then the main problem I’d face would be in establishing what the continuous curve is supposed to be that interpolates between those points. Such an exercise would need to use a larger window of points to interpolate, and controversy could result over which method of interpolation would give the best results.

There are other places in my blog where I have attempted to address this subject, but the bulk of the work would then fall outside the subject of merely computing a Simpson’s Sum.

Dirk