My First Digital Audio Player

One fact which people have been aware of for several decades now is that we can buy a portable player made specifically for MP3 files, and that if we do, the sound quality will not be so great.

But in more recent years, Digital Audio Players have emerged on the consumer market that promise lossless playback of high-fidelity sound, the latter of which is now simply referred to as “High-Resolution Audio”. This lossless playback capability does not come into play when we listen to MP3 files with them, but rather when we actually play back FLAC or ALAC files.

I just bought this sort of device, a Fiio X1 II. One of the remarkable facts about this device is that its digital-to-analog conversion can run at up to 192kHz, and it offers the possibility of 32-bit sound. What I assume in such a case is that, even if I were to listen to a 48kHz-sampled audio file, 4x oversampling would in fact take place, because the D/A converter would continue to run at 192kHz. I’d also assume that the analog filter would stay as-is, with a cutoff frequency around 20kHz. But because I am in fact listening to 44.1kHz-sampled sound, I also assume that the whole D/A converter is being slowed down to 176.4kHz. ( :1 )
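The arithmetic I’m assuming can be sketched in a few lines. To be clear, the 4x factor and the idea that the converter tracks an integer multiple of the file’s rate are my assumptions about the X1 II, not published Fiio specs:

```python
# Sketch of the oversampling arithmetic assumed above; the 4x factor
# is my assumption about the X1 II, not a published spec.
def assumed_dac_rate(file_rate_hz, oversampling_factor=4):
    """The D/A converter rate, if it runs at an integer multiple of the file's rate."""
    return file_rate_hz * oversampling_factor

print(assumed_dac_rate(48000))   # 192000
print(assumed_dac_rate(44100))   # 176400
```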

I have this working with my recently-purchased headphones, and am listening to a mix of MP3-, OGG- and FLAC-compressed music. I would say that this combination has significantly better sound than the sound chip in my Samsung Galaxy S6 phone does. ( :2 )

When I received this DAP, it had firmware version 1.6 already installed. But I updated the firmware to the latest, v1.7… In fact, formatting the SD card with ‘exFAT’, as well as applying the firmware update, worked easily for me, even from Linux computers. The SD card is a Sony.

My only regret is that I personally don’t have the manual dexterity that would have been needed to install the supplied screen protector properly. I had the presence of mind to pull it back off when it did not align correctly, and to dispose of it. So I can expect some scuff marks in the future. :-)

Happy, with Music,

Dirk

(Updated 07/09/2018, 14h55 … )


A Clarification on Polynomial Approximations… Refining the Exercise

Some time ago, I posted an idea on how the concept of a polynomial approximation can be simplified, in terms of the real-time Math that needs to be performed, in order to produce 4x oversampling in which the positions of the interpolated samples with respect to time are fixed.

In order to understand the present posting, which is a reiteration, the reader would need to read the posting I linked to above. Without reading that posting, the reader will not understand the matrix which I included below.

There was something clearly wrong with the idea which I linked to above, but what is wrong is not the fact that I computed, or assumed the usefulness of, a product between two matrices. What is wrong with the idea as first posted is that the order of the approximation is only 4, thus implying a polynomial of the 3rd degree. This is a source of poor approximations close to the Nyquist Frequency.

But as I wrote before, the idea of using anything based on polynomials can be extended to 7th-order approximations, which imply polynomials of the 6th degree. Further, there is no reason why a 7×7 matrix cannot be pre-multiplied by a 3×7 matrix. The result will only be a 3×7 matrix.

Hence, if we were to assume that such a matrix is to be used, this is the worksheet which computed what that matrix would have to be:

Work-Sheet

The way this would be used in a practical application is that a vector of input samples would be formed, corresponding to

t = [ -3, -2, -1, 0, +1, +2, +3 ]

And the interpolation should produce results corresponding to

t = [ 0, 1/4, 1/2, 3/4 ]

Further, the interpolation at t = 0 does not need to be recomputed, as it was already provided by the 4th element of the input vector. So the input vector would only need to be multiplied by the suggested matrix, to arrive at the other 3 values. After that, a new sample can be appended as the new 7th element of the vector, while the old 1st element is dropped, so that another 3 interpolated samples can be computed.
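As a sketch of that computation, the 3×7 matrix can be reconstructed numerically, by pre-multiplying the inverse of the 7×7 matrix of polynomial terms at t = [-3 … +3] by the 3×7 matrix that evaluates the polynomial at t = [1/4, 1/2, 3/4]. This is my own reconstruction in Python, using NumPy’s Vandermonde helper, and not a copy of the worksheet:

```python
import numpy as np

# Rows are the polynomial terms [1, t, t^2, ..., t^6] at each position.
nodes = np.arange(-3, 4)                 # input-sample times, t = -3 ... +3
targets = np.array([0.25, 0.5, 0.75])    # interpolated times; t = 0 is passed through

V_nodes = np.vander(nodes, 7, increasing=True)      # 7x7
V_targets = np.vander(targets, 7, increasing=True)  # 3x7

# Pre-multiplying the 7x7 inverse by the 3x7 evaluation matrix yields
# the 3x7 matrix applied to each 7-sample input vector at run-time.
M = V_targets @ np.linalg.inv(V_nodes)

# Sanity check: any polynomial of degree <= 6 is interpolated exactly.
samples = nodes.astype(float) ** 2       # f(t) = t^2
print(M @ samples)                       # ≈ [0.0625, 0.25, 0.5625]
```

Each output triple, together with the pass-through sample at t = 0, supplies the 4 samples of one 4x-oversampled interval, after which the window slides forward by one input sample.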

This would be an example of an idea which does not work out well as a first approximation, but which will produce high-quality results when the method is applied more rigorously.

Dirk

 

Polynomial Interpolation: Practice Versus Theory

I have posted several times that it is possible to pre-compute a matrix, such that multiplying a set of input samples by this matrix will result in the coefficients of a polynomial, and that next, a fractional position within the center-most interval of this polynomial can be computed – as a polynomial – to arrive at a smoothing function. There is a difference between how I represented this subject and how it would be implemented.

I assumed non-negative values for the Time parameter, from 0 to 3 inclusive, such that the interval from 1 … 2 can be smoothed. This might work well for degrees up to 3, i.e. for orders up to 4. But in order to compute the matrices accurately, even using computers, when the degree of the polynomial is anything greater than 3, it makes sense to assume x-coordinates from -1 … +2, or from -3 … +3. The reason is that a computer can divide by 3 to the 6th power more accurately than by 6 to the 6th power.
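That accuracy claim can be checked numerically. The following is my own quick check, not part of the original worksheet: it compares the condition numbers of the 7×7 matrix of terms for centered versus one-sided sample positions, where a lower condition number means the inverse can be computed more accurately:

```python
import numpy as np

# Matrix of polynomial terms [1, t, t^2, ..., t^6] at 7 sample positions.
centered = np.vander(np.arange(-3, 4), increasing=True)   # t = -3 ... +3
one_sided = np.vander(np.arange(0, 7), increasing=True)   # t =  0 ... +6

# The centered choice is much better-conditioned, so its inverse
# suffers less from floating-point round-off.
print(np.linalg.cond(centered))
print(np.linalg.cond(one_sided))
```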

And then in general, the evaluation of the polynomial will take place over the interval 0 … +1 .

The results can easily be shifted anywhere along the x-axis, as long as we only do the interpolation closest to the center. But the computation of the inverse matrix cannot be shifted in the same way.

My Example

Also, if it were our goal to illustrate the system to a reader who is not used to Math, then the hardest fact to prove would be that the matrix of terms has a non-zero determinant, and is therefore invertible, when some of the terms are negative, as was the case before.


There do in fact exist detailed specs about the Scarlett Focusrite 2i2.

One fact which I have written about before is that I own a Scarlett Focusrite 2i2 USB sound device, and that I have tested whether it can be made to work on several platforms not considered standard, such as under Linux with the JACK sound daemon, and under Android.

One fact which has reassured me is that the company Web-site does in fact publish full specifications for it by now.

One conclusion which I can reach from this is that the idea of setting my Linux software to a sample-rate of 192kHz was simply a false memory. According to my own, earlier blog entry, I only noticed a top sample-rate of 96kHz at the time. And my Android software only offered me a top sample-rate of 48kHz with this device.

The official specs state that its analog input frequency-response is a very high-quality version of 20Hz-20kHz, while its conversion is stated at 96kHz. What this implies is that, when set to output audio at 44.1 or 48kHz, it must apply its own internal down-sampling, i.e. a digital low-pass filter, while at 88.2 or 96kHz, it must be applying the same analog filter, but not down-sampling its digital stream.
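As an illustration of what that internal down-sampling amounts to, here is a minimal sketch of halving a 96kHz stream to 48kHz – a generic windowed-sinc low-pass filter followed by decimation, and in no way Focusrite’s actual filter:

```python
import numpy as np

# A generic windowed-sinc low-pass followed by decimation; an
# illustration of down-sampling by 2, not Focusrite's actual filter.
def downsample_by_2(signal, num_taps=63):
    n = np.arange(num_taps) - (num_taps - 1) / 2
    # Ideal low-pass with cutoff at 1/4 the input rate (the new
    # Nyquist frequency), shaped by a Hamming window.
    taps = 0.5 * np.sinc(0.5 * n) * np.hamming(num_taps)
    filtered = np.convolve(signal, taps, mode='same')
    return filtered[::2]  # keep every 2nd sample: 96kHz -> 48kHz

# A 10kHz tone sampled at 96kHz survives; a 40kHz tone (above the new
# 24kHz Nyquist frequency) is almost entirely removed.
t = np.arange(960) / 96000.0
print(np.max(np.abs(downsample_by_2(np.sin(2 * np.pi * 10000 * t)))))
print(np.max(np.abs(downsample_by_2(np.sin(2 * np.pi * 40000 * t)))))
```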

And so, whether we should use it to record at 96kHz or at 48kHz may depend on whether we think that our audio software will perform down-sampling using higher-quality filters than its internal processing does. But there can be an opposite point of view on that.

Just as some uses of computers see work offloaded from the main CPU to external acceleration hardware, we could just as easily decide that the processing power built into this external sound device can ease the workload on our CPU. After all, just because I got no buffer under-runs during a simple test does not necessarily imply that I would get no sound drop-outs if I were running a complex audio project in real time.

Dirk

(Edit 03/21/2017 : )
