Upcoming changes to my Internet, and what they may mean for this blog.

One of the unusual things I do is host this blog, as well as anything else belonging to my site, on a PC at home. I don’t recommend that everybody do it this way; it’s just how I’ve been doing it over the years.

Well naturally, since I’m doing this, the availability of my site is only as good as:

  • The reliability of this PC, which acts as my server,
  • The reliability of my Internet connection,
  • The reliability of the standard domestic power supply to my home.

The Internet connection I presently have is really only DSL. But it has been announced that our building will be upgraded to fibre-optic Internet in the coming months. It’s going to be a big project. I welcome this upgrade, and have been rooting for it, because the existing DSL wires have been deteriorating for several years now. This will bring better Internet to my home.

However, there is no way for me to be certain that, with fibre-optic Internet, I will be able to continue to host this blog (or site). The reasons are numerous and not worth elaborating on.

For that reason, this site and blog might just disappear in the coming months.

I’m sorry if any of my readers will miss it.

Dirk

 


Sound Fonts: How something that I blogged was still not 100% accurate.

Sometimes it can happen that explaining a subject 100% accurately would seem to require writing an almost endless amount of text, and that, with short blog postings, I’m limited to always posting a mere approximation of the subject. The following posting would be a good example:

(Link to an earlier posting.)

Its title clearly states that there are exactly two types of interpolation for use in resampling (audio). After some thought, I realized that a third type of interpolation might exist, and that it might be especially useful for Sound Fonts.

According to my posting, the situation can exist in which the relationship between the spacing of (interpolated) output samples and that of (Sound Font) input samples is irrational. In that case, plausibly, the approach would be to derive a polynomial’s actual coefficients from the input sample-values (which would be the result of one matrix multiplication), and to compute the value of the resulting polynomial at (x = t), (0.0 <= t < 1.0).

But there is an alternative, which is that the input samples could be up-sampled, arriving at a set of sub-samples with fixed positions, and then every output sample could arise as a mere linear interpolation between two sub-samples.

It would seem, then, that a third way to interpolate is feasible, even when the spacing of output samples is irrational with respect to that of input samples.
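This third way can be sketched, too. The 2x, midpoint-style up-sampling below is only a placeholder (a real player would up-sample with a proper filter), but it shows how, once sub-samples exist at fixed positions, every output sample reduces to a linear interpolation between two of them:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Naive 2x up-sampling by midpoint insertion -- only a stand-in for a
// real up-sampling filter, to produce sub-samples at fixed positions.
std::vector<double> upsample2x(const std::vector<double>& in) {
    std::vector<double> out;
    for (std::size_t i = 0; i + 1 < in.size(); ++i) {
        out.push_back(in[i]);
        out.push_back(0.5 * (in[i] + in[i + 1]));
    }
    out.push_back(in.back());
    return out;
}

// Read an output sample at an arbitrary -- even irrational -- position,
// measured in input samples, as a linear interpolation between two
// sub-samples on the 2x-finer grid.  (Assumes 'pos' stays in range.)
double readAt(const std::vector<double>& sub, double pos) {
    double spos = pos * 2.0;
    std::size_t i = (std::size_t) std::floor(spos);
    double t = spos - (double) i;             // 0.0 <= t < 1.0
    return sub[i] + t * (sub[i + 1] - sub[i]);
}
```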

Also, with Sound Fonts, the possibility presents itself that the Sound Font could have been recorded professionally, at a sample rate considerably higher than 44.1kHz, such as maybe 96kHz, just so that, if the Sound Font Player did rely on linear interpolation, doing so would not mess up the sound as much as if the Sound Font had been recorded at 44.1kHz.

Further, specifically with Sound Font Players, the added problem presents itself, that the virtual instrument could get bent upward in pitch, even though its recording already had frequencies approaching the Nyquist Frequency, so that those frequencies could end up being pushed ‘higher than the output Nyquist Frequency’, thereby resulting in aliasing – i.e., getting reflected back down to lower frequencies – even though each output sample could have been interpolated super-finely by itself.
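The reflection just described is easy to put numbers to. The following helper is hypothetical (not taken from any real player); it predicts where a pitch-bent component lands:

```cpp
#include <cmath>

// Where does a component at frequency f land, after the instrument is
// bent upward by 'ratio', against an output running at 'sampleRate'?
// Anything pushed past the output Nyquist Frequency is reflected back down.
double aliasedFreq(double f, double ratio, double sampleRate) {
    double fOut = std::fmod(f * ratio, sampleRate);   // wrap into [0, Fs)
    if (fOut > sampleRate / 2.0)
        fOut = sampleRate - fOut;                     // the reflection
    return fOut;
}
```

E.g., an 18kHz component bent up by a ratio of 1.5, against a 44.1kHz output, lands at 27kHz, which is above the 22.05kHz Nyquist Frequency, and so folds back down to 17.1kHz.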

These are the main reasons why, as I have experienced, playing a sampled sound at a very high, bent pitch actually just results in ‘screeching’.

Yet, the Sound Font Player could again be coded cleverly, so that it would also sub-sample its output. I.e., if expected to play the virtual instrument at a sample rate of 44.1kHz, it could actually compute interpolated samples closer together than that, corresponding to 88.2kHz, and then compute each ‘real output sample’ as the average between two ‘virtual, sub-sampled output samples’. This would effectively insert a low-pass filter, which would flatten the screeching that would result from frequencies higher than 22kHz being reflected below 22kHz, and eventually all the way back down to 0kHz. And admittedly, the type of (very simple) low-pass filter such an arrangement implies, would be the Haar Wavelet again. :oops:
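A sketch of that averaging step, assuming the ‘virtual’ samples were computed at 2x the requested output rate:

```cpp
#include <cstddef>
#include <vector>

// Each real output sample is the average of two adjacent virtual,
// sub-sampled output samples -- i.e., one application of the Haar
// Wavelet's scaling function, acting as a crude low-pass filter.
std::vector<double> decimateByAveraging(const std::vector<double>& virt) {
    std::vector<double> out;
    for (std::size_t i = 0; i + 1 < virt.size(); i += 2)
        out.push_back(0.5 * (virt[i] + virt[i + 1]));
    return out;
}
```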

If you asked me what the best was that a Soundblaster sound card from 1998 would have been able to do, I’d say, ‘Just compute each audio sample as a linear interpolation between two Sound Font samples.’ Doing so would have required an added lookup into an address in (shared) RAM, a subtraction, a multiplication, and an addition. In fact, basing this speculation on my estimation of how much circuit-complexity such an early Soundblaster card just couldn’t have had, I’d say that those cards would need to have applied integer arithmetic, with a limited number of fractional bits – maybe 8 – to state which Sound Font sample-position a given audio sample was being ‘read from’. It would have been up to the driver to approximate the integer fed to the hardware. And then, if that sound card was poorly designed, its logic might have stated, ‘Just truncate the Sound-Font sample-position being read from, to the nearest sample.’
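In code, the speculated arithmetic might have looked like this. To be clear, the 8.8 fixed-point format and the name `playAt` are my own assumptions for illustration, not anything documented about Soundblaster cards:

```cpp
#include <cstdint>

// 'pos8_8' is an integer sample-position with 8 fractional bits (assumed).
// Cost per output sample: one extra lookup, one subtraction, one
// multiplication, and one addition.
int16_t playAt(const int16_t* soundFont, uint32_t pos8_8) {
    uint32_t i    = pos8_8 >> 8;        // integer part: which sample
    int32_t  frac = pos8_8 & 0xFF;      // fractional part, 0..255
    int32_t  a = soundFont[i];
    int32_t  b = soundFont[i + 1];      // the added lookup
    return (int16_t)(a + ((frac * (b - a)) >> 8));   // linear interpolation
}
```

The poorly designed alternative that the paragraph above ends with would simply be `return soundFont[pos8_8 >> 8];`.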

In contrast, when Audio software is being programmed today, one of the first things the developer will insist on, is to apply floating-point numbers wherever possible…

Also, if a hypothetical, superior Sound Font Player did have as logic, ‘If the sample rate of the loaded Sound Font is (< 80kHz), up-sample it 2x; if that sample rate is actually (< 40kHz), up-sample it 4x…’, just to simplify the logic to the point of making it plausible, this up-sampling would only take place once, when the Sound Font is actually being loaded into RAM. By contrast, the oversampling of the output of the virtual instrument, as well as the low-pass filter, would need to be applied in real time… ‘If the output sample rate is (>= 80kHz), replace adjacent Haar Wavelets with overlapping Haar Wavelets.’
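That quoted loading-time rule is simple enough to render literally; the thresholds below are the ones named in the text:

```cpp
// A literal rendering of the hypothetical player's loading-time rule.
int upsampleFactor(double fontSampleRate) {
    if (fontSampleRate < 40000.0) return 4;   // e.g., a 22.05kHz font
    if (fontSampleRate < 80000.0) return 2;   // e.g., a 44.1kHz font
    return 1;                                 // e.g., a 96kHz font
}
```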

Food for thought.

Sincerely,
Dirk


One possible way to split carbon dioxide into breathable oxygen and carbon.

One of the facts which made the news either today or yesterday was that the latest rover which NASA landed on Mars performed a successful demonstration of a device “roughly the size of a car battery”, which split Mars’s atmospheric CO2 – of which Mars’s atmosphere is 95% composed – into breathable oxygen and presumably carbon, resulting in enough oxygen for a Human to breathe “for 10 minutes”. Pictures of the gold-plated device were shown.

According to the news, this was achieved “by applying extreme heat”. At first glance, what was said might almost sound impossible, because of the common-sense knowledge that, if carbon is merely heated in the presence of O2, it burns, CO2 is generated, and the reaction spontaneously goes in the wrong direction. Further, it seems unlikely that NASA used H2 in any way to do this; one reason to think not would be the question of where sufficient quantities of H2 would come from.

But what this exercise really demonstrates is the fact that, when it comes to basic operations in Science, such as splitting CO2 into C(s) and O2, they are generally available in the modern era. And one way actually to do it would be to heat the CO2 to a temperature at which it becomes a plasma – hence, electrically conductive – and then to pass an electric current through it. O2 would form at the anode, and C(s) would form at the cathode.

Not only that, but extreme voltages would not even be required, IF the reason for which the CO2 became ionized was thermal.

This approach might benefit from the added fact that most gases can be made to ionize slightly more easily when their density is ‘low’ – such as in Mars’s atmosphere – than they could be made to do at their natural densities in Earth’s atmosphere.

(Updated 4/24/2021, 15h00… )



Creating a C++ Hash-Table quickly, that has a QString as its Key-Type.

This posting will assume that the reader has a rough idea what a hash table is. In case this is not so, this Wikipedia article explains it as a generality. Just by reading that article, especially if the reader is of a slightly older generation like me, he or she could get the impression that putting a hash-table into a practical C++ program is arduous and complex. In reality, the most efficient implementation possible for a hash table requires some attention to such ideas as whether the hashes will generally tend to be coprime with the modulus of the hash table, or at least, have the lowest GCDs possible. Often, bucket-arrays with sizes that are powers of (2), and hashes that contain higher prime factors, help. But, when writing practical code in C++, one will find that the “Standard Template Library” (‘STL’) already provides one, that has been implemented by experts. It can be put into almost any C++ program, as an ‘unordered_map’. (:1)

But there is a caveat. Any valid data-type meant to act as a key needs to have a hashing function defined, as a template specialization, and not all data-types have one by far. And of course, this hashing function should be as close to unique as possible, for unique key-values, while never changing, if there could be more than one possible representation of the same key-value. Conveniently, C-strings, which are denoted in C or C++ as the old-fashioned ‘char *’ data-type, happen to have this template specialization. But, because C-strings are a very old type of data-structure, and, because the reader may be in the process of writing code that uses the Qt GUI library, the reader may want to use ‘QString’ as his key-data-type, only to find, that a template specialization has not been provided…

For such cases, C++ allows the programmer to define his own hashing function, for QStrings if so chosen. In fact, Qt even allows a quick-and-dirty way to do it, via the ‘qHash()’ function. But there is something I do not like about ‘qHash()’ and its relatives. They tend to produce 32-bit hash-codes! While this does not directly cause an error – only the least-significant 32 bits within a modern 64-bit data-field will contain non-zero values – I think it’s very weak to be using 32-bit hash-codes anyway.

Further, I read somewhere that the latest versions of Qt – as of 5.14? – do, in fact, define such a specialization, so that the programmer does not need to worry about it anymore. But, IF the programmer is still using an older version of Qt, like me, THEN he or she might want to define their own 64-bit hash-function. And so, the following is the result which I arrived at this afternoon. I’ve tested that it will not generate any warnings when compiled under Qt 5.7.1, and that a trivial ‘main()’ function that uses it, will only generate warnings, about the fact that the trivial ‘main()’ function also received the standard ‘argc’ and ‘argv’ parameters, but made no use of them. Otherwise, the resulting executable also produced no messages at run-time, while having put one key-value pair into a sample hash table… (:2)

 

/*  File 'Hash_QStr.h'
 * 
 *  Regular use of a QString in an unordered_map doesn't
 * work, because in earlier versions of Qt, there was no
 * std::hash<QString>() specialization.
 * Thus, one could be included everywhere the unordered_map
 * is to be used...
 */

#ifndef HASH_QSTR_H
#define HASH_QSTR_H


#include <unordered_map>
#include <QString>
//#include <QHash>
#include <QChar>

#define PRIME 5351


/*
namespace cust {
	template<typename T>
	struct hash32 {};

  template<> struct hash32<QString> {
    std::size_t operator()(const QString& s) const noexcept {
      return (size_t) qHash(s);
    }
  };
}

*/


namespace cust
{
    inline size_t QStr_Orther(const QChar & mychar) noexcept {
        return ((mychar.row() << 8) | mychar.cell());
    }

    template<typename T>
    struct hash64 {};

    template<> struct hash64<QString>
    {
        size_t operator()(const QString& s) const noexcept
        {
            size_t hash = 0;

            for (int i = 0; i < s.size(); i++)
                hash = (hash << 4) + hash + QStr_Orther(s.at(i));

            return hash * PRIME;
        }
    };
}


#endif  //  HASH_QSTR_H
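To close the loop, here is how such a specialization gets used: the third template parameter of ‘std::unordered_map’ names the hash functor. Since the reader may not have Qt at hand, this self-contained sketch substitutes ‘std::u16string’ for QString (both are sequences of 16-bit code units) while keeping the same hashing arithmetic; with Qt, the map would instead be declared as ‘std::unordered_map<QString, int, cust::hash64<QString>>’.

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>

namespace cust {
    template<typename T> struct hash64 {};

    // Same arithmetic as in 'Hash_QStr.h', but over a std::u16string,
    // so that the example compiles without Qt.
    template<> struct hash64<std::u16string> {
        std::size_t operator()(const std::u16string& s) const noexcept {
            std::size_t hash = 0;
            for (char16_t ch : s)
                hash = (hash << 4) + hash + ch;   // hash * 17 + code unit
            return hash * 5351;                   // the PRIME from above
        }
    };
}

// A trivial demonstration: put one key-value pair in, read it back.
int lookupDemo() {
    std::unordered_map<std::u16string, int, cust::hash64<std::u16string>> table;
    table[u"middle C"] = 60;
    return table[u"middle C"];   // returns 60
}
```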

 

(Updated 4/12/2021, 8h00… )

